Virtual Memory
Reply #15 - Feb 6th, 2006 at 5:45pm
Nick N   Ex Member

 
Virtual memory:
This is a method of extending the memory available to programs beyond the physical RAM installed in a computer. In a virtual memory system, the operating system creates a pagefile, or swapfile, and divides memory into units called pages. Recently referenced pages are kept in physical memory, or RAM.
If a page of memory is not referenced for a while, it is written to the pagefile. This is called "swapping" or "paging out" memory. If that piece of memory is later referenced by a program, the operating system reads the memory page back from the pagefile into physical memory, called "swapping" or "paging in" memory.
The total amount of memory available to programs is the physical memory in the computer plus the size of the pagefile. An important consideration in the short term is that even 32-bit applications benefit from the increased virtual address space when they run on Windows x64 Editions. Applications compiled with the /LARGEADDRESSAWARE option, as would be required to take advantage of the /3GB switch in 32-bit Windows, can automatically address 4 GB of virtual memory on x64 Windows without any boot-time switches or changes. Better still, the operating system does not have to share that 4 GB of address space, so the application is not constrained at all.
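As a rough illustration of that flag (assuming the Visual C++ toolchain is installed; myapp.exe is just a made-up name here), dumpbin and editbin can inspect and set it:

rem Show whether an EXE is flagged large-address-aware
dumpbin /headers myapp.exe | find "large"

rem Flag an EXE you built yourself, or link it with /LARGEADDRESSAWARE instead
editbin /LARGEADDRESSAWARE myapp.exe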


http://support.microsoft.com/kb/294418/#XSLTH3122121123120121120120

http://support.microsoft.com/kb/889654/en-us

http://support.microsoft.com/kb/555223/en-us
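To see how your own machine is currently configured, the wmic tool that ships with XP Pro can list the pagefile settings from a command prompt (a quick sketch; sizes are reported in MB):

rem Configured pagefile(s): location, initial and maximum size
wmic pagefileset list /format:list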


The best pagefile layout is as follows:

1.
Set the system for "small memory dump" in the Startup and Recovery settings, and set a static 2 MB pagefile on the WindowsXP install drive. This must be done to prevent problems when the main pagefile is located on another HDD I/O: a small memory dump requires a pagefile on the boot volume (see the command sketch after the note below).

2.
Using a separate HDD I/O that is equal to or greater in speed (greater is better) than the WindowsXP install HDD, place a pagefile on it whose size has been determined with the correct performance counter and memory I/O monitoring method. If you do not know how to properly use the counter and memory I/O monitoring system (and most don't; that is NOT the Windows Task Manager counters), set the pagefile to a static size 2x-3x the physical memory installed. It is critical that the pagefile be located on the first partition of the separate HDD I/O.

Examples: if the system has 2 RAID-0 controllers (for example Promise and VIA), make sure both controllers are active and the pagefile is located on the controller that does not host the WindowsXP boot directory. In this example configuration, if you use a 2xSATA RAID-0 (on the VIA RAID controller) for the WindowsXP boot directory, put the pagefile on the other (Promise) controller in at least a 2xSATA RAID-0 configuration; keep in mind a 3x, 4x, or 5x SATA RAID-0 is better. The pagefile MUST be on the FIRST partition, and it is always recommended that the pagefile be defragmented offline after creating it.

NOTE: The pagefile HDDs must be of equal or greater specification (type, access time, and RPM) than the WindowsXP boot HDDs or the setup is useless. Also, if you do not have 2 completely separate HDD I/O controllers (not just the primary and secondary ATA channels), don't bother with the above method: you will not see any gain by moving the pagefile to another drive, and doing so will actually slow performance. On the other hand, those running a single ATA drive on the primary or secondary IDE controller who have a RAID-0 controller sitting unused, perhaps out of apprehension about RAID-0 data security or lack of RAID experience, can benefit greatly by setting up a RAID-0 (2xSATA or ATA) pagefile on that controller, ESPECIALLY for game use.
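For reference, here is roughly how steps 1 and 2 look from the command line on XP Pro. A sketch only: the drive letters and the 4096 MB figure are examples, the where clause assumes C: already holds a pagefile, and the Startup and Recovery GUI does the same job:

rem Step 1: set Startup and Recovery to a small memory dump
rem CrashDumpEnabled: 0=none, 1=complete, 2=kernel, 3=small
reg add "HKLM\SYSTEM\CurrentControlSet\Control\CrashControl" /v CrashDumpEnabled /t REG_DWORD /d 3 /f

rem Step 1: a tiny static pagefile on the XP boot drive (C: here)
wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=2,MaximumSize=2

rem Step 2: a large static pagefile on the separate drive (D: here),
rem sized 2x-3x physical memory, e.g. 4096 MB for 2 GB of RAM
wmic pagefileset create name="D:\pagefile.sys"
wmic pagefileset where name="D:\\pagefile.sys" set InitialSize=4096,MaximumSize=4096

rem Reboot for the changes to take effect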


RAID-0 SCSI pagefile setups are the most efficient, highest-performance pagefile there is. I use the RAID-0 SCSI method on my dual Xeon system, with a 14 GB pagefile, and it makes video editing and other pagefile-heavy programs fly. I have never seen a 3D application or game slow down because of that setup, nor does Windows run slow... The internet bogeyman sites that post those rumors and XP tweaks are wrong, and their authors need a master's degree or PhD in computer science from a real university before posting such nonsense.




« Last Edit: Feb 6th, 2006 at 7:36pm by N/A »  
 
Reply #16 - Feb 6th, 2006 at 9:37pm

Mick_C (formerly unomernot)
 
Awesome Nick N! You covered ALL the bases on that one! Nice..  ;)  BTW, I'd hate to ask what that system of yours set you back (well, I can always dream of a system like that!)  :o ;D

Congo, to answer your question, the choice between a static and dynamic pagefile should be determined by disk I/O. If your system is slow and very fragmented, a static pagefile might make sense, but with today's powerful systems it's not such a concern. Here is a link on tweaking the pagefile if you wish. Unless you're running a server where space and disk I/O are at a premium, it's not worth the headaches that Windows can throw at you.

http://support.microsoft.com/kb/314482/en-us

Changing the pagefile can wreak havoc with some imaging programs as well, so make sure you do a total restore, not an incremental one, from an image file or you'll find issues.

Hope this helps guys,
Mick
PS: I love these forums cause it's a pleasure to chat with folks who KNOW their stuff and are willing to help others.  Thanks
 

...&&Athlon 64 X2 (T) 4200+2.2Ghz,1GB PC2-3200 DDR SDRAM,250 GB WD SATA HDD&&Ati Radeon Xpress 200 Integrated, DL DVD_RW Drive, DVD Drive&&Front Panel 9 in 1 Digital Reader Drive&&Logitech Attack3 Joystick, yada yada... in debt agin!
 
Reply #17 - Feb 6th, 2006 at 10:37pm
Nick N   Ex Member

 
[quote author=unomernot  link=1138734675/15#16 date=1139279831]Awesome Nick N! You covered ALL the bases on that one! Nice..  ;)  BTW, I'd hate to ask what that system of yours set you back (well, I can always dream of a system like that!)  :o ;D[/quote]

I have about $9200 invested in my super tower... What's really cool is that I can encode a video, watch TV or a movie (sending the signal to the LCD projector or watching on the system LCD monitor), and surf heavy Flash media websites at the same time without a hiccup.


As for pagefile setups... my personal preference has always been a static size. In my experience, with my usage, a dynamic pagefile tends to fragment more than a static one. A pagefile will typically not fragment if it is placed on its own partition; however, I always run a defrag from time to time just to be sure.

There are a lot of internet tweaks out there which are complete nonsense. One of my favorites was compressing the FS9 folder to get rid of stutters... When I went back and read the thread again, I saw that the person had posted that defragmenting before and after was necessary... I almost fell off my chair laughing at that point, because a lot of people were posting back about how it also worked for them...

.... What most people do not realize is that even good professional defragmenter software does not completely defragment the hard drive in one pass. It is limited by many things (available memory, HDD space, resources, etc.), and after one pass it will usually display a graphic representation of a completed HDD defrag which looks like everything is nice and in order. The truth is, if a system is heavily fragmented, such as after a complete OS and software reinstall, or has been thrashed by huge files such as FS scenery files (especially anything photo-real and huge), it normally takes 2-4 full defragment passes to bring a HDD to 100% optimal performance, no matter what the graphic display in the defrag software shows.

Those guys that were compressing their FS9 folders were seeing the performance results of running multiple defragmentations, not compressing FS9 files.

The same goes for those who run FS9 on a separate HDD with a dedicated FS9 boot OS. If, before going to all the trouble of setting up a separate boot drive for FS9, they were to acquire a professional defrag program such as O&O Defrag and run a COMPLETE/ACCESS defrag 4 times, rebooting between each pass, they would find ALL their FS files nicely placed on the HDD in blocks EXACTLY the way FS9 will access them as a flight progresses, eliminating a lot of the stutter issues which are file-load related. A COMPLETE/NAME defrag works wonders as well if you use a lot of photoreal scenery, which is normally named in numerical or alphabetical order.






« Last Edit: Feb 6th, 2006 at 11:40pm by N/A »  
 
Reply #18 - Feb 7th, 2006 at 12:17am

Mick_C
 
Thanks Nick N, I wondered how FS9 would behave if moved to a dedicated partition; now I know. Sounds like you built one awesome server! I've built similar systems used in Assistive Technology applications (home and office control / ECU) for customers with certain disabilities (i.e., speech-in and speech-out interfaces) linked to home control devices. Most of what I do with PCs is functional; gaming is new to me. You could use that unit to control your entire house, alarm, lights, intercom, phone system, whatever, with a (relatively cheap) add-on and it would still never hiccup! Nice!  Grin
Mick
 

 
Reply #19 - Feb 7th, 2006 at 10:05am

kipman725
 
If you have enough RAM you don't need a pagefile (virtual memory) and can run from memory alone. I think with Windows XP the ceiling is about 4 GB, because that's the maximum virtual address space it can give a single process. To get around the problem of some applications needing a pagefile (I have found none that do if you have enough RAM, though), you could always put your (small) pagefile on a RAM drive partitioned from the main system memory.

If you get out-of-memory messages, that means your pagefile is too small and/or you don't have enough memory for the task you're trying to accomplish.

The best setting for most users is the auto mode, allowing Windows to resize virtual memory if needed.

Virtual memory has nothing to do with graphics cards. The only graphics cards that take system memory are the cheapo onboard ones (Intel Extreme) or the low-end ATI HyperMemory and NVIDIA TurboCache cards. This is because system memory operates at much lower frequencies than the graphics memory on the card and is further away from the graphics processing unit, so higher latencies would be incurred, further hurting performance if it were accessed.

Currently I have 1 GB of RAM and a 1.5 GB pagefile, and I have never had an out-of-memory message and have good performance.

 

5900xt/2800+/280GB/1GB PC3200/Cyborg Evo Force/ABIT NF7&&Gpu clock: 475mhz core, 800mhz mem&&CPU at: 12.5x175 = 2187.5 &&memory: 2.5, 3, 3, 8 Duel channel on &&Os: windows xp pro, ubuntu 5.10 breazy badger
 
Reply #20 - Feb 7th, 2006 at 11:09am

congo
 
Thank you for your input, gentlemen. I hope you don't mind if I cut and paste this for later reference.

I've seen the witchdoctors casting their curses too, Nick N; seems they vastly outnumber the PhDs.

Intriguing, informative. Good stuff.

Under memory dump settings, I've always set it to NONE, just because, no reason... did I do bad?
 

...Mainboard: Asus P5K-Premium, CPU=Intel E6850 @ x8x450fsb 3.6ghz, RAM: 4gb PC8500 Team Dark, Video: NV8800GT, HDD: 2x1Tb Samsung F3 RAID-0 + 1Tb F3, PSU: Antec 550 Basiq, OS: Win7x64, Display: 24" WS LCD
 
Reply #21 - Feb 7th, 2006 at 11:27am

Delta_
 
Quote:
Under memory dump settings, I've always set it to NONE, just because, no reason... did I do bad?

If you really want to trawl through the "dump" the computer just took to find out why it reset itself, then use a memory dump. There is really no use for the average, or even advanced, home user to keep memory dumps.

Quote:
If you have enough RAM you don't need a pagefile (virtual memory)... you could always put your (small) pagefile on a RAM drive partitioned from the main system memory.

I've seen a few people do that with very good results.  You do need a lot of RAM though.
 

My system:Intel Q6600@3.6GHz, Corsair XMS2 4GB DDR2-6400 (4-4-4-12-1T) , Sapphire 7850 OC 2BG 920/5000, X-Fi Fatality, Corsair AX 750, 7 Pro x64
 
Reply #22 - Feb 7th, 2006 at 1:22pm
Nick N   Ex Member

 
Quote:
Under memory dump settings, I've always set it to NONE, just because, no reason... did I do bad?


That memory dump is the reason Microsoft defaults the OS to a dynamic pagefile of 1.5x physical memory. That amount is designed to cover all the bases and is not set in stone; it's just a starting point. If you do not need the diagnostic information, the dump is not needed.

No matter how large or small the pagefile, Windows will always use PM first, meaning the mere existence of the pagefile makes no difference in performance. The pagefile acts as:

1. An extension of PM (only if needed)
2. A repository for chunks of memory (even if PM is not full) which have not been accessed in a period of time, or to make room in PM (if it is full) for new chunks as they are called up.

.... In other words, if I (at 2 GB PM, WindowsXP x64) am eating up 1.6 GB of PM booting a flight (not uncommon for my setup with PMDG 747, UT-USA/Canada, BEV, FlyTampa KSEA, lots of other extras, plus 100% on all sliders, etc.) and during that flight I am accessing an estimated 2 GB+ of additional scenery... each block of PM which is cleared to make room for more scenery files gets moved to the pagefile as a non-fragmented, complete set of files (at that point it is a swapfile). The advantage is that the blocks or chunks of stored scenery are waiting to be recalled (if needed), and since the system does not need to search the HDD for all the individual files, the chunks are reloaded with less I/O and therefore much better performance in the sim. It may only need 250-500 MB of that prefetched information, but nonetheless it is faster (much less I/O) to retrieve it from a properly set up pagefile than to scour the HDD for all the individual files again. Even with the best HDD defrag, the recalled files are 10x better organized for recall in a properly set up pagefile.

That happens whether you have 128, 256, 512, 1024, 2048, etc., MB of physical memory installed, as long as there is an established pagefile. The pagefile is not a waste, it is not a performance dog, and it is not ignored. It is used as a swapfile no matter how much PM is installed.

WindowsXP x32 maxes out at 4 GB PM and will allow a pagefile up to 16 terabytes in size.

WindowsXP x64 maxes out at 16 terabytes PM  Shocked and will allow a pagefile size of 512 terabytes  Shocked Shocked


Because WindowsXP x64 memory access is different, it is recommended the pagefile be much, much larger to do the same work as on an XPx32 system.

Games do not run any faster with or without a pagefile; however, the pagefile can cut down on stutters from scenery loads. It depends on the game and the use.

Not everyone needs a large pagefile. The right size is determined by individual system use; however, there is no harm or performance loss in setting the file to a 1, 2, or 3 GB size unless disk space is restricted. It does nothing but take up space until needed.
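(If you want a sanity check on sizing, the same wmic tool on XP Pro will report how much of the pagefile has actually been touched since boot; a quick sketch, with sizes in MB:)

rem Compare PeakUsage against AllocatedBaseSize to judge the size you really use
wmic pagefile list /format:list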


After reading what I posted yesterday I realize I may have come off a bit harsh... it was not my intention to do so and I apologize if I offended anyone.  Wink






« Last Edit: Feb 7th, 2006 at 3:12pm by N/A »  
 
Reply #23 - Feb 7th, 2006 at 1:40pm
Nick N   Ex Member

 
Quote:
Thanks Nick N, I wondered how FS9 would behave if moved to a dedicated partition; now I know. Sounds like you built one awesome server! ... You could use that unit to control your entire house, alarm, lights, intercom, phone system, whatever, with a (relatively cheap) add-on and it would still never hiccup! Nice!  Grin
Mick  



I am not saying FS9 would not benefit from a dedicated HDD... I just do not think it requires an entire boot OS of its own. A proper virtual machine can be set up inside the WindowsXP boot for FS9 use. If FS9 is installed on the first partition of a dedicated HDD and defragmented as mentioned above, then I can see advantages to that setup. It will not give any better frames, but scenery access would be further improved.

As for Big Ben (the dual Xeon)... it is also my house entertainment system, with remote DVD drives and a 7.1 audio/video entertainment signal for 3 rooms. It has 5 remote terminals throughout the house, and I have been toying with running the lighting and sprinkler systems and answering the phone with it as well. It has been on 24/7 for several years now and never flinched. I also use it for AutoCAD and other engineering applications. I do not use it for 3D games because my wife, who is also an engineer, uses it extensively for her projects. I did try FS9 on it a while back.

Did someone say 70+FPS with all sliders, including weather, maxed??  Grin  Shocked Shocked

 
 
Reply #24 - Feb 7th, 2006 at 3:06pm
Nick N   Ex Member

 
One last item I did not address…

I understand the theory behind the claim that the PF can cause slow performance and that shutting it down (if you have a lot of PM) or reducing its size will fix the issue... The thought is that by reducing disk I/O the system has more resources. However, all that does is put a finger in a hole of a leaky dam. It does not resolve the issue or make the system truly faster. It creates a temporary placebo performance increase which is seen as an immediate improvement and is therefore considered good to do.

A few years ago I was having some stutter problems and experimented with pagefile size, including setting the system to 'no pagefile', which immediately solved most of the stutter issues but caused other problems whenever the PF was actually needed, including intermittent stutters (not as bad as the original stutter issue) and other I/O slowdowns. Back then I was using a dynamic pagefile. With the help of an MS engineer who lived in my neighborhood (and the cost of a few rum-and-Cokes) I learned how to use the system performance counters to monitor where the I/O problems (disk and memory) were occurring. I learned that setting a static pagefile size and then running an offline defrag on it was the key to resolving all the disk I/O issues. Moving the PF to another I/O on its own HDD reduced I/O further.

The fact of the matter is that if the PF is fragmented and dynamically resizing, disk I/O is greatly increased. All of a sudden the blocks of stored memory chunks are 10x more fragmented and take more I/O to retrieve than simply recalling the files from their original location on the HDD. Over time, as the PF becomes severely fragmented, the system will start to really bog down. I defrag the PF once a month, and every 90 days I delete the PF, reboot, rebuild it, and defrag it offline. Regular monthly maintenance is critical for proper PF performance.
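For what it's worth, those counters can also be sampled from a command prompt with typeperf, which ships with XP Pro (a rough sketch; the counter names assume English Windows):

rem Sample paging and disk activity once per second; Ctrl+C to stop
typeperf "\Memory\Pages/sec" "\Memory\Available MBytes" "\Paging File(_Total)\% Usage" "\PhysicalDisk(_Total)\Avg. Disk Queue Length" -si 1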


 
 
Reply #25 - Feb 7th, 2006 at 11:33pm

congo
 
By offline defrag, you mean defrag while the PF is not active, I presume. How do you achieve this?
 

 
Reply #26 - Feb 8th, 2006 at 12:05am
Nick N   Ex Member

 
Quote:
By offline defrag, you mean defrag while the PF is not active, I presume. How do you achieve this?



Good disk defrag software such as O&O Defrag v8 or PerfectDisk v7 will have that option. Once it is set, you reboot the system, and as it boots back up it defaults to an offline defrag screen. It is also good to run a Windows disk check offline before defrags are run. This is done as follows:

Open My Computer
Right-click the boot drive and select Properties
Select the Tools tab
In the Error-checking section, click 'Check Now'
Put a check in 'Automatically fix file system errors'; leave 'Scan for and attempt recovery of bad sectors' unchecked
Click Start

Windows will prompt to schedule the disk check for the next reboot; select OK

Reboot

The rest is automatic.
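The same reboot-time check can also be queued from a command prompt (a sketch, using tools that ship with XP):

rem Ask whether the volume is already flagged dirty
fsutil dirty query c:

rem Schedule a check of C: at the next reboot
chkntfs /c c: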





As a side note, if you shut the memory dump off completely, I do not think you will get a blue screen error that displays the file and memory address where the error occurred. I have always set mine to "small memory dump" just in case a BSOD decides to pop up. By doing so I get the BSOD error message, which can definitely help determine what caused the crash. A small memory dump only requires that a 2 MB pagefile be present on the boot volume. That is why I posted the need for the 2 MB PF on the boot drive if you decide to move the main PF to another HDD.
 
 
Reply #27 - Feb 8th, 2006 at 1:09am

congo
 
Here is what I just tried:

I did the disk check first.

I then turned off my page file and re-booted.

Next I re-created my page file at a fixed 1534 MB.


My idea was this: instead of maintaining and defragging the PF, I just killed it and started over. Would that work once in a while?
« Last Edit: Feb 8th, 2006 at 4:25am by congo »  

 
Reply #28 - Feb 8th, 2006 at 1:46am
Nick N   Ex Member

 
Quote:
Here is what I just tried: I did the disk check first; I then turned off my page file and re-booted; next I re-created my page file at a fixed 1534 MB. My idea was this: instead of maintaining and defragging the PF, I just killed it and started over. Would that work once in a while?


Yep... it sure will!

That is another way to defrag a pagefile.
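For anyone who wants to script that cycle, wmic on XP Pro can do the same kill-and-recreate from a batch file. A sketch only; the drive letter and 1534 MB size are just examples:

rem Remove the existing pagefile setting, then reboot to run without one
wmic pagefileset where name="C:\\pagefile.sys" delete

rem After the reboot, re-create it at a fixed size, then reboot again
wmic pagefileset create name="C:\pagefile.sys"
wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=1534,MaximumSize=1534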
 
 
Reply #29 - Feb 8th, 2006 at 1:36pm

Mick_C
 
Gentlemen, this has been an excellent series of posts. Explained exceptionally well!  Grin  Since we're on the subject of system performance, and almost everything has been covered, I would like to add one more item that I've found is often overlooked even by advanced users: disk logical error checking and repair. This little tutorial is designed for the novice to intermediate user (the bulk of the market).

Often, when a unit is in for repair, the disk will show a simple lack of maintenance over an extended period of time. The customer complains that the system got slower over a period of time until it died. Nick and Congo, you already see where this is going, but for the sake of others reading this post, I will expand on the fix a bit.  8)

"Check Disk" (chkdsk.exe) has been bundled with most MS systems for a while. I have heard folks tout horror stories about this program and how it "destroys" your drive etc.. yada yada. If your talking Win98 and below, yes. If WinX and 2K not so. I've used this program thousands of times on extremely damaged file and logical structures and never has a drive not booted after running it. It also features a "basic" recovery mode built in that saves (salvageable) data back to the drive so you can copy paste it back into it's proper place. For a "freebie" utility, it's come a LONG way.

Running Check Disk (chkdsk.exe) in READ ONLY mode (the default) will show you if there are errors, and nothing happens to your system. Run it in REPAIR mode only if it suggests it. If you have ever been forced to do a hard reset on your system (power-off button or reset button) because it locked up tight as a drum, there is a good chance your disk has a bit of logical damage.

Usually, this shows up in the volume bitmap and logical structures. Lost clusters and incorrect free space are two issues that will slow your drive's performance by as much as 400+%! Slow boot times (if no virus is found) can indicate a need for logical disk repair as well. To find out, follow this procedure.

Click START, then RUN.
Type CMD into the box.
Press OK.

A black "DOS" window appears (although it's NOT a true DOS window, that's a whole separate issue). At the blinking cursor, type:

chkdsk c:   (then press RETURN)

You will see a lot of information, starting with a warning that Check Disk is running in READ ONLY mode. This is a good thing, so ignore it. When all is finished, you will see a synopsis of the results. If Check Disk wants you to FIX the disk, it will tell you to run it again in /F (fix) mode.

That's easy enough: just press F3 to call up the last line you typed and add /f, or type the following.

chkdsk c: /f   (the /F switch means fix; press RETURN or ENTER)

Now you will see another warning that chkdsk cannot run because the volume is in use, and you are asked whether you want to run it when the SYSTEM REBOOTS. Answer (Y)es, close the window, and reboot. Once your system makes it back to the desktop, you're done!  Cheesy  Depending on the type of logical issue repaired, you may see as much as a 200% increase in performance and a decrease in boot time. Remember, this applies only when the disk is pretty damaged, not for you folks who do your regular maintenance.

One last note for ADVANCED users. You might see situations where the directory structure seems hosed. Analysis testing of this utility surprised me a bit: I had a directory structure so damaged the system wouldn't boot, and Check Disk actually repaired over 84% of the directory construct, enough that an advanced user could effect a complete repair. This amazed me considering the utility's past history!  Wink

Mick
 

 