
PC optimization


The swap file

Swapping or paging is the process by which the operating system writes temporarily unused data out of memory and reads it back when it is needed again.

As there is a more than tenfold speed difference between RAM and the hard disk, it is very important to optimize the paging ('swapping') process.

Recommended steps:

Buy and fit the necessary memory. Check how much memory the operating system occupies when no other programs are loaded. Fitting at least twice that amount is recommended, but tripling it is also reasonable. Oddly, older memory types (SDR, DDR1, DDR2) are more expensive than newer ones (DDR3). As this is a one-time investment, it is well worth the money. If our financial situation allows it, fill the machine with the maximum amount of memory it supports, and do not hesitate to do the same when buying a new computer.

Optimize the size and location of the page file. In the 'old days' this was critically important: on machines with 512MB of memory or less, it took priority over other optimizations. The following steps will improve the speed of both old and new machines (and also extend the lifetime of the hard disk).

By default, the swap file is managed by the operating system; this covers both its size and its location.

This is not optimal. As the operating system dynamically changes the size of the file, it uses extra processing power and may also fragment the hard drive. So change it: set the minimum size to 3900MB and the maximum to 3900MB. This setting is ideal for a 32 bit operating system; users of 64 bit versions must work out the correct values themselves.
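
As a hedged illustration, here is a minimal sketch (plain Python) of choosing a fixed page file size. The 3900MB figure for 32 bit systems comes from the paragraph above; the 1.5 x RAM rule used for 64 bit systems is only a common rule of thumb that I am assuming, not a value taken from this page.

    # Minimal sketch: pick a fixed page-file size (min = max) so Windows
    # does not resize it on the fly. 3900MB for 32-bit systems is the value
    # recommended above; 1.5 x RAM for 64-bit systems is only an assumption.
    def suggest_pagefile_mb(ram_mb, is_32bit):
        if is_32bit:
            return 3900
        return int(ram_mb * 1.5)

    print(suggest_pagefile_mb(2048, True))    # -> 3900
    print(suggest_pagefile_mb(8192, False))   # -> 12288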

About the location of the page file: experts swear by a separate FAT partition (or even a separate hard drive). As modern configurations come with plenty of memory, the page file is kept mostly for safety reasons, so changing its location is only recommended for real experts.

If the machine is used only for office work and web surfing, try switching off the swap file. If no problems occur, we can leave it turned off.



File cache

The file cache (buffer, temporary storage) also exploits the huge speed difference between RAM and the hard disk. It is carved out of the physical memory and acts as a temporary buffer holding the most recently used data. What is the point of it? For writing, it is obvious: data is not written directly to the slower hard drive but to the cache first. The speed change is most dramatic when exiting programs and saving documents, but it also improves the responsiveness of the machine during normal use (besides extending the lifetime of the hard disk and lowering power consumption). For reading (for example, opening the same program twice), a clear speed improvement can also be seen.

What is the ideal size of this buffer?

At first it seems obvious: as big as possible. This is not true, for at least two reasons. First, the cache is taken from the physical memory, so too little operative memory would be left and the swapping mechanism described above might kick in. Second, the files in the buffer must be 'catalogued'; if the buffer is too big, it holds too many files, and cataloguing them again reduces overall performance.

The ideal size is 1/12 to 1/15 of the physical memory, but at most 100MB. Machines working as servers or fulfilling special tasks are of course an exception; there a bigger cache can be set.
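
A minimal sketch of that rule of thumb (plain Python; the 1/12-1/15 fractions and the 100MB ceiling are taken from the paragraph above):

    # Sketch: file-cache size range from the 1/12 - 1/15 rule, capped at 100MB.
    def file_cache_mb(ram_mb):
        low = min(ram_mb // 15, 100)    # conservative end of the range
        high = min(ram_mb // 12, 100)   # generous end of the range
        return low, high

    print(file_cache_mb(512))    # -> (34, 42)
    print(file_cache_mb(2048))   # -> (100, 100)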



Prefetch, Superfetch

Windows XP introduced the so-called prefetch technology, the pre-loading of data. The essence of this speed-up method is that sequential reading of data is much faster than random reading.

During boot, the system builds a database of which files are loaded when the system starts, so this data can be pre-loaded into the free memory of the computer. It includes (presumably) the system files, the virus scanner, the firewall and so on. The prefetch mechanism also watches for two minutes after boot which programs are launched, and loads them into RAM as well.

It has four options: boot prefetch only; application prefetch (after boot); prefetching of both boot and applications; and finally turning both of them off.
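
These four options are usually said to map onto a single registry value. The sketch below is only an illustration under that assumption (the PrefetchParameters key and the EnablePrefetcher value, with 0 = off, 1 = application, 2 = boot, 3 = both); verify the location on your own Windows version before changing anything.

    # Sketch (Windows, run as administrator): set the assumed prefetcher mode.
    # Assumed mapping: 0 = disabled, 1 = application only, 2 = boot only, 3 = both.
    import winreg

    KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters"

    def set_prefetcher(mode):
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0, winreg.KEY_SET_VALUE) as k:
            winreg.SetValueEx(k, "EnablePrefetcher", 0, winreg.REG_DWORD, mode)

    set_prefetcher(2)   # keep boot prefetch only, as suggested below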

This sounds nice so far, but let's look more closely at what works against the system!

First, a database must be built about the files, which slows the system down. With the speed of today's CPUs and correct cache settings this causes only an unnoticeable slowdown.

But the hard disk is used unnecessarily more, and the battery life of laptops is also shortened. On machines where several people use different programs (for example in a teachers' room), it is totally useless.

So is prefetch good or not? As the source code of Microsoft's programs is closed, we cannot analyse their behaviour, so the answer is not clear.

Boot prefetching is better left switched on, although with correct defragmenting the files end up in the right order anyway.

When only one person uses the computer for mainly a single task (for example launching a web browser), it may be better to leave application prefetch switched on as well.

Superfetch is a tuned-up Prefetch introduced in Vista. It builds a database of which programs are used on which day and at which hour. The recommended settings are the same as for the Prefetch technology.



L2 cache (second level cache)

There are also very fast temporary buffers between the processor and the RAM: the L1, L2 and sometimes L3 caches. Here we will look at the L2 cache more deeply.

Its purpose is the same as that of the file cache, but here we accelerate the data flow between the CPU and the RAM, not between the hard disk and the RAM.

How important is the L2 cache? As long as computers did not handle huge quantities of data (music, high-resolution pictures, films), processors without cache or with very small caches could cope with their job nicely.

Nowadays, in the age of multimedia, it is important to have a 'big' L2 cache. A newly bought computer will surely contain the necessary 1 megabyte of cache. When buying a used computer, avoid machines with less than 512K of L2 cache!

If we use particularly memory-hungry applications (professional press work, CAD/CAM, film editing etc.), even under a 64 bit operating system, prefer CPUs with the biggest L2 cache. For average users the extra cache gives only a marginal, under 10% speed improvement.

How our favourite operating system handles the L2 cache is a mystery. Some sources say it automatically detects the right size, others say it uses only 256K by default. To be sure, set the right size manually; the number is given in kilobytes. (In synthetic benchmarks I did not measure any significant performance difference between the 'automatic' and 'manual' settings; the results stayed within the usual measurement error.)
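
For illustration only, a hedged sketch of the manual setting: the value usually quoted for this is SecondLevelDataCache (in kilobytes) under the Memory Management key, with 0 meaning 'automatic'. Treat the key and value names as assumptions to verify on your own Windows version.

    # Sketch (Windows, run as administrator): set the assumed L2 cache hint in KB.
    import winreg

    KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0, winreg.KEY_SET_VALUE) as k:
        # 1024 KB = 1MB of L2 cache; 0 would mean "detect automatically"
        winreg.SetValueEx(k, "SecondLevelDataCache", 0, winreg.REG_DWORD, 1024)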



Antivirus software

Nowadays it is very risky to use a computer without antivirus software. But it can slow down the machine, because it uses memory and processor resources. How can this be minimized?

The answer: use cloud-based antivirus software. The method is the following. Unlike a conventional antivirus program, a cloud-based one does not download the virus definition database and scan the programs and files on the machine itself; instead, these things are scanned through a central server online. Its advantage is that the load on the processor is minimal, because the scanning is done on the server, and the virus definition database is always up to date. The disadvantage: without Internet access the machine is totally vulnerable. For this case it is worth keeping a spare offline antivirus, even with an outdated virus database, for the worst case.

Better antivirus programs contain both the cloud-based and the conventional protection, so it is worth choosing among them: use the cloud-based mode for everyday work, and switch to the regular method when the Internet connection is absent.



Internet tuning

Nowadays the Internet is used by most average (and not so average) users. Internet tuning involves many steps; let's look at a few examples.

TCP/IP settings:

RWIN (Receive WINdow):

The reliability of the TCP/IP protocol relies on continuous feedback. When we download data from a server (for example a webpage), it arrives at our machine in data packets. When a packet arrives without problems, our machine confirms to the server, with a checksum generated by a special algorithm, that the transfer was successful. If the packet did not arrive, or arrived corrupted, it tells the server and asks it to resend the corrupt packet.

It matters how much data we agree to receive before this feedback has to be sent.

This is the so-called RWIN value (Receive WINdow). Why is this setting so important? With a little thinking anyone can figure it out.

If the RWIN value is set too large, then on a poor quality network (Wi-Fi, company Ethernet) we have to ask for the retransmission of too large an amount of data.

If RWIN is set too small, the constant acknowledging slows down browsing (and it is unnecessary on good quality networks).

The appropriate RWIN value is based on the average latency and bandwidth of the network. Some sources take only the available maximum bandwidth into account when calculating it, which can also be fine: when browsing on a laptop across different Wi-Fi networks the latency changes continuously, and on company and school networks with many computers this value also varies. The RWIN value is always calculated with the formula (MTU - 40) * integer.

Without thinking too much about it, use values between 30,000 (Wi-Fi) and 60,000 (cable) for an average connection; these are good in most cases.
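
A minimal sketch of the calculation (plain Python). The (MTU - 40) * integer rule comes from the formula above; starting from the bandwidth x latency product is my assumption about how to pick the integer.

    # Sketch: choose an RWIN that covers bandwidth x latency and is a
    # multiple of the MSS (MTU - 40), per the formula above.
    import math

    def rwin_bytes(bandwidth_mbps, latency_ms, mtu=1500):
        mss = mtu - 40                                              # TCP/IP headers removed
        bdp = bandwidth_mbps * 1_000_000 / 8 * latency_ms / 1000    # bytes "in flight"
        return max(1, math.ceil(bdp / mss)) * mss                   # round up to a multiple of MSS

    print(rwin_bytes(10, 30))   # ~10 Mbit/s, 30 ms latency -> 37960
    print(rwin_bytes(50, 10))   # ~50 Mbit/s, 10 ms latency -> 62780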

Checksum Offloading:

This setting is closely related to the previous one. If it is enabled, the verification checksums of the packets are calculated by the network hardware, freeing up processor resources. So it is worth switching on.

TCP Window Auto Tuning:

Windows is able to resize the RWIN value when it detects a good quality network. The options of this setting are: disabled, highlyrestricted, restricted, normal, experimental.

The first is used when the window does not need to go above 64K. The last one can grow the window up to 16MB; sources do not recommend using it.

The value is selected according to the speed of the network. It is worth noting that changing the RWIN value certainly demands CPU resources, so it is suggested to set it to disabled on a slow machine with a restricted network, and otherwise to choose one of the restricted modes mentioned above.

Windows Scaling heuristics:

A mysterious and inconsistent option. Some sources say that when it is enabled, Windows itself chooses from the five modes mentioned earlier. Others state that it can fix the RWIN value and never set it back to the user defined one. If either of these is true, it must be disabled!

Congestion Control Provider

CTCP (Compound TCP) is similar to the previous one, but it can change the RWIN size more quickly while probing, which is good for broadband. Suggested: turn it on.

DCTCP (Data Center TCP), which works with the ECN (Explicit Congestion Notification) method, is only worth using in a server environment.

Path MTU Discovery (PMTUD):

Path MTU Discovery (PMTUD) is another important setting. When the size of the data blocks has been set, PMTUD can adjust it to the actual path between the host and the server.

For example, we set the block size (the MTU) to 1500 bytes on our machine, but the maximum MTU may vary along the path from the host to the server. Packets leave us at 1500 bytes, then reach a station where the limit is 800 bytes, and finally arrive at the destination through a server limited to 1200 bytes. If Path MTU Discovery is active, the actual MTU used will be 800 in this case. We have avoided fragmentation, saving bandwidth and processing time (a new header would have to be attached to each fragment).

Our machine stores the actual MTU value in its cache, and after a period of time it can be decreased or increased.
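
A small sketch of that example (plain Python; the 1500/800/1200 byte figures are the ones from the paragraph above, and header overhead is ignored for simplicity):

    # Sketch: the path MTU is the smallest MTU along the route; without PMTUD
    # every 1500-byte packet would be fragmented at the 800-byte hop.
    import math

    link_mtus = [1500, 800, 1200]          # the hops from the example above
    path_mtu = min(link_mtus)              # -> 800, the value PMTUD settles on

    fragments = math.ceil(1500 / 800)      # -> 2 fragments per packet without PMTUD
    print(path_mtu, fragments)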

SackOpts (Selective ACKs, Selective Acknowledgement, SACK):

SackOpts is a very useful enhancement to the TCP/IP protocol which can accelerate the Internet speed. When it is switched on, only the faulty data packets have to be resent by the server, not everything that was sent after them.

With a large RWIN value (fast, reliable connection) it is essential to activate it. With small RWINs (slow, unreliable networks) it may be useful to switch it off. Also, if the function is not supported by the server, it slows down browsing.

Time to Live (TTL, Default TTL, Hop limit):

Outgoing data packets pass through numerous machines (hubs, routers) before reaching their destination.

The TTL value sets the limit, in seconds and hops, for how long the data is regarded as live before it has to be retransmitted.

The optimal value is between 32 and 64. For example, if we set it to 64 and our packet reaches its destination over 10 hops, the initial value is decreased by 1 at each hop (so it arrives with a value of 54).

When the journey between two hops takes 2 seconds, the TTL decreases by 2; if it takes 3 seconds, it is lowered by 3. The value thus counts down steadily towards 0.

If the value is set too small, data packets may not even get out of our machine. If it is too large, data packets may wander around the Internet for too long without any feedback (and retransmission request).

Receive Side Scaling:

Receive Side Scaling is a technology that distributes the processing of incoming data across the different cores of multi-core machines. For this, of course, a network adapter (with an appropriate driver) that supports this mode is needed. The incoming data is sliced up, and the parts are processed on separate cores.

HyperThreading's virtual cores do not count here; they are ignored, just as if the CPU had only its physical cores.

Direct Cache Access (DCA), NetDMA, TCP Chimney Offload:

We describe the Direct Cache Access and NetDMA technologies together. A DCA-capable network card can copy data directly into the CPU cache. The NetDMA engine uses the network card's own 'processor' instead of the CPU. They bring a speed-up with older processors (Core 2 and earlier?). Both must be supported by the motherboard and the network card. Windows 7 was the last version to support them.

With TCP Chimney Offload the TCP tasks can also be handed over from the CPU to the network card, so in this respect it is similar to NetDMA. From Windows 8 on, only this is supported, because it is said to be more efficient.

Suggested settings: on Vista/Windows 7, if the machine supports them, DCA and NetDMA together, or TCP Chimney Offload (but not both at the same time). If only the last one works, of course use that. On Windows 8 the last one is naturally the choice.

Receive Side Coalescing (RSC):

The essence of this method is that the card's own processor joins the smaller data packets arriving from the Internet in the card's own cache, and then transfers them to the CPU as a single large packet. It frees up I/O and CPU resources. Turn it on.

Large Send Offload (LSO):

It is the opposite of the previous one: it joins several smaller outgoing packets into a single bigger one. In theory it improves speed, but the implementation is far from perfect. Mainly the Gigabit and Intel drivers are buggy, so for now it is worth disabling.

Timestamps:

Adding timestamps to the data packets helps to identify bad packets faster. This is a more efficient method than looking them up by TTL.

Its disadvantage is that it adds an extra 12 bytes to each packet.

So it is worth switching on in the case of a bad quality network, or a broadband connection with a large RWIN value.

Changing the name servers

A website address entered into the browser is originally a bunch of dot-separated numbers. The 'translation' between the two formats is done by Domain Name Servers (DNS). These are connected to each other in a hierarchical system, but describing that is not the task of this page.

By default the optimal name servers are probably not set in the operating system. By measuring them we can set the optimal DNS server order, achieving a speedier Internet.

DNS Error Caching

Since Windows 2000/XP there is a DNS client service which stores the data received from the DNS servers in a cache and reuses it. But this feature brings bugs too, because faulty DNS data can also stay in the cache if we do not manage to delete it. Let's see what to do.

MaxNegativeCacheTtl: how long does faulty DNS data remain in the cache before it is deleted? The best solution: set it to 0 (Windows 2003 and XP).

NegativeCacheTime: the same as the previous one, for Windows 2000/2008/Vista and 7.

NetFailureCacheTime: it determines how long our machine keeps sending queries after it has become clear that this part of the network is dead. Of course, it should be zero; then the second and third DNS servers get their turn.

NegativeSOACacheTime: the SOA (Start of Authority) record stores the data of the DNS server. For how long should we keep it when the DNS server is down? The answer is again zero.
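
For illustration, a hedged sketch that zeroes all four timers at once. The Dnscache\Parameters key is where these values are usually said to live; treat the location as an assumption and verify it on your own Windows version.

    # Sketch (Windows, run as administrator): do not cache failed DNS lookups.
    import winreg

    KEY = r"SYSTEM\CurrentControlSet\Services\Dnscache\Parameters"
    VALUES = ("MaxNegativeCacheTtl", "NegativeCacheTime",
              "NetFailureCacheTime", "NegativeSOACacheTime")

    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY, 0, winreg.KEY_WRITE) as k:
        for name in VALUES:
            winreg.SetValueEx(k, name, 0, winreg.REG_DWORD, 0)   # 0 = drop faulty data immediately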

Internet Protocol version 6 (IPv6)

Internet Protocol version 6 (IPv6) was released because its predecessor, IPv4, can only handle approximately 4.3 billion IP addresses, as opposed to the 340 sextillion possibilities of its successor. For now, however, it slows down the network, for several reasons.

For example, the DNS server access mentioned above becomes two to three times slower, because the IPv4 addresses are resolved first and only then the IPv6 ones (serial address resolution). Secondly, simply because IPv4 is more widespread, mainly IPv4 addresses are stored in the DNS cache.

Checksum validation is done only once if only v4 is active. In addition, network devices are mostly optimized for v4, and translating between IPv4 and IPv6 causes yet another delay.

These are only a few examples; there are certainly more. So if there is no real reason to keep it, turn off IPv6 both in the operating system and in the browser.

Host Resolution Priority

The essence of this tweak is to give higher priority to the services taking part in DNS and hostname resolution. These services are the following (in brackets, the default values): LocalPriority (499), HostsPriority (500), DnsPriority (2000), NetbtPriority (2001).

Giving them the values 4, 5, 6, 7 or 5, 6, 7, 8, we can again improve Internet speed.
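
A hedged sketch of that change. The Tcpip\ServiceProvider key and the exact value names are taken from common tweak guides, not from this page, so verify them before applying anything.

    # Sketch (Windows, run as administrator): raise the host-resolution priorities.
    import winreg

    KEY = r"SYSTEM\CurrentControlSet\Services\Tcpip\ServiceProvider"
    NEW = {"LocalPriority": 4, "HostsPriority": 5, "DnsPriority": 6, "NetbtPriority": 7}

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0, winreg.KEY_SET_VALUE) as k:
        for name, value in NEW.items():
            winreg.SetValueEx(k, name, 0, winreg.REG_DWORD, value)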

Max SYN Retransmissions

The TCP three-way handshake works as follows. Our machine sends a SYNchronize packet to the server, which accepts it. In answer, the server sends back a SYNchronize-ACKnowledgement packet, showing that our request was accepted. In reply, our machine sends an ACKnowledge packet, which the server accepts, and the connection is established.

This is the ideal case, but it can happen, for example because of a faulty server or router, that it does not succeed. The Max SYN Retransmissions value indicates how many times our machine retries sending the SYN packet in case of errors.

Non Sack Rtt Resiliency

This novelty was introduced in Windows 8.1; when enabled, it allows the SACK process described earlier to be stretched out, meaning that the feedback is not so strictly limited in time. Turning it off is strongly recommended.

Retransmit Timeout (RTO)

It determines (in milliseconds) how long to wait for the acknowledgement of a packet before it is retransmitted.

It has two values, Initial RTO and Min RTO. The first one is adjustable, the second is fixed in the operating system and seems to play a role only for compatibility reasons (available from Windows 8).

Quality of Service

QoS, Quality of Service, reserves a dedicated slice of bandwidth for 'critical' applications, for example Windows Update.

It is not clear whether this 20% is reserved only while these applications run, or always. But the description says that if it is not configured, the default 20% is used. So it is worth leaving it on but setting it to 1-5%.
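
As a hedged illustration: this limit is normally set through the 'Limit reservable bandwidth' group policy; the Psched policy key and the NonBestEffortLimit value used below are the registry names usually quoted for the same setting, so treat them as assumptions.

    # Sketch (Windows, run as administrator): cap the QoS reservation at 5 percent.
    import winreg

    KEY = r"SOFTWARE\Policies\Microsoft\Windows\Psched"

    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY, 0, winreg.KEY_WRITE) as k:
        winreg.SetValueEx(k, "NonBestEffortLimit", 0, winreg.REG_DWORD, 5)   # percent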

Network gaming tweaks

Network Throttling Index:

Beginning with Vista, Windows limits network bandwidth when a multimedia application runs alongside network traffic, by default to 10 packets per millisecond (a bit over 100 Mbit/s). Its purpose is to strike a balance between multimedia and network performance.

As modern CPUs have enough power for parallel tasks (for example playing sound in network games), it is worth switching this component off to eliminate lag, or alternatively playing with values between 0 and 70 (ffffffff is the value that switches it off).
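
A hedged sketch of switching it off. The SystemProfile key and the NetworkThrottlingIndex value are the commonly cited location for this setting; check them on your own Windows version.

    # Sketch (Windows, run as administrator): disable multimedia network throttling.
    import winreg

    KEY = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Multimedia\SystemProfile"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0, winreg.KEY_SET_VALUE) as k:
        # 0xffffffff switches the throttling off entirely, as noted above
        winreg.SetValueEx(k, "NetworkThrottlingIndex", 0, winreg.REG_DWORD, 0xFFFFFFFF)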

Browser settings:

Cache settings:

Our favourite browser does not download the same content again every time (for example when pressing the Back button), but recalls it from the cache memory, if it is found there.

It is a fair assumption that nowadays everyone uses broadband, so the bill is independent of the data traffic. In this case we can 'waste' traffic to gain more speed, spare some energy and also extend the lifetime of SSD drives.

Browsers use two kinds of cache: the first is the memory cache (RAM), the second is the set of files saved onto the hard drive. Better to turn off the latter! With a broadband connection it is useless to save this data onto the HDD. In addition, the browser then looks for the data only in memory, not on the HDD, which also gives some speed improvement.

And do not let the computer manage the cache size automatically. The optimal size is between 10 and 50 megabytes, depending on the power of the processor (which indexes and reads the data) and the size of the installed memory. The lower limit suits older computers (512MB/XP). It is useless to set a value above the upper limit even on the most up-to-date system; the speed of Internet browsing will not improve significantly.
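
For illustration, a hedged sketch that writes these choices into a Firefox user.js file. The pref names (browser.cache.disk.enable, browser.cache.memory.enable, browser.cache.memory.capacity) are the ones Firefox has traditionally used; other browsers expose the same choices in their own settings, and the output path here is just a placeholder.

    # Sketch: generate a Firefox user.js with the cache settings discussed above.
    # Copy the resulting file into your own profile directory.
    prefs = {
        "browser.cache.disk.enable": "false",      # no browser cache on the HDD/SSD
        "browser.cache.memory.enable": "true",     # keep the RAM cache
        "browser.cache.memory.capacity": "51200",  # 50MB, given in kilobytes
    }

    with open("user.js", "w", encoding="utf-8") as f:
        for name, value in prefs.items():
            f.write(f'user_pref("{name}", {value});\n')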

Cookies

A cookie (also known as an HTTP/web/browser cookie) is a simple text file (piece of data) which is placed on our machine, through the web browser, by the server of the page we are currently browsing.

Its original goal is to identify (returning) visitors (settings, user names, passwords etc.).

Cookies have several properties and come in various subtypes too. Here we pick out the ones which are critical from the viewpoint of system speed and maintenance.

The expiry date of the cookie is one of its most important properties: it determines the day on which it is deleted from the machine. If we do not want to turn our machine into a slowed-down 'cookie box', set the browser to delete cookies on exit. With multiple users this setting matters not only because of the accumulated speed loss but also for protecting private data; just think of the danger of stored user names and passwords!

The next important step is to turn off accepting third-party cookies. The website we actually visit is the 'first party'; its partner sites, for example advertising and statistics servers, are the 'third parties'. Disabling these cookies is another important step towards faster and safer browsing. And with that we have already had a taste of the topic of ad blocking and personal tracking...
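
Continuing the user.js sketch from the cache section, two prefs that have traditionally covered these steps in Firefox; the names and numeric codes are assumptions that may differ between browser versions.

    # Sketch: append cookie handling to the user.js from the earlier example.
    with open("user.js", "a", encoding="utf-8") as f:
        f.write('user_pref("network.cookie.cookieBehavior", 1);\n')   # 1 = block third-party cookies
        f.write('user_pref("network.cookie.lifetimePolicy", 2);\n')   # 2 = keep cookies only for the session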

Adblocking (aka ad filtering)

When browsing the Net, one continuously runs into advertisements. These can be the most varied things: pop-up windows, Flash animations, other embedded content.

Numerous methods are available for blocking them, for example external programs, ad blockers installed into a proxy, etc. The simplest thing a single user can do is install an ad-blocker add-on in the web browser.

Weighing the ethical and business side of all this does not belong to the theme of this page...

Do Not Track

Do Not Track (asking websites not to follow us) is tightly connected to cookies and advertising. The setting can be activated in the web browser. When it is switched on, the DNT value in the header of the HTTP request is set to 1 (opt out), which asks the visited website and its partners not to track us.

As the system is voluntary, with the standard browser setting alone it is not certain that we reach our goal, and it does not solve the blocking of social networks (in reality advertising networks) either.

Fortunately several independent browser add-ons exist with which the content of a website can be controlled in detail.

Does this method slow down or speed up browsing? It depends on the web page. First the browser extension scans the page (which slows things down), then it blocks the unwanted elements (which speeds things up).

It can be said that the overall result is positive in terms of both bandwidth and browsing speed.

Limit browsing history

Browsers store the addresses of previously visited pages. This is handy, as we do not need to type in the addresses of frequently visited pages. But it also has disadvantages: this database can grow huge, so it is useful to override the factory defaults. Some browsers support this limiting function out of the box, others need extensions. It is best to set it to 50-100 days, which is adequate in most cases: rarely visited pages get deleted, while our favourites stay there.

HTTP pipelining

With HTTP pipelining we can send several HTTP requests, but at most 8, over a single TCP connection to the server without waiting for the response to each request. A typical webpage consists of an HTML page, images etc. Without pipelining we first get the page itself, then the pictures and other elements. With pipelining they can arrive at our computer in parallel, resulting in dramatic speed improvements (for example on relatively slow networks).

As several HTTP requests can travel in the same TCP packets, HTTP pipelining also reduces the number of TCP packets sent and received.

In the browsers we can not only set the pipeline depth used towards a given website but, assuming we browse with multiple tabs, also how many requests the browser handles simultaneously (of course it is worth choosing a multiple of 8, for example 16).
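
A minimal sketch of why this helps (plain Python). The batch size of 8 comes from the text above; the 30-element page and the 60 ms round-trip time are made-up example numbers, and parallel connections are ignored for simplicity.

    # Sketch: waiting time to fetch N page elements, with and without pipelining.
    import math

    elements, rtt_ms, batch = 30, 60, 8

    without_pipelining = elements * rtt_ms                     # one request per round trip
    with_pipelining = math.ceil(elements / batch) * rtt_ms     # up to 8 requests per round trip

    print(without_pipelining, with_pipelining)                 # 1800 ms vs 240 ms of waiting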

Referer information:

The HTTP referer (with one 'r'!) is a part of the HTTP request which tells the visited website from which site we arrived. It is an important tool for statistics, but at the same time it is a privacy-tracking tool, especially when used together with cookies and tracking. In addition, it adds some data to the packets, so it slows things down a little.

In theory it is useful to switch it off, but it may cause some homepages to stop working.

It has three options: 0: never send the information; 1: only send information about the link we arrived from; 2: send information about the link and the embedded images as well.

Enabling geolocation:

When surfing the Net, the browser can determine our location and share this information with the visited webpage, if the page asks for it. It is based on our IP address, which the Google Location Services use to look us up. In most cases it works to within meters, but sometimes with worse precision. After that, the browser sends this info to the website.

In theory neither Google nor the browser uses our information. Despite that, looking at the Google search and advertising results (which we have hopefully already switched off) as well as the Facebook search lists may change this view... So if we want to browse incognito, it is better to turn off geolocation, though total anonymity cannot be guaranteed even this way.

Phishing and malware protection:

Most browsers have built-in security features such as anti-phishing ('fraud' or 'web forgery') and anti-malware (aka 'attack site') protection. The first category covers webpages which try to steal personal information, for example passwords, accounts or credit card numbers. Malware sites contain harmful code which tries to infect our computer without our knowledge. What they have in common is that an average user can hardly recognize them, as their creators try to make them look safe.

The essence of the method is that the browser compares the address of the website with a database in which pages are recorded in different classes (for example safe, secure, secure with verified identity, fraud, malware). After comparing the actual site with this list, the browser shows a warning if the user visits a (supposedly) bad site. Switching on this feature will of course slow down browsing a little, but it is worth using. In some cases 'false positive' alerts occur too.

Link prefetching or link prerendering:

The pre-loading of the documents behind links is the common idea of these two technologies (Chrome calls it prerendering, all the others use the first term).

It is based on the method that the browser (if the webpage was written that way) loads the documents behind the links we will probably visit, so when we click on them, the pages load instantly.

Its advantage is the speed, but it also has disadvantages, for example using bandwidth without reason and occupying the cache memory. Personally I suggest switching it off; it is only worth using with high bandwidth and a lot of RAM with a proper cache configuration.
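
To close the user.js sketch from the browser sections above: the pref traditionally used for this switch in Firefox is network.prefetch-next (an assumption to verify; Chrome exposes its prerendering switch in its own settings instead).

    # Sketch: append the link-prefetch switch to the user.js from the earlier examples.
    with open("user.js", "a", encoding="utf-8") as f:
        f.write('user_pref("network.prefetch-next", false);\n')   # do not pre-load linked pages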