Category Archives: Computer

Best Gaming Laptops on a Budget

With each new generation of hardware, laptops get more powerful. Thanks to better and more efficient manufacturing processes, laptop makers can fit high-performance components into their gaming laptops without worrying too much about cooling; for example, last generation's GTX 960M cooler can handle a mobile GTX 1070. Performance also improves within each segment, meaning this year's mid-range is on par with last year's high-end. This is why we can now get cheaper gaming laptops without giving up much in graphics detail compared with more expensive older models.

In this article we will look at three "inexpensive" laptops that are a good fit for gaming, among other things.

Dell Inspiron 15 7000

This is a true gem! A laptop without its equal. For around $700 USD, you get some of the freshest components available on the market. This laptop will run anything and have power to spare most of the time. It comes in different variants, but this version is the best bang for the buck. In it you get:

Intel Core i5-6300HQ, 3.3 GHz boost;

8GB DDR3;

256 GB SSD;

GeForce GTX 960M, 2 GB VRAM;

15.6-inch Full HD IPS;

Windows 10;

The laptop comes with everything you need, including the operating system; you could say it is a plug-and-play system. The build quality is very good for this price range, and so is the cooling system, which keeps both the CPU and GPU around 70 °C. The screen is decent and sharp, with good viewing angles thanks to the IPS panel.

The Dell Inspiron 15 7000 is quite a cheap laptop, and it has by far the best overall price/performance ratio on the market.

Asus ROG GL552VW-DH71

Now, this is a purebred gaming laptop. A bit less conspicuous than its more expensive cousins, but there is still enough of that ROG red to make it easily recognizable. The build quality is what you'd expect from Asus: superb. Compared to the Dell, Asus's budget machine costs about $300 USD more, but it also gives a bit more:

Intel Core i7-6700HQ, 3.5 GHz boost;

16GB DDR4;

1TB HDD;

GeForce GTX 960M, 2 GB VRAM;

15.6-inch Full HD IPS;

Windows 10;

Like the Dell, Asus's laptop comes with the latest Windows 10 installed, so it is plug-and-play ready as well. However, it has twice as much RAM as the Dell, and it is DDR4 instead of DDR3, which makes it more future-proof. It offers more storage space (1 TB), but the mechanical drive is much slower than the Dell's SSD. Display-wise it is much the same: the Full HD IPS panel delivers crisp colors and good viewing angles. Cooling is well executed, as we have come to expect from Asus.

Basically, for an additional $300 you get more CPU power, more storage, and more RAM. That will be enough even for more demanding work, such as 3D rendering, video compression, and modeling.

HP Star Wars Special Edition 15-an050nr

The best place to start when describing this piece of geekery is at the very end: for as little as $470 USD you get a decent laptop capable of Full HD gaming, with a Star Wars-themed livery. There are, of course, more powerful configurations in HP's Star Wars line, but this one is the most interesting because it is so affordable. For the price you get:

Intel Core i5-6200U, 2.8 GHz boost;

6 GB DDR3;

GeForce GT 940M, 2GB VRAM;

1TB HDD;

15.6” Full HD IPS;

Windows 10;

As with the previous two models, you get Windows 10 and an anti-glare 1080p IPS screen. Unlike the Dell, and just like the Asus, the HP comes with a standard 1 TB mechanical hard drive. However, it has a noticeably weaker CPU (a power-efficient "U" model) and the least RAM of the three. All of this is understandable given the price of the laptop. No doubt the biggest customer base for this laptop is among Star Wars fans (and there are a lot of them).
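The three spec sheets above can be lined up with a few lines of arithmetic. This is only an illustrative sketch: the prices and capacities come from the article (the Asus price is inferred from "about $300 more" than the Dell), while the cost-per-gigabyte metrics are ours, not the article's.

```python
# Spec sheet built from the figures quoted above; the cost-per-gigabyte
# numbers below are illustrative arithmetic, not the article's own metrics.
laptops = {
    "Dell Inspiron 15 7000":   {"price": 700,  "ram_gb": 8,  "storage_gb": 256,  "ssd": True},
    "Asus ROG GL552VW-DH71":   {"price": 1000, "ram_gb": 16, "storage_gb": 1000, "ssd": False},
    "HP Star Wars 15-an050nr": {"price": 470,  "ram_gb": 6,  "storage_gb": 1000, "ssd": False},
}

for name, s in laptops.items():
    print(f"{name}: ${s['price']}, "
          f"${s['price'] / s['ram_gb']:.1f} per GB of RAM, "
          f"${s['price'] / s['storage_gb']:.2f} per GB of storage"
          + (" (SSD)" if s["ssd"] else ""))
```

Raw dollars-per-gigabyte of course ignores the SSD-versus-HDD speed gap and the CPU differences the article discusses, so it is a starting point rather than a verdict.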

3 Best DropBox Alternatives

USB drives are outdated, not to mention the remnants of CD/DVD technology. The cloud is the future of digital storage and, today, everyone knows this. Among file hosting services, California-based Dropbox serves as the flagship of the industry. However, although it is the most popular, it is far from the only such platform out there. With this in mind, if you ever find yourself in desperate need of a cloud storage service other than Dropbox, here are three completely adequate alternatives.

1.   OneDrive

If there is one thing Microsoft excels at, it is making its own versions of already successful products. One quick glance at Windows Phone or the Xbox is all you need to back up this statement. In that spirit, OneDrive is nothing more than Microsoft's Dropbox. The service offers 15 GB of free storage space, which means you will probably have enough room for most of your photos, videos, and even music. Its greatest downside is that files in it are not sorted by any logical pattern.

2.   Mega.co.nz

Once upon a time, Kim Dotcom's Megaupload was almost as big as The Pirate Bay. It served much the same purpose, which ultimately brought about its demise. From its ashes, a mighty haven of piracy, Mega.co.nz, rose like a phoenix. Today, Mega can pride itself on over 15 million registered users worldwide. Unlike the aforementioned OneDrive, this particular file hosting service offers up to 50 GB of free space. Still, its founder Kim Dotcom is no longer part of Mega's team; he has gone his separate way and is currently working on a brand-new version of Megaupload.

3.   pCloud

One of the greatest issues with cloud storage is security, and this is what puts pCloud head and shoulders above the rest. The industry has been polluted with myths and misconceptions that the cloud is not safe enough for businesses. To demonstrate the security of their cloud storage, the team behind pCloud came up with a crypto challenge: somewhere on their network sat an encrypted folder with a hidden file, and anyone able to decrypt it would get $100,000. After six months and over 2,800 hacking attempts, the challenge remained unbeaten.
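pCloud's actual cipher is not described here, but the zero-knowledge idea behind such a challenge can be sketched in a few lines: the server stores only ciphertext, and without the client-held key the data is opaque. The toy one-time-pad below is purely an illustration of that asymmetry; real services use vetted ciphers such as AES, not this.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """One-time-pad style XOR; the same call both encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, key))

plaintext = b"contents of the hidden file"
key = secrets.token_bytes(len(plaintext))   # never leaves the client
ciphertext = xor_bytes(plaintext, key)      # this is all the server stores

# Without the key the ciphertext reveals nothing; with it, decryption is
# trivial -- the asymmetry a decrypt-this-folder challenge relies on.
recovered = xor_bytes(ciphertext, key)
print(recovered == plaintext)
```

The point of the sketch is the trust model, not the math: even a compromised server leaks only ciphertext when encryption happens client-side.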

Programmable Network Routers

Like all data networks, the networks that connect servers in giant server farms, or servers and workstations in large organizations, are prone to congestion. When network traffic is heavy, packets of data can get backed up at network routers or dropped altogether.

Also like all data networks, big private networks have control algorithms for managing network traffic during periods of congestion. But because the routers that direct traffic in a server farm need to be superfast, the control algorithms are hardwired into the routers’ circuitry. That means that if someone develops a better algorithm, network operators have to wait for a new generation of hardware before they can take advantage of it.

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and five other organizations hope to change that, with routers that are programmable but can still keep up with the blazing speeds of modern data networks. The researchers outline their system in a pair of papers being presented at the annual conference of the Association for Computing Machinery’s Special Interest Group on Data Communication.

“This work shows that you can achieve many flexible goals for managing traffic, while retaining the high performance of traditional routers,” says Hari Balakrishnan, the Fujitsu Professor in Electrical Engineering and Computer Science at MIT. “Previously, programmability was achievable, but nobody would use it in production, because it was a factor of 10 or even 100 slower.”

“You need to have the ability for researchers and engineers to try out thousands of ideas,” he adds. “With this platform, you become constrained not by hardware or technological limitations, but by your creativity. You can innovate much more rapidly.”

The first author on both papers is Anirudh Sivaraman, an MIT graduate student in electrical engineering and computer science, advised by both Balakrishnan and Mohammad Alizadeh, the TIBCO Career Development Assistant Professor in Electrical Engineering and Computer Science at MIT, who are coauthors on both papers. They’re joined by colleagues from MIT, the University of Washington, Barefoot Networks, Microsoft Research, Stanford University, and Cisco Systems.

Different strokes

Traffic management can get tricky because of the different types of data traveling over a network, and the different types of performance guarantees offered by different services. With Internet phone calls, for instance, delays are a nuisance, but the occasional dropped packet — which might translate to a missing word in a sentence — could be tolerable. With a large data file, on the other hand, a slight delay could be tolerable, but missing data isn’t.

Similarly, a network may guarantee equal bandwidth distribution among its users. Every router in a data network has its own memory bank, called a buffer, where it can queue up packets. If one user has filled a router’s buffer with packets from a single high-definition video, and another is trying to download a comparatively tiny text document, the network might want to bump some of the video packets in favor of the text, to help guarantee both users a minimum data rate.
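That bump-the-video-packets idea can be sketched as a simple "longest queue drop" discipline: per-flow queues share one buffer, and when the buffer fills, a packet is dropped from the longest queue. This is a toy illustration of the policy the paragraph describes, not the researchers' actual scheme; the buffer size and flow names are invented.

```python
from collections import defaultdict, deque

BUFFER_LIMIT = 8          # total packets the router can hold (toy number)
queues = defaultdict(deque)   # one FIFO queue per flow
total = 0

def enqueue(flow, packet):
    """Admit a packet; when the shared buffer is full, drop a packet from
    the longest queue, so a bulk flow yields to a tiny one."""
    global total
    if total >= BUFFER_LIMIT:
        victim = max(queues, key=lambda f: len(queues[f]))
        queues[victim].popleft()   # drop one packet from the hog
        total -= 1
    queues[flow].append(packet)
    total += 1

for i in range(10):
    enqueue("video", f"v{i}")   # greedy high-definition video flow
enqueue("text", "t0")           # the small text download still gets buffered

print(len(queues["video"]), len(queues["text"]))
```

After the burst, the video flow has given up one buffer slot to the text flow, which is exactly the minimum-data-rate guarantee the paragraph motivates.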

A router might also want to modify a packet to convey information about network conditions, such as whether the packet encountered congestion, where, and for how long; it might even want to suggest new transmission rates for senders.

Computer scientists have proposed hundreds of traffic management schemes involving complex rules for determining which packets to admit to a router and which to drop, in what order to queue the packets, and what additional information to add to them — all under a variety of different circumstances. And while in simulations many of these schemes promise improved network performance, few of them have ever been deployed, because of hardware constraints in routers.

The MIT researchers and their colleagues set themselves the goal of finding a set of simple computing elements that could be arranged to implement diverse traffic management schemes, without compromising the operating speeds of today’s best routers and without taking up too much space on-chip.

To test their designs, they built a compiler — a program that converts high-level program instructions into low-level hardware instructions — which they used to compile seven experimental traffic-management algorithms onto their proposed circuit elements. If an algorithm wouldn’t compile, or if it required an impractically large number of circuits, they would add new, more sophisticated circuit elements to their palette.

Constant Connection

For most of the 20th century, the paradigm of wireless communication was a radio station with a single high-power transmitter. As long as you were within 20 miles or so of the transmitter, you could pick up the station.

With the advent of cell phones, however, and even more so with Wi-Fi, the paradigm became a large number of scattered transmitters with limited range. When a user moves out of one transmitter’s range and into another’s, the network has to perform a “handoff.” And as anyone who’s lost a cell-phone call in a moving car or lost a Wi-Fi connection while walking to the bus stop can attest, handoffs don’t always happen as they should.

Most new phones, however, have built-in motion sensors — GPS receivers, accelerometers and, increasingly, gyros. At the Eighth Usenix Symposium on Networked Systems Design and Implementation, which took place in Boston in March, MIT researchers presented a set of new communications protocols that use information about a portable device’s movement to improve handoffs. In experiments on MIT’s campus-wide Wi-Fi network, the researchers found that their protocols could often improve network throughput (the amount of information that devices could send and receive in a given period) by about 50 percent for users on the move.

The MIT researchers — graduate student Lenin Ravindranath, Professor Hari Balakrishnan, Associate Professor Sam Madden, and postdoctoral associate Calvin Newport, all of the Computer Science and Artificial Intelligence Laboratory — used motion detection to improve four distinct communications protocols. One governs the smart phone’s selection of the nearest transmitter. “Let’s say you get off at the train station and start walking toward your office,” Balakrishnan says. “What happens today is that your phone immediately connects to the Wi-Fi access point with the strongest signal. But by the time it’s finished doing that, you’ve walked on, so the best access point has changed. And that keeps happening.”

By contrast, Balakrishnan explains, the new protocol selects an access point on the basis of the user’s inferred trajectory. “We connect you off the bat to an access point that has this trade-off between how long you’re likely to be connected to it and the throughput you’re going to get,” he says. In their experiments, the MIT researchers found that, with one version of their protocol, a moving cell phone would have to switch transmitters 40 percent less frequently than it would with existing protocols. A variation of the protocol improved throughput by about 30 percent.
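The trade-off Balakrishnan describes, expected connection lifetime versus throughput, can be sketched as a scoring function. Everything below is invented for illustration (positions, ranges, and throughputs are hypothetical, and the real protocol infers trajectory from motion sensors rather than taking speed as a given):

```python
# Hypothetical access points along a walking user's path: distance to the
# edge of each AP's coverage, and the throughput it offers.
aps = [
    {"name": "lobby",    "edge_m": 5,  "mbps": 60},  # strongest signal right now
    {"name": "corridor", "edge_m": 60, "mbps": 35},  # weaker, but along our path
]

def pick_ap(aps, speed_mps, horizon_s=20.0):
    """Choose the AP that maximizes data delivered before the user walks
    out of its range, capped at a planning horizon. Toy 1-D model."""
    def payoff(ap):
        seconds = horizon_s if speed_mps <= 0 else min(ap["edge_m"] / speed_mps, horizon_s)
        return ap["mbps"] * seconds
    return max(aps, key=payoff)

# A stationary user gets the fastest AP; a walker gets the longer-lived one.
print(pick_ap(aps, speed_mps=0)["name"])
print(pick_ap(aps, speed_mps=1.5)["name"])
```

With the numbers above, a stationary user connects to the strong "lobby" AP, while someone walking at 1.5 m/s is better served by "corridor", which is exactly the strongest-signal-isn't-best effect the train-station anecdote describes.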

How Computers Have Changed Our Lives


The invention of the computer is one of the most remarkable innovations of the last hundred years. The modern world is deemed digital; what most people fail to appreciate, however, is that the source of this digital life is the computer. Gone are the days when things were done manually. Today, at the click of a button, rockets are launched, ICU life support is run, and instant communication is enabled, to mention but a few examples.

Computers are defined as programmable machines with two key features: they respond to a specific, well-defined set of instructions (given by a human), and they can execute a pre-recorded list of instructions, usually referred to as a program. Computers therefore execute exactly what they have been instructed to do.

Computers have evolved over the years from static mainframes to the portable modern machines we use today. Modern computers are both electronic and digital. They consist of the actual machinery, such as wires, circuits, and transistors, referred to as hardware, and the data and instructions fed into the computer, collectively referred to as software.
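To make the "pre-recorded list of instructions" idea concrete, here is a minimal illustration: a three-step program that the machine follows in order, exactly as written. The instruction names are invented for this example.

```python
# A program is just a pre-recorded list of instructions executed in order.
program = [
    ("load", 2),
    ("add", 3),
    ("multiply", 4),
]

accumulator = 0
for op, arg in program:
    if op == "load":
        accumulator = arg
    elif op == "add":
        accumulator += arg
    elif op == "multiply":
        accumulator *= arg

print(accumulator)  # (2 + 3) * 4 = 20
```

The loop is the "hardware" here and the list is the "software": changing the list changes what the machine does without touching the loop.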

Components of the Computer

The main components that make up the computers are:

 Memory: enables computers to store data and programs.

 Mass storage device: this is commonly referred to as the hard disk.

 Input devices: such as the computer keyboard and mouse.

 Output devices: such as the screen.

 CPU: which is the heart of the computer and is responsible for all executions.

Benefits of Computers

Different sectors have benefited from the use of computers.

i. Computers have been of tremendous advantage to businesses and to how business is conducted in every sector. Technological advancement has been so remarkable that companies that have not yet incorporated computers and computer systems into their day-to-day activities are at a great disadvantage compared to their competitors. The business world uses computers for organization, self-sufficiency, reducing costs, speeding up transactions, and managing sales.

ii. In the academic world, teaching and learning have shifted from manual, exhausting modes of learning to computerized versions. Unlike with traditional methods of teaching, lecturers and teachers today use PowerPoint presentations: they save the slides on their computers and project them on screens. This is a more efficient mode of teaching, as it allows for bigger audiences. Another great advantage is for students, who now use online learning facilities to study new material as well as to do research. There is no longer any need to walk miles to a physical library, because they can access academic material and online libraries from their computers.

iii. In the medical industry, emerging technologies and computer developments have been a significant advantage. Life-support systems all run on computers. Additionally, patients' records and databases can be saved once on computers and accessed each time the patient visits the hospital.

New Trends

The 21st century has been marked by dynamic trends as far as computers are concerned. The capabilities of computers have expanded so much that it is hard to imagine what life would be like if they ceased to exist.

Some of the most remarkable trends include:

Computers have become intuitive; they now have the ability to learn, to recognize what human beings want, and even to know our identities.

Computer chips are everywhere, and they have become almost invisible due to their small size, in contrast to the traditionally bigger chips.

Computers are now able to manage important global systems. Some of which include food production and transport.

Today, online computer resources allow us to download applications via wireless access anywhere, anytime, at our convenience.

Computers have become voice-activated, video-enabled, and networked together thanks to the internet, opening the door to myriad functionalities.

Computers today have digital senses such as speech that enables them to communicate with human beings and other computers.

Finally, human and computer evolution have converged.

More Resilient Networks of Programmable Routers

Assessments

In one of the two new papers, the researchers provide specifications for seven circuit types, each of which is slightly more complex than the last. Some simple traffic management algorithms require only the simplest circuit type, while others require more complex types. But even a bank of the most complex circuits would take up only 4 percent of the area of a router chip; a bank of the least complex types would take up only 0.16 percent.

Beyond the seven algorithms they used to design their circuit elements, the researchers ran several other algorithms through their compiler and found that they compiled to some combination of their simple circuit elements.

“We believe that they’ll generalize to many more,” says Sivaraman. “For instance, one of the circuits allows a programmer to track a running sum — something that is employed by many algorithms.”
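As an illustration of such a primitive, here is a software stand-in for a per-flow running-sum element. The papers implement these as fixed hardware circuits updated once per packet; this sketch only mimics the behavior, and the class and field names are our own.

```python
class RunningSumAtom:
    """Software stand-in for a stateful circuit element that keeps a
    per-flow running sum (e.g. bytes seen), updated once per packet."""

    def __init__(self):
        self.state = {}   # per-flow accumulator, as on-chip state would be

    def update(self, flow, value):
        """Apply one packet's worth of state update and return the new sum."""
        self.state[flow] = self.state.get(flow, 0) + value
        return self.state[flow]

atom = RunningSumAtom()
for size in (1500, 1500, 600):          # three packet sizes on one flow
    total = atom.update("flow-A", size)
print(total)  # 3600
```

Many traffic-management algorithms reduce to reads and writes against small pieces of state like this, which is why a modest palette of such elements can cover so many schemes.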

In the second paper, they describe the design of their scheduler, the circuit element that orders packets in the router’s queue and extracts them for forwarding. In addition to queuing packets according to priority, the scheduler can also stamp them with particular transmission times and forward them accordingly. Sometimes, for instance, it could be useful for a router to slow down its transmission rate, in order to prevent bottlenecks elsewhere in the network, or to help ensure equitable bandwidth distribution.
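The queueing discipline behind that scheduler can be sketched in a few lines: packets are stamped with transmission times and released in stamp order. The real design is a hardware circuit forwarding a packet every nanosecond; this priority-queue sketch only illustrates the ordering behavior, with invented packet names and times.

```python
import heapq

class Scheduler:
    """Toy scheduler: packets carry a transmission-time stamp and are
    released in stamp order, letting a router pace its own output."""

    def __init__(self):
        self.queue = []   # min-heap ordered by send time

    def enqueue(self, send_time, packet):
        heapq.heappush(self.queue, (send_time, packet))

    def dequeue_ready(self, now):
        """Release every packet whose stamped time has arrived."""
        out = []
        while self.queue and self.queue[0][0] <= now:
            out.append(heapq.heappop(self.queue)[1])
        return out

s = Scheduler()
s.enqueue(3.0, "p1")
s.enqueue(1.0, "p2")
s.enqueue(2.0, "p3")
print(s.dequeue_ready(now=2.5))  # ['p2', 'p3']
```

Stamping a packet with a later time is how the scheduler slows its transmission rate, which is the bottleneck-avoidance use the paragraph mentions.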

Finally, the researchers drew up specifications for their circuits in Verilog, the language electrical engineers typically use to design commercial chips. Verilog’s built-in analytic tools verified that a router using the researchers’ circuits would be fast enough to support the packet rates common in today’s high-speed networks, forwarding a packet of data every nanosecond.

Cooling Method for Supercomputers

Based on that principle, Martinez — engineering project lead for Sandia’s infrastructure computing services — is helping design and monitor a cooling system expected to save 4 million to 5 million gallons annually in New Mexico if installed next year at Sandia’s computing center, and hundreds of millions of gallons nationally if the method is widely adopted. It’s now being tested at the National Renewable Energy Laboratory in Colorado, which expects to save a million gallons annually.

The system, built by Johnson Controls and called the Thermosyphon Cooler Hybrid System, cools like a refrigerator without the expense and energy needs of a compressor.

Currently, many data centers use water to remove waste heat from servers. The warmed water is piped to cooling towers, where a separate stream of water is turned to mist and evaporates into the atmosphere. Like sweat evaporating from the body, the process removes heat from the piped water, which returns to chill the installation. But large-scale replenishment of the evaporated water is needed to continue the process. Thus, an increasing amount of water will be needed worldwide to evaporate heat from the growing number of data centers, which themselves are increasing in size as more users put information into the cloud.

“My job is to eventually put cooling towers out of business,” Martinez said.

“Ten years ago, I gave a talk on the then-new approach of using water to directly cool supercomputers. There were 30 people at the start of my lecture and only 10 at the end.

“‘Dave,’ they said, ‘no way water can cool a supercomputer. You need air.’

“So now most data centers use water to cool themselves, but I’m always looking at the future and I see refrigerant cooling coming in for half the data centers in the U.S., north and west of Texas, where the climate will make it work.”

The prototype method uses a liquid refrigerant instead of water to carry away heat. The system works like this: Water heated by the computing center is pumped within a closed system into proximity with another system containing refrigerant. The refrigerant absorbs heat from the water so that the water, now cooled, can circulate to cool again. Meanwhile the heated refrigerant vaporizes and rises in its closed system to exchange heat with the atmosphere. As heat is removed from the refrigerant, it condenses and sinks to absorb more heat, and the cycle repeats.

“There’s no water loss like there is in a cooling tower that relies on evaporation,” Martinez said. “We also don’t have to add chemicals such as biocides, another expense. This system does not utilize a compressor, which would incur more costs. The system utilizes phase-changing refrigerant and only requires outside air that’s cool enough to absorb the heat.”

In New Mexico, that would occur in spring, fall and winter, saving millions of gallons of water.

In summer, the state’s ambient temperature is high enough that a cooling tower or some method of evaporation could be used. But more efficient computer architectures can raise the acceptable temperature for servers to operate and make the occasional use of cooling towers even less frequent.

“If you don’t have to cool a data center to 45 degrees Fahrenheit but instead only to 65 to 80 degrees, then a warmer outside air temperature — just a little cooler than the necessary temperature in the data center — could do the job,” Martinez said.

For indirect air cooling in a facility, better design brings the correct amount of cooling to the right location, allowing operating temperatures to be raised and allowing the refrigerant cycle to be used more during the year. “At Sandia, we used to have to run at 45 degrees Fahrenheit. Now we’re at 65 to 78. We arranged for air to flow more smoothly instead of ignoring whorls as it cycled in open spaces. We did that by working with supercomputer architects and manufacturers of cooling units so they designed more efficient air-flow arrangements. Also, we installed fans sensitive to room temperature, so they slow down as the room cools from decreased computer usage and go faster as computer demand increases. This results in a more efficient and economical way to circulate air in a data center.”
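The temperature-sensitive fan behavior Martinez describes amounts to a simple control law. The sketch below uses the 65–80 °F operating band quoted above as its setpoints, but the linear mapping itself is our illustration, not Sandia's actual controller.

```python
def fan_speed_pct(room_temp_f, low_f=65.0, high_f=80.0):
    """Scale fan speed linearly with room temperature: idle at the low
    setpoint, flat-out at the high one, clamped outside the band."""
    frac = (room_temp_f - low_f) / (high_f - low_f)
    return round(100 * min(1.0, max(0.0, frac)))

print(fan_speed_pct(65))    # 0   -- room cool, fans idle
print(fan_speed_pct(72.5))  # 50  -- mid-band, fans at half speed
print(fan_speed_pct(85))    # 100 -- hot room, fans flat-out
```

Slowing the fans as the room cools is where the efficiency comes from: fan power falls steeply with speed, so matching airflow to demand beats running fans at a fixed rate.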

Big jobs that don’t have to be completed immediately can be scheduled at night when temperatures are cooler.

“Improving efficiencies inside a system raises efficiencies in the overall system,” Martinez said. “That saves still more water by allowing more use of the water-saving refrigerant system.”