Monthly Archives: April 2016

Programmable Network Routers

Like all data networks, the networks that connect servers in giant server farms, or servers and workstations in large organizations, are prone to congestion. When network traffic is heavy, packets of data can get backed up at network routers or dropped altogether.

Also like all data networks, big private networks have control algorithms for managing network traffic during periods of congestion. But because the routers that direct traffic in a server farm need to be superfast, the control algorithms are hardwired into the routers’ circuitry. That means that if someone develops a better algorithm, network operators have to wait for a new generation of hardware before they can take advantage of it.

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and five other organizations hope to change that, with routers that are programmable but can still keep up with the blazing speeds of modern data networks. The researchers outline their system in a pair of papers being presented at the annual conference of the Association for Computing Machinery’s Special Interest Group on Data Communication.

“This work shows that you can achieve many flexible goals for managing traffic, while retaining the high performance of traditional routers,” says Hari Balakrishnan, the Fujitsu Professor in Electrical Engineering and Computer Science at MIT. “Previously, programmability was achievable, but nobody would use it in production, because it was a factor of 10 or even 100 slower.”

“You need to have the ability for researchers and engineers to try out thousands of ideas,” he adds. “With this platform, you become constrained not by hardware or technological limitations, but by your creativity. You can innovate much more rapidly.”

The first author on both papers is Anirudh Sivaraman, an MIT graduate student in electrical engineering and computer science, advised by both Balakrishnan and Mohammad Alizadeh, the TIBCO Career Development Assistant Professor in Electrical Engineering and Computer Science at MIT, who are coauthors on both papers. They’re joined by colleagues from MIT, the University of Washington, Barefoot Networks, Microsoft Research, Stanford University, and Cisco Systems.

Different strokes

Traffic management can get tricky because of the different types of data traveling over a network, and the different types of performance guarantees offered by different services. With Internet phone calls, for instance, delays are a nuisance, but the occasional dropped packet — which might translate to a missing word in a sentence — could be tolerable. With a large data file, on the other hand, a slight delay could be tolerable, but missing data isn’t.

Similarly, a network may guarantee equal bandwidth distribution among its users. Every router in a data network has its own memory bank, called a buffer, where it can queue up packets. If one user has filled a router’s buffer with packets from a single high-definition video, and another is trying to download a comparatively tiny text document, the network might want to bump some of the video packets in favor of the text, to help guarantee both users a minimum data rate.
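That bump-the-video-packets idea can be caricatured in a few lines of code. The sketch below is purely illustrative (the buffer size, flow names, and eviction rule are assumptions, not any router's actual policy): when the buffer is full, it drops a packet from whichever flow currently occupies the most buffer space, so one heavy sender cannot starve a light one.

```python
from collections import deque, defaultdict

BUFFER_LIMIT = 8  # max packets the router can queue (illustrative)

class Router:
    """Toy router buffer: when full, evict a packet from the flow
    occupying the most buffer space, so no single sender can
    starve the others."""

    def __init__(self):
        self.queue = deque()              # packets in arrival order
        self.per_flow = defaultdict(int)  # packets queued per flow

    def enqueue(self, flow_id):
        if len(self.queue) >= BUFFER_LIMIT:
            # Evict from the flow hogging the buffer, not the newcomer.
            heaviest = max(self.per_flow, key=self.per_flow.get)
            self.queue.remove(heaviest)   # drop one of its packets
            self.per_flow[heaviest] -= 1
        self.queue.append(flow_id)
        self.per_flow[flow_id] += 1
```

With this rule, a tiny text download still gets buffer space even after a video stream has filled the queue.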

A router might also want to modify a packet to convey information about network conditions, such as whether the packet encountered congestion, where, and for how long; it might even want to suggest new transmission rates for senders.
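One widely deployed instance of this idea is Explicit Congestion Notification (ECN), in which a router sets a bit in a packet's header when its queue is deep, instead of dropping the packet. A minimal sketch, with an assumed threshold and field name:

```python
MARK_THRESHOLD = 5  # queue depth above which packets get marked (illustrative)

def forward(packet, queue_depth):
    """Set a congestion flag on the packet when the router's queue is
    deep, so the receiver can tell the sender to slow down (the idea
    behind ECN-style marking)."""
    if queue_depth > MARK_THRESHOLD:
        packet["congestion_experienced"] = True
    return packet
```

The sender, seeing the echoed flag, reduces its transmission rate before any data is actually lost.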

Computer scientists have proposed hundreds of traffic management schemes involving complex rules for determining which packets to admit to a router and which to drop, in what order to queue the packets, and what additional information to add to them — all under a variety of different circumstances. And while in simulations many of these schemes promise improved network performance, few of them have ever been deployed, because of hardware constraints in routers.
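Even queue ordering alone admits many policies. One of the simplest, strict-priority scheduling, can be sketched as follows (a hedged illustration, not one of the seven algorithms the researchers compiled):

```python
import heapq

class PriorityScheduler:
    """Dequeue the packet with the lowest priority number first;
    a running counter breaks ties so equal-priority packets
    leave in arrival order."""

    def __init__(self):
        self.heap = []
        self.counter = 0

    def push(self, priority, packet):
        heapq.heappush(self.heap, (priority, self.counter, packet))
        self.counter += 1

    def pop(self):
        return heapq.heappop(self.heap)[2]
```

Here a delay-sensitive voice packet arriving after a bulk-transfer packet would still be sent first.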

The MIT researchers and their colleagues set themselves the goal of finding a set of simple computing elements that could be arranged to implement diverse traffic management schemes, without compromising the operating speeds of today’s best routers and without taking up too much space on-chip.

To test their designs, they built a compiler — a program that converts high-level program instructions into low-level hardware instructions — which they used to compile seven experimental traffic-management algorithms onto their proposed circuit elements. If an algorithm wouldn’t compile, or if it required an impractically large number of circuits, they would add new, more sophisticated circuit elements to their palette.

Constant Connection

For most of the 20th century, the paradigm of wireless communication was a radio station with a single high-power transmitter. As long as you were within 20 miles or so of the transmitter, you could pick up the station.

With the advent of cell phones, however, and even more so with Wi-Fi, the paradigm became a large number of scattered transmitters with limited range. When a user moves out of one transmitter’s range and into another’s, the network has to perform a “handoff.” And as anyone who’s lost a cell-phone call in a moving car or lost a Wi-Fi connection while walking to the bus stop can attest, handoffs don’t always happen as they should.

Most new phones, however, have built-in motion sensors — GPS receivers, accelerometers and, increasingly, gyros. At the Eighth Usenix Symposium on Networked Systems Design and Implementation, which took place in Boston in March, MIT researchers presented a set of new communications protocols that use information about a portable device’s movement to improve handoffs. In experiments on MIT’s campus-wide Wi-Fi network, the researchers found that, for users on the move, their protocols could often improve network throughput (the amount of information that devices can send and receive in a given period) by about 50 percent.

The MIT researchers — graduate student Lenin Ravindranath, Professor Hari Balakrishnan, Associate Professor Sam Madden, and postdoctoral associate Calvin Newport, all of the Computer Science and Artificial Intelligence Laboratory — used motion detection to improve four distinct communications protocols. One governs the smart phone’s selection of the nearest transmitter. “Let’s say you get off at the train station and start walking toward your office,” Balakrishnan says. “What happens today is that your phone immediately connects to the Wi-Fi access point with the strongest signal. But by the time it’s finished doing that, you’ve walked on, so the best access point has changed. And that keeps happening.”

By contrast, Balakrishnan explains, the new protocol selects an access point on the basis of the user’s inferred trajectory. “We connect you off the bat to an access point that has this trade-off between how long you’re likely to be connected to it and the throughput you’re going to get,” he says. In their experiments, the MIT researchers found that, with one version of their protocol, a moving cell phone would have to switch transmitters 40 percent less frequently than it would with existing protocols. A variation of the protocol improved throughput by about 30 percent.
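The trade-off Balakrishnan describes can be caricatured as ranking each visible access point by expected throughput weighted by how long the user's inferred trajectory keeps them in range. Everything below (field names, the scoring formula, the sample numbers) is an assumption for illustration, not the protocol from the paper:

```python
def best_access_point(access_points):
    """Pick the AP maximizing expected data transferred: throughput
    (Mbit/s) times predicted seconds in range. Illustrative scoring."""
    return max(access_points,
               key=lambda ap: ap["throughput_mbps"] * ap["expected_dwell_s"])

# Hypothetical candidates seen by a user walking away from the lobby.
aps = [
    {"name": "lobby", "throughput_mbps": 54, "expected_dwell_s": 3},
    {"name": "hall",  "throughput_mbps": 24, "expected_dwell_s": 20},
]
```

A signal-strength-only policy would pick the fast lobby AP; weighting by dwell time favors the slower AP the user is walking toward, avoiding an immediate second handoff.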

How Computers Have Changed Our Lives


The invention of the computer is one of the most remarkable innovations of the last ten decades. The modern world is deemed digital, but what most people fail to appreciate is that the computer is the source of that digital life. Gone are the days when tasks were carried out manually. Today, at the click of a button, rockets are launched, ICU life support is run, and instant communication is enabled, to mention but a few examples.

Computers are defined as programmable machines with two key features: they respond to a specific, well-defined set of instructions (given by a human), and they can execute a pre-recorded list of instructions, usually referred to as a program. Computers therefore execute exactly what they have been instructed to do.
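The "pre-recorded list of instructions" in that definition can be made concrete with a toy interpreter (entirely illustrative) that steps through a stored program one instruction at a time:

```python
def run(program):
    """Execute a pre-recorded list of instructions in order,
    as the definition of a program describes."""
    accumulator = 0
    for op, value in program:  # the machine blindly follows each step
        if op == "add":
            accumulator += value
        elif op == "multiply":
            accumulator *= value
    return accumulator

# The "program": a stored list of well-defined instructions.
instructions = [("add", 2), ("multiply", 5), ("add", 1)]
```

The machine does nothing more and nothing less than what the instruction list specifies.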

Computers have evolved over the years from static mainframe computers to the portable modern computers we use today. Modern computers are both electronic and digital; they consist of the actual machinery, such as wires, circuits, and transistors, referred to as hardware, and the data and instructions fed into the computer, collectively referred to as software.

Components of the Computer

The main components that make up a computer are:

- Memory: enables the computer to store data and programs.
- Mass storage device: commonly referred to as the hard disk.
- Input devices: such as the keyboard and mouse.
- Output devices: such as the screen.
- CPU: the heart of the computer, responsible for executing instructions.

Benefits of Computers

Different sectors have benefited from the use of computers.

i. Businesses have benefited tremendously from computers and from the way business is now conducted in their respective sectors. Technological advancement has been so remarkable that companies that have not yet incorporated computers and computer systems into their day-to-day activities are at a great disadvantage compared with their competitors. The business world uses computers for organization, self-sufficiency, reducing costs, speeding up transactions, and managing sales.

ii. In the academic world, teaching and learning have shifted from manual, exhausting modes to computerized ones. Unlike with traditional methods, lecturers and teachers today use PowerPoint presentations: they save the slides on their computers and project them onto screens, a more efficient mode of teaching that accommodates larger audiences. Students benefit as well, using online learning facilities to study new material and to do research. There is no longer any need to walk miles to a physical library, because academic material and online libraries are accessible from their computers.

iii. In the medical industry, emerging technologies and computer developments have been of significant advantage. Life-support systems all run on computers, and patients’ records can be saved once in hospital databases and accessed each time the patient visits.

New Trends

The 21st century has been marked by dynamic trends as far as computers are concerned. The capabilities of computers have expanded so far that it is hard to imagine what life would be like if they ceased to exist.

Some of the most remarkable trends include:

Computers have become intuitive; they now have the ability to learn, to recognize us, and to know what we want as well as who we are.

Computer chips are everywhere and have become almost invisible due to their small size, in contrast to the bigger chips of the past.

Computers are now able to manage important global systems, including food production and transport.

Today, online computer resources allow us to download applications over wireless connections anywhere, anytime, at our convenience.

Computers have become voice-activated, video-enabled, networked, and connected to one another through the internet, opening the door to myriad functionalities.

Computers today have digital senses, such as speech, that enable them to communicate with human beings and with other computers.

Finally, human and computer evolution have converged.