Inception

J.C.R. Licklider envisions a world of networked computers. A group of pioneers turn his idea into a reality.

To understand the internet in its current form, we need to travel back in time to its inception.

The idea of networked computers starts with a man named J.C.R. Licklider, a psychologist. He was fascinated by the first computers and their vast potential. In 1960, Licklider wrote “Man-Computer Symbiosis,” which describes his vision for a complementary relationship between humans and computers in the future.

Licklider foresaw a period, before machines could operate without human direction, in which human brains and computers would be tightly coupled, forming a symbiotic human-computer relationship. He saw this symbiosis as beneficial for solving problems through play. The computer would “augment human intellect by freeing it from mundane tasks.” (https://en.wikipedia.org/wiki/Man-Computer_Symbiosis#cite_note-8)

So what did the computer look like in the middle of the 20th century?

The first computers were massive, filling up entire rooms with their components. They were programmed, or told to complete a specific task, through the use of plugs and switches that were manually configured by humans.

The first machines were built for specific tasks. For example, Colossus was a code-breaking machine developed by the British in 1943. The people who programmed these machines were themselves called “computers,” and configuring a machine to solve a new problem could take many days.

The first general-purpose electronic computer was ENIAC, unveiled in 1946. This computer was still programmed through a series of switches and connections set by humans.

When J.C.R. Licklider wrote “Man-Computer Symbiosis,” the state of the art was the PDP-1, an early “minicomputer.” Its memory was 4,096 18-bit words, its processor ran at 187 kilohertz, and the machine weighed about 1,600 pounds.

The computer you use today probably has a 2 gigahertz processor. One gigahertz is equal to one million kilohertz. By direct comparison, the PDP-1 ran at 0.000187 gigahertz. That’s roughly two hundredths of one percent of a gigahertz.

The PDP-1 is also famous as part of the hacker culture at MIT and BBN. It’s the machine on which one of the first video games, Spacewar!, was created by Steve Russell and his collaborators.

A computer today has multiple processors. The CPU, the central processing unit, in a 2020 MacBook has 4 cores on a single chip. So we’re not talking about 2 million kilohertz of power; we’re talking about 8 million kilohertz of power. And that’s an average machine, not a top-of-the-line gaming rig.
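
If you want to check that arithmetic yourself, here is a minimal sketch in Python. The clock speeds and core count are the rough figures from the paragraphs above, not measurements of any particular machine:

    pdp1_hz   = 187_000          # PDP-1: 187 kilohertz
    modern_hz = 2_000_000_000    # a typical 2 gigahertz core
    cores     = 4                # e.g. a quad-core laptop CPU

    print(pdp1_hz / 1e9)                # 0.000187 gigahertz
    print(pdp1_hz / 1e9 * 100)          # ~0.0187, about two hundredths of one percent of a gigahertz
    print(modern_hz * cores // 1_000)   # 8000000 -> "8 million kilohertz" across four cores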

The punch card. A stiff piece of paper with a sequence of holes. As early as 1725, looms were controlled with perforated paper. In the late 1800s, Herman Hollerith invented a punch card that stored data and could be read by a machine. His cards were used in the 1890 US census, with machines that read the holes and tabulated the numbers.

The UNIVAC I, which came to market in 1951, was the first commercial computer produced in the United States, and punched cards became the standard way to get programs into the machines of that era. Less than a decade into the punched-card era of commercial computing, with machines that weighed nearly a ton, J.C.R. Licklider predicted the networked computers that would give birth to the internet.

How did that vision turn into a reality?

It starts with Robert Taylor. He met J.C.R. Licklider in 1962. Taylor’s job was funding large programs for advanced research in computing at the Information Processing Techniques Office (IPTO). This was part of the Advanced Research Projects Agency, better known as ARPA. In 1972, ARPA would become DARPA.

In Taylor’s job at the IPTO, funds were prioritized to support time-sharing: systems in which many users, each at a terminal, shared a single large computer. These terminals allowed users to work interactively instead of using punched cards.

In his office at the Pentagon, Taylor had 3 terminals: one connected to MIT, one to UC Berkeley, and one to the System Development Corporation in Santa Monica. What Taylor wanted was a network that would let him communicate with all three large computers through a single terminal.

Imagine this in your current life.

Let’s say you’re going to school in Miami and have access to a single terminal that connects to the computer where your mother works in Boston. You can communicate with your mother by using that one terminal in Miami that’s directly connected to the other computer in Boston. Your sister works for a company in San Francisco that has its own computer. The University of Miami decides it wants to access that computer as well, so it installs another terminal that’s directly connected to your sister’s company in San Francisco.

Could you imagine trying to build a social network in this situation?

At that time, there was no standard operating system, let alone protocols to allow computers to talk to each other. We take it for granted that my Windows PC can connect to the internet and send an email that shows up on your Mac laptop and can be seen on your Android phone.

In 1968, Licklider and Taylor published “The Computer as a Communication Device” which stated “In a few years, men will be able to communicate more effectively through a machine than face to face.”

Punch cards went out of style for programming computers in the mid-1970s. Today, we write programs in languages made of English words. How did that happen?

That requires us to talk about compilers. In 1944, Lt. Grace Hopper of the Navy was assigned to help a Harvard professor named Howard Aiken figure out what his recently developed computer, the Mark I, was capable of.

Grace Hopper was born in 1906. This was not a time of gender equality. Fortunately for Grace, her father wanted her to have the same education as her brother, and she turned out to be brilliant at math. And thank you, Papa Hopper, for being a feminist.

As a child, Grace wanted to join the US Navy like her grandfather, who was a Rear Admiral. It wasn’t until after the attack on Pearl Harbor in 1941, with men being called away to war, that the US military began opening more roles to women.

According to accounts, Professor Aiken wasn’t enthusiastic about having a woman on his team. However, Hopper impressed Aiken enough that he assigned her to write the operating manual for the Mark I. During this time she wrote meticulous notes and small pieces of reusable code. Her team’s notes also record a literal bug: when the later Mark II was not functioning properly, the team combed over the machine’s components and wires and found a moth interfering with a relay. This incident popularized the term “computer bug,” as well as the process of debugging.

By 1951, computers had advanced enough to have memory, and the bits of code Grace Hopper had written could be turned into what’s known as a subroutine. She tried to persuade her employer to let programmers call up the subroutines with familiar words, like “subtract sales tax from cost.”

Hopper called this a compiler. It made programming quicker, but the biggest drawback was that the programs that were compiled ran slower. Her employer at the time, Remington Rand, decided that it was better for experts to program the computers as efficiently as they could.

Grace wasn’t discouraged. She just wrote her own compiler in her spare time. Other programmers loved it. They began sending her their own snippets of code to include in the compiler. In this regard, you can consider Grace Hopper a pioneer of open source software. Her work evolved into COBOL, one of the first high-level programming languages.

Lawrence Roberts was recruited by Robert Taylor as a program manager for ARPANET and eventually became director of the IPTO after Taylor’s resignation. ARPANET was the name of the project through which Taylor set out to realize Licklider’s ideas. Lawrence Roberts is responsible for bringing the idea of packet switching to ARPANET.

Packet switching is essentially dividing computer messages into packets that are then routed independently across a network and reassembled at the destination.
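
Here is a toy sketch of the idea in Python (not real networking code, just the divide, shuffle, and reassemble steps; the message, packet size, and function names are made up for illustration):

    import random

    def to_packets(message, size=8):
        # Split a message into (sequence_number, chunk) packets.
        chunks = [message[i:i + size] for i in range(0, len(message), size)]
        return list(enumerate(chunks))

    def reassemble(packets):
        # Sort by sequence number and rejoin the chunks.
        return "".join(chunk for _, chunk in sorted(packets))

    packets = to_packets("Packets can take different routes and still arrive intact.")
    random.shuffle(packets)   # simulate packets arriving out of order
    print(reassemble(packets))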

It was invented by two men on different continents independently at about the same time.

Paul Baran worked at the RAND Corporation starting in 1959. He worked on designing a survivable communications system, one where communication would still be possible between end points in the face of damage from nuclear war. He wasn’t designing with computers in mind; instead, he was thinking about voice communication. His idea for packet switching was the complete opposite of how the telephone system of the day was designed.

Baran presented his findings to a lot of different audiences, including AT&T. The company scoffed at the idea, telling him that he didn’t know how voice communication worked. Since AT&T controlled the phone lines, this effectively killed the possibility of Baran going any further with his idea.

Donald Davies worked on packet switching specifically with computers sending messages over a network in mind. Whereas telephone communication is a fairly constant stream, he realized that computer network traffic would be “bursty,” with periods of silence. At the National Physical Laboratory (NPL) in the UK, he was able to implement a trial network.

It was Davies’ work that caught the attention of the developers of ARPANET. In 1969, when ARPA started developing the idea of an internetworked set of terminals to share computing resources, they referenced both Baran’s and Davies’ research.

Transmission Control Protocol (TCP) is the protocol that puts packet switching to work, telling devices how to break messages into packets and how to collect and reassemble them. Bob Kahn, an employee of the IPTO, is responsible for the initial ideas that turned into TCP. His work formed the basis for open-architecture networking, which would allow computers and networks all over the world to communicate with each other, regardless of the hardware or software the computers on each network used.

TCP takes care of how to communicate with and reach other computers. IP (Internet Protocol), which Vint Cerf developed with Kahn, gives each device on the network an address. Bob Kahn and Vint Cerf are considered the fathers of the internet for their invention of TCP/IP, as it is this suite of communication protocols that is used to interconnect devices on the internet.
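
To make that concrete, here is a minimal sketch in Python that opens a TCP connection to a web server and sends a bare-bones request. The host name, port, and request line are just illustrative; any reachable web server would behave similarly:

    import socket

    # Open a TCP connection to a web server (port 80 is the standard HTTP port).
    with socket.create_connection(("www.google.com", 80), timeout=5) as conn:
        # Everything sent here is broken into packets, routed by IP, and reassembled by TCP.
        conn.sendall(b"HEAD / HTTP/1.1\r\nHost: www.google.com\r\nConnection: close\r\n\r\n")
        reply = conn.recv(1024)
        print(reply.decode("ascii", errors="replace").splitlines()[0])  # e.g. HTTP/1.1 200 OK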

You have most likely seen an IP address. Behind every domain name like google.com there is a series of numbers that follows a pattern: four numbers, each with a value between 0 and 255, separated by dots.

For example, www.google.com resolves to an address like 216.58.192.36.

When you join a computer network, it tells your computer the address of a local Domain Name System (DNS) server. DNS is what lets us use words instead of numbers. Once the DNS server gives our computer the proper IP address, the network begins to route your request. It could be a few hops, it could be many more. A request for Google’s website from my browser passed through 10 servers on the way there; I don’t know for sure how many hops it took to get back to me.
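
You can ask your own machine to do this lookup. A minimal sketch in Python (the address you get back will vary by location and over time):

    import socket

    # Resolve a domain name to an IPv4 address using the system's DNS resolver.
    print(socket.gethostbyname("www.google.com"))  # e.g. 216.58.192.36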

There are just over 4 billion possible IP addresses for the entire world. If you’re trying to figure out how many more addresses the IP system that Vint Cerf helped invent will allow for, the answer is: not many. If you hear the term IPv6, it refers to an updated IP protocol whose addresses are written as eight groups of 4 hexadecimal digits, which allows for vastly more addresses.
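
The numbers follow from the address sizes: IPv4 addresses are 32 bits and IPv6 addresses are 128 bits, so the address spaces work out to:

    print(2 ** 32)    # 4294967296 -> the "over 4 billion" IPv4 addresses
    print(2 ** 128)   # 340282366920938463463374607431768211456 -> possible IPv6 addresses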

Let’s nerd out a bit. We commonly use the decimal system, which is base 10.

If you look at a number like 54, it has two digits, 5 and 4, and we understand that the 5 would change to 6 when the digit in the ones position (currently 4) counts up past 9.

Each position can hold one of ten symbols, 0 through 9. Deca.

Binary allows for 2 numeric symbols: 0 and 1.

Hexadecimal allows for 16 numeric symbols in each position. The additional symbols are the letters A through F.

Converting between numeric systems, AA in hexadecimal is equivalent to 170 in the decimal system.

The web uses hexadecimal to represent colors as a 6-digit hexadecimal value. This allows for over 16 million colors with only 6 characters.
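
If you want to play with these conversions yourself, here is a minimal sketch in Python (the example values are arbitrary):

    print(int("AA", 16))     # 170      -> hexadecimal AA in decimal
    print(int("110110", 2))  # 54       -> the decimal 54 from earlier, written in binary
    print(hex(170))          # 0xaa     -> decimal back to hexadecimal
    print(16 ** 6)           # 16777216 -> possible 6-digit hex color codes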
