The long beginning: 1820s - 1950s

Difference Engine, Analytical Engine, Turing Machine & Turing Test

by Yvette Weeks, April 10, 2025

Charles Babbage (1791-1871)
Charles Babbage, often regarded as the "father of the computer," conceived two groundbreaking mechanical calculating machines in the 19th century: the Difference Engine and the Analytical Engine.

The Difference Engine, first envisioned in 1821, was designed to automate the tedious and error-prone task of calculating mathematical tables. It functioned on the principle of finite differences, allowing it to compute polynomial functions without multiplication or division.
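
To make the idea concrete, here is a minimal modern sketch (in Python, illustrating only the mathematics, not Babbage's hardware) of how the method of finite differences tabulates a polynomial using nothing but repeated addition:

```python
# Illustrative sketch only: tabulating a polynomial by the method of finite
# differences, using repeated addition alone, the same principle the
# Difference Engine mechanized. This is not a model of Babbage's machine.

def difference_table(poly, start, count):
    """Tabulate poly(x) for x = start, start+1, ... using finite differences."""
    degree = len(poly) - 1

    def p(x):  # evaluate the polynomial directly, only to seed the table
        return sum(c * x**i for i, c in enumerate(poly))

    # Seed values: p(start), p(start+1), ..., p(start+degree)
    seed = [p(start + i) for i in range(degree + 1)]

    # Initial differences: values, 1st differences, 2nd differences, ...
    diffs = [seed]
    for _ in range(degree):
        prev = diffs[-1]
        diffs.append([b - a for a, b in zip(prev, prev[1:])])
    column = [d[0] for d in diffs]  # one value per order of difference

    results = []
    for _ in range(count):
        results.append(column[0])
        # Advance the table using additions only (one "turn of the crank"):
        for i in range(degree):
            column[i] += column[i + 1]
    return results

# p(x) = 2x^2 + 3x + 1, tabulated from x = 0
print(difference_table([1, 3, 2], 0, 6))   # [1, 6, 15, 28, 45, 66]
```

Once the first few values and their differences are seeded, every later entry falls out of additions alone, exactly the kind of repetitive work the engine's gears were meant to crank through.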

With the guidance and assistance of Dionysius Lardner, Babbage secured government funding in 1823 and began building the machine (key collaborators are listed below). However, due to technical difficulties, insufficient machining precision at the time, escalating costs, and ultimately Babbage's difficult personality, the project was abandoned in 1833 with only a portion complete. By then, Babbage was already formulating his next and more ambitious project, the Analytical Engine.

Unlike the Difference Engine, which was limited to specific calculations, the Analytical Engine was to be a general-purpose machine, featuring elements of modern computing such as a central processing unit (the “mill”), memory (the “store”), punched cards for programmability, and conditional branching. Interestingly, the punched cards were inspired by the cards used to control Jacquard looms in textile mills. This machine foreshadowed modern computers in its ability to execute any calculation given the proper instructions. However, due to its complexity, lack of continued government funding, and the limitations of 19th-century engineering, the Analytical Engine was never fully constructed.

Ada Lovelace (1815-1852)
Charles Babbage met Augusta Ada Byron, later known as Ada Lovelace, at a party in 1833, where he demonstrated a working portion of his Difference Engine. Ada was intrigued by the concept of a machine replicating human tasks at greater speed and efficiency, and soon became one of his few true disciples. However, as a high-born woman, her initial role was likely that of an interested observer — an educated aristocrat who was curious about scientific progress but hardly expected to contribute in a technical capacity. She might have been considered a future patron.

Ada was born in 1815 to the poet Lord Byron and his aristocratic wife, Annabella Milbanke (Baroness Wentworth). Her mother, determined to steer Ada away from her father’s “poetic madness,” insisted on a rigorous education in mathematics and logic — an unusual path for a woman of her social standing. Fortuitously, Ada proved naturally gifted in these subjects.

Among the collaborators on the Analytical Engine, Lovelace was not to be taken lightly. When the Italian mathematician Luigi Menabrea published an 1842 paper on the Analytical Engine, she was already deeply versed in Babbage’s ideas. She translated the paper from French at Babbage’s request, but then went far beyond simple translation, adding extensive notes and tripling the paper’s length. These notes contained what is now considered the first true computer program — an algorithm to calculate Bernoulli numbers using the Analytical Engine’s capabilities (her famous Note G).
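
For readers curious what Note G actually computed, the sketch below is a modern Python illustration of the Bernoulli numbers using a standard recurrence. It is not a transcription of Lovelace's table of Analytical Engine operations, which organized the calculation quite differently:

```python
# A modern, minimal sketch of what Note G computed: the Bernoulli numbers.
# Uses a standard recurrence and Python's Fraction type; NOT a transcription
# of Lovelace's actual sequence of Analytical Engine operations.
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return B_0 .. B_n (convention B_1 = -1/2)."""
    B = [Fraction(1)]                     # B_0 = 1
    for m in range(1, n + 1):
        # Recurrence: sum over k of C(m+1, k) * B_k = 0, for k = 0..m
        acc = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-acc / (m + 1))
    return B

print([str(b) for b in bernoulli(8)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```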

Charles Babbage was a polymath — an inventor, mathematician, engineer, and philosopher — whose brilliance was matched only by his stubbornness. Born in 1791 in London, he was educated at Trinity College, Cambridge, where he found the mathematics curriculum obsolete and took it upon himself to study more advanced works.

He co-founded the Analytical Society in 1812 to reform British mathematics by promoting continental methods, particularly Leibniz’s notation for calculus, which was more workable than the Newtonian approach then standard in England.

Babbage was deeply influenced by the Enlightenment and the Industrial Revolution, believing in the power of mechanization and rational thought to solve society’s problems. He wrote extensively on economics, statistics, and industrial efficiency, and his book On the Economy of Machinery and Manufactures (1832) laid the groundwork for later studies in operations research and production efficiency (e.g., Frederick Taylor, Frank & Lillian Gilbreth, and Henry Ford).

After the Analytical Engine was effectively shelved in the 1840s due to lack of funding and technological limitations, Babbage continued working on improvements to mechanical computing, designing a Difference Engine No. 2, which was never built in his lifetime but was successfully constructed in 1991 to tolerances achievable with 19th-century tooling (it performed perfectly, but more on that in a future installment). He also conducted research into cryptography, railway efficiency, and even the postal system. Despite his many contributions, Babbage grew increasingly bitter toward government bureaucracy and the scientific establishment, which he blamed for failing to support his work adequately. Although he disparaged Lovelace’s desire to take over fundraising and PR, he always acknowledged her genius.

Ada Lovelace's contributions to computer science remained largely forgotten after her death in 1852, until her pioneering notes were rediscovered in the early 20th century. Alan Turing is said to have referred to them when designing the Bombe during World War II. Today, she is rightly celebrated as the first computer programmer: a woman who, despite societal constraints, visualized the future of computing more accurately than her male contemporaries. Ada died young, at age 36, of uterine cancer. The second Tuesday of October is celebrated as Ada Lovelace Day in England. A first edition of her first published algorithm sold at auction in 2018 for £95,000 ($123,000 US).

Much of Babbage’s work was also forgotten after his death in 1871; nonetheless, his ideas deeply influenced future computing pioneers. These concepts, particularly a programmable machine, laid the foundation for the development of modern computers.

Several notable individuals contributed to the development of Charles Babbage’s Difference Engine and Analytical Engine, either through direct collaboration or intellectual support. The following names and bullet points are quoted directly from ChatGPT.

  1. Ada Lovelace: The Visionary Programmer (1815-1852)
    • Position: Mathematician, Translator, and Algorithm Developer
    • Engine: Analytical Engine
    • Contribution: Translated Luigi Menabrea’s paper on the Analytical Engine and added extensive notes, including the first known computer algorithm. She was also the first to fully grasp the machine’s broader potential beyond numerical calculations.
  2. Luigi Federico Menabrea: The Theorist (1809-1896)
    • Position: Mathematician and Engineer
    • Engine: Analytical Engine
    • Contribution: Published a seminal paper in 1842 describing the Analytical Engine, based on Babbage’s lectures in Turin. His work provided the foundation for Ada Lovelace’s expanded notes, which introduced the concept of programming.
  3. Joseph Clement: The Master Engineer (1779-1844)
    • Position: Chief Engineer and Machinist
    • Engine: Difference Engine
    • Contribution: A highly skilled machinist responsible for crafting precision-engineered parts for the Difference Engine. His workshop produced the only functioning portion of the device. A dispute over payment and workshop ownership led to his departure, contributing to the project being abandoned.
  4. George Scheutz (1785-1873) and Edvard Scheutz (1821-1881): The Independent Builders
    • Position: Engineers and Inventors
    • Engine: Inspired by the Difference Engine
    • Contribution: A Swedish father-son duo who successfully built a functioning difference engine in the 1850s, based on Babbage’s original design. Their machine was one of the first practical mechanical calculators used for generating mathematical tables.
  5. Dionysius Lardner: The Public Advocate (1793–1859)
    • Position: Science Communicator and Publisher
    • Engine: Difference Engine
    • Contribution: A prominent science writer and lecturer who initially promoted Babbage’s work to secure government funding. However, he later became skeptical and critical of Babbage’s failure to complete the machine.
  6. Michael Faraday: The Scientific Supporter (1791-1867)
    • Position: Physicist and Chemist
    • Engine: Indirectly supported both Difference Engine and Analytical Engine
    • Contribution: While not directly involved in the machines, Faraday was one of Babbage’s scientific peers and supported his broader scientific efforts. Babbage attended Faraday’s lectures and the two often exchanged ideas.

While Babbage was the driving force behind these machines, it was these individuals — engineers, mathematicians, and advocates — who played critical roles in attempting to bring his mechanical computers to life.

Alan Turing: The Architect of Modern Computing (1912-1954)
Alan Mathison Turing was born in 1912. At school he was soon noted for an unconventional talent for mathematics and logic, often devising his own methods of problem-solving. In today’s parlance we would say he saw “patterns as wayfinding toward the solution.”

He attended King’s College, Cambridge, where he immediately demonstrated his genius in mathematical logic. Turing’s papers were strikingly theoretical, posing questions about machines, numbers, and the nature of thought itself. These early ideas would become the roots of the digital universe.

For more than 80 years, Babbage and Lovelace had been little more than footnotes at the beginning of machine-learning history, their contributions stymied by the limitations of both mechanical tooling and public indifference. In fact, the term “computer” in the context of a machine was first used in 1897.

Never content with the topic on the blackboard, Turing sought something beyond it, something unknown, and he would spend his life finding and defining it, much like Babbage when he was at Trinity.

In 1936, his seminal paper, On Computable Numbers, was published. It introduced the world to his life’s work, the Turing Machine. Purely theoretical at the time, it followed a set of rules (prompts) to read, write, and erase symbols on an infinite “tape”: in short, the computation and manipulation of symbols according to pre-defined logical steps. Another proposal at this time was the Universal Turing Machine, positing the concept of universality: that a solitary machine could simulate any other machine — the modern idea of a computer.
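
The idea is easier to see in code. Below is a minimal, illustrative sketch of a Turing machine as a table of rules driving a read/write head over a tape. The example rules (a binary incrementer) are my own invention for illustration and are not drawn from the 1936 paper:

```python
# A minimal sketch of a Turing machine: a finite table of rules that reads,
# writes, and moves over an (effectively) unbounded tape.
from collections import defaultdict

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    cells = defaultdict(lambda: blank, enumerate(tape))  # "infinite" tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells[head]
        write, move, state = rules[(state, symbol)]   # look up the rule
        cells[head] = write                           # write a symbol
        head += 1 if move == "R" else -1              # move the head
    low, high = min(cells), max(cells)
    return "".join(cells[i] for i in range(low, high + 1)).strip(blank)

# Example rules: add 1 to a binary number (head starts on the leftmost digit).
rules = {
    ("start", "0"): ("0", "R", "start"),   # scan right to the end of the number
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),   # step back onto the last digit
    ("carry", "1"): ("0", "L", "carry"),   # 1 plus carry -> 0, carry continues
    ("carry", "0"): ("1", "L", "halt"),    # 0 plus carry -> 1, done
    ("carry", "_"): ("1", "L", "halt"),    # overflow: prepend a 1
}

print(run_turing_machine(rules, "1011"))   # -> "1100"
```

Everything the machine "knows" lives in that small table of rules; universality is the further observation that one fixed table can read another machine's table off the tape and imitate it.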

Setting the stage for the age of artificial intelligence, Turing defined, mathematically, what it means for a problem to be ‘computable’. In other words, any problem that a human could solve with pencil and paper, or chalk and board, could, in theory, be solved by a machine faster and without human error. This was the first true step toward artificial intelligence. In 1936, much as in 1836, there wasn’t a ‘need’ for computers; humans were capable of the computation required at the time. Mechanized mathematics was little more than a curiosity, and barely a thought outside of theoretical circles.

But the world would not have to wait long for theories to bloom. World War II began when the Nazis invaded Poland on September 1, 1939. It was time to put theories to use. The British government recruited Turing to join the top-secret Government Code and Cypher School at Bletchley Park, where he led colleagues in designing and employing the Bombe machine to decipher and eventually break the Nazis’ Enigma cipher.

Enigma’s encryptions were reset daily, posing an almost infinite number of code possibilities. By the time Allied codebreakers mastered one key, it was obsolete. Quickly grasping the problem, Turing and Gordon Welchman designed the Bombe to work through key settings faster than humans could. He “programmed” the Bombe to simulate multiple Enigmas at once, effectively exposing enemy movements and battle plans. The Nazis were unaware of this, and prior to D-Day the Allies “allowed” a misleading invasion timeline and location to become known.

It was a trap, of course, and the enemy had to scramble as Allied forces landed at Normandy on June 6, 1944. It was a triumph of machine intelligence over human secrecy, as well as a demonstration that machines could solve problems once thought to require human intuition. Because the work at Bletchley Park was highly classified, Turing’s contributions were unknown at the time, and the breadth of his work was celebrated only after his death.

Another machine, Colossus (1943), was also employed at Bletchley Park, targeting encrypted German military traffic (in its case, the Lorenz teleprinter cipher). The brainchild of engineer Tommy Flowers, it was built with vacuum tubes, fully electronic, and much faster than the Bombe’s electromechanical design.

Most historians credit the Bletchley Park codebreakers with shortening the war by years. I researched the wartime similarities and post-war disparities between Bletchley Park and the Manhattan Project. (See notes at the conclusion of this week’s chapter. The information is thought-provoking.)

After Germany surrendered on May 7, 1945, the European Allied forces went home to rebuild their countries. Alan Turing went home to re-embark upon his goal: building real, tangible, programmable machines. He joined the National Physical Laboratory and collaborated on designing one of the first modern computers, the Automatic Computing Engine (ACE), in 1945. The full machine was never built.

ACE was the manifestation of Turing’s earlier Universal Machine. The Bombe, though highly effective, was not really a computer because it could not be programmed for other tasks; it was a super-codebreaker constructed on computational theories. ACE was designed to execute algorithmic processes using stored memory. When the Pilot ACE was powered up in 1950, it was one of the fastest machines yet built, surpassing even the American ENIAC (1945), and commercial machines based on its design soon followed in Britain. (See chart in notes.)

One of ACE’s groundbreaking aspects was its modular construction. It employed reusable subroutines, a remarkably early concept in software development. The design emphasized speed, memory organization, and stored instructions. ACE also used instruction pipelining, a basis for future CPU design.

Like Charles Babbage a century earlier, Turing had little patience for bureaucratic inertia. He left the NPL in 1947, before the Pilot ACE was completed. This in no way diminishes his profound contributions to the development of the Pilot ACE, as well as to our modern understanding of computing and artificial intelligence.

Turing’s earlier studies on patterns in nature, echoed in his work on the Bombe, led him to publish The Chemical Basis of Morphogenesis, a 1952 paper linking biology with mathematical principles of pattern formation. He proposed that interacting, diffusing chemicals could give rise to spatial order. This is important in that the concept was much more than a theoretical premise. It envisioned biological processes aligned with human-generated algorithms and feedback loops. These were all precursors to AI pattern recognition, complex modeling, the possibility of aligning with natural phenomena, and the idea that intelligence might develop from iterative rules and adaptive behaviors.
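
As a rough illustration of the idea (using the Gray-Scott model, a later, standard formulation of reaction-diffusion, not Turing's own 1952 equations), the short sketch below lets two interacting, diffusing chemicals evolve from a small perturbation; spatial structure can emerge from nothing more than local rules applied repeatedly:

```python
# Illustrative sketch: a 1-D Gray-Scott reaction-diffusion model, a standard
# modern example of two interacting, diffusing chemicals self-organizing.
import numpy as np

n, steps = 200, 10000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.060          # diffusion and reaction rates
u, v = np.ones(n), np.zeros(n)
u[90:110], v[90:110] = 0.50, 0.25                # small perturbation in the middle

def laplacian(a):
    # Discrete diffusion with periodic (wrap-around) boundaries.
    return np.roll(a, 1) + np.roll(a, -1) - 2 * a

for _ in range(steps):
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + F * (1 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v

# Crude one-line picture: '#' marks cells where the second chemical is elevated.
print("".join("#" if x > 0.2 else "." for x in v))
```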

As impressive as these accomplishments are, Turing’s most remarkable and far-reaching contribution to the evolution of Artificial Intelligence was his 1950 paper, Computing Machinery and Intelligence, in which he asked the humble yet profound question: “Can machines think?”

He did not answer the question directly. Rather, he invited speculation by proposing the Turing Test, also referred to as “the Imitation Game,” in which a machine’s “intelligence” would be determined by whether or not it could simulate human conversation so well that people could not discern if they were talking to another person or a machine. If the machine “passed,” it would be a breakthrough, and the question would be answered “Yes.”

The Turing Test was the foundation for one of the first AI developments: a chatbot called ELIZA (1966), whose creator was disturbed that a number of test subjects knew it was not human yet trusted and confided in “her,” raising the alarm about the relationship between human psychology and AI. Another, called “Eugene Goostman” (2014), simulated a 13-year-old Ukrainian boy, convinced 33 percent of the judges he was real, and was given a passing grade. Critics were not convinced, claiming the persona of a non-native speaker gave it an unfair advantage. (More on anthropomorphizing next week.)

Tragically, Alan Turing was charged with “gross indecency” in 1952 after admitting to a homosexual relationship, which was illegal at the time. England’s leaders, who had sought out his unique and brilliant mind during the war, were now prosecuting him for being gay. Found guilty, he chose chemical castration over prison and was stripped of the security clearance he needed to conduct official research. He died two years later of cyanide poisoning. His death was officially ruled a suicide, although there was evidence to suggest that the poisoning might have been accidental.

In 2013, Queen Elizabeth II granted an official royal pardon to Turing. In 2017, in response to public demand, the Policing and Crime Act extended that pardon to all those who had been prosecuted and convicted of homosexual acts that were no longer considered a crime.

Unlike Charles Babbage and Ada Lovelace, Alan Turing was never forgotten. He and others had proven the need for ever-expanding machine intelligence. The first Turing Award was presented in 1966 by the Association for Computing Machinery (ACM) and is often referred to as the “Nobel Prize of Computing.” Further honoring Turing’s legacy, the Loebner Prize (1991-present) is an annual competition in which AI chatbots compete to pass the Turing Test. There have been no winners to date.

During the 1970s, a gradual shift began to take place with respect to machines in general — particularly the Turing Test. If the original test was to see whether a machine could fool a human, the test transformed into an exercise in whether humans would accept machines as intelligent, knowing they were not human. Whom do you trust – a programmed computer or an imperfect person? Will humans accept healthcare advice from an algorithm-trained bot? How do we eliminate biases? The proper question might not be “Can machines think?” but rather “When will humans accept them as thinking beings?” These queries will be addressed in future installments.

Alan Turing did not live to see the implementation of his machine intelligence theories, i.e., that machines could self-correct from prior instructions and “learn” like humans do. Babbage, Lovelace, and Turing’s theories laid the groundwork for this science. It is up to humanity to stretch them as far as they can go, one leap at a time.