From Myth to Mathematics: How Humanity Conceived ‘Artificial Intelligence’ Before Computers
AI Genesis Chronicles · Issue 1
Exploring the Thousand-Year Intellectual Stream of Artificial Intelligence
On a crisp morning in 1801, rhythmic mechanical sounds echoed through the silk factories of Lyon, France. Workers gathered around a peculiar loom—one that required no human intervention, guided only by a series of punched cards to automatically weave complex and exquisite patterns. This Jacquard loom, invented by Joseph Marie Jacquard, prompted the same question in every observer: Did this machine possess some form of “intelligence”?
This question was not unique to Jacquard’s era. In fact, for thousands of years before the birth of computers, humanity had been exploring a profound philosophical proposition: Can machines possess wisdom? Can they think? Can they reason like humans?
What we call “artificial intelligence” today is not a sudden invention of the 1950s, but the crystallization of millennia of human thought and exploration. From ancient myths of intelligent robots to modern philosophers’ theories of mechanical thinking, to the logical tools of contemporary mathematics—each historical node has contributed crucial sparks of thought to today’s AI revolution.
Ancient Dreams of Intelligent Machines: The First Step from Myth to Reality
Humanity’s imagination of intelligent machines can be traced back to the dawn of civilization. In ancient Greek mythology, Hephaestus, the god of fire, crafted a group of mechanical servants from gold who could not only walk but also understand their master’s commands and execute complex tasks. These mythical “golden men” embodied ancient humanity’s earliest dreams of creating machines capable of autonomous behavior.
More surprisingly, these dreams were not confined to imagination alone. The Chinese text Liezi (traditionally dated to around the 4th century BCE, though scholars believe the received text was compiled centuries later) records a story about a mechanical puppet: a craftsman named Yanshi presented King Mu of Zhou with a mechanical figure that could sing and dance with such lifelike movements that the king initially suspected it was a real person.
Ancient engineers also strived to turn these imaginations into reality. In Alexandria, Greek engineers Hero of Alexandria and Ctesibius designed and built a series of water-powered automatic devices—from time-telling water clocks to mechanical door-opening mechanisms. Though simple, these inventions demonstrated the embryonic concept of “automation.”
During the medieval period, Arab engineer Ismail Al-Jazari completed The Book of Knowledge of Ingenious Mechanical Devices in 1206, detailing over 50 automatic mechanical devices, including robot orchestras that could play music and mechanical servants that could automatically serve tea.
The 11th-century Sanskrit text Samarangana Sutradhara even described mercury-powered mechanical soldiers capable of guarding palaces and executing combat missions.
These early conceptions and practices reflected humanity’s deep desire to create machines capable of autonomous behavior. Though limited by the technological capabilities of their time, these “intelligent machines” could mostly only execute preset simple actions, but they already contained the core concepts of modern AI: autonomy, goal-directed behavior, and responsiveness to the environment.
From mythical imagination to engineering practice, ancient humanity had already begun to ponder: What makes an object possess “intelligence”? This question would guide us into the next historical stage—where philosophers began using rational methods to explore the possibility of machine thinking.
Philosophers’ Mechanical Thinking Revolution: Breakthrough Thinking in the Age of Reason
The European Enlightenment of the 17th to 18th centuries brought revolutionary philosophical foundations to the concept of “machine intelligence.” Three great thinkers—Descartes, Hobbes, and Leibniz—explored the possibility of machine thinking from different angles, directly influencing the later development of computer science and artificial intelligence.
Descartes: The Boundaries of Thinking in a Mechanical Universe
René Descartes’ mind-body dualism, while strictly separating mind from matter, unexpectedly laid the foundation for mechanical intelligence theory. In Descartes’ view, the entire material world, including animal bodies, could be completely explained through mechanical principles.
In Discourse on Method, Descartes boldly proposed the “animal machine theory”: animals are essentially extremely complex automata, and all their behaviors can be explained through mechanical principles without assuming they possess rational souls. Though controversial at the time, this view laid the theoretical foundation for later mechanical behaviorism and computationalism.
More importantly, Descartes proposed the concept of “universal mathematics” (mathesis universalis), envisioning the establishment of a unified mathematical method capable of handling all scientific problems. This idea directly inspired the later development of symbolic logic, becoming an important precursor to modern computer science.
Hobbes: The Revolutionary Insight that “Reasoning is Reckoning”
Thomas Hobbes presented a startling viewpoint in Leviathan: “Reasoning is reckoning.” He believed that all thinking processes could essentially be reduced to addition and subtraction operations.
Hobbes elaborated this thought in the same chapter of Leviathan: “When a man reasons, he does nothing else but conceive a sum total, from addition of parcels; or conceive a remainder, from subtraction of one sum from another.” In De Corpore, he put the point even more bluntly: “By ratiocination, I mean computation.”
This insight was epochal. Hobbes actually proposed the core idea of modern computational theory: complex thinking processes can be decomposed into simple basic operations. He also emphasized the importance of language in reasoning, believing that complex reasoning was impossible without language—a viewpoint that directly anticipated the development direction of modern symbolic AI.
Hobbes’ mechanical materialism philosophy laid the foundation for later analytical philosophy and computational cognitive science. His thinking suggested that if reasoning truly is computation, then in principle, machines should also be able to reason.
Leibniz: The Grand Vision of Universal Symbolic Language
Gottfried Wilhelm Leibniz proposed one of the most forward-looking ideas in human history: Characteristica Universalis (universal symbolic language) and Calculus Ratiocinator (logical calculus).
Leibniz envisioned creating a universal symbolic language capable of precisely expressing all concepts in science, mathematics, and metaphysics. More importantly, he hoped to develop a set of logical calculus rules that could solve all rational problems through pure symbolic manipulation.
In his writings on the project, Leibniz famously declared (here in paraphrase): “Once we have this language, we will be able to calculate metaphysical and moral problems just as we calculate mathematical problems. When disputes arise, philosophers need not argue; they need only say: ‘Let us calculate!’ (Calculemus!)”
This vision directly anticipated the core concepts of modern computer science: transforming complex problems into symbolic operations and solving them through algorithms. Leibniz even designed a mechanical calculator capable of performing four arithmetic operations, considered an important precursor to modern computers.
However, Leibniz also recognized the limitations of mechanical explanations of consciousness. In Monadology, he proposed the famous “mill argument”: even if we could magnify the brain like a mill to observe its internal workings, we would still only see mechanical movements and could never find the source of perception and consciousness. This argument remains an important issue in consciousness philosophy today.
These philosophical thoughts laid a solid conceptual foundation for later formal logic and computational theory. From Descartes’ mechanical worldview to Hobbes’ computational reasoning theory to Leibniz’s symbolic calculus dream—17th-18th century philosophers had already outlined the basic contours of modern artificial intelligence.
Their thinking suggested that intelligent behavior might not require mysterious “vital force” or “soul,” but could be achieved through mechanical processes, symbolic operations, and logical calculus. This revolutionary conceptual shift paved the way for the mathematical breakthroughs of the 19th century.
Mathematical Breakthroughs Paving the Path to Intelligence: From Abstract Theory to Computational Tools
If 17th-18th century philosophers provided the conceptual framework for machine intelligence, then 19th to early 20th century mathematicians provided concrete tools for these abstract concepts. Three key mathematical breakthroughs—Boolean algebra, Gödel’s incompleteness theorems, and Turing’s computational theory—transformed “machine thinking” from philosophical speculation into operable mathematical theory.
Boolean Algebra: Transforming Logic into Mathematics
In 1847, English mathematician George Boole published The Mathematical Analysis of Logic, inaugurating a new era of modern symbolic logic.
Boole’s revolutionary contribution was transforming logical reasoning into algebraic operations. In traditional Aristotelian logic, reasoning relied on natural language and intuition; in Boolean algebra, logical relationships were expressed as mathematical formulas that could be processed through mechanized symbolic operations.
In 1854, Boole refined and extended his theory in An Investigation of the Laws of Thought, whose opening announces the program plainly: “The design of the following treatise is to investigate the fundamental laws of those operations of the mind by which reasoning is performed; to give expression to them in the symbolical language of a Calculus.” In it, he showed that logic and probability could be handled with the same mathematical tools, completing the system now known as “Boolean algebra.”
The significance of Boolean algebra far exceeded its era. In 1938, American engineer Claude Shannon proved in his MIT master’s thesis that Boolean algebra could be directly applied to circuit design. Shannon discovered that circuit switch states (on/off) perfectly corresponded to Boolean algebra truth values (true/false), a discovery that directly catalyzed the birth of modern digital computers.
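To make Shannon’s correspondence concrete, here is a minimal illustrative sketch (in Python, not any historical notation) of Boolean operations composed into a one-bit adder, the seed of all digital arithmetic:

```python
# Shannon's insight: a switch is a Boolean variable (closed = True, open = False),
# so circuits can be designed and simplified with Boole's algebra.

def AND(a, b):   # two switches in series
    return a and b

def XOR(a, b):   # "exclusive or": exactly one of the two switches closed
    return a != b

def half_adder(a, b):
    """Add two one-bit numbers using only Boolean operations.
    Returns (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)

# The truth table: each row is an algebraic identity Boole could have written.
for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(f"{int(a)} + {int(b)} -> carry={int(c)}, sum={int(s)}")
```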
Boole himself might never have imagined that the mathematical tool he created to study “laws of thought” would ultimately become the theoretical foundation of all digital devices—from smartphones to supercomputers, all running the basic operations of Boolean algebra.
Gödel’s Incompleteness Theorems: Revealing the Boundaries of Formal Systems
In 1931, the 25-year-old Austrian mathematician Kurt Gödel published the incompleteness theorems that shocked the mathematical world. These theorems appeared to be technical results about the foundations of mathematics, but their deeper implications directly influenced the development of computational theory and artificial intelligence.
Gödel’s First Incompleteness Theorem states: In any consistent, effectively axiomatized formal system containing basic arithmetic, there exist true statements that can neither be proved nor disproved within that system. In other words, no such formal system can completely capture mathematical truth.
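In modern notation, the heart of the proof is a self-referential sentence. The following is a compressed sketch of the standard construction, with Prov_T standing for the provability predicate of a theory T and the corner brackets for the numeral coding a sentence:

```latex
% Sketch of Gödel's diagonal construction for a consistent, effectively
% axiomatized theory T containing basic arithmetic.
% (amssymb provides \ulcorner, \urcorner, and \nvdash.)
% The diagonal lemma yields a sentence G that "says" it is unprovable:
\[
  T \vdash \; G \leftrightarrow \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner)
\]
% If T is consistent, G is unprovable; if T is omega-consistent
% (Rosser later weakened this to plain consistency), so is its negation:
\[
  T \nvdash G \qquad \text{and} \qquad T \nvdash \neg G
\]
```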
This result had profound implications for the concept of “machine intelligence.” It suggested that if we understand intelligence as some form of symbolic manipulation system, then this system must have inherent limitations. Any attempt to completely formalize human reasoning is doomed to be incomplete.
However, Gödel’s work also made positive contributions to computational theory. In proving the incompleteness theorems, he developed recursive function theory, which directly influenced later concepts of computability. Gödel actually provided a mathematical foundation for the question “what can be computed.”
Interestingly, Gödel himself held a cautious attitude toward machine intelligence. He believed that the human mind possessed some ability that transcended mechanical computation, capable of “seeing” truths that formal systems could not prove. This viewpoint continues to spark controversy in cognitive science and AI philosophy today.
Turing’s Computational Theory: Defining the Essence of “Computation”
In 1936, 24-year-old British mathematician Alan Turing published On Computable Numbers, with an Application to the Entscheidungsproblem, a paper that not only solved Hilbert’s decision problem but, more importantly, provided a precise mathematical definition of the concept of “computation.”
The concept of the Turing machine possessed stunning simplicity and universality. A Turing machine needed only three basic components: an infinitely long tape, a read-write head, and a set of state-transition rules. Despite this spare structure, Turing argued that such a machine could carry out any procedure a human following explicit rules could perform, and he proved that a single “universal” machine could simulate every other one.
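Those three components are spare enough that a complete simulator fits in a dozen lines. Below is a minimal illustrative sketch in Python (the rule table, which simply inverts a binary string, is an invented example; any table can be substituted):

```python
# A minimal Turing machine: tape + head + state-transition rules.
# rules[(state, symbol)] = (new_symbol, move, new_state)
from collections import defaultdict

def run_turing_machine(rules, tape_input, start="q0", halt="halt"):
    tape = defaultdict(lambda: "_", enumerate(tape_input))  # "infinite" blank tape
    head, state = 0, start
    while state != halt:
        new_symbol, move, state = rules[(state, tape[head])]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example rule table: invert every bit, halt at the first blank cell.
flip_bits = {
    ("q0", "0"): ("1", "R", "q0"),
    ("q0", "1"): ("0", "R", "q0"),
    ("q0", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip_bits, "10110"))  # -> 01001
```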
Turing’s work settled the decision problem (Entscheidungsproblem) posed by David Hilbert and Wilhelm Ackermann in 1928: Does there exist an algorithm that can determine, for any given logical proposition, whether it is provable? Turing showed that no such universal decision algorithm exists by constructing a problem (the halting problem) that cannot be solved by any algorithm.
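Turing’s impossibility argument can be paraphrased in a few lines of modern code. The sketch below assumes a hypothetical oracle halts(program, argument) and shows why no implementation of it can exist:

```python
# Suppose, for contradiction, that a universal decision procedure existed:
def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) eventually halts."""
    ...  # no implementation can exist, as the next function shows

def paradox(program):
    # Do the opposite of whatever the oracle predicts about us.
    if halts(program, program):
        while True:   # loop forever if the oracle says we would halt
            pass
    return            # halt if the oracle says we would loop

# Now ask: does paradox(paradox) halt?
# If halts answers True, paradox loops forever; if False, paradox halts.
# Either answer contradicts the oracle, so no universal halts() can exist.
```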
More importantly, Turing’s work established the “Church-Turing thesis”: Any function that is intuitively computable can be computed by a Turing machine. Though this thesis cannot be strictly proven (because “intuitively computable” is not a mathematical concept), it provided a solid philosophical foundation for computational theory.
The universality of the Turing machine means that if human thinking processes are indeed some form of computation, then in principle they can all be simulated by Turing machines. This gave later artificial intelligence research its theoretical warrant: if thought is computation, machine intelligence is not merely possible but, in principle, equal in power to human intelligence.
These mathematical breakthroughs transformed the abstract concept of “machine thinking” into an operable theoretical framework. Boolean algebra provided mathematical tools for logical operations, Gödel’s theorems revealed the boundaries and possibilities of formal systems, and Turing’s theory defined the essence and limits of computation.
From Leibniz’s symbolic calculus dream to Turing’s precise definition of machines, humanity spent nearly three centuries finally transforming the philosophical question “can machines think” into concrete mathematical and engineering problems. Now, what remained was to put these theories into practice—which is precisely what we will explore in our next chapter.
Engineering Prototypes of Intelligent Machines: The Crucial Step from Theory to Practice
While philosophers were speculating about the possibility of machine intelligence and mathematicians were constructing theoretical frameworks for logical calculus, engineers had already begun using their hands to create truly “intelligent” machines. Two key engineering breakthroughs—the Jacquard loom and Wiener’s cybernetics—provided direct technical inspiration and theoretical guidance for modern artificial intelligence.
The Jacquard Loom: Engineering Embodiment of Programmatic Thinking
In 1801, Joseph Marie Jacquard completed his masterpiece: a loom capable of automatically weaving complex patterns. The revolutionary nature of this machine lay not in its mechanical precision, but in its concept of programmatic control.
The Jacquard loom used a series of punched cards to control the loom’s operations. Each card’s hole pattern corresponded to specific weaving instructions: which warp threads should be raised, which should be lowered, and when to change weft colors. By changing the sequence or content of cards, the same machine could weave completely different patterns.
This design contained, in embryonic form, the basic elements of modern computer programs (see the sketch after this list):
- Data storage: Punched cards stored the weaving-pattern information
- Program control: The sequence of cards defined the order of operations
- Data-driven execution: Each warp hook was raised or left down depending on whether a hole was present at its position on the card
- Loop structures: Repeating patterns were produced by chaining the cards into a physical loop
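To see how close this was to programming, consider a minimal illustrative sketch (Python; the card deck and symbols are invented for the example) in which each “card” is a row of holes and the “loom” merely renders the deck in order:

```python
# Each "card" is a row of holes: 1 = warp thread raised, 0 = lowered.
# The "program" is just the ordered deck of cards; the machine never changes.
CARDS = [
    [1, 0, 1, 0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0, 1, 0, 1],
]

def weave(cards, rows):
    """Drive the loom for `rows` picks, cycling over the card chain
    (the physical chain of cards was literally tied into a loop)."""
    for pick in range(rows):
        card = cards[pick % len(cards)]                    # loop structure
        print("".join("#" if hole else "." for hole in card))  # one woven row

weave(CARDS, 6)   # same machine, different deck -> different pattern
```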
The influence of the Jacquard loom far exceeded the textile industry. British inventor Charles Babbage directly borrowed the punched card concept when designing his Analytical Engine. Babbage’s collaborator Ada Lovelace wrote in 1843: “The Analytical Engine might act upon other things besides number… Supposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.”
Lovelace’s passage is considered the first description of a universal computer’s potential in human history, and this insight directly stemmed from inspiration from the Jacquard loom.
20th-century computer pioneers continued using punched cards as storage media for programs and data. From IBM tabulating machines to early mainframes, punched card technology dominated the computer industry for nearly a century. Jacquard’s invention proved a key concept: complex behavior can be achieved through ordered combinations of simple instructions.
Wiener’s Cybernetics: Feedback Mechanisms and Intelligent Behavior
In 1948, American mathematician Norbert Wiener published Cybernetics: Or Control and Communication in the Animal and the Machine, a book that not only created the term “cybernetics” but, more importantly, provided a completely new theoretical framework for understanding intelligent behavior.
Wiener’s core insight was that the key to intelligent behavior lies in feedback mechanisms. Whether biological or mechanical, to exhibit goal-directed behavior, systems must be able to:
- Perceive the difference between current state and target state
- Adjust their behavior based on this difference
- Continuously monitor behavioral effects and make corrections
Wiener used automatic aiming systems for anti-aircraft guns to illustrate this concept. Traditional anti-aircraft guns required manual calculation of aircraft position and speed, then manual adjustment of gun barrel angles. Automatic aiming systems, however, could continuously track targets and adjust gun direction in real-time, greatly improving hit rates.
This seemingly simple feedback concept actually revealed the essence of intelligent behavior. Wiener pointed out that human learning, adaptation, and goal pursuit are essentially complex feedback processes. When we learn to ride a bicycle, we continuously perceive our body’s balance state and adjust muscle force directions until we achieve stable riding.
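Wiener’s three-step loop can be written down directly. The following minimal sketch (Python; the gain and target values are arbitrary) implements a proportional controller that senses the error, corrects in proportion to it, and repeats:

```python
def feedback_control(target, state=0.0, gain=0.5, steps=10):
    """Wiener's loop: sense the error, act in proportion to it, repeat."""
    for step in range(steps):
        error = target - state        # 1. perceive the gap between current and goal
        state += gain * error         # 2. adjust behavior based on that gap
        print(f"step {step}: state = {state:.3f}")  # 3. monitor the effect
    return state

feedback_control(target=1.0)   # the state converges steadily toward 1.0
```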
Wiener also explored the differences and connections between analog and digital computers. He believed that the brain was more like an analog computer, achieving intelligent behavior through continuous signal processing; while digital computers simulated these processes through discrete symbolic operations.
Cybernetics’ influence on modern AI is profound:
- Neural networks: The backpropagation algorithm in modern deep learning is essentially a feedback mechanism, continuously adjusting network parameters to minimize prediction errors (see the sketch after this list)
- Reinforcement learning: Intelligent agents obtain feedback through interaction with the environment, gradually optimizing their behavioral strategies
- Adaptive systems: From autonomous vehicles to intelligent recommendation systems, all rely on real-time feedback to adjust their behavior
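The kinship between Wiener’s loop and modern machine learning is easiest to see in code. Below is a minimal illustrative sketch (Python; the data points are invented) of gradient descent on a single parameter, in which the prediction error is fed back to correct the parameter, exactly the sense-compare-adjust cycle described above:

```python
# Fit y = w * x to data by feeding the prediction error back into w.
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]   # invented (x, y) pairs, y roughly 2x
w, lr = 0.0, 0.05                              # initial guess, learning rate

for epoch in range(50):
    # Gradient of the mean squared error: the aggregated error signal.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad                             # feedback: correct in proportion to error

print(f"learned w = {w:.2f}")                  # converges to about 1.99
```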
Wiener also foresaw the social impact that artificial intelligence might bring. He warned in The Human Use of Human Beings that automation technology might lead to massive unemployment, and society needed to prepare for this. These concerns remain significant in today’s AI ethics discussions.
These engineering practices proved the feasibility of theoretical concepts and laid the technical foundation for modern AI. The Jacquard loom demonstrated the power of programmatic control, proving that complex behavior could be achieved through combinations of simple instructions; Wiener’s cybernetics revealed the feedback nature of intelligent behavior, providing theoretical guidance for adaptive and learning systems.
From ancient mechanical automatic devices to Jacquard’s programmatic loom to Wiener’s feedback theory, engineers used practical actions to prove the theoretical visions of philosophers and mathematicians. Machines could not only execute preset tasks but also adjust their behavior based on environmental feedback, exhibiting some form of “intelligence.”
Challenges and Reflections: Historical Controversies and Contemporary Insights
In the historical process of exploring machine intelligence, not all voices were optimistic. Some profound questions and concerns permeated the entire development process. These historical controversies not only shaped the development of AI theory but also provided important insights for contemporary AI ethics discussions.
Leibniz’s “Mill Argument”: The Irreducibility of Consciousness
Despite envisioning universal symbolic language and logical calculus, Leibniz simultaneously raised fundamental questions about mechanical explanations of consciousness. In Monadology, he proposed the famous “mill argument”:
“Suppose there were a machine whose structure produced thoughts, sensations, and perceptions; we could imagine it enlarged to the point where we could enter it as we would a mill. Upon examining its interior, we would find only parts pushing against each other mechanically, and never anything that could explain perception.”
This argument remains a core issue in consciousness philosophy today. Even if we could completely understand the brain’s neural mechanisms, or even perfectly simulate these processes with machines, we would still face the “explanatory gap”: Why is there subjective experience? Why is information processing accompanied by consciousness?
Contemporary AI systems, no matter how complex, face the same questioning. Can ChatGPT truly “understand” the meaning of language, or is it merely performing complex pattern matching?
The Deep Implications of Gödel’s Theorem: Fundamental Limitations of Formal Systems
Gödel’s incompleteness theorems are not only technical results about mathematical foundations but also pose fundamental challenges to AI’s possibilities. If human mathematical intuition can “see” truths that formal systems cannot prove, does there exist some cognitive ability that transcends mechanical computation?
Some philosophers and mathematicians argue that human intelligence possesses qualities that machines cannot replicate. Mathematician Roger Penrose argued in The Emperor’s New Mind that human mathematical insight proves the non-computational nature of consciousness.
However, this argument also faces rebuttals. Computer scientists point out that Gödel’s theorems apply equally to humans: we also cannot solve all mathematical problems in finite time. Human “intuition” might just be more efficient heuristic algorithms, rather than mysterious abilities that transcend computation.
Early Social Concerns: The Double-Edged Sword of Technological Progress
Interestingly, concerns about “machines replacing humans” are not unique to modern times. When the Jacquard loom was introduced, silk weavers in Lyon worried that the automated machine would take away their jobs, and in the loom’s early years angry workers even destroyed some of the machines, in what is considered one of the earliest “anti-automation” protests in history.
Wiener foresaw the social impact of automation in 1948. He wrote in Cybernetics: “Let us remember that the automatic factory and the assembly line without corresponding social adjustments are bound to average a great deal of unemployment… The scale of this unemployment may be enormous.”
Wiener also warned that if we view humans merely as “cogs” in the production process, then machines could indeed completely replace humans. But if we value human creativity, empathy, and moral judgment, then human-machine cooperation would be more valuable than pure automation.
Insights for Contemporary AI Ethics
These historical controversies provide important insights for contemporary AI development:
1. The Distinction Between Technical Capability and Conscious Experience. Leibniz’s mill argument reminds us that even if AI systems can perfectly simulate human behavior, we must still be cautious about the question of “machine consciousness.” This has important implications for AI rights, responsibility attribution, and other ethical issues.
2. The Unique Value of Human Intelligence. The controversy over Gödel’s theorem suggests that humans may possess certain unique cognitive abilities. Even if these abilities can ultimately be replicated by machines, we should still cherish human creativity, intuition, and moral judgment.
3. Social Responsibility in Technological Development. From the Jacquard loom to modern AI, technological progress has always been accompanied by social change. Wiener’s warnings remind us that technology developers have a responsibility to consider the social impact of their inventions and actively participate in related policy discussions.
4. The Importance of Human-Machine Cooperation. Historical experience shows that the most successful technological applications often do not completely replace humans but enhance human capabilities. From the Jacquard loom liberating workers’ creativity to modern AI assisting scientific research, human-machine cooperation has always been the most promising direction for development.
These historical controversies and reflections provide profound insights for understanding AI’s nature and limitations. They remind us that while pursuing technological progress, we must maintain respect for human values and attention to social impact.
Conclusion: The Modern Echo of Millennial Dreams
From the golden servants in ancient Greek mythology to the programmatic control of the Jacquard loom; from Descartes’ mechanical worldview to Turing’s precise definition of machines—humanity’s exploration of “artificial intelligence” has continued for thousands of years. This is not a sudden invention, but the millennial accumulation of human wisdom.
Reviewing this history, we can clearly see an evolutionary path:
Mythical Imagination → Philosophical Speculation → Mathematical Tools → Engineering Practice
Each historical stage contributed key elements to modern AI: ancient myths provided initial dreams and goals; modern philosophy established the theoretical possibility of mechanical intelligence; contemporary mathematics created precise logical tools; engineering practice proved the feasibility of theory.
More importantly, the core insights of these historical pioneers are stunningly embodied in contemporary AI:
- Hobbes’ “reasoning is reckoning” anticipated modern symbolic AI and logical reasoning systems
- Leibniz’s universal symbolic language dream is realized in programming languages and knowledge representation
- Boole’s logical algebra became the foundation of all digital devices
- Turing’s computational theory defined the theoretical boundaries of AI
- Wiener’s feedback mechanisms inspired machine learning and adaptive systems
However, history also reminds us to remain humble and vigilant. Leibniz’s consciousness questioning, Gödel’s limitation theorems, Wiener’s social concerns—these profound reflections remain significant today. They tell us that AI development is not only a technical issue but also a philosophical, ethical, and social issue.
When we marvel at ChatGPT’s conversational abilities, AlphaGo’s chess prowess, and autonomous driving’s technological progress, we are actually witnessing the realization of humanity’s millennial dreams. But as history shows, every technological breakthrough brings new problems and challenges.
We stand on the shoulders of history, both proud of humanity’s intellectual inheritance and awed by future responsibilities. From myth to mathematics, from dreams to reality—the story of artificial intelligence continues to be written, and each of us is a participant and witness to this story.
Next Issue Preview: In the next issue of AI Genesis, we will explore “From Theory to Reality: How the 1956 Dartmouth Conference Officially Launched the AI Era,” examining how modern artificial intelligence was formally born from millennia of intellectual accumulation.
This article is the first issue of the “AI Genesis: Chronicles of Artificial Intelligence” series. This series is dedicated to tracing the historical trajectory of AI development, exploring the intellectual streams behind technological progress, and providing historical perspective for understanding the contemporary AI revolution.