Abstract: The development of artificial intelligence has led some to hold overly optimistic views, believing that it will soon replace humans as the dominant intellectual force and even supplant the spontaneous order that humans have followed precisely because of their “rational inadequacies.” However, philosophers and economists have long pointed out that spontaneous order is something human reason and perception cannot fully comprehend. That humans have succeeded in understanding simple systems does not mean the same methods can be applied to complex systems. Research by mathematicians on computational complexity and computability shows that there is a class of problems (NP) that are very difficult to solve, and many problems in real-world complex systems are harder still. NP-hard problems thus serve as a mathematical expression of the “difficulty of understanding complex systems.” Since artificial intelligence relies on computer computation, and a large class of problems is difficult or impossible to compute, general artificial intelligence cannot be realized and cannot replace spontaneous order, a problem much harder than NP problems. Humans gradually discover spontaneous order through random exploration, a form of brute-force search, relying on two abilities: the capacity to judge which of two choices is better and the motivation to explore. Because human utility is a complex combination of sensory, psychological, and cultural factors, its complexity far exceeds that of NP problems. Therefore, artificial intelligence cannot simulate human utility and thus cannot replicate the human method of discovering spontaneous order.
Keywords: Artificial Intelligence, Spontaneous Order, NP, Random Exploration, Utility
The Question Raised: Can Artificial Intelligence Replace Spontaneous Order?
Recently, breakthroughs in artificial intelligence have inflated people's imaginations, leading some to believe that general artificial intelligence will soon be realized and will completely replace human dominance in intellectual matters. This notion may stem from the business sector developing artificial intelligence models, whose promotion has sparked exaggerated admiration for this prospect among the general public. This exaggerated judgment, however, is vague: it leaves the impression that AI could replace the whole of human knowledge accumulation and intellectual capability, including not only human calculation under specific rules but also the rules humans have long followed, such as market rules, the rule of law, and democratic rules. This raises a serious issue.
Although no one has openly proclaimed this, the actual behavior of some individuals provides a definitive answer. For example, after the Trump administration took office in the United States, Elon Musk led the “Department of Government Efficiency,” which took over the Agency for International Development on its first day and gained access to the federal government's computer systems. Musk claimed that he wanted to use “artificial intelligence to lead government decision-making,” with the goal of having AI systems automatically handle federal budgets, policy analysis, and administrative approvals within a few months (YouCao, 2025). His assistant, Thomas Shedd, announced plans to create “AI coding agents” to automate government processes and to centralize sensitive data such as government contracts (Economic Times, 2025). Under the U.S. Constitution, government decision-making follows a set of legally established procedures. These procedures have evolved over a long history within the constitutional framework, and their rationality and legality have been proven by historical practice.
Using “artificial intelligence to lead government decision-making” implies that Musk believes decisions made by artificial intelligence would be superior to those made within the traditions of the American institutional system. This institutional tradition has deep roots: it inherits the tradition of English common law and has evolved over more than 200 years in America, incorporating features adapted to American conditions and refining innovative rules. This political tradition in the U.S. is precisely a real-world counterpart of what Hayek called spontaneous order. For Hayek, “spontaneous order” is almost equivalent to ideal order. In reality, institutions and rules that approach spontaneous order include markets, families, governments, enterprises, religions, social organizations, and cultural traditions. Throughout human history, people have integrated the concept of “adhering to spontaneous order,” and the rules in which spontaneous order manifests itself, into the human knowledge system.
As for the formation of spontaneous order, a deep understanding of spontaneous order itself is something human rationality cannot fully grasp. As Hayek stated, a spontaneous order's “degree of complexity is not limited to what a human mind can master” (2000a, p. 57). Spontaneous order is formed through long-term interactions among people without a specific purpose; during this formation, people are often unaware of the order's emergence, and later generations cannot know it either. The value of the rules inherent in spontaneous order is likewise beyond the complete understanding of human rationality: such “abstract rules operate as ultimate values because they serve unknown particular ends” (2000b, p. 21). Therefore, spontaneous order holds a dominant significance for human civilization. Hayek remarked, “civilization has largely been made possible by subjugating the innate animal instincts to the non-rational customs which made possible the formation of larger orderly groups of gradually increasing size” (2000b, p. 500).
Thus the question arises: has artificial intelligence reached such a height that it not only surpasses human rationality but also transcends the spontaneous order lying beyond human “rational inadequacy,” so that it can be used to replace spontaneous order?
A Large Class of Problems Are Theoretically Unsolvable by Computers
In the academic field of artificial intelligence, opinions are relatively cautious. So far, no one has claimed that general artificial intelligence has been achieved; industry leader Yann LeCun has emphasized that the efficiency of computers is far inferior to that of the human brain (2021, p. 64). Furthermore, the basic view in computer mathematics research is that artificial intelligence may never be able to solve a certain large class of problems. In fact, discussions about computational complexity and computability have long existed in the mathematical community, particularly around the P vs. NP problem. With the growth of computing power, this discussion has become increasingly important, to the point that Scott Aaronson remarked, “The P vs. NP problem is one of the deepest questions posed by humanity” (2021, p. 49). To date, most mathematicians believe that P ≠ NP, which means there is a class of problems within NP that are very difficult to solve. It is noteworthy that, even today, mathematicians acknowledge that there is a large class of problems that computers may not be able to solve.
Here, we need not repeat the P and NP problems in mathematical language, as general readers may find it difficult to understand and might miss the main point. This issue actually highlights the impact of problem complexity on solvability. The higher the complexity of a problem, the more computational resources are required, making the problem harder to solve and more likely unsolvable. The term “computational resources” in this context is abstracted as time and space, with a particular emphasis on the time dimension. Excessive computation time not only affects the timeliness of the results but also makes the cost prohibitively high; it is also possible that the problem being solved is ultimately unsolvable. Thus, time becomes a measure of computational complexity. When the computation time approaches infinity, the problem becomes computationally intractable.
For sufficiently long inputs (n), mathematicians roughly group problems into four classes by complexity and the corresponding difficulty of solution. One class can be solved in polynomial time (understood as simple or quick, i.e., P). Another cannot, as far as we know, be solved in polynomial time, but candidate solutions can be verified in polynomial time; equivalently, these problems can be solved in non-deterministic polynomial time (i.e., NP, understood as very difficult). A third class requires exponential time to solve (exponential time grows so fast that solutions become practically impossible). A fourth can only be approached through brute-force search (exhaustive search), which tests every one of an astronomical number of possible combinations to find the optimal solution, making it even less feasible. Although these four categories are not sharply defined and may overlap, generally speaking, from the first to the last, each class is harder, requires longer solving time, and is more likely to be practically unsolvable.
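To make the contrast concrete, the sketch below evaluates typical step counts for each class at modest input sizes. The specific formulas (n³ for polynomial time, 2ⁿ for exponential time, n! for brute-force enumeration of orderings) are illustrative assumptions rather than definitions taken from the text.

```python
# A minimal sketch comparing rough step counts for the complexity classes
# discussed above. The formulas are illustrative assumptions: n**3 stands in
# for a polynomial-time algorithm, 2**n for an exponential-time one, and
# factorial(n) for brute-force enumeration of all orderings.
from math import factorial

for n in (10, 20, 30, 40):
    poly = n ** 3              # polynomial growth
    expo = 2 ** n              # exponential growth
    brute = factorial(n)       # brute-force search over all orderings
    print(f"n={n:>2}  n^3={poly:<8}  2^n={expo:<15}  n!={brute:.2e}")
```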
These four classes of computational complexity problems are differentiated by algorithms. Mathematicians acknowledge that, in the broadest sense, brute-force search is considered an algorithm, albeit the simplest and most clumsy one, and it involves the most computational steps and time, serving as a fundamental reference for various algorithms. Philosophically, an algorithm represents a method of thinking; brute-force search—testing each possibility one by one—is the most basic form of thinking, while other algorithms are more sophisticated thinking techniques. For instance, polynomial algorithms, non-deterministic polynomial algorithms, and exponential algorithms exist, which are more clever than brute-force search, characterized by requiring fewer steps and less time. This indicates that mathematics, as a refined method of thought, is a way to save thinking time by utilizing the properties of numbers. It is part of humanity’s effort to reduce the cost of action. Therefore, the so-called computational complexity problem, stripped of the nature of the problem itself, is fundamentally a question of cognitive economy. Computation time, as a cost for humanity, forces people to make a detailed trade-off between computational costs and the utility of solutions, specifically between marginal computation costs and marginal utility of solutions.
Conversely, the problem itself is another aspect of computational complexity. Of course, the “problems” discussed here are not simple ones; they are neither elementary arithmetic issues nor university-level physics or chemistry problems. No matter how complex these problems may be, they are still simple system problems. This roughly corresponds to inorganic systems. In contrast, complex systems correspond to biological, physiological, or social systems. Several typical NP problems studied by mathematicians, such as the traveling salesman problem, clique problem, and packing problem, are social issues. Economists applying mathematical methods to solve the problems of reclaiming and re-auctioning wireless channels also deal with social issues. However, these “problems” regarded as mathematical issues are already quite abstract and simplified, much simpler than real-world social issues. Yet, if these relatively simple problems are difficult or even unsolvable, how much more so for problems in reality?
The complexity of most real-world problems exceeds that of NP problems: “things in nature and human society are many times more complex than phenomena in number theory” (Huang, Xu, 2004, p. 12). Most mathematicians already believe that P ≠ NP, meaning that NP problems are very difficult to solve (Roughgarden, 2023, pp. 334-336). For example, even if a traveling salesman sells products in only 48 cities in the United States, finding the optimal route is already difficult. The real-world problem of distributing M types of products among N people is far more complex; even in a small country like Singapore, with a population of 5.92 million, or even in a village of only 100 people, finding the optimal distribution of 100 types of goods would require choosing from 100^100 = 1E+200 possible combinations. Thus, so-called mathematical problems are merely abstractions and simplifications of real issues, and mathematics is unable to solve a large class of even these mathematical problems, let alone most real-world problems. Therefore, since there exists a class of problems that are difficult for computers to solve, how could artificial intelligence be all-powerful? How could general artificial intelligence be realized?
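The two counts cited here can be checked directly; a minimal sketch under the text's own combinatorial assumptions (48! route orderings, and 100^100 ways to assign 100 kinds of goods across 100 people) follows.

```python
# A minimal sketch of the scale cited in the text: 48! possible tours for a
# 48-city traveling salesman, and 100**100 possible assignments of 100 kinds
# of goods among 100 people (both follow the text's simple counting).
from math import factorial

routes_48_cities = factorial(48)     # about 1.24e61 possible tours
allocations = 100 ** 100             # 1e200 possible assignments

print(f"48-city tours:        {routes_48_cities:.2e}")
print(f"100-good allocations: {allocations:.2e}")
```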
Economists: Complex Systems Are Beyond Rationality
Regarding the failure of human planned economic experiments, economists attribute the cause to the fact that complex systems are too intricate for human rationality to understand and grasp, let alone “design.” Hayek stated, “The 20th century is undoubtedly an age of superstition, primarily because people have overestimated the achievements of science; when we say that people have overestimated the achievements of science, we do not mean that they have overestimated the successes of science in the realm of relatively simple phenomena (where science has indeed achieved great success), but rather that they have overestimated the achievements of science in the realm of complex phenomena, as evidence has shown that applying techniques proven to be highly beneficial in the realm of relatively simple phenomena to complex phenomena is extremely misleading” (Hayek, 2000, p. 530).
This perspective is essentially a form of epistemological agnosticism, a tradition found in both Eastern and Western thought. Laozi said, “To know that you do not know is the highest; not to know that you do not know is a disease.” Kant believed that human reason can only understand the appearances of things, not the things-in-themselves. His underlying logic is: “It is obvious that I cannot know as an object that which must be the prerequisite for knowing any object.” “If we wish to make a judgment about the origins of sensibility and intellectuality, I can only see that such exploration completely exceeds the limits of human reason and is beyond our capability” (quoted from Zeng and Liu, 2007, p. 136). Hayek shares a similar view: “A complete explanation of even the external world as we know it would presuppose a complete explanation of the working of our senses and our mind. If the latter is impossible, we shall also be unable to provide a full explanation of the phenomenal world” (Hayek, 1976, p. 194).
The development and success of science since modern times seem to contradict this agnosticism. However, as the passage quoted from Hayek indicates, this success lies only in the realm of simple systems; in the realm of complex systems, humanity remains “beyond rationality.” Hayek, however, did not draw a detailed boundary between “simple phenomena” (systems) and “complex phenomena” (systems) to clarify where the dividing line lies. The discussion of computational complexity and the P and NP problems in the previous section seems to help us understand further why humans can succeed in exploring simple systems while their rationality cannot grasp complex systems.
The four types of computational complexity discussed in the previous section—polynomial time algorithms (P), non-deterministic polynomial time algorithms (NP), exponential time algorithms (ETA), and brute-force algorithms (BF)—can be broadly divided into two categories: one that is solvable in polynomial time and another that includes NP problems and those more difficult than NP. Mathematicians refer to these two types of problems as “easy problems” and “hard problems” (Roughgarden, 2023, p. 31). We can roughly correspond these two types of problems to simple systems and complex systems. Since mathematics is a refined form of thought, if we can mathematically prove that problems in simple systems are solvable while problems in complex systems are difficult or even unsolvable, it would demonstrate that simple systems are within the grasp of rationality, while complex systems are beyond it.
Figure 1 Comparison of Time Complexity Growth Trends
Note: The chart was created using Microsoft Excel. It compares a polynomial time complexity, O(n³), with an exponential time complexity, O(2ⁿ). Once n reaches roughly 10, the exponential complexity overtakes the polynomial one. This can be seen as a watershed in computational complexity between simple systems and complex systems. Of course, in practice, this watershed is not a single point but a fuzzy region, and the boundary will shift with advances in computational methods. Nevertheless, such a watershed exists.
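The crossover described in this note can be reproduced with a trivial loop; the following minimal sketch assumes the same two growth functions as the figure.

```python
# A minimal sketch of the watershed in Figure 1: the smallest n >= 2 at which
# exponential growth 2**n overtakes polynomial growth n**3.
n = 2
while 2 ** n <= n ** 3:
    n += 1
print(f"2^n first exceeds n^3 at n = {n}")   # n = 10, since 2^10 = 1024 > 10^3 = 1000
```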
Many descriptions or definitions exist regarding simple systems and complex systems. From a mathematical perspective, the distinction lies in dimensions, linearity, coupling between components, emergence, and so on. However, complex systems are often viewed as systems generated from an exponential number of possibilities, thus roughly corresponding to exponential time algorithms, indicating that the difficulty of solving complex systems exceeds NP. In contrast, simple systems are generated from a relatively small number of possibilities, roughly corresponding to polynomial time algorithms, making them easier to solve. Therefore, we can use precise mathematical methods to distinguish between simple systems and complex systems, with their significant difference lying in computational complexity. This can be measured by computational complexity time. The computational complexity of complex systems is very high, making them difficult to solve or even unsolvable. Computational complexity times above NP represent long, unacceptable computation times or costs, which may even approach infinity. This is why complex systems are “beyond rationality.”
How Did Humans Survive Without Computers?
However, the problems in reality are not just mathematical issues; they are matters of winning or losing, of life or death. Without solutions, humanity could not develop or even survive. How, then, has humanity reached its present state, not only avoiding extinction but thriving?
Generally speaking, “complexity” can be understood as the challenge of selecting the best answer from among many possible answers; the more possibilities there are, the more complex the problem. The complexity of a problem primarily depends on its scale, specifically the number of variables, dimensions, and states involved; complexity arises from the combinations of these factors. When there are few variables, such as when a traveling salesman needs to visit only six cities, there are 720 possible route combinations. This is hard for the salesman to manage, but for a computer, finding the optimal route is not complex. However, if there are 48 cities, the number of possible route combinations becomes astronomical, 1.24E+61, making it impossible for a computer to find the optimal route “quickly.” When a cellular automaton model is one-dimensional with two states and a three-cell neighborhood, there are only 256 possible rules. But if the number of states increases to three, there are 7,625,597,484,987 possible rules (Wolfram, 2002, p. 60). If we move to two dimensions with at least nine cells in a two-state scenario, the number of possible rules skyrockets to 1.34E+154.
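These rule counts follow the standard formula for cellular automata, under which S states and an N-cell neighborhood yield S^(S^N) possible rules; a minimal sketch reproducing the three figures cited above:

```python
# A minimal sketch of the cellular-automaton rule counts cited in the text,
# assuming the standard count: with S states and an N-cell neighborhood there
# are S**(S**N) possible rules.
def rule_count(states: int, neighborhood_cells: int) -> int:
    return states ** (states ** neighborhood_cells)

print(rule_count(2, 3))               # 256 rules (2 states, 3-cell neighborhood)
print(rule_count(3, 3))               # 7,625,597,484,987 rules (3 states)
print(f"{rule_count(2, 9):.2e}")      # ~1.34e+154 rules (2 states, 9-cell 2-D neighborhood)
```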
Upon closer examination, the problems classified as NP or NP-hard are fundamentally exponential in nature; in this sense NP problems are contained within exponential-time problems. For example, the traveling salesman problem involves finding the optimal route among N! possible choices, and the number of possible rules for a cellular automaton is a combination of cell count (N), state count (S), and dimension (D), on the order of S^(S^N); all take exponential functional forms. We can also say that the difficulty of exponential-time problems is greater than that of NP problems. Another characteristic of exponential-time problems is how rapidly they accelerate: as the number of variables, states, or dimensions increases slightly, the algorithm's steps or running time grow disproportionately fast, far faster than increases in computing power can keep up with (Roughgarden, 2023, pp. 34-35). In the real world, with hundreds of thousands or millions of variables (or people, or nodes), and with states that are not merely binary but number at least four, most real-world problems are of extremely high complexity, falling at least into NP or exponential-time problems, which are essentially non-computable.
However, mathematics is a profession pursued by only a small number of people, and its elegant methods mainly save computation time on simple problems for the general public. When faced with numerous real-world problems that cannot be computed, how do people solve them? In fact, people have resolved these issues; otherwise, humanity would not exist today. They, too, use a kind of mathematics, namely the brute-force (exhaustive) search algorithm mentioned earlier. This can be regarded as calculation in the broadest sense, but it is certainly the clumsiest method, ranking at the bottom of the various methods mentioned above. One might ask why ordinary people do not seem to use brute-force search to solve problems. In reality, people do not deliberately test each possible option one by one; their actual method is random exploration. Randomly probing among astronomical numbers of possibilities amounts to a kind of exhaustive search, since the probability of repeating the same option is extremely low, although the search is not carried out in sequential order.
Moreover, people do not need to discover the optimal choice immediately; they only need to find a better option among two or more choices and apply it to similar actions in the future. They may also continue to explore, intentionally or unintentionally, and if they find a better option, they will replace the existing one with it. This random exploration is not limited to specific individuals. Anyone who discovers that another person's choice yields better results can learn and imitate it; this requires only the ability to distinguish the better option among two or more choices. Those who learn may be contemporaries, one's own descendants, or others' descendants. Thus, through generations of trial and error, people's choices improve over time. It is not just one person exploring randomly, but many individuals exploring simultaneously in different places. Since there is some correspondence between specific actions and specific outcomes, the choices of many people gradually converge and become similar; especially when individuals choose behaviors that do not harm each other but benefit all, a set of behavioral rules emerges that everyone follows—this is spontaneous order.
To adopt this model of choice, two questions must be addressed: how to determine what is “better,” and what motivates people to explore randomly. The first question hardly needs resolving. People judge by the outcomes of their choices, and these outcomes affect the interests of those making them. If one choice brings greater benefits, lower costs or higher gains, the individual will continue to make that choice. If they cannot recognize this benefit, natural selection will take effect: in a competitive environment, those who do not choose the better combinations will eventually disappear, while those who do will survive. Thus, regardless of whether individuals are aware of which choice is better, natural selection will favor the better choices.
Then, what motivates people to explore randomly? Consider the traveling salesman: when unable to use a computer to find the best route, he is in effect engaging in random exploration. He might follow traditional routes, but occasionally he takes a different path, with three possible results: it could be worse than the previous route, the same, or better. If it is better, he will continue to take that route in the future. Similar chance opportunities will arise again, and he will handle them the same way. Gradually, his routes improve and become more efficient. If the salesman and his company have a rule that the travel expenses saved by a newly discovered, better route are shared equally, both parties will have the motivation to keep exploring new, more economical routes.
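A minimal sketch of this retain-the-better exploration on a toy traveling salesman instance follows; the distance matrix and the random-swap rule are illustrative assumptions, not the author's model.

```python
# A minimal sketch of random exploration on a toy traveling salesman problem:
# keep the incumbent route, occasionally try a random variation, and retain
# whichever of the two routes is shorter. Distances below are hypothetical.
import random

def route_length(route, dist):
    return sum(dist[route[i]][route[(i + 1) % len(route)]] for i in range(len(route)))

def explore(dist, trials=10_000, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    best = list(range(n))                    # start from an arbitrary "traditional" route
    rng.shuffle(best)
    for _ in range(trials):
        candidate = best[:]
        i, j = rng.sample(range(n), 2)
        candidate[i], candidate[j] = candidate[j], candidate[i]   # a chance deviation
        if route_length(candidate, dist) < route_length(best, dist):
            best = candidate                 # the better route replaces the old habit
    return best

dist = [[0, 2, 9, 10, 7, 3],
        [2, 0, 6, 4, 3, 8],
        [9, 6, 0, 8, 5, 6],
        [10, 4, 8, 0, 4, 9],
        [7, 3, 5, 4, 0, 2],
        [3, 8, 6, 9, 2, 0]]
best = explore(dist)
print(best, route_length(best, dist))
```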
This is just a simple example. If we acknowledge that most problems in society are NP problems, exponential-time problems, or even problems approachable only by brute-force calculation, then people in reality use the random exploration method described above to resolve them. Some may ask: since this random exploration does not immediately find the optimal solution, how do people survive? In fact, survival does not require an optimal solution; generally, a better solution is sufficient. Here, people need not follow mathematical rules but rather the rules of natural selection. As long as the benefits they gain meet their energy needs, so that energy supply exceeds energy consumption, they can survive; if there is a surplus, they can reproduce. Of course, where competition exists, the outcome depends on whose “better solution” is superior. This is the competition for survival: a traveling salesman who discovers a better route will outperform one who takes a less optimal route.
Market and Spontaneous Order as Simple Solutions to Complex Problems
Taking the traveling salesman problem as a threshold, problems of higher complexity are even less likely to have optimal solutions found by computers, for example the market problem mentioned earlier. It is an exponential-time problem which, at the very least, could in principle be solved by brute-force search. However, the sheer number of possible combinations makes it practically impossible to find the best one that way. Therefore, in reality, people solve the problem of distributing M types of goods among N individuals through random exploration. This too is a form of brute-force search; people simply do not need to discover the optimal combination immediately.
An individual is not concerned about whether the resource allocation in society is optimal; rather, they care about whether their own welfare improves. When a person needs a certain good, they have several choices. One option is to rob or steal from others; another is to deceive or spread false information; a third is to use their physical advantage to force an “exchange” from others, i.e., coercive buying and selling; and a fourth is to exchange items they own with others’ consent. A person can randomly choose one of these actions, but the outcomes are distinguishable. They might temporarily believe that stealing or robbing has low costs, but such behavior is unsustainable, as others will eventually retaliate or become wary. If they persist in this behavior, they will ultimately disappear. Thus, repeated random trials will lead people to conclude that respecting others’ property rights and exchanging their own property is a better behavioral choice. This conclusion is also the result of many others’ random explorations, and they will eventually accept such rules. This is how property rights systems are formed. This represents the first stage of results from this random exploration.
Within the framework of property rights, people can continue to engage in random exploration while adhering to exchange rules—market rules. For example, if they have an item, they can try to take it to the street to exchange for other items, with the exchange rates being completely random. Of course, there are also detailed distinctions within this behavioral framework, such as differences in negotiation positions, abilities, and information availability, which can lead to different exchange prices. However, when many people are exchanging the same item, the prices they form through bargaining may differ, and they can learn from each other. If one person has a more advantageous exchange, others will adjust their strategies accordingly, using that as a basis for bargaining with sellers; otherwise, they can turn to others for purchases. Since lower prices are beneficial, a nearly uniform price gradually forms in the market for that item. Similarly, there are thousands of such items, each forming a market with its own price. Reasonable price comparisons will also emerge between markets. Thus, a pricing system is established throughout society. This represents the second stage of random exploration.
The market rules are similar to the incentives discussed earlier for the traveling salesman: the potential for profit encourages adherence to these behavioral rules and prompts random exploration. In reality, therefore, the search for optimal combinations is transformed into behavioral rules that incentivize this search. We can view market rules as a mechanism that facilitates computation; they do not directly compute the optimal combination in the market but drive people to randomly explore better behavioral rules and, under those rules, better behavioral choices. Through extensive random interactions among large groups, a set of equilibrium market prices gradually emerges (see Figure 2). Within this pricing system, any individual can use a very simple algorithm—addition, subtraction, multiplication, and division—to determine the quantities of goods they need within a given budget to maximize their utility. The situation for producers is similar, though perhaps slightly more complex.
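For illustration only, here is a minimal sketch of that last step, assuming equilibrium prices are already given: a hypothetical consumer with a fixed budget and constant per-unit valuations allocates spending by simple arithmetic (a deliberately crude rule that ignores diminishing marginal utility).

```python
# A minimal sketch of a consumer's arithmetic once market prices exist:
# rank goods by (assumed subjective value per unit) / price and spend the
# budget greedily. Prices, values, and the greedy rule are all hypothetical.
budget = 100.0
goods = {"bread": (3.0, 5.0), "milk": (2.0, 2.5), "cheese": (8.0, 9.0)}  # name: (price, value)

basket = {}
for name, (price, value) in sorted(goods.items(), key=lambda kv: kv[1][1] / kv[1][0], reverse=True):
    qty = int(budget // price)     # buy as many units of the best value-per-dollar good as fit
    if qty > 0:
        basket[name] = qty
        budget -= qty * price
print(basket, f"leftover budget: {budget:.2f}")
```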
Figure 2 Demonstration of Final Supply and Demand Selection Points and Trajectories
Note: This figure illustrates the results calculated and demonstrated using a MATLAB model created with GPT-4.5. It assumes that any consumer has their own utility function and will randomly explore beneath this function curve, while any producer has their own cost function and will randomly explore above this cost curve. Their random exploration areas are located in the triangular region to the left of the equilibrium point in the figure. In this area, the model randomly generates 40 points, and the consumer and producer each randomly select two points for comparison. The consumer retains the point with larger utility and discards the lesser one; they then randomly select another point to compare with the retained point, keeping the one with greater utility and discarding the lesser one, and so on. The producer follows a similar process, comparing based on their revenue and retaining the larger points while discarding the smaller ones. After 39 selections, both ultimately choose the equilibrium point where the utility function and cost function intersect, or a point close to equilibrium.
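A minimal Python sketch of the pairwise retain-the-better procedure described in this note follows (the original demonstration used a GPT-4.5-generated MATLAB model; the payoff function below is a placeholder assumption, not the paper's utility or cost function).

```python
# A minimal sketch of the selection procedure in the figure note: 40 random
# candidate points are compared two at a time, and after 39 comparisons only
# the point with the highest payoff survives. The payoff used here is an
# assumed consumer-surplus form, not the model's actual utility function.
import random

def retain_better(points, payoff):
    best = points[0]
    for challenger in points[1:]:        # 39 pairwise comparisons for 40 points
        if payoff(challenger) > payoff(best):
            best = challenger            # keep the better point, discard the worse
    return best

rng = random.Random(42)
candidates = [(rng.uniform(0, 4), rng.uniform(2, 10)) for _ in range(40)]   # (quantity, price)
surplus = lambda pt: pt[0] * (10 - pt[0]) - pt[0] * pt[1]                   # assumed payoff
print("retained point:", retain_better(candidates, surplus))
```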
History and economics have proven that this market equilibrium price guides people in rationally allocating their expenditures to achieve utility maximization under budget constraints. It also enables producers to determine their production quantities and varieties based on this pricing system, ultimately maximizing their revenues. From a societal perspective, when all producers and consumers make rational decisions within this market price system, the total quantity of M products will roughly equal the total demand of N individuals. In reality, there are likely hundreds of millions of types of goods (M) on Earth, with N being approximately 8 billion. From a mathematical standpoint, solving such a problem is fundamentally impossible, yet the market effortlessly resolves it.
This super-difficult problem has been solved simply by humanity through random exploration. The approach is not traditional mathematical computation, yet it is still computation in the broad sense: brute-force search through random exploration. The method people use is to promote this random exploration through an incentive mechanism, manifested as a behavioral rule, the market rules. These include a simple stipulation: a transaction can occur and be deemed legitimate only when both parties agree. The rules do not directly provide the optimal answer for resource allocation but encourage people to engage in random exploration, ultimately leading to the formation of equilibrium market prices. This supplies an important parameter for this vast problem, namely price, contributing significantly to its solution. With price as a parameter, each consumer or producer can then determine their own demand or production quantities, greatly reducing computational complexity and making the problem much easier.
The allocation of fiscal funds requires measuring and comparing the utility of public goods. The overall utility of a public good is not something any individual can directly perceive; each individual can only draw analogies to it from their evaluation of the utility of private goods. Economists express this with a relatively strict formula, stating that the marginal utility of one unit of a public good should equal the marginal utility of one unit of a private good, and that the overall utility of a public good is the sum of all individual utilities. In an ideal scenario, all stakeholders would participate in voting to determine whether and in what form a public good is provided. In practice, this involves comparing two choices: to provide or not to provide, or to provide option A or option B. This is essentially a comparison of utilities. As will be argued below, artificial intelligence cannot accurately simulate the utility of private goods, and judgments about the utility of public goods are even further beyond its capabilities. Moreover, even if AI could assess the utility of public goods, finding the optimal allocation scheme among multiple public projects within a given budget resembles the packing problem among NP problems, which is difficult to solve (Roughgarden, 2023, pp. 47-48). In voting, each person makes a vague but serviceable comparison in their mind between the utility of a particular public good and the utility of the same unit of a private good. After a long period of trial and error, the results of their voting will approach the optimal outcome.
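To see why allocating a budget across public projects resembles the packing (knapsack) problem, here is a minimal brute-force sketch; the project names, costs, and benefits are purely hypothetical, and the point is only that the search space doubles with every additional project.

```python
# A minimal sketch of budget allocation as a packing (knapsack) problem:
# brute-force search must examine every one of the 2^n subsets of projects.
# Costs and benefits are hypothetical illustration values.
from itertools import combinations

projects = {"road": (40, 60), "school": (35, 55), "park": (20, 25), "clinic": (30, 45)}  # cost, benefit
budget = 80

best_value, best_set = 0, ()
for r in range(len(projects) + 1):
    for subset in combinations(projects, r):      # 2^4 = 16 subsets here; 2^n in general
        cost = sum(projects[p][0] for p in subset)
        value = sum(projects[p][1] for p in subset)
        if cost <= budget and value > best_value:
            best_value, best_set = value, subset
print(best_set, best_value)   # ('road', 'school') with benefit 115
```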
The market is just one of many customary laws in human society. Besides public-finance allocation procedures, other institutions such as gift-giving rules, family order, village customs, legal regulations, religious norms, and so on share exchange characteristics similar to those of the market. Their formation also resembles that of markets, gradually converging into widely accepted behavioral rules through people's random exploration. Their common feature is the selection of the better of any two possible behaviors or things, while remaining open to new choices. These behavioral rules embody principles similar to those of the market, in which people are always engaged in exchange, albeit exchanging behaviors rather than goods (Sheng, 2021). Exchange follows the principle of equivalence, with the rule requiring the agreement of both or multiple parties. Markets and other customary laws together constitute the rules of social order, under which people produce, live, and transact. This is the essence of human society: using simple behavioral rules, humanity has solved social problems that appear mathematically intractable, allowing human society to exist and develop.
Can Artificial Intelligence Replace Human Exploration of Spontaneous Order?
This question asks whether artificial intelligence could discover a method completely different from spontaneous orders such as the market, for example by directly calculating each individual's instantaneous demand at any location and automatically satisfying it. As discussed earlier, many social problems even harder than NP-hard problems cannot be solved by mathematical algorithms, so AI cannot replace humans in addressing the problems that the market solves as a spontaneous order. This conclusion should be quite certain. Broadly speaking, AI cannot replace the full range of institutional rules with the character of spontaneous order, and it will never be able to do so. However, some may ask whether AI could imitate humans, discover rules similar to spontaneous order, and follow these rules in conducting random exploration to find better or optimal solutions to these social problems.
This requires AI to meet two conditions. First, it must be able to evaluate the outcomes of any two actions. Second, it must have the motivation for random exploration. AI can satisfy the first condition only partially. It can meet it in the sense that people can input a relatively simple “value function” into AI, allowing it to judge the results of two randomly chosen actions; this has been achieved in successful AI projects such as image recognition and board games. However, it cannot fully satisfy the condition, because the value function would have to closely mimic human value judgments. This involves assessing the utility of different goods. Utility is a comprehensive judgment that draws on multiple dimensions of perception, such as vision, taste, smell, touch, hearing, proprioception, internal bodily sensations, psychological feelings, cultural preferences, and combinations of these, and the relevant value functions would have to be input into the AI beforehand.
Even assuming the technical issues of perceiving various sensations were resolved, this would still be impossible. The value function is a utility function, and not only is human utility derived from a combination of multidimensional sensations, but each sensation also has multiple levels or scales and can even change continuously. If there are ten dimensions of sensation, with ten levels each, there are 10^10 (1E+10) combinations, and the utility of each declines as the quantity consumed increases, meaning that at every consumption level there are different utilities and combinations. With finer subdivisions, the number of possible combinations could reach 1E+100 or more, and this is still a rough estimate. Moreover, these utilities also depend on environment, time, and physical condition. For example, one needs an umbrella only when it rains, feels hungry only near mealtime, and requires cold medicine only when sick. Strictly speaking, therefore, human utility is not replicable. In the language of epistemology, human utility is “embodied”; to imitate it perfectly would be equivalent to creating a real person, which is fundamentally impossible. Advocates of modified computationalism believe that once they include human sensations in their calculations, the problem is solved. However, computational complexity theory tells us that such a vision can never be realized.
As for the costs incurred by individuals or enterprises, these are themselves derived from utility. People work to earn income, which means sacrificing leisure time; their judgment of costs is essentially an evaluation of this sacrificed time, and the value of that time lies in its utility. Work prevents one from traveling or enjoying art. For humans, all costs can be measured in time, that is, in utility, so costs represent negative utility. Their satisfaction with work compensation likewise depends on whether the rewards can purchase goods that satisfy their utility, which is again a utility judgment. Therefore, for AI to obtain the value function for production or service costs is as difficult as determining utility; the complexity makes it effectively unsolvable.
Furthermore, exploring the pricing system across the entire market requires not only comparing the utilities of two specific goods but also comparing the utilities of any two goods. The sheer volume of goods necessitates even more dimensions of perception for evaluating their utilities, leading to exponentially increasing utility combinations. Thus, just the value function for a single individual would require thousands of variations, which is currently beyond the capabilities of computers. In fact, even if these value functions could be constructed, no matter how intricate they are, they remain imitations and do not represent real human utility. Consequently, AI cannot replace humans in evaluating the relative merits of any two randomly selected choices. Even human value functions can be flawed because humans may break down the overall assessment of natural selection into local, short-term utilities, which may not be correct. This is not a concern because human errors will be corrected by natural selection; those who make mistakes will suffer losses or perish. However, AI’s value functions, even if they appear realistic, will not have natural selection to correct them.
Now, regarding the motivation issue. At first glance, this may not seem problematic. Computers do not require motivation; they only need human commands. However, following commands is fundamentally different from having intrinsic motivation. What self-interest does a computer have? Without human commands, can AI have the motivation to explore randomly? Computers have silicon-based bodies, and their energy source is electricity. They likely do not experience discomfort (hunger, pain, etc.) due to a lack of energy, nor do they possess the desire for bodily expansion. Random exploration of customary laws such as the market does not bring any benefits to themselves. Therefore, they lack the motivation to engage in such random exploration. Could humans program them with an incentive-based system to motivate random exploration? This might work but would only be effective in specific areas, while areas not covered by the program would remain unaffected. Human utility is comprehensive; they naturally explore advantageous domains without needing external prompting.
Moreover, simulating the spontaneous order of a market involves more than an individual comparing the utility of any two goods; a market cannot be formed through just one or two interactions. It must emerge from prolonged interactions among thousands of consumers and producers. Even if the previous two issues could be resolved, AI would still need to construct utility comparisons among thousands of individuals over thousands of goods, and likewise cost comparisons among thousands of enterprises over thousands of goods, while also facilitating competitive negotiations between consumers and enterprises to form an equilibrium pricing system. This would overwhelm even an AI of vast capacity, and it is conceptually incoherent: market interaction involves countless independent individuals, each with different utility and cost functions formed from different combinations of sensory dimensions, whereas AI typically operates as a single entity with a single logic and value function. If AI were divided into many individual units, it still could not simulate independent individuals, because their structures, logic, and value functions would be the same. In a game of AlphaGo, for instance, both sides use the same value function. If thousands of different value functions were required, the complexity would be unattainable. Most importantly, even then it would be merely a simulation.
AI can defeat world champions at Go but cannot evaluate two products in the market. This illustrates Moravec's Paradox: AI can accomplish tasks that humans find difficult, yet struggles with tasks that humans find easy. This is likely because tasks difficult for humans require rational abstraction and reasoning, which are easier to formalize and thus easier for machines to execute. In contrast, the human senses, evolved over millions of years, and the recognition and synthesis of those senses, are largely carried out subconsciously. The information processed by these senses and their combinations grows exponentially with increasing dimensions and quantities, creating complex problems that are difficult for computers to solve. Human languages are abstractions of specific sensory experiences, serving to reduce dimensionality and greatly simplify the problems requiring computation.
Compared to human language, individual human utility is multidimensional, continuously changing, varying with time and location, and unique to each person. When AI attempts to simulate human utility, it must increase complexity; when the dimensionality of perception exceeds a certain point, its complexity increases exponentially, leading to what is known as the “curse of dimensionality,” which exceeds the computational capabilities of computers. Yann LeCun stated that while we can train a system to estimate the probability of the next word in a sentence, “we cannot train a system to predict what will happen next in a video,” because “the physical world is far more complex than language” (2025). Since AI cannot simulate human utility, it cannot compare it, making it impossible to form spontaneous order through random exploration. Therefore, it cannot imitate spontaneous order or solve the problem of distributing M products among N individuals, and thus cannot replace spontaneous orders like markets.
To date, most successes of AI have been based on the language formed through the random exploration of many individual humans. On this basis, computers still have advantages in second-order choices (i.e., making choices based on the results of others’ choices). However, they cannot replace humans in random exploration. Thus, even though human random exploration may seem simple, AI cannot imitate it, let alone replace it. Moreover, the spontaneous order formed through the random exploration of many individuals over a long time is something AI cannot match.
Why Do Many People Believe That Artificial Intelligence Can Replace Traditional Human Rules?
History shows that humans have several times exaggerated the role of rationality due to its development and minor achievements, extending its application to inappropriate areas. This error has been recognized and pointed out by wise individuals among humanity, as previously cited from Hayek’s perspective.
The first reason for this error is that people view the victory of rationality in simple systems as evidence of its omnipotence, applying rationality to complex systems without distinguishing the significant differences between complex and simple systems. Many thinkers have pointed out that human rationality is limited, stemming from the fact that humans are finite individuals with limited energy, time, attention, memory capacity, and cognitive ability. The distinction between simple and complex systems, in terms of mathematical description, relates to the issue of computational time. In recent decades, mathematicians have increasingly focused on the P vs. NP problem, which addresses complexity and computability, beginning to differentiate the computational difficulty between simple and complex systems. Generally, complex systems tend to have exponential difficulty, with their computational challenges surpassing NP difficulty, making them hard to compute and impossible to solve using human rationality. This is also a mathematical expression of the notion of “limited rationality.” The general public does not understand this distinction and easily misinterprets the success of rationality in simple systems as applicable to complex systems; many elite intellectuals, unaware of the conclusions from computational complexity research, also make similar mistakes.
The second reason is that those who claim they can rationally design complex systems can, in practice, simplify the problem to reduce its complexity to a computable level, thereby “proving” that their rational designs are feasible. When mathematicians cannot solve NP problems, they devise approximation methods that trade accuracy for tractability, not seeking optimal solutions but near-optimal ones. This may involve reducing problem size, lowering problem dimensions, breaking problems into smaller parts and then connecting the local solutions, or conducting local random exploration; hence the so-called greedy algorithms, local search methods, and so on. In practice, the operation of simplifying complexity is not so “mathematical.” During the planned-economy period in mainland China, the planning authorities planned for only a little over 400 types of products (Cheng, 2016, p. 53). This contrasts starkly with the present Taobao/Tmall platform, which lists 2 billion product items, or about 35 million distinct products after consolidating different models. The planning authorities drastically reduced the variety of goods in order to simplify product types. A straightforward example: clothing at the time was predominantly blue, with green and white as alternatives, making the populace look like a swarm of “blue ants.”
The third reason is that people's general perception of complexity is entirely different from actual complexity. As Moravec's Paradox reveals, “Computers can play chess and solve mathematical problems, but we cannot make them perform simple physical tasks, such as manipulating objects or jumping, which animals can easily accomplish.” “Computers can easily handle discrete objects and symbolic spaces, but the real world is too complex; a technique effective in one situation may completely fail in another” (LeCun, 2025). Breakthroughs in AI in games and mathematical problems lead the general public to believe that AI should easily handle tasks that humans find simple, such as household chores, transactions, business management, and political affairs, and even do them better. If AI can defeat Lee Sedol, why can't it toast bread or pour coffee? This fosters the illusion that AI will soon replace humans in leading society.
The fourth reason is that people tend to overestimate the role of human rationality while underestimating the role of spontaneous order. Humanity's success in science leads people to imagine that rationality will play an even greater role in human affairs, as seen in the emergence of planned economies. Rational behavior is conscious; the successes of rationality are something humans consciously reinforce and recognize, and they are also an important way for people to prove their own value. In contrast, spontaneous order operates imperceptibly, forming and functioning silently without people's awareness. The successes achieved under spontaneous order are attributed not to the order itself, which people do not recognize, but to their own efforts. Thus, humanity repeatedly errs in underestimating spontaneous order and exaggerating human rationality.
The fifth reason is that some individuals, in pursuit of commercial or political gain, exaggerate the role of rationality or artificial intelligence in order to create public opinion favorable to their vested interests or power. Given the psychological weaknesses described above, the general public is easily swayed by such arguments. This has been evident in the current hype about AI's imminent surpassing of human capabilities, which has brought some AI companies excessive profits. Historically, planned economies likewise gained power for certain political groups under the guise of “scientific theory.” Recently, Musk's “Department of Government Efficiency” took over government departments, again under the banner of “artificial intelligence.” The rhetoric surrounding AI has convinced many that its power is boundless, superior to the constitutional system and political traditions established in the United States over more than two centuries, and better than Congressional decision-making; thus, such clearly unconstitutional actions have been carried out.
Conclusion: The Appropriate Role of Artificial Intelligence in Spontaneous Order
This article starts from the perspective of computational complexity, asserting that the problems spontaneous orders (like markets) can solve are more difficult than NP problems. Since NP problems indicate issues that are difficult or even impossible for computers to solve, and because NP problems are already abstractions and simplifications of human problems, many real-world issues are even more challenging than NP problems. Therefore, artificial intelligence cannot solve a significant portion of real-world problems, rendering it incapable of achieving so-called “general” intelligence. Consequently, it cannot replace spontaneous order in addressing resource allocation and product distribution issues, nor in resolving conflicts of rights and interests among individuals.
In reality, people adopt random exploration methods, evaluate the outcomes of these explorations, retain better results, and discard poorer ones; they continue this process iteratively. This method not only allows the parties involved to explore but also enables others to observe and imitate better outcomes. Thus, people continuously discover better behavioral choices. Due to the relationship between good outcomes and specific behaviors, most individuals tend to gravitate toward these better behaviors, ultimately forming a rule, which constitutes spontaneous order.
The formation of spontaneous order requires two conditions from individuals. The first is the ability to discern the merits of any two things or actions; the second is the motivation to do so. People possess utility judgments regarding various items or actions, which are formed from a combination of their sensory perceptions—such as vision, hearing, taste, smell, touch, proprioception, psychological states, and cultural preferences—each contributing to different combinations. Based on utility, individuals can also form cost judgments. The interactions and negotiations among numerous different individuals effectively communicate and compare their utility judgments, ultimately reaching agreements. This is what establishes prices. The numerous transaction prices influence each other and eventually converge to a roughly unified price. The market prices of thousands of different goods ultimately form a pricing system. People make purchasing or production decisions based on prices, using simple algorithms to solve extremely complex issues from a societal perspective.
This is a rather clumsy and basic algorithm, a form of brute-force search through random exploration, yet it easily solves problems that are mathematically intractable. AI, however, cannot replicate it. Although AI can be given or can generate simple value functions and compare any two items, it cannot generate a utility function that approximates human utility, because utility is a combination of multidimensional sensations and psychological states whose generation is far more complex than language; furthermore, comparing any two items requires the interaction and comparison of the different utility functions of thousands of individuals, which overwhelms AI. As the dimensionality of perception increases, complexity grows exponentially, far outpacing the growth of computer processing power. Therefore, AI is not only currently unable to replace or imitate spontaneous order; it will never be able to do so.
Overall, human cognition of the universe can be roughly divided into three domains and corresponding three stages. The first involves random exploration in a state of complete ignorance; the second involves making rational choices based on experiences gained from random exploration; and the third requires letting natural selection make judgments. Artificial intelligence is most suited to operate in the second domain, or second stage. In this stage, humans have gained experience through random exploration in the first stage and have abstracted and simplified it into language, facilitating rational formalization and analysis, affirming and generalizing superior experiences for easier dissemination and learning; however, the correctness of this approach still needs to be judged by natural selection. Since AI does not possess a human-like body and cannot intricately mimic human utility, it is fundamentally impossible for AI to replace humans in random exploration or to consequently form spontaneous order. Natural selection remains the ultimate judge of all things, including humanity, and AI cannot hope to match that. Thus, the appropriate role for AI is to process low-dimensional information that humans have already simplified. In doing so, it may achieve results that surpass human capabilities.
Nevertheless, AI has achieved significant success. This is primarily reflected in its ability to make judgments and reasoning based on language, in formalizable domains (such as games and theoretical explorations), and in problems that can be computed and solved in polynomial time (i.e., simple system problems). In these areas, AI serves as an excellent assistant to humans. When people follow spontaneous orders (such as markets or parliamentary procedures), AI can provide specific individuals with recommendations. For instance, it can offer judgments on a budget to members of Congress or provide individual traders in the stock market with assessments of stock fluctuations, helping people make better judgments on comparatively complex matters. AI will always be the best assistant to humans, not the decision-maker in human choices.
References
Cheng, Liansheng, Trailblazing: The Planned Economy in China, Beijing: CPC History Press, 2016.
Chen, Zhiping; Xu, Zongben, Computer Mathematics, Science Press, 2001.
Economic Times, “Is Elon Musk-led DOGE using AI to help rewrite a large chunk of the federal government's computer systems?” Economic Times, February 6, 2025.
Fu, Yuxi, Theory of Computational Complexity, Tsinghua University Press, 2023.
Hayek, F. A. (2000a), Law, Legislation and Liberty (Volume 1), Beijing: China Encyclopedia Press, 2000.
Hayek, F. A. (2000b), Law, Legislation and Liberty (Volumes 2 and 3), Beijing: China Encyclopedia Press, 2000.
Hayek, F. A., The Sensory Order, Chicago: The University of Chicago Press, 1976.
Huang, Qiwen; Xu, Ruchu, Introduction to Modern Computational Theory, Beijing: Science Press, 2004.
Aaronson, Scott, Quantum Computing Open Course: From Democritus, Computational Complexity to Free Will, Beijing: People's Posts and Telecommunications Press, 2021.
Sheng, Hong, “The Effectiveness of Customs,” Research on Institutional Economics, Issue 1, 2021.
Roughgarden, Tim, Algorithms Illuminated, Part 4: Algorithms for NP-Hard Problems (electronic edition), Beijing: People's Posts and Telecommunications Press, 2023.
Wolfram, Stephen, A New Kind of Science, Wolfram Media Inc., 2002.
LeCun, Yann, “Current Large Language Models Will Ultimately Be Eliminated,” Weicao Zhiku, March 30, 2025.
LeCun, Yann, The Road of Science, Beijing: CITIC Publishing Group Co., Ltd., 2021.
YouCao, “The Six Young Generals Seize Power, Musk Rewrites the Power Game in Washington,” Village General, February 7, 2025.
Zeng, Jijun; Liu, Ye, Kant's Wisdom, Beijing: China Film Press, 2007.