
Capitalism and Greedy Algorithms

Capitalism: you hate it, you love it, you love to hate it, and you hate to love it. As the dominant, or at least most successful, economic system of the last 3-5 centuries (depending on how you want to define capitalism), it has been an integral part of the socioeconomic fabric for innumerable successes and tragedies. For almost any take you might conceive someone having on capitalism, if you look around enough, you’ll likely find someone has already written about it (maybe even this one!).


A common trope, of course, is the “greedy capitalist,” the evil banker or industrialist or business owner ruthlessly exploiting those beneath them for their own profit. Gordon Gekko infamously lauded the goodness of greed, kicking off a generation of bloodthirsty blue-shirted hostile takeover artists who saw economics in brutal Darwinian terms.

 

Blue shirt + white collar + power tie + suspenders + gold Cartier = hide ya assets, hide ya balance sheet, hide ya sales projections, cause they takin over e'rybody's companies out here


This post is not about that kind of greed, or that kind of capitalism for that matter. Here, we’re going to look at the ways capitalism resembles what we call in computer science a “greedy algorithm.” And, once we understand that, we will talk about why capitalism has had so many historical successes, why it has nonetheless had major failures, and what kinds of improvements or alternatives might make it better.


First, we have to define “capitalism” and “greedy algorithms.”

Capitalism: Solving the Problem of Human Economic Activity

Capitalism has a long history, evolving (arguably) from feudalism and mercantilism as the role and importance of private property in the structure of Western society developed. Adam Smith published The Wealth of Nations in 1776 (Murica!), laying out the foundational model for how capitalist economies worked. Modern capitalism really got going in the 19th century as the Industrial Revolution took off, and America’s recovery from existential capitalist shocks such as a series of panics in the 1800s and the Great Depression seemed to demonstrate its resilience and power. The collapse of the Soviet Union marked, in political if not economic terms, capitalism’s victory over its chief ideological rival, communism (well, that’s the story we tell ourselves, even if it’s not quite accurate).


But this is not a history lesson, and I am not a historian. You can find any number of books about the history of capitalism for those kinds of details. As a computer scientist, my primary skillset is framing problems rigorously and figuring out algorithmic ways to solve them. So, if we do that, what is capitalism?


At its root, capitalism is a solution to a problem. Solutions are best understood through their problems. Thus, to understand capitalism, we need to understand what problem it is trying to solve: coordinating human economic activity.


Obviously, “coordinating human economic activity” is an even broader topic of study than just the history of capitalism, maybe so broad it’s essentially formless. So, we’re not going to talk about its details or history. Instead, we’re going to simply establish what the “problem of human economic activity” is.


The problem of human economic activity (PHEA) is simply this: how do we commodify our needs and desires?


There are three important components to this problem: “commodify,” “needs,” and “desires.” By “commodify,” we mean create some kind of mutual, structured understanding of how things can be valued and exchanged (we mean “commodify” purely descriptively and without judgement, not in the Marxist use of the term). There are many ways to do this, of course, but any solution to PHEA must have a proposal for how to commodify things. Sometimes it might be pretty loose. For example, in a simple bartering system, the individual participants in a transaction personally assign whatever value they want to something, then exchange it however they want. Other times it might be pretty strict; perhaps an authoritarian system would assign values from some central authority, and only permit transactions through a centrally run clearing house. There are even solutions whose proposed form of commodification is the lack of commodification: in a nihilist-anarchist system, maybe the only form of valuation and exchange would be through the use of persuasion or force.


The next two components, “needs” and “desires,” are closely related. So closely related, I almost didn’t separate them, because in many systems, there is no real distinction between them. Is a cheeseburger a need or a desire? On one hand, it’s food, and everyone needs food. On the other hand, it’s a rather luxurious form of food (“beef and dairy, wow you must be rich” says the weaver from 1785), and you could certainly survive with less. However, I decided to separate them into two classes, because some systems do try to draw a distinction between them. In a communist system, for example, it might be decided that anything that is classified as a “desire” is inherently unfair unless everyone in the society first has their “needs” met, and then also receives a roughly equal amount of their “desires.”


Using this definition of PHEA, we can easily see how capitalism is a proposed solution. How do we commodify our needs and desires? By supporting a strong concept of private property and creating free markets where they can be exchanged. With everything we might need or desire encapsulated within private property, we have an obvious, universal means of obtaining and exchanging anything: using the means available to you in the free market, convince whoever owns the thing you want to give it to you.


By this mechanism, capitalism also dodges the problem of needing to draw a distinction between needs and desires. One person’s need is another’s desire, and that’s fine, because it’s all still just private property. And to ensure efficient, accurate commodification, we make sure all transactions happen freely, openly, and honestly on the market. Whenever private property changes hands, others see it, and it helps them understand the value of the private property they and others have. Every exchange on the free market, by definition, is fair, because no one else proposed an alternative transaction that one party preferred more than the transaction they accepted.


What could possibly go wrong?

Greedy Algorithms: Solving Hard Problems

In computer science, we anthropomorphize a class of algorithms by describing them as “greedy.” The algorithms are not, of course, literally greedy, because they don’t have emotions (nor do we, really, but that’s a whole other story). The algorithms simply use a tactic which strikes us as greedy to do something very difficult: solve hard problems.


Before we say more about greedy algorithms, which, naturally, are a kind of solution, let’s talk about the problems they solve first (again, solutions are best understood through their problems). One of the most important and interesting subfields of computer science is complexity theory, which is essentially the formal study of how hard different problems are.


Some kinds of problems are really simple. For example, suppose I asked you to solve this problem: “Given a letter of the alphabet, tell me the next letter of the alphabet.” This is an easy problem for a typical adult to solve, requiring just one step to solve it: you think of the next letter. We say this kind of problem can be solved in “constant time” because, no matter what letter I give you, it always takes the same number of steps to solve. Even if I make the problem a bit harder, like asking for the fifth letter after rather than the next letter, the solution is still in constant time. You might have to count letters on your fingers or say the alphabet in your head (which, of course, an actual computer wouldn’t need to), but it still takes you the same number of steps to answer the question, no matter what letter I give you. The optimal solution to the problem is still found in constant time, so we say the problem can be solved in constant time (or, colloquially, we also say the problem is constant time).
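To make that concrete, here’s a minimal Python sketch of the fifth-letter version (the function name is mine, not standard). Notice that nothing about the work depends on which letter you start from:

```python
def fifth_letter_after(letter):
    """Return the letter five positions later, wrapping around the alphabet."""
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    return alphabet[(alphabet.index(letter) + 5) % 26]

print(fifth_letter_after("a"))  # f
print(fifth_letter_after("y"))  # d (wraps around)
```

However you call it, the work is bounded by the fixed size of the alphabet, which is exactly what “constant time” means.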


Now, here’s a harder problem: “Given a list of words, tell me all the words that have the most copies of the letter E.” This is also a relatively simple problem to solve, but it’s not as easy as just telling the next letter of the alphabet. You have to count up how many E’s appear in each word, then pick those words with the most copies. Crucially, you have to look at each word to count up the E’s. If I give you a list of 10 words, you have to examine 10 words. If I give you 10 million words, you have to examine 10 million words. Since the difficulty of the problem scales linearly with the number of words I give you, the problem requires “linear time” to solve. And linear time, we can see, is associated with more complex problems than constant time.
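A sketch of that counting procedure in Python (word list invented); the pass over every word in the list is what makes it linear:

```python
def words_with_most_es(words):
    """One pass to count the E's in every word, one pass to keep the winners."""
    counts = [(word, word.lower().count("e")) for word in words]
    best = max(count for _, count in counts)
    return [word for word, count in counts if count == best]

print(words_with_most_es(["tree", "greedy", "cat", "eel"]))  # ['tree', 'greedy', 'eel']
```

Double the length of the list, and you double the counting work: linear time.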


The complexity of problems turns out to form a wide array of classes. For example, suppose I gave you a list of words and asked you to sort them alphabetically. This problem can be shown to be solvable using what we call “log-linear time,” meaning the amount of time it takes to solve is proportional to the product of (a) the number of words and (b) the logarithm of that number. Or, suppose I hand you a list of alphabetically sorted words, and ask you to determine if a given word is in the list. This problem can be solved in “logarithmic time” using an algorithm called binary search. There are lots and lots and lots and lots of problems people have studied like this; it’s what people are actually doing when they say they are “building an algorithm” (like, if you want to zoom in and enhance).
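Both of those show up as one-liners in Python’s standard library; here is a small sketch (word list invented) using the built-in sort and the `bisect` module’s binary search:

```python
import bisect

words = ["pear", "apple", "mango", "kiwi"]
words.sort()                           # log-linear time: O(n log n) comparisons
i = bisect.bisect_left(words, "kiwi")  # logarithmic time: binary search
print(words, i, words[i] == "kiwi")
```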


For all of the above problems, the solutions we described have one thing in common: they can ultimately be solved in an amount of time that is some polynomial function of the size of the input. “Linear time,” “log-linear time,” “constant time,” and even things we haven’t mentioned like “n-squared time” or “n to the one-hundredth power time,” are all polynomial-time solutions. Any problem which can be solved in polynomial time is said to have an “efficient solution,” because a polynomial time solution scales in a reasonably manageable way as the input grows. These problems are so important we have a name for them: “P.” Problems with efficient (aka polynomial time) solutions belong to the set P.


There are also problems that may or may not have solutions requiring polynomial time, but for which we can check if a solution is correct in polynomial time. We call these problems “NP” (note: P is a subset of NP because of that “may or may not” bit). For example, suppose I gave you a really large number (like, many billions) and told you it was the product of two prime numbers. If that's all I tell you, it would take you a long time to find those two prime numbers. But if I also gave you the two prime numbers, you could easily check whether their product equalled the number I gave you at first (RSA cryptography is in fact based on this property). This dynamic is what places a problem in NP: even if we don't know an efficient way to generate a solution, we do know an efficient way to verify a solution.
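Here is that asymmetry in miniature, with primes I picked for the example: checking a claimed factorization is one multiplication, while recovering the factors from the product alone would take a long search.

```python
def verify_factors(n, p, q):
    """Verifying a claimed factorization is a single multiplication."""
    return p * q == n

# 136117223861 = 104729 * 1299709. Finding those factors from the product
# alone takes real work; checking a proposed pair is instant.
print(verify_factors(136117223861, 104729, 1299709))  # True
print(verify_factors(136117223861, 104729, 1299711))  # False
```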


Going further, there are problems in NP for which we know of no efficient (aka polynomial time) way to find a solution at all. The hardest of these are called “NP-complete”: they are, in a precise sense, the hardest problems in NP, because if you ever found an efficient solution to any one NP-complete problem, you could use it to efficiently solve every problem in NP. Note that, like everything in NP, a proposed solution to an NP-complete problem can still be verified efficiently; it’s finding one in the first place that’s hard. All known algorithms for solving NP-complete problems are worse than polynomial time (on a normal computer). For example, an algorithm may require “two raised to the n-power time” to solve the problem, which, eventually, is worse than any possible polynomial time solution.


Importantly, notice I say that we know of no polynomial time solutions to problems that are NP-complete. We don’t actually know for sure none exist. It’s possible no one has been smart enough yet to figure one out. However, there is a long-standing, well-founded, mostly accepted assertion that there exist problems in NP which have no polynomial time solution. But it is not yet proven, one way or another, whether this assertion, formally written as “P ≠ NP,” is correct. Incidentally, “P ≠ NP” is quite literally a million-dollar question: it is one of the Clay Mathematics Institute’s Millennium Prize Problems, and if anyone ever proves or disproves it, they will win a cool million dollars.


NP-complete problems are fascinating, but for people who build stuff, they are also annoyingly common. One of the most famous is the Traveling Salesman Problem (TSP). In this problem, a salesman has a list of cities he has to visit for sales calls, and he wants to do the least amount of driving to do it. He wants to find the shortest route to drive that visits all the cities. (Strictly speaking, the yes/no version, “is there a route shorter than X?”, is what’s NP-complete; finding the shortest route itself is at least that hard, or “NP-hard,” but the distinction won’t matter for us.)


On its surface, this problem seems simple. Just start at the first city on the list, then drive to the one closest to that, then the one closest to that, etc., etc., until you’re done. However, it turns out this naïve solution can go horribly wrong. Let’s look at an example. Suppose this map shows the list of cities the salesman has to visit, starting from city A:



If we follow our naïve algorithm, what happens? Well, the first step is easy: go to B.



And the next step looks pretty straightforward too: head to C.



Now, we will see the greedy algorithm make the salesman do something he will later regret. City E is the next closest city, so he goes there.



From E, it’s an easy hop up to D.



And finally, our salesman makes the trip down to the last city, F.


 

Where was the mistake? Well, if you look at that route from D to F, you’ll notice the salesman has to go right back past city E. Certainly, if he were smart enough, he could have gone to D from C instead of going to E, and then just swung by E on his way down to F. While the greedy algorithm found a decent solution, it could have done better. This turns out to be a common pattern with greedy algorithms: they find good but not optimal solutions to hard problems (except in trivial cases).


So, as we can see, the naïve approach doesn’t always work. After rigorous study of this problem, no one—like, no one, not the smartest person who ever lived or anyone else—has found an efficient (aka polynomial time) solution to TSP. Our best algorithm takes slightly worse than exponential time to find a solution: the amount of time it takes to solve the problem increases (slightly worse than) exponentially with the number of cities. Roughly speaking, if you want to add a tenth city, you have to double the work it would take to figure out nine cities. And adding an eleventh city doubles the time again. Twelfth city? Double it again. Practically speaking, problems like this are very difficult to solve in any kind of real world system (unless, maybe, you have a powerful quantum computer).
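To get a feel for the blowup, here’s a quick calculation of how many distinct round trips exist through n cities, fixing the starting city and treating a route and its reverse as the same tour, which gives (n − 1)!/2 tours:

```python
import math

# Distinct round-trip tours through n cities: (n - 1)! / 2
for n in range(4, 11):
    print(n, math.factorial(n - 1) // 2)
```

By 10 cities there are already 181,440 distinct tours to compare; brute force is hopeless long before you reach the size of a real delivery route.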


These problems don’t plague salesmen alone. For example, robots assisting with fulfilling orders in a warehouse have to solve problems like this: “Given a list of orders containing items scattered around the warehouse, what’s the best route to take to collect them?” In, say, a huge Amazon warehouse processing tens of thousands of orders an hour with hundreds or thousands of robots, there are a lot of exponentially difficult problems to solve, all while preventing robots from crashing into each other too.


One could imagine some genius central control program, planning and coordinating an efficient dance involving all of the robots, making sure they each follow the shortest path to collect all their items, and delicately timed so they never crash. But imagine how difficult that problem is to solve! Thousands of overlapping traveling salesmen, each needing efficient routes, and each needing to be timed and routed to avoid getting too close to each other. And if a robot breaks down, the central controller needs to send in an extraction team to remove the damaged robot, adjust all the other robots’ paths to avoid it, and re-route a robot or two to finish up whatever work the broken robot didn’t get to. This is an extremely daunting set of overlapping NP-complete problems and contingency plans.


So, of course, this is not how the robots work. There is no central master planning everything for all robots. Implementations vary, but all of them rely to some extent on decentralized robots making their own pathfinding and anti-collision decisions. Each robot might be provided with a set of items to collect by a central planner, but it’s up to the robot from there. This federated decision system is also a form of a naïve algorithm for solving a hard problem. Rather than explicitly determining a globally optimum solution, individual agents are configured to make locally optimum decisions that, typically, also accrue to a solution that approaches the global optimum.


And at the end of the day, we all get our packages, because, often, it is not necessary to solve these problems completely, totally optimally. A “good enough” solution is often, well, good enough. If the salesman has to drive a little farther than he might have, fine. If the robot didn’t get the items in exactly the best possible order because it had to swerve to avoid another robot, the world can keep spinning.


What does a “good enough” solution to TSP often look like? Well, our naïve solution of course! That approach generates a reasonable itinerary almost all of the time, especially for actual, real-world scenarios. And when it doesn’t, there are plenty of techniques that can use a decent greedy solution as a starting point and then improve it toward the optimum.
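One classic improvement technique is “2-opt”: take any starting tour and reverse a segment of it whenever doing so shortens the trip, until no reversal helps. A minimal sketch, with an invented four-city example:

```python
import math

def tour_length(tour, dist):
    """Total length of a round trip that returns to the starting city."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Repeatedly reverse a segment of the tour whenever that shortens it."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(candidate, dist) < tour_length(tour, dist):
                    tour, improved = candidate, True
    return tour

# Four cities on a unit square; the starting tour crosses over itself.
points = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.dist(a, b) for b in points] for a in points]
print(tour_length(two_opt([0, 2, 1, 3], dist), dist))  # 4.0 (the square's perimeter)
```

Starting from the self-crossing tour, a single reversal uncrosses it into the square’s perimeter.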


This naïve approach is common enough we give it a special name: … (wait for it) … a greedy algorithm. The algorithm is “greedy” because, at every step, it takes whatever next step is the best locally. It doesn’t look at the overall state of the world, or where it’s been, or where it can get to from where it’s going in the future. It just happily makes the best local choice it can find. This is an extremely simple thing to do. All it needs to do, at every step, is look at the cities it could go to next and pick the one that’s closest. Usually that’s a reasonable choice. In other words, greedy algorithms are a great way to approximate optimal solutions to hard problems.
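The whole nearest-neighbor strategy fits in a few lines of Python (the distance matrix here is invented):

```python
def greedy_tour(start, dist):
    """Nearest-neighbor heuristic: always drive to the closest unvisited city."""
    tour = [start]
    unvisited = set(range(len(dist))) - {start}
    while unvisited:
        nearest = min(unvisited, key=lambda city: dist[tour[-1]][city])
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour

# Invented symmetric distance matrix for four cities (A=0, B=1, C=2, D=3).
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]
print(greedy_tour(0, dist))  # [0, 1, 3, 2]
```

Each step only compares the remaining cities against the current one, which is why the algorithm stays cheap even when the problem itself is hard.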


Now that CS301 is over, where were we… oh, right. I hope it is now obvious capitalism is a greedy algorithm solving PHEA, so obviously we can come up with better solutions. Clear as mud? No? OK, then let me explain.

Capitalism is a Greedy Algorithm to Solve PHEA

We defined capitalism earlier as a proposed solution to PHEA in which private property, backed by strong rights, is exchanged on the free market. That is how capitalism commodifies needs and desires.

 

What does it mean to say that approach is a greedy algorithm? It means that capitalism does not seek to find some globally optimum way to solve PHEA. Instead, it relies on simpler, lower-level decisions that can be made solely by looking at more easily measured local improvements. Like an army of robots in a warehouse individually making greedy item-collecting decisions, individual humans making their own decisions in their own best interests tend, on the whole, to lead to good solutions for the entire system of all humans together. Humans decide for themselves what ends they pursue in the market according to some personal “objective function”: an assessment of the values they have and goals they seek in relation to the transactions they can make in the market. Whereas warehouse robots might have an objective function that values moving closer to needed objects and avoiding collisions, humans might have an objective function that values obtaining food, shelter, sex, status, power, etc.


The development of this decentralized economic approach is incredibly important in human history. Again, this is not a history lesson, but earlier kinds of solutions to PHEA did not so crisply and directly enable individual human actors to make their own, personal, best decisions. When people are no longer forced to act in the best interests of, say, their lord, their guild, or their town, their natural desire for self-betterment (which Gordon Gekko might call greed) turns from vice to virtue. By enshrining a free market where private property could be openly exchanged, capitalism created a machine that aligned individual people’s natural desire to better themselves with the betterment of society at large.


However, also incredibly important, PHEA is a hard problem! I have not studied it formally enough, nor do I know of any such studies, to say something like “PHEA is NP-complete,” but I can say it is like, really, really hard. Even harder than coordinating 1000 robots in a warehouse. Setting aside human questions such as “is it OK to make one person suffer so ten others may live?” or “is private property/taxation/your-least-favorite-thing theft?”, trying to come up with an efficient solution that optimizes some kind of objective function for a bunch of people is quite challenging. What’s more, unlike warehouse robots, not everyone has the same objective function, and people’s objective functions may change over time. Given all that, it is reasonable to treat PHEA as a hard problem, because no efficient solution for it is known.


Because capitalism is a greedy algorithm, and PHEA is a hard problem, capitalism is not guaranteed to generate an optimal solution to PHEA. By this, we mean there may exist some other means of commodifying needs and desires that would produce even better results than capitalism generates. We don’t necessarily know what such a solution would look like, or even whether one exists for sure, but it’s possible there is a better solution.


Why does any of this matter? For two reasons: first, it explains why capitalism has done so well, and second, it suggests what a better solution might look like.

What Capitalism Does Well

Capitalism has been a successful solution to PHEA because, firstly, a good greedy algorithm can be crafted for many hard problems. But the form of capitalism we have today is not just the basic greedy algorithm version. That version of capitalism would have little beyond a clear-cut statement about private property rights, a public ledger recording free market transactions, and some simple rules to enforce the system (“bust a deal, face the wheel!”). Today’s Western capitalism has numerous modifications beyond a basic greedy algorithm which establish additional norms and rules that, based on observation, seem to (usually) improve the quality of the solution.


For starters, the market is not totally free. If you want to tell someone you can sell them a drug that will cure their cancer (and you are in the US for example), you need to get approval from the Food & Drug Administration before you can claim that. If you want to buy or sell a stock, you cannot do so based on insider information, or else the Securities & Exchange Commission will take all your profits and throw you in jail. More broadly, you cannot lie in an effort to get someone to enter into a transaction with you, or else you’ll be prosecuted for fraud. All these rules prohibit transactions that, while they might be extremely rewarding to one individual within the population, are considered so detrimental to the population as a whole that they are not permitted. It is not only that these things may be considered morally wrong. It is that, if we allow individual agents (people) to perform these transactions on the free market, they will pull us away from a more optimal overall solution.


American capitalism also does things to change people’s objective functions in an effort to change the decisions they make. The federal government provides subsidies to farmers to encourage them to grow certain crops, like corn. It also provides people tax incentives to perform certain transactions, such as to take out a mortgage or to save for retirement. The Federal Reserve adjusts interest rates to encourage investment, manage inflation, or reduce unemployment. These kinds of policies modify the incentives people have to make certain decisions because, absent these modifications, an individual would find the decision suboptimal. In other words, the government influences people’s objective functions (“puts their thumb on the scale” is sometimes used to describe this kind of action) to cause them to make decisions that lead to a better overall solution to PHEA.


Modern capitalism allows people to choose to form groups and act together economically. This is the essence of unions, trade groups, partnerships, and even political parties to some extent (side note: democracy is just a greedy algorithm for solving the problem of human governance, instead of PHEA). Forming groups reduces the complexity of PHEA because there are fewer agents making decisions within the population. To the extent these groups advocate for their members more effectively than the members could themselves (case in point: unions get better healthcare cheaper for their members than individuals do), these groups help their members achieve more favorable results than they would be able to individually.


Clearly, all these modifications to a basic, naïve form of capitalism seem to be helpful, or at the very least ubiquitous. But the degree to which modifications have been applied has varied from country to country and era to era. Modern Western notions of capitalism contain a spectrum of configurations depending on how a society wants to prioritize components of the solution. Mainstream libertarians, for example, prioritize the economic agency of individual people within the society above all else. Anything that reduces a person’s control over their private property, or that restricts their ability to enter into whatever transactions they see fit in the free market, is considered abusive. Meanwhile, democratic socialists are most concerned with the negative side effects that property rights and free-market transactions impose on uninvolved parties, sometimes to the point of emphasizing collective ownership. And, of course, there are many varieties of positions about these kinds of topics to suit one’s taste. But regardless of where one falls on this spectrum, all of these ideologies are tolerated within broader capitalist thought, and they all seek to improve capitalism’s greedy algorithm.


All that said, perhaps capitalism’s chief advantage over other economic systems is one simple trick: it requires almost no central planning (gasp! Someone said central planning!). In capitalism, aside from setting up the laws, structures, and a few core policies (e.g., norms, incentives), no one has to make any actual smart economic decisions on behalf of anyone else. Save for a handful of rules or programs, everyone is personally responsible for their own economic health. Capitalism relies on its individual agents, acting in their own self-interest, to make personal decisions that, more or less, are good ones overall for the whole of society.


By way of example, let’s examine the question “how many loaves of bread should we bake next week?” In a capitalist society, this question is answered by summing together the individual decisions of individual bakers. Each baker looks at the orders on their books, how long their lines are, seasonal bread eating patterns, and whatever other information they care to look at to make a personal decision about how many loaves of bread to bake. If they do a good job, they profit. If they do a bad job, they lose money. If they keep doing a bad job, one way or another, they will be removed from the bread loaf decision making process. All of these individual decisions accrue to some society-wide prediction of how much bread will be eaten next week. Usually, these predictions at a society-wide level are pretty accurate, even though no one ever considered that problem directly.
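Here’s a toy simulation of that idea, with invented numbers: each baker sees only their own demand and nudges next week’s bake toward it, yet the city-wide total converges without anyone ever computing it.

```python
local_demand = [120, 45, 80, 200]   # loaves wanted at each of four bakeries
production = [100, 100, 100, 100]   # each baker's initial guess

for week in range(30):
    # Each baker adjusts halfway toward the demand they personally observed.
    production = [p + 0.5 * (d - p) for p, d in zip(production, local_demand)]

print(round(sum(production)), sum(local_demand))  # both 445
```

No agent in this toy model ever sees the number 445; it emerges from purely local decisions.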


In a society with a stronger central authority, some central person or body actually does need to answer this question, for everyone, and with enough nuance to know which bakers can make and distribute the right amount of bread, among a variety of other complexities. It’s quite a challenging problem. Just deciding how to distribute bread efficiently, for example, smacks of an NP-complete problem all on its own! The stakes are higher too. If the central planner is right, everybody eats, and maybe it was even a little bit more efficient than the capitalist solution to the bread-forecasting problem. But if the central planner is wrong, the effects could be disastrous. Instead of one bad-decision-making baker running himself out of business, a bad central planner could cause a famine for the entire country. With a relatively small upside and a much bigger downside compared to the capitalist solution, it is hard to see how a centrally planned solution could be worth using. With the possible exception of China, which is in the midst of showing us how competent it can be at central planning (for now), previous large economies that were managed with a strong central hand have not fared well over time. From the kings of old to the Soviets, it seems to be very hard to centrally plan an economy well for long. It’s just a really, really hard thing to get right, and inevitably, we just aren’t that smart.


Nonetheless, as we have shown above, our not being smart enough to find some kind of solution better than a greedy algorithm doesn’t mean there isn’t one. It just means we might not have been smart enough yet to do it. Could there be something better?

What Better Looks Like

Just by continuing to modify rules and incentives, it is simple to imagine the kinds of proposals that could be made within the context of capitalism’s greedy algorithm that might improve its solution to PHEA (though, of course, one can agree or disagree with the proposals). Should we prohibit mortgage-backed securities? Should we lower the corporate tax rate to incentivize people to build businesses in America (or wherever)? Should we spend public dollars to subsidize college education and encourage people to gain advanced skills?


We can imagine almost endless ideas that work within the overall greedy solution of capitalism to solve PHEA better, whether by modifying the rules governing the free market, the incentives used to manipulate people’s decisions, or anything else. Indeed, basically all mainstream economic discussion centers on these kinds of things because it is easy to talk about incremental changes to the solution we already have.


Nothing will change, however, the fact that capitalism is still a greedy solution to PHEA. Can we envision qualities an alternative, potentially better solution might have? Well, me being me and this blog being this blog, we can draw inspiration from computer science. In computer science, when faced with a hard problem, we either need to simplify the problem, or we need to be smarter.

Simplifying PHEA: Reduced Economic Scope

One of the first things one can do when faced with a hard problem in computer science is to try and simplify the problem. Maybe the salesman is happy if he only finds the optimal travel itinerary some of the time, or if the itinerary is within 10% of optimum. Or, maybe he’s comfortable skipping a city if it would cause the itinerary to be too crazy. Depending on the situation, in the real world, we often don’t need a perfect solution. Just a good enough one.


One way to simplify PHEA is to not attempt to have a single system that solves for all economic activity at once. American capitalism attempts to lay out economic rules for all kinds of private property and free market transactions within an ultimately holistic, consistent system. Whether you are buying a tank of gas, an oil field, or shares of Exxon, you will use dollars and follow the associated rules, and these rules won’t contradict each other based on what you are trying to buy. This attempt at economic completeness is part of what makes PHEA so difficult to solve. However, there are ways to break it down into smaller problems that are indeed easier to solve.


Just using our previous example, why must it be the case that we buy tanks of gas, oil fields, and Exxon shares in essentially the same system? Sure, there are some minor differences in how different kinds of property are traded, but why do I buy each of them following the same rules of private property, or the same accounting rules for how I bought them, or the same body of contract law to enforce the transaction? Why do we even use dollars for all of them?


Let’s examine that last bit, about not basing our economy on the single currency of dollars, some more. “That’s crazy,” you might be saying to yourself right now, “how would that even work! We need a single unit of economic exchange for the economy to hold together, and that’s the dollar.” And you’re right, that does sound crazy, because it’s so different from what we are used to. It’s hard to even imagine what it would look like if, fundamentally, we did not even use the same units to measure value in gas, oil fields, or oil companies. It might help, then, if we look at an appropriately techbuzzy example of how something like that might work: asset-specific cryptos.


Suppose I were a rich oil company and created three new kinds of cryptos: GasCoinX, FieldCoinX, and ShareCoinX. I accept GasCoinX for tanks of gas, FieldCoinX if you want to buy one of my oil fields, and ShareCoinX if you want to buy shares in my company from me. To get the coins out there, we do an ICO, and after the ICO sells out, new coins slowly enter the system from mining. Importantly, though, we don’t buy or sell these coins for dollars after the ICO.


What happens? What did we just do? Well, essentially, we have created three independent currencies for these three different kinds of property. They are monetarily decoupled from each other: changes to the value of one crypto do not directly influence the others. It might be the case that the value of my oil fields going up increases the value of my shares, of course, but if that happens, it’s not because of monetary connections between them. It’s because the underlying business connects the oil fields to the company’s shares.


In perhaps a clearer example, there is no direct connection between the dollar and the number of GasCoinX’s I will sell someone a tank of gas for. If the dollar tanks, GasCoinX might go up in value relative to the dollar, but not relative to a tank of gas, or relative to FieldCoinX or ShareCoinX. GasCoinX behaves essentially as a foreign currency relative to the dollar. Sure, they might be correlated a lot of the time, but they aren’t locked together, and their economies can be solved for independently, using independent rules and mechanisms. (Incidentally, proposals around creating a cap-and-trade carbon system to address climate change are essentially this idea: create a separate sub-economy where a greedy algorithm can try to solve a simpler, targeted problem.)


This example is still an oversimplification of what a system like this would need to look like to be anything more than just “dollars by another name.” Certainly, it would require more than just one oil company to break down correlations between oil prices and the dollar, and the same ideas would have to be applied across all economic sectors. But laying out some sort of brilliant, buttoned-up, perfect alternate vision of the economy is not the point of this piece.


All we are trying to do here is envision a simpler, plausible version of PHEA, so that it is easier to solve. In this example, we see inklings of doing that. We are trying to take related but largely separate kinds of property and place them into separate problem statements. Instead of looking for a solution to PHEA, we are only looking for a solution to the problem of gasoline tank economics, or oil field economics, or oil stock economics: smaller, more focused, decoupled portions of the overall PHEA which can more easily be solved with a more narrowly focused capitalist greedy algorithm, or possibly a different algorithm in some cases. This approach does create a new problem—coordinating activity between these decoupled units—and it’s not guaranteed that combining more optimal solutions to these sub-problems always yields a more optimal solution to PHEA (in fact, it’s probably true that PHEA does not have optimal substructure). However, it’s often the case that this type of simplification does yield improvements to the broader problem, while also reducing the total complexity of the overall solution.
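The optimal-substructure caveat can be shown with a toy calculation. Here, two hypothetical sub-economies share a fixed resource; the utility functions and all the numbers are invented for illustration. Optimizing each sub-economy separately under a fixed, equal split can land short of what jointly optimizing over the split achieves:

```python
# Toy utilities for two decoupled sub-economies sharing 10 units of a resource.
def utility_a(x):
    return 10 * x - x * x  # diminishing returns

def utility_b(x):
    return 4 * x  # constant returns

TOTAL = 10

# Decoupled: each sub-economy is handed a fixed, equal share and optimizes alone.
decoupled = utility_a(5) + utility_b(5)  # 25 + 20 = 45

# Coupled: search over every possible split of the shared resource.
coupled = max(utility_a(x) + utility_b(TOTAL - x) for x in range(TOTAL + 1))
# The best split is x=3: (30 - 9) + 28 = 49, beating the decoupled 45.
assert coupled > decoupled
```

Nothing deep is happening here; it is just a reminder that separately-optimal pieces need not assemble into a globally optimal whole, which is precisely what “no optimal substructure” means.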

Simplifying PHEA: Solving for Smaller Groups

Another form of simplification is to make the problem smaller. Instead of trying to solve PHEA for all of humanity, solve just for a smaller group. To some extent, we already do this at a country level, and even at state and municipal levels to a degree. However, it would also be possible to optimize for small, explicitly formed groups of people. Kibbutzim and anarcho-communes (even anarcho-syndicalist communes) do this to some extent: a group of like-minded individuals agrees to adopt a system that lets them solve their economic problems together. They commodify their needs and desires at the community level, making decisions to optimize for the overall community.


Within a country like America, this kind of system would place a much heavier emphasis on local decision making. While the Founders may have envisioned an ever-shrinking domain of authority moving along the continuum from local to state to national politics, in modern America, we often find the opposite to be the case. With all economic activity quantified by the dollar at the end of the day, and the federal government given explicit domain over interstate commerce (not to mention the various ways they regulate economic activity in general), PHEA in America is set up to use solutions that apply more or less nationwide.


If we were to enable PHEA to be solved for smaller groups of people, instead of by broad, single-source federal means, smaller groups would need to be defined. These smaller groups could then operate their own solutions for the commodification of their needs and desires. The natural way to define these groups is based on geography, with each state, county, and/or town having its own approach. In fact, at various points in history, this was how things were: small, local regions came up with their own currencies and economic systems. However, whether we look at Rome’s spread through Europe, at the early US’s adoption of a national currency and bank, or at the taming of the Wild West’s frontier economies, it seems, for whatever reason, small, geographically based economies tend to get subsumed by larger national ones over time.


What we should draw from that history, then, is only healthy skepticism: solving PHEA in small geographic groups appears less likely to produce an enduring economic solution than solving it for larger geographic groups. It’s not necessarily the case that the centralized solution for the whole population was better than the fragmented version for geographic subgroups; only that the centralized solution conquered the smaller solutions. And, importantly, geography isn’t the only way of dividing people into subgroups for PHEA solutions. They could be divided by industry of employment, or education level, or political philosophy, or any number of factors (also bearing in mind these kinds of divided societies can be rather dystopian if done poorly). What would it look like to solve PHEA independently for college-educated people and non-college-educated people? Or people working in the service industry vs. people working in agriculture vs. people working in manufacturing? I’m not entirely sure, but it seems there should be configurations like this that don’t devolve into Hunger Games.

Simplifying PHEA: Separating Needs and Desires

A final kind of simplification goes back to an earlier point, in which capitalism simplified PHEA by choosing to treat needs and desires the same. For a greedy algorithm, this is helpful because it makes objective functions more consistent and predictable. By enabling one solution for commodifying needs and a different one for commodifying desires, we create more degrees of freedom that, intuitively, should make it easier to approximate the optimum economic solution.


This setup would be pretty obvious: society at large decides a set of things that are considered needs, and everything else is a desire. Needs would perhaps include things like food, shelter, clothing, voting, education, freedom from fear, protection of private property, basic rights, or whatever collection of objectives floats society’s boat. Everything else would be classified as a desire, though it would be important to allow definitions of needs and desires to change over time (for example, 100 years ago, electricity was a desire, but now it is a need). Separate economic solutions would then be used to govern needs vs. desires.


An algorithm that presents a good solution for handling economic activity associated with needs might be different from one that solves for desires, and we could prioritize which solution takes precedence in the case of a conflict. For example, we might bias towards the needs solution until all people’s needs are satisfied, before allowing economic activity associated with desires to take place. “That’s communist!” you might object (and it’s not communist, it’s socialist), but all this requires is trying to make “needs” independently solvable from “desires.” Programs like welfare, social security, and Medicaid attempt to do this. As does something like universal basic income, which seeks to ensure people have enough money to at least satisfy their needs. All of these approaches fail to break correlation with the dollar, of course, so they aren’t truly separating needs from desires. But fundamentally, that is what they are trying to do: pose sub-problems of PHEA aimed at turning hunger, homelessness, or other basic needs into individually solvable problems.
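One way to sketch that precedence rule in code: a hypothetical allocator that covers every household’s needs before any desire-related spending proceeds. The households, costs, and budget are all invented for illustration, and the “desires” phase uses a simple cheapest-first greedy pass:

```python
# Hypothetical households, each with a cost to meet basic needs and a
# desire they'd like funded. All names and numbers are made up.
households = [
    {"name": "H1", "needs_cost": 30, "desire_cost": 50},
    {"name": "H2", "needs_cost": 45, "desire_cost": 20},
    {"name": "H3", "needs_cost": 25, "desire_cost": 80},
]

def allocate(budget):
    # Phase 1: the "needs" solution takes precedence until everyone is covered.
    needs_total = sum(h["needs_cost"] for h in households)
    if needs_total > budget:
        raise ValueError("budget cannot cover basic needs")
    remaining = budget - needs_total
    # Phase 2: only then does "desire" activity proceed, cheapest-first (greedy).
    funded = []
    for h in sorted(households, key=lambda h: h["desire_cost"]):
        if h["desire_cost"] <= remaining:
            remaining -= h["desire_cost"]
            funded.append(h["name"])
    return funded, remaining

funded, leftover = allocate(budget=180)
```

With a budget of 180, needs consume 100, and the leftover 80 funds H2’s and H1’s desires but not H3’s. The point is only that “needs first, desires second” is a perfectly well-defined algorithm once the two categories are separated.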

Get Smarter: An AI Assist

Now, suppose all of the above sounds like hogwash, and keeping the full, society-wide, pan-economic PHEA as our main problem is the “right” way to address economics (it very well may be that breaking it into subproblems creates damning inefficiencies anyway). What, then, might a better solution than a greedy algorithm look like?


Based on the historical challenges of central planning, we can assume humans aren’t smart enough to pull it off. If you’ve read anything else on this blog, my next proposal won’t shock you: artificial intelligence. AI has shown itself to be quite adept at solving hard problems, especially narrow ones. In fact, AI is already gaining traction for complex, high-stakes, real-world problems like train scheduling, which is known to be an NP-hard problem (i.e., it’s at least as hard as NP-complete problems). While we might be a long way from having a super-AI smart enough to control the whole economy, thinkers such as Asimov have imagined what that might be like. And even if we look at today’s AI capabilities, we see some promise that it might be able to help centrally plan pieces of a larger solution.


In one of our earlier examples, we looked at the problem of deciding how many loaves of bread to bake. Humans have historically been bad at predicting this at scale, so we rely on capitalism’s greedy algorithm to solve it. Instead, though, one could easily imagine an AI that predicts the amount of bread that needs to be baked by examining vast amounts of historical data about bread consumption. AI systems like this often outperform humans, sometimes making shocking predictions that turn out correct. In fact, it would be surprising if large bread manufacturers like Grupo Bimbo and Flowers Foods were not already exploring this problem as part of their AI adoption efforts, for they already have to answer this question for their own massive operations.
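A toy stand-in for such a predictor might look like the following. This is not any real bakery’s model, just a naive trend-following forecast on invented sales figures; a production system would learn from far richer signals (weather, holidays, promotions) with an actual trained model:

```python
# A minimal stand-in for a demand-forecasting model: predict tomorrow's
# bread demand from recent history. The sales figures are invented.
def forecast_demand(history, window=7):
    recent = history[-window:]
    avg = sum(recent) / len(recent)
    # Project the average daily change over the window one step forward.
    trend = (recent[-1] - recent[0]) / (len(recent) - 1)
    return avg + trend

loaves_sold = [100, 104, 101, 107, 110, 108, 113]
tomorrow = round(forecast_demand(loaves_sold))
```

Even this crude sketch captures the shape of the task: turn historical consumption into a forward-looking baking plan, which is exactly the decision the market currently makes implicitly through prices and spoilage.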


One limitation of currently available AI technologies is that they are often narrowly focused: great at performing one specific task, but nothing else. These specialist AIs, called narrow AIs, can be combined into what are called “multi-agent systems.” These systems rely on individual narrow AIs to make their specialized predictions, then combine the specialized predictions into broader predictions or statements. The act of combining predictions can even itself be the job of another narrow AI, which is trained to accurately generate meta-predictions from lower-level specialized predictions. Such a system could, for example, predict the total amount of all foodstuffs needed by a population, and therefore the amount of agricultural products needed, and therefore the amount of farmland needed for a given amount of agricultural products, and therefore the amount of water needed for agriculture, and so on. This system would be massive, complex, and likely inscrutable. But it is extremely plausible using only extant AI technology.
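A minimal sketch of that chaining, with trivial functions standing in for trained narrow AIs, and with every per-unit ratio invented purely for illustration:

```python
# Sketch of a multi-agent arrangement: narrow "specialist" predictors whose
# outputs are chained by a combiner. All ratios below are made-up assumptions.
def predict_bread_demand(population):
    return population * 0.4          # assumed loaves per person per day

def predict_flour_needed(loaves):
    return loaves * 0.5              # assumed kg of flour per loaf

def predict_farmland_needed(flour_kg):
    return flour_kg / 3000           # assumed kg of flour yielded per hectare

def meta_predict(population):
    # The "combiner" agent chains specialist outputs into a broader statement.
    loaves = predict_bread_demand(population)
    flour = predict_flour_needed(loaves)
    hectares = predict_farmland_needed(flour)
    return {"loaves": loaves, "flour_kg": flour, "hectares": hectares}

plan = meta_predict(100_000)
```

In a real multi-agent system each function would be a trained model with its own uncertainty, and the combiner would itself be learned rather than hard-coded; the structure of the pipeline, though, is the same.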


Viewed in another way, this is actually almost what capitalism is doing! Each baker is a specialist, capable of predicting the amount of bread they need to make. Each farmer is a specialist, capable of predicting the amount of food they need to grow and resources they need to do it. However, capitalism lacks an explicit entity to synthesize these projections into a meta-prediction, so instead it relies on the free market to more or less figure it out with its “invisible hand.” It may be that something smarter than us, or at least more adept at specializing in this kind of analysis, can fulfill these ends better.

Better Objective Functions with Brain-Machine Interfaces

Another futuristic approach to solving PHEA would seek to change people’s objective functions to better approximate global utility rather than individual utility, without departing much from the capitalist greedy algorithm. Today, people often make economic decisions that favor them despite their impact on others; sometimes, even, they make decisions knowing full well that they are negatively impacting others. Using brain-machine interfaces (BMIs), it might be possible to cause people to truly feel and understand the impact of their choices on others, and thus use improved objective functions when assessing their choices.


For example, suppose BMIs would allow the full-fidelity capture of a person’s experience, and then allow that experience to be delivered at full fidelity to another person. It would be possible to record and save the actual, first-person experience of homelessness, or being swindled, or having to watch your child try to go to sleep hungry for the third day in a row. And then, that experience could be delivered to others, such that they actually, truly received that experience as if it were their own. You know, like in Total Recall. Would people still choose to make decisions that benefitted themselves at real cost to others, if they knew they were going to have to experience those costs first-hand? If they knew other people could also experience those exact costs, and know exactly who it was who imposed them? Likely not. This kind of forced empathy, though possibly ethically dubious, would nonetheless likely have a positive socioeconomic effect.

Conclusion

Whew! This is a long post, so let’s review. We started off by defining capitalism in relation to the problem it purports to solve: the commodification of needs and desires, aka PHEA, the problem of human economic activity. Next, we looked at complexity theory in computer science and what problems that are really, really hard to solve look like. We defined greedy algorithms as solutions that try to solve hard problems by making short-term optimal decisions, and noted that greedy approaches often yield good but not optimal solutions.


Armed with that background, we looked at capitalism as a greedy algorithm attempting to solve PHEA. We saw how it has had many successes in this regard, especially relative to its competitors that employ central planning. We also saw how modern capitalism has deftly adjusted the rules governing the free market and objective functions used by people to make their decisions, all in an effort to make its greedy algorithm more closely approximate the overall optimum solution. Then, we thought about ways we might generate a better solution than modern capitalism. We thought through subdividing PHEA for smaller groups of people, for smaller domains of economic activity, or by separating needs from desires. Finally, we considered how artificial intelligence might make us better at central planning, and how brain-machine interfaces might help people have objective functions which help them evaluate decisions for overall rather than personal optimality.


That’s a lot of different stuff. Even after writing over 8000 words about it, I am still left with more questions than answers about what kinds of things might really, truly improve capitalism’s ability to facilitate good economic outcomes for society at large. Capitalism has done really well, and greedy algorithms are very attractive when it comes to really complex problems. There might be more changes to market mechanisms or incentives to drive significant (but incremental) improvements without straying from capitalism’s greedy algorithm. Or, perhaps these are simply the only kinds of changes we can make with any degree of confidence because the problem is so complex. Yet it could also be (and I hope it is) true that, by leveraging technology to make us smarter and more empathetic, we will be able to craft a new, better system. In any case, however, it is useful to think of capitalism not as some kind of all-consuming devil, nor as some kind of sacred savior, but simply as one of the ways we are trying to solve a really, really hard problem. It’s been a decent solution, by and large, but it’s by no means guaranteed to be optimal. With a pragmatist’s problem-solving mind, we just might be able to engineer a better solution, if we can finally start asking ourselves what it truly is that we actually want our economic system to do for us.

 
