In the modern computing world, many tech companies want you to believe that the limitations of computing do not exist. In this idealized world, computers seem to have the ability to do everything, and all your tech giants (Google, Apple, Microsoft, Amazon, etc.) are working tirelessly to reach this reality. They will throw out buzzwords, such as, “innovative”, “cutting edge”, “trailblazing”, etc. in their advertisements, all in an effort to make you believe that computers really could do anything you can imagine. But does this reality really exist?

Basic computer science theory tells us the answer to this question: of course not. One classic example of a computationally intractable problem is “The Traveling Salesman Problem” (TSP). TSP was first formulated by the mathematicians W.R. Hamilton and Thomas Kirkman, and the problem simply asks: given a finite number of interconnected cities, what is the shortest route that visits every city and returns to the starting point?

The obvious solution to TSP is to simply check every possible route through all the cities and keep the shortest one. The problem with this brute-force algorithm is that as the number of cities grows, the number of routes to check grows factorially. For example, with 10 cities there are more than 300,000 possible routes, and with 15 cities the number skyrockets to over 87 billion. Checking this many routes far exceeds the capabilities of even the fastest computers out there. Exact algorithms for TSP do exist, but none of them runs efficiently (in polynomial time) as the number of cities grows, and many theorists suspect that no such efficient algorithm is even possible; this question sits at the heart of the famous P versus NP problem.
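The brute-force approach can be sketched in a few lines of Python. The four-city distance table below is made up purely for illustration, and the factorial counts at the end show why this strategy collapses as cities are added:

```python
import itertools
import math

def brute_force_tsp(cities, dist):
    """Try every ordering of the cities and return the shortest round trip."""
    start = cities[0]
    best_route, best_len = None, math.inf
    for perm in itertools.permutations(cities[1:]):
        route = (start,) + perm + (start,)
        length = sum(dist[a][b] for a, b in zip(route, route[1:]))
        if length < best_len:
            best_route, best_len = route, length
    return best_route, best_len

# Hypothetical symmetric distances between four cities.
dist = {
    "A": {"A": 0,  "B": 10, "C": 15, "D": 20},
    "B": {"A": 10, "B": 0,  "C": 35, "D": 25},
    "C": {"A": 15, "B": 35, "C": 0,  "D": 30},
    "D": {"A": 20, "B": 25, "C": 30, "D": 0},
}
route, length = brute_force_tsp(["A", "B", "C", "D"], dist)
print(route, length)  # shortest tour and its length (80 for this table)

# Fixing the start city, there are (n-1)! orderings to check:
print(math.factorial(9))   # 10 cities -> 362,880 routes
print(math.factorial(14))  # 15 cities -> 87,178,291,200 routes
```

With only four cities the loop checks six orderings instantly; the factorials show how quickly that same loop becomes hopeless.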

While TSP is just one example, computability theory holds the key to why computers really can’t do everything. The answer lies in two important concepts: the famous “Halting Problem” and the notion of “decidability”.

The Halting Problem asks: given an arbitrary computer program and some input, will the program eventually terminate, or will it continue to run forever? In 1936, Alan Turing, the “father of theoretical computer science,” showed that an algorithm to solve the Halting Problem for **all** possible program-input combinations is impossible. The Halting Problem was one of the first problems proved undecidable, which simply means that no general algorithm can be designed that always produces the correct yes-or-no answer.
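Turing’s argument is a proof by contradiction, and its core can be sketched in code. Suppose, for the sake of argument, that a perfect halting checker existed; the function names `halts` and `paradox` below are mine, chosen for illustration:

```python
# Sketch of Turing's diagonalization argument. We pretend a perfect
# halting checker `halts(program, arg)` exists, then build a program
# that defeats it.

def halts(program, arg):
    """Hypothetical oracle: would return True iff program(arg) terminates.
    Turing proved no algorithm can implement this for all inputs."""
    raise NotImplementedError("no such algorithm can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # the program run on its own source.
    if halts(program, program):
        while True:   # oracle says "halts" -> loop forever
            pass
    return "halted"   # oracle says "loops" -> halt immediately

# Asking about paradox(paradox) has no consistent answer: if halts()
# says it halts, it loops forever, and if halts() says it loops, it
# halts. The assumed oracle therefore cannot exist.
```

The contradiction lives entirely in the comments: any answer `halts` could give about `paradox(paradox)` would be wrong, so no such `halts` can be written.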

Decidability theory frames the question of whether a problem is solvable by asking whether there exists a “Turing Machine” capable of solving it. A Turing Machine is an abstract device that can simulate any computer algorithm, and the Church-Turing thesis holds that a problem is algorithmically solvable if (and only if) it can be solved by a Turing Machine.
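To make the idea concrete, here is a minimal, illustrative Turing Machine simulator: a tape of symbols, a read/write head, a current state, and a transition table. The example machine below simply flips every bit on the tape and halts when it reaches a blank (all names and conventions here are my own sketch, not a standard library):

```python
# A toy Turing Machine: transitions map (state, symbol) to
# (new_state, symbol_to_write, head_move).

def run_turing_machine(tape, transitions, state="start", blank="_", max_steps=1000):
    tape = list(tape)
    head = 0  # this sketch assumes the head never moves left of cell 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape).strip(blank)
        symbol = tape[head] if head < len(tape) else blank
        if head >= len(tape):
            tape.append(blank)  # extend the tape on demand
        state, tape[head], move = transitions[(state, symbol)]
        head += 1 if move == "R" else -1
    raise RuntimeError("machine did not halt within max_steps")

# Example machine: invert each bit, moving right, until the blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}
print(run_turing_machine("1011_", flip))  # -> "0100"
```

Despite its simplicity, this single loop (states, tape, transitions) is the model that, per the Church-Turing thesis, captures everything any computer can compute.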

Since Turing proved the Halting Problem undecidable, many other problems have been shown to be undecidable as well. Given that undecidable problems cannot be solved by any Turing Machine, and since today’s computers are, in terms of what they can compute, no more powerful than Turing Machines, it is clear that computers cannot do everything!

There is current research on theoretical models that could solve problems Turing Machines can’t, such as “oracle machines” and “hypercomputers”. But until it can be conclusively shown that such a machine can actually be built, sidestepping the limits set forth by the Church-Turing thesis, computational limits do exist, despite what many tech companies may lead you to believe.

Despite the fact that computers are incapable of doing **everything**, many advances in computer science genuinely deserve labels like “innovative” or “cutting-edge”. One popular field of study right now is machine learning, which essentially allows computers to learn what to do without being explicitly programmed to do it.

Artificial intelligence (AI) is another popular field of study right now, exploring how far we can push machines toward rational, independent thought. Currently, AI is limited by the distinction between deciding and choosing: deciding is by its very nature a computational activity, but the capacity to choose how to act, based on judgment rather than calculation, is theoretically impossible to design into computers.

Given the current state of theoretical research in computer science, it would be inaccurate for companies to claim that their machines can do everything. Below, Ron Swanson demonstrates our frustration when computer giants make such overreaching claims!

While it is a clever marketing strategy, it just simply isn’t true. But who knows? Maybe someday research will theorize machines that can do things beyond our wildest imaginations!

Jonathan Waring is an Athens native and an undergraduate student studying Computer Science at the University of Georgia. When he’s not watching Netflix in his room, he can be found watching Netflix in his friends’ rooms. He aspires to pursue an advanced degree in Medical Informatics and to one day work on disease tracking software at the CDC. As a reminder, he is just one person: not statistically significant nor representative. You can email him at jwaring8@uga.edu or follow him on Twitter @waringclothes.