Computing and its power
Pierre Depaz (NYU Berlin)
Introduction
I don't really have a background in philosophy of science, nor in computer science or mathematics; I'm rather in the field of literature and media studies, and I'm generally interested in how the computer can represent things, things that are not limited to computation. This also explains why this presentation will be a bit broader than the previous ones, but hopefully still of interest.
Practicing is one way to do things in concrete reality.
The approach I propose to take today is to focus on the concept of practice, and what it does to computation.
Practice, praxis, is the putting into action of things, manifesting ideas into tangible situations and consequences. When it becomes tangible, it also becomes individuated. If an idea, a concept, tends towards objectivity, then practice chooses a specific application of a specific part of this concept, out of many. There might be a single concept of computation, but there are definitely multiple practices of computation, multiple ways of manifesting it, multiple ways of thinking about it.
This multiplicity is because practice is contingent: it happens in a particular context, with particular constraints and goals. For each context, set of constraints and goals, there will be a different approach to computation in practice.
And this connection to context also means that practice has a lot more to do with the human (that is, the psychological, the social) than theory, which strives (but never truly succeeds, in my opinion) to detach itself from worldly concerns.
These concepts of multiplicity, contingency, sociality, associated with practice, involve some things that are not usually associated with computation, which we would usually consider as a single coherent concept (but with different ways of organizing information), context-independent, and a-social.
And yet, in practice, computation tends to be involved in controversies, between tech-solutionism, effective accelerationism, Luddites and skeptics. While all agree on the broad existence of computation in theory, they disagree on computation in practice.
From the theory of computation to the practice of computing.
Since practicing is deeply intertwined with reality, with concrete consequences, and distinguishable effects, I propose to look at the practices around computation as this putting-into-action of computation, that is, as computing.
Computing, as the action of doing computation, implies the beginning of a shift from science towards engineering, from the more abstract to the more applied. Computation refers to itself, while computing is applied to something specific.
So if practices are about choosing one angle on an abstract concept, the one I propose to follow here is that of the power of computing. Since power, indeed, is the ability to do things concretely.
Power also bridges across computer science and engineering. There is both computational power and computing power.
From the computational power (intransitive) to computing power (transitive).
We can see this shift in this reversing of terms: the power of computing refers to the effectiveness of an idea (here, computation), while computing power is about the concrete application of computing.
In other words, we're no longer looking at computing as an end, as something to be studied in and of itself, in order to understand all of the specificities of what it can do, almost in an autotelic manner. Computing has indeed interesting properties, to the point that computer science can be understood as the study of computing.
Rather, we're looking at it as a means to accomplish other things. Those specificities of computing are no longer in discussion, in reference, with each other. They are being involved with other domains (first of all engineering), producing results whose study is related to, but no longer strictly bound to, computer science research.
Power as the ability to act (sometimes affecting things in the world)1.
So, when we put computation into practice, we are expecting some kinds of concrete consequences, and as such we are expecting the exercise of some kind of power as well. It is this kind of practical power that I am interested in today. But first, we need to disambiguate what power implies.
Power, in physics, is "the rate of doing work". In politics, it's the ability to influence people or events. In both cases, power is effective.
The difference between a physics perspective on power, and a political one, is the difference between power as a way to act and power as a way to dominate another;
In French, there is such a distinction, between puissance and pouvoir, and there is a similar distinction in German between Kraft and Macht.
We could say that the first one, puissance, is praxis (the exercise of possibility, in which something might be affected, created through the mutual shaping of an actor and its context), while pouvoir is rather poiesis (in which something is created, formed by the command of a superior, shaped into being: a sense of transcendent power that an agent applies to its environment).
What are the fundamental capabilities and limitations of computers?2
How does power change, or remain the same, when computation is practiced through computers?
Are there any counter-powers?
So, starting from Sipser's question on the power of computing (specifically, its capabilities and its limits), I would like to push it further with this dual lens of power as acting and power as affecting.
How does the work done by computing influence people or events? In the first sense, what are its abilities, what can it do? Then, in the second sense, what are its consequences, the new state of things once computation has practiced its power?
And then, ultimately, what are the limits to computing power? Are they internal or external? Are they insurmountable or are they temporary?
- The power of computation
- The implementation of computation and computing power
- The limits to computing
So, to start with, I'll establish what I mean by the power of computation, specifically in terms of abstraction, modelling and automation.
Then, switching from theory to practice, I'll show how the process of implementing the concept of computation through an engineering approach led to the application of computing power to various fields, and will sketch out some possible consequences.
Finally, I'll turn to the different kinds of limits to computation that we can think of: theoretical limits, and material limits.
So in general, this presentation is about the practical application of computation in non-computational (i.e. social) contexts, and I would be happy to discuss further with you this topic of the relationship and influences between computation and society.
The theoretical power of computation
Computation, understood as a well-defined arithmetic calculation, involves abstraction, modelling and automation.
There are a few things that make computation effective, in the sense of being consequential. Some of this derives from its roots in logic and mathematics, but takes on another flavor when combined with mechanization, and this happens through the models of computation.
Abstracting
The power of abstracting discards features to function autonomously.34
Abstraction is one of the essential steps in the process of computation, and arguably in human thought in general. Abstraction is the process of discarding certain features of entities, and of preserving only those that are considered relevant for the discussion at hand.
For Hilbert's formalist school, mathematics is "the manipulation of symbols according to agreed-upon formal rules". It is therefore an autonomous activity of thought.
This power of abstraction is much older, and related to writing technologies in general. Clarisse Herrenschmidt, in Les trois écritures, shows how the evolution of the alphabet is a process of the abstraction of both the things designated and the sounds uttered in order to form systems of symbols and rules which can carry meaning.
Geometry is another great case of how abstraction works: there are no perfect squares in our world, but being able to decide that a square is perfect becomes very useful to reason about how shapes are formed, and how they can be further assembled. Abstraction is the process of making computation thinkable.
The power of abstracting over strings.
While geometry is an abstraction through shapes, the alphabet is an abstraction through a sub-family, that of glyphs. Series of these glyphs, strings, are what some of the most popular models of computation, including the Turing model, use as a basis for operation. This started with logical propositions, which were already formalized, stricter subsets of natural languages.
The thing about strings is that they are not preemptively limited to logical propositions. Anything that can be abstracted via a string in a formally rigorous way can be computed. In other words, as long as you can turn it into a string, you're good to go (paraphrasing Yann LeCun).
One example is the case of game programming. When you implement a game design on a computer, you usually have to handle a player character, and some features become more important than others for the purpose of automating the rules of the game. In this case, you usually want to abstract resources into units, and position into an n-dimensional vector (usually n is between 1 and 5; I've never seen a game with 6-dimensional positioning). You also discard some features to define this abstraction. In game programming, you rarely take into account the fun that a player is having: while fun is an essential aspect of the design process, it becomes irrelevant in the development process.
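To make this concrete, here is a minimal, hypothetical sketch in Python (the `Player` class and its fields are invented for illustration): the abstraction keeps position, resources and health, and simply has no field for fun.

```python
from dataclasses import dataclass

# A hypothetical game abstraction: the player is reduced to the features
# the rules need to operate on; "fun" has no field at all.
@dataclass
class Player:
    position: tuple  # an n-dimensional vector, here n = 3
    gold: int        # resources abstracted into discrete units
    health: int

def move(player: Player, delta: tuple) -> Player:
    """The automated rules only see what the abstraction preserved."""
    new_position = tuple(p + d for p, d in zip(player.position, delta))
    return Player(new_position, player.gold, player.health)

p = move(Player((0.0, 0.0, 0.0), gold=10, health=100), (1.0, 0.0, 0.0))
print(p.position)  # (1.0, 0.0, 0.0)
```

Everything the rules can ever say about the player is already decided by which fields were kept at abstraction time.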
Computation appears to be effective at handling complexity once it is formalized.5
If we had it [a characteristica universalis], we should be able to reason in metaphysics and morals in much the same way as in geometry and analysis... If controversies were to arise, there would be no more need of disputation between two philosophers than between two accountants [...] Let us calculate.6
If you can break something down in terms of strings, it can be computed over, and therefore there can be a solution.
There are already some limits to this statement: as you might have noticed, we do not use arithmetic in our day-to-day lives to solve issues. One reason for this might be that there are things that no formal language can express, as put perhaps most elegantly by Wittgenstein in the Tractatus Logico-Philosophicus and the subsequent dismissal thereof in the Philosophical Investigations.
And yet, this ability to decide on a particular outcome of a series of arithmetic operations over strings was precisely what the Entscheidungsproblem was about. Being able to decide.
A definition in the negative: what can it not do?
One of the origins of our modern concept of computation is Hilbert and Ackermann's question of whether there is an algorithm that can universally verify the validity of a given statement. To this question, Church and Turing answer in the negative, and in doing so, come up with models of computation.
This negative answer I find particularly interesting because it raises a paradox. On the one hand, computation is defined by what it cannot do:
- prove that two functions are equivalent
- or prove that a Turing machine would halt or not.
On the other hand, this negative answer has been followed by a lot of positive answers regarding what can be computed in practice.
It turns out that there are a lot of things which you can formalize and operate upon! Loan requests, health insurance claims, cultural taste, human faces, student ranking, etc.
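The asymmetry behind the halting problem can be made tangible with a small, runnable sketch (a toy setup, not a proof): a bounded check can confirm that a program halts, but can never refute it in general.

```python
def halts_within(program, fuel):
    """Run a program (modelled as a generator) for at most `fuel` steps.
    Returns True if it halts within the budget, None if inconclusive:
    the program may halt later, or never -- we cannot tell in general."""
    steps = program()
    for _ in range(fuel):
        try:
            next(steps)
        except StopIteration:
            return True
    return None

def countdown():      # halts after ten steps
    n = 10
    while n > 0:
        n -= 1
        yield

def loop_forever():   # never halts
    while True:
        yield

print(halts_within(countdown, 1000))     # True
print(halts_within(loop_forever, 1000))  # None (inconclusive)
```

No amount of extra fuel turns the `None` into a definite answer, which is exactly the gap between checking in practice and deciding in theory.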
Modelling
The models of computation make the theory tangible.7
Computation becomes an important object of study once it has been demonstrated through a model. Lambda calculus and Turing machines are both models of computation.
In brief, models are the sets of entities and the relations binding these entities; the data and the functions which can operate on this data. They're related to models in science in general, in that they help manifest how a particular theory would work, were it to be put in practice.
But modelling is not a property of computation per se. Rather, computation works quite well with models, or as a model.
They involve the process of abstraction we described previously, and allow us to think about how these abstractions might interact in more complex relationships, in order to satisfy some theoretical approach.
A model makes a theory practical, and this lends them epistemological authority8.
Indeed, [the model's] very complexity, plus the precision to which it carried its calculations, might lend it a certain credibility.9
a model makes a theory practical
Strictly speaking, a theory cannot solve anything, but a model can. So models can, in a non-trivial sense, figure things out. And this ability to "figure things out", to perform epistemically, is reinforced by computers. So the model is not crucial to computation, because there can be many different models. But in another sense, it is crucial, because it is what verifies computation.
Yet, at the same time, the aim of a model is precisely not to reproduce reality in all its complexity. It is rather to capture, in a vivid yet formal way, what is essential to understanding some aspect of the structure or behavior of the reality we want to study. The word essential here is very problematic, and it harks back to our discussion of abstraction above, of choosing what is essential or not.
The difference between a theory and a model in explainability and predictability: a model does not exactly explain nor predict, only represent.
Why are models powerful?
- They enable epistemic actions (and as such can be convincing).
- They are consistent: by being able to perform a theory, a model sustains an impression of consistency, something that can take on aspects of cosmogony, as has been shown in particular by Sherry Turkle in her work on simulations.
A model can be symbolical, or electrical.
As long as you have parts that are consistent and replicable, and that the interactions between parts are under control, and that it behaves according to the theory, you have a model.
We see this well in the description of a Turing machine: It's part-symbolic description, part-blueprint to build an actual machine. The power of the model might therefore be to render theories practical.
A model therefore acquires clout by its empirical nature.
Automating
The point of this automation is to combine abstraction and modelling into machines.
It was not a given to build a machine which can handle ideas.10
There were a couple of centuries of attempts and fascination.
The early automata (Vaucanson's duck, the Mechanical Turk, or the mythical golem) were rather objects of fascination, until the invention of the steam engine and the electrical circuit enabled a whole new range of possibilities.
Particularly for the case of the calculator: the theory was there, and was sound, but a mechanical model of it proved particularly difficult, highlighting one of the differences between theory and practice, which illustrates the old adage that, between theory and practice, there is no difference, except in practice.
However, once you can build computation into a machine, it turns out that the benefits are quite noticeable.
Automation is the process of removing human intervention from the functioning of an artefact, by predetermining input data, decision criteria, subprocess relationships, and related actions.
Automation transforms correctness into speed and reliability.
Automation is the process of removing human intervention from the functioning of an artefact, by predetermining decision criteria, subprocess relationships, and related actions, AND putting all of those in a machine.
If you have the quality of correctness, of certainty about what is going to happen, then you can consider scaling, increasing in terms of speed and reliability without changing much in the model itself, as long as it still performs as expected.
This brings another perspective on the effectiveness of logic/calculus, as it becomes efficient. Effective is about having a result; efficient brings in the question of resources when assessing the value of the result.
A Turing-complete practical computational model, can model itself, and automate the automation process.
Finally, one of the properties of computation, recursion, also applies to automation: it can automate the automation. So if the models that we are presented via a computer are persuasive, then we can extrapolate that this makes the underlying model, the computer itself, more persuasive!
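A toy illustration of this recursion, under the assumption that generating source code counts as "automating the automation": a Python program that writes and then executes another program.

```python
# Code as data: a program that manufactures and runs another program,
# the basic trick behind compilers, interpreters and universal machines.
def make_adder_source(n):
    """Generate the source code of a new, automated procedure."""
    return f"def add{n}(x):\n    return x + {n}\n"

namespace = {}
exec(make_adder_source(5), namespace)  # the machine builds a new machine
print(namespace["add5"](10))           # 15
```

The same mechanism, scaled up, is what allows computers to host their own tooling: compilers compiling compilers, machines simulating machines.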
Abstraction enabled modelling, which, when made practical, can be automated.
So this is how I propose to view the power of computation, its ability to act: as an automated model of abstract entities.
By working on abstractions, it broadens its field of application, as long as anything can be rendered as strings over an alphabet.
By involving a model, i.e. a set of entities and their formal relations, it acquires a certain epistemological authority, as models verify the effectiveness of the theory.
Finally, models of abstract entities and relationships can be automated: put into practice in the real world, in particular through research on self-replicating automata, computation starts to take on a life of its own (no pun intended).
Implementation: from abstraction back to reality
This section is about the shift from theory to practice.
First, looking at the transition between theory and programming.
Second, looking at the context in which this shift happens: governance by numbers and the rise of engineering.
Third, we illustrate this computing power with two examples of scientific fields that were affected by computers, computational social sciences and digital humanities (the need to categorize, which derives from the power of abstraction).
From computing to programming
Software engineering as a practice of computer science.11
One important kind of practice around computation is software engineering, taking computation out of the university setting, into industry, and computing over the abstracted entities of those other domains.
The performance12 of Moore's law:
- The IBM Power goes from 34*10^3 ops/sec in 1964 to 16.6*10^6 ops/sec in 1967 with a single core.
- The IBM Power reaches 1.9*10^12 ops/sec (60 cores, 8 threads/core) in 2020.13
IBM Power is a RISC architecture for mainframes, and it embodies the shift from the power of computation to computing power.
In 3 years, computing power explodes, and it doesn't stop there. This is when the phrase "computing power" starts to be relevant.
Moore's law is ambiguous in how it both describes and predicts the way that chips will evolve. It makes things possible that were not possible before; it is performative in that sense.
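Taking the figures above at face value (they are the only assumptions here), a quick back-of-the-envelope calculation shows what such growth implies:

```python
import math

# Ops/sec figures cited above for the IBM Power line.
ops_1964, ops_1967, ops_2020 = 34e3, 16.6e6, 1.9e12

def doubling_time(start_ops, end_ops, years):
    """Implied doubling time, assuming steady exponential growth."""
    return years * math.log(2) / math.log(end_ops / start_ops)

print(f"1964-1967: x{ops_1967 / ops_1964:.0f}, "
      f"doubling every {doubling_time(ops_1964, ops_1967, 3):.2f} years")
print(f"1964-2020: x{ops_2020 / ops_1964:.1e}, "
      f"doubling every {doubling_time(ops_1964, ops_2020, 56):.2f} years")
```

Over the whole 1964-2020 span, the implied doubling time comes out to roughly two years, close to what Moore's law predicts.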
This shift to practice is also the shift from computing to programming.14
This can be seen in the development of ever more expressive programming languages like COBOL or Java, in order to deal with non-strictly-computational fields (which had actually been the case since the beginning, since the military funded computational research to calculate missile trajectories).
Expert systems are a good example of this transition from broad questions about intelligence and abilities, to industry application in narrow fields.
The epistemological power of numbers
Governing by numbers, switch from humanities to mathematics15
All this is very well summed up by Alain Supiot's La Gouvernance par les Nombres (Governing by numbers)
He explains that this is in part due to the magical power ascribed to numbers, and to the interest in decidability, something also valued in private markets and industry (e.g. by decision-makers). This shift from the humanities to engineering and mathematics also takes place in the wake of industrialization, to contribute to economic development and management.
So here we move to a political power that is rooted in being able to calculate, as a means of governance: planning, steering, cybernetics!
Seeing like a state, and the abstraction of the realm.1617
This happens in a context: there are two parallel movements that contextualize and facilitate the application of computational power.
Foucault laid out this process in the measuring practices of the hospital and the school.
Then Scott, in Seeing Like a State, went further in describing a bureaucratic way of handling politics, with the example of forests.
It makes things visible through a particular computable format, like the US Census and IBM.
This power of computation is also going to affect other fields of science.
The extension of the realm
Digital humanities, from qualitative to quantitative sense-making18
The humanities are traditionally more about quality than quantity: what happened?
But, at the same time, the field has a lot of power in terms of indexing, starting with the encyclopedia, then with Paul Otlet's Mundaneum, and subsequent document modelling through hypertext, HyperCard, and the web.
It focuses on a quantitative perspective on science and literature that wasn't possible before, perhaps to the detriment of qualitative approaches.
Another power of computation depends on the mathematical theory of communication, which gets rid of meaning. It can be argued that it gets its power (its effectiveness) because it gets rid of meaning.
Computation affecting humanities via the reification of blurry categories19 and the ushering of efficiency20
- specifically handling times, durations
- handling geographies and locations
alternative: https://dataxdesign.io
Finally, there is also the accusation that digital humanities ushered neoliberalism into the university. It might be controversial to say that this is absolutely the case, but it does highlight the point that there is a specific socio-economic context for the putting-into-practice of computation. Because technology is more amenable to processes-as-results, it also follows a logic of "setting up conditions" rather than "having consequences", as seen in the bankspeak of the World Bank.
Computational social sciences and scale-making.
The empirical component of sociology has, since Durkheim, included numbers and statistics as components of making sense of society, as we can see more anecdotally with the rise of polling and estimates in political campaigns.
Computational social science is the use of computational methods in the social sciences, particularly helpful with big datasets. I find it interesting for two reasons:
- There is a sense of scale that gives a sense of power (big data analytics is always better than small data analytics)
- There is an ease in dealing with more and more digitized (meaning abstracted, formal) data, particularly in social network analysis, with a tendency to digitize non-discrete data in order to study it.
Graph theory looks good but is not rigorously justified21
In CSS, one of the tools is the network graph. This has two implications:
The first is that the visual power of graphs exerts a certain kind of persuasion, even if it's not directly rooted in logical soundness. There is no methodological reason why LinLog is a better algorithm than others to visualize a network. There is only a consensus that some systems of representation "make more sense" than others.
Graph theory as an abstracting device: the case of homophily22
Second, this visual representation might not reflect perfectly the reality of networks, and so it might skew the results.
This can be attributed to complementary issues. On one side, the erroneous choice of what to abstract, made by humans. On the other side, the obfuscating power of computation which, by virtue of being a model that appears coherent and effective, prevents us from seeing such mistakes.
The example of homophily.
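A minimal sketch (with invented data) of how such an abstraction works: homophily measured as the share of edges linking nodes that share a single, pre-chosen attribute. Whatever the attribute choice leaves out simply cannot show up in the result.

```python
# Hypothetical toy network: each node reduced to one categorical label,
# which is itself an abstraction choice made before any measurement.
group = {"a": "x", "b": "x", "c": "y", "d": "y", "e": "x"}
edges = [("a", "b"), ("a", "c"), ("b", "e"), ("c", "d"), ("d", "e")]

def homophily(edges, group):
    """Fraction of edges whose endpoints carry the same group label."""
    same = sum(1 for u, v in edges if group[u] == group[v])
    return same / len(edges)

print(homophily(edges, group))  # 3 same-group edges out of 5 -> 0.6
```

The number looks objective, but it is entirely relative to which attribute was retained during abstraction; measured on another attribute, the "same" network could appear perfectly mixed.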
Computing power brings in modelling and abstraction in fields where it was not inherent.
I chose the two examples here to illustrate the use of computation (the application of formal rules of calculation) in domains that are not strictly included in logic and mathematics, but which can nonetheless end up being abstracted into and modelled by computation.
In the case of the social sciences, computation also has implications once it is embedded in concrete contexts. We are switching our networks of dependency: a lot of research was cancelled after Twitter was bought.
So here, we find a very clear limit: if there is no data, there is no computation.
The limits of computing power
This extension of the realm of computation, its power of abstraction and automation, might nonetheless have limits. There are two perspectives we can take on those limits: material limits and theoretical limits. However, I'm not entirely sure whether these are proper to computation, or whether these are the only ones, so I would be curious what you think of this during the discussion.
Material limits
The material ambiguities of computation.23
Computation is powerful because it can be made material, but it might also reach practical limits due to its materialities.
We can see this in how Turner describes software as abstract artefacts, and how he dismisses the implementation details (e.g. how the processor handles the precision of a floating-point number): implementation really does matter.
What was one of the strengths of computation, the fact that it was implementable in a physical machine, might also be one of the limitations.
The job of electrical engineers is to manifest the dream of the infinite tape24
One of the reasons why we can afford to discard the implementation details is the fact that electrical engineers are doing a lot of work in order to support "virtual memory", an abstract plane on which computation can operate.
There is an interesting observation here with the rebound effect: if we make things more efficient computationally (theoretically), they become less efficient practically, since we use more of them. The more we make available, the more we use.
Computing always requires resources, whether copper, tantalum25 or data.
The mines in Congo.
The flooding of content for LLM training.
Theoretical limits
The theoretical limits of complexity classes.
There are problems in the study of computation that are deemed complex. Since these are not deemed impossible, this suggests defining these kinds of limits as limits that can always be pushed back.
The limits are therefore classes of unsolved problems, and not impossibilities.
It seems like these problems are connected to the material limits: as long as material limits are pushed back (i.e. we can do more calculations), the computational complexity becomes a lot less relevant.
But otherwise, I don't see any other fundamental, practical limits to what computation can do (vision, language). So maybe it is this ability to expand that might clash with the limits of materiality.
Are scalability and extensibility intrinsic properties of computation?26
The fact that limits can be always pushed back further can also be looked at through two desirable properties of software, as they are set forth by the industry, and taught in computer science curricula. My question here is whether those properties are somewhat intrinsic to computation in theory, or if they are extrinsic, and projected by other systems onto computation in practice.
For instance, in the discussion of the limits of software engineering, one recurrent criticism is that of considering software as scalable and extensible.
Scalability, even though it's present in a lot of discussions around software engineering, is actually a business term. This shows a conflation between two different realms. But, at the same time, if scalability is defined as the ability to increase the size of a system while maintaining the viability/validity of the system, then computation is indeed scalable! Therefore it aligns quite well with business interests.
Extensibility is another virtue of software engineering, which allows one to add features while keeping a reliable core of functionality. In this sense, it is related to scalability. But here I have the same question: is computation fundamentally extensible? Or is it represented by non-extensible models which then enable the composition of extensible systems?
Is this something that we force unto computation, or something which computation affords already?
And, if this is the case, where is the shift from non-extensibility, to full extensibility?
Are those properties, or are those desires?
This reminds us of Moore's law, which is not technically a law, but is rather performatively enacted. Power is performed, and allowed (La Boétie). Maybe computation doesn't have any power, and it's just our social context which gives it power.
We might consider the extension of computation not an intrinsic property, but a desire stemming from our world in which technology in general is seen as desirable?
Computers exist to solve problems we wouldn't have without computers27
Computation in practice is a pharmakôn.28
The essence of computation is to solve problems, to decide on a solution for a problem. But there is also the fact that problem-solving begets new problems!
Then again, pharmakon! Computation does not escape this, but it would be relevant to figure out exactly which problems computation creates.
And we actually touched upon them in the previous section:
- the humanities showed us that continuous, open-ended phenomena cannot be handled without being discretized into data
- the social sciences showed us that the epistemic power of computational models might prevent us from considering the factors that are not processed.
Just because we can, does it mean we should? 2930
So combining these ideas of expanding automation, and projected desires: if it is possible to automate something, under which conditions is this undesirable?
Jacques Ellul says that technology (the superclass of practical computation) has the property of transferring the possibility into a necessity.
I guess the question is then: how do we put our own-decided limits onto the practices of computation?
Because there might be things that we might not want to calculate: typically, a judicial decision, since this kind of action is beyond what the computer can handle (this moving of goalposts is also seen in VAR, which is supposed to show the truth but just moves where the truth lies).
Temperance: "Plato, in the Cratylus, gives a particularly interesting etymology of this term: sōphrosúnē here means sṓizein tḕn phrónēsin: to save phrónēsis, i.e. practical intelligence and its calculation of utility, and in particular to save them from their entropic drift, I would even dare say from their death drive."
https://aoc.media/analyse/2024/12/09/penser-la-sobriete/
Conclusion
Computation is powerful, as its practical applications affect non-computational things.
This capacity to act, and to act on, might have limits, but they are not yet reached.
What I have sketched here is that, in the shift from the theoretical to the practical, power takes on a new guise. In the theoretical, it is the ability to act. In the practical, it is the ability to act on.
This acting-on has been shown in the shift from computing to programming. It has taken place within a particular context (as all practices do), and this context is that of industrial society, with a focus on engineering and rationalization. In a sense, computation facilitated a power of rationalization that was already underway.
Finally, this practical shift, in a context, made me ask the question of the limits. If it seems that we can compute more and more things, where does it stop? Are the only limits those that are established by computer science? Are there material limits, or is the tape really infinite? Are there ethical limits that societies might pose on scientific developments?
Essentially, what I tried to sketch out is whether computation, as a scientific concept, is completely orthogonal to its practical applications. Since this is still an ongoing project, I will conclude with a few questions to open up for a discussion.
- What are the other things that make computing powerful?
- Is there something inherent to computing that calls for its practical application to more and more aspects of life?
- What are other limits of computing? Are these hard limits or soft limits?
- Should practical applications be of concern in theoretical considerations? Or are these disjointed classes of problems?
thanks!