AI Questions — Grayson Richards
-
While the premise is that AI is operationally autonomous, running without the assistance of, and apart from, human intelligence, the fact remains (at least for now) that humans still determine the conditions under which artificial intelligence applications are developed and implemented. Maybe it’s simply a matter of revisiting the persistent question of whether technology is neutral. If we agree that it’s not, and that artifacts indeed “have politics”, how should this inform the incorporation of artificially intelligent systems into more and more aspects of social and political life? More specifically, is this necessarily a hindrance, or can it be co-opted into a space of possibility?
-
Honest question: if AI processes for scripting, populating, and rendering perfectly realistic image worlds become sufficiently advanced and accessible, does any incentive remain (whether practical or philosophical) to produce new representations using lens-based media?
-
Rotman’s article on the role of AI in innovation clearly illustrates a systemic bias: the instrumentalization of AI in pursuit of the continual growth and invention demanded by late capitalism. While this is not the article’s explicit intent, its language underscores the importance of interrogating the discourse and rhetoric around new technologies. Can we take a minute to consider some of the other expressions of AI discussed this week and identify their economic or ideological implications? Is AI, as a process of technological evolution, irrevocably linked to these logics?
-
The Boeing example raises an important philosophical question about algorithmic decision-making. When a decision made by artificial intelligence results in a loss (of life, time, or revenue, or all three), how are we, as individuals and/or as subjects of a juridical system, to assign responsibility?