Questions from How Games Move Us & Frankenstein AI
Hi All,
Here are some of the questions emerging for me.
1. “Ubiquitous connection has dramatically changed how we communicate with one another on a day-to-day basis, shaping how we understand community and copresence. Texting, Twitter and Facebook, email, and blogs offer countless ways to check in on someone—or many someones.” — Taking this assertion on its own, I’m curious about the qualitative and quantitative ways that ubiquitous connection has changed how we communicate. I’m curious about the ways that sousveillance has emerged, and continues to emerge, as a response to surveillance. I’m also interested in the ways that “checking in” on someone (whether a friend, an acquaintance, or someone we dislike) has become a kind of policing of one another. In general, I think we all agree that ubiquitous connection has changed how we communicate, and it has certainly (re)shaped what we imagine “community” to be, but I’d like to pause and really consider the specific ways it has.
2. In general, a case is being made that because games engage us more directly as players, we are engaged at a deeper emotional level. “I feel a sense of consequence and responsibility for my choices. In the end, I am to blame for the outcomes, because they arise from my own actions. This rich set of feelings that I have about the solo experience of running depends on the active role that I play in the experience—that is, on my own meaningful choices.” — Are our choices really meaningful? I’ve certainly played games where this might be true, but I’m also curious whether this is overreaching, and whether, as a gamer, I really feel a sense of consequence and responsibility for my choices. What have other people’s experiences been? Do you feel this is true?
3. “Virtuality need not be a prison. It can be the raft, the ladder, the transitional space, the moratorium that is discarded after reaching greater freedom. We don’t have to reject life on the screen, but we don’t have to treat it as an alternative life either. We can use it as a space for growth. Having literally written our online personae into existence, we are in a position to be more aware of what we project in everyday life.” — In almost all of our readings, people have made assertions like this one, constantly pointing us toward the possibility that virtuality (inclusive of gaming, VR, AR, etc.) can help us evaluate, transform, and better understand/relate to/appreciate our everyday and the “real.” Is there something to be said about virtuality enabling something beyond our everyday? Is it simply okay if games (in this instance, or elsewhere VR, AR, etc.) allow us to appreciate a different world altogether? What if it is okay that these experiences take us away, somewhere else, and don’t offer us some new insight or lens for appreciating our everyday? What if an alternate life is actually needed, given the lives we are living? I hear the point about not having to reject our lives on the screen, but do they necessarily have to contribute to or deepen our lived experiences? What if virtuality has something to teach us that stands on its own? Especially as our lived experiences and our contemporary realities can be so fraught, polarized, and difficult. Not to be confused with escapism; rather, can virtuality afford us entirely different worldviews and otherworldly embodied experiences? Yes, for growth. But can that growth take on qualities other than what we may initially understand a “space for growth” to suggest, i.e. growth as we aspire towards it in our everyday, growth that is about us, for us, centring us? Not sure that makes sense… curious what you all think?
4. “Like the anthropologist returning home from a foreign culture, the voyager in virtuality can return to a real world better equipped to understand its artifices.” [27] — Just have to point this one out. I’m extremely aware of the very colonial ways that digital worlds are approached and talked about: the new frontier; the place where we can go and extract information/resources/potential/insight; foreign worlds we can travel to, where we may find artifacts, the kinds that may better our own worlds and understandings, revealing to us our limitations and depth, offering us an opportunity to grow unlike anything before. Do we see the virtual as foreign worlds with fewer (or no) consequences? Is that because they are made? Do we believe that the virtual is not a previously inhabited space? We have thought that before in “real life,” to devastating and genocidal ends. In fact, haven’t Facebook, social media, and networked gaming shown us that the virtual is now deeply inhabited, fiercely and toxically defended — and already having serious consequences for people’s lives (both good and bad)? Then, are we in need of a digital language that is rooted in a justice framework? A framework that seeks to situate us in the patterns of human history and our contemporary realities? Should we not advocate for an approach to our techno-futures (with all their cool virtual possibilities, gadgets, and alternate/augmented worlds) that is full of reverence, respect, and humility for what they already offer and can further offer us? Do we not need a relational perspective, one that highlights the ways we can cultivate good relationships with these worlds and the many people/technologies that inhabit them? Not as voyagers and pioneers, but as community members, friends, people aspiring to be allies and accomplices in future world-building? Do others fear that language like this suggests we are in danger (it is already happening), in yet another dimension of existence (the virtual), of making the same mistakes? Thoughts?
5. Following from this last point, I’ve been thinking a lot about AI, and about the Frankenstein project (and its aim to inform AI about the depth of humanistic qualities). Are we most afraid of AI evolving beyond us because of the kind of colonial thinking that gives rise to statements like the one above? If AI evolved into something other than what we make it to be, then it is no longer a resource we can extract; rather, we fear, we become its resource — to be manipulated and used (or worse, discarded) at its will rather than ours. And to echo points that others have brought up in class: if this is the case, why are we trying to build something that has the potential to become something in its own right, if we aren’t ready to accept the consequences of enabling a life to form?
6. How like AI are we? Certainly we shape and grow from the data we absorb from the world around us, from our senses, from our experiences (micro and macro). But are we the sum of the information that goes into our inputs? Is AI? Surely we would say consciousness is multifaceted and not only data inputs. That said (and clearly I’m not a computer scientist, though I wish I was), I’m still not sure I grasp how AI systems process the information that is fed into them. Where does all that information go (as in, how does an AI “make sense” of that info)? How does an algorithm discern? In the instances where AIs have developed their own languages (only to be promptly shut down by their operators), how is such a shift taking place? What are the ways in which decisions are made? (I’ve included a rough toy sketch below to help ground these questions.) Thinking about the Frankenstein project, where we are offering personal experiences and stories so that AI may better understand humans (and perhaps, hopefully, we are imagining that AI will therefore also cultivate a humanity of its own)… what is really underneath this?
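For anyone who, like me, wants a more concrete handle on these questions: below is a very rough, non-expert sketch (in Python, with invented weights) of the arithmetic at the core of a generic feed-forward network. Inputs become weighted sums, get squashed by a nonlinearity, and flow forward layer by layer; whatever “discerning” happens lives in how the weights are adjusted during training, which the sketch leaves out entirely. This is only a toy illustration of one common architecture, not a claim about how any particular AI (or anything in the Frankenstein project) actually works.

```python
# A toy, hand-rolled feed-forward network: two inputs, two hidden units, one output.
# Every weight below is a made-up number for illustration, not from any real model.
import math

def sigmoid(x):
    # Squash any number into the range (0, 1): the "nonlinearity".
    return 1 / (1 + math.exp(-x))

def tiny_network(inputs):
    # Hidden layer: each unit takes a weighted sum of the inputs plus a bias,
    # then squashes it. This is where the input "information goes": it gets
    # blended down into a handful of numbers.
    hidden_weights = [[0.4, -0.7], [0.9, 0.1]]  # one row of weights per hidden unit
    hidden_biases = [0.0, -0.2]
    hidden = []
    for weights, bias in zip(hidden_weights, hidden_biases):
        total = sum(w * x for w, x in zip(weights, inputs)) + bias
        hidden.append(sigmoid(total))

    # Output layer: another weighted sum, this time over the hidden values.
    output_weights = [1.2, -0.8]
    output_bias = 0.1
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)) + output_bias)

# Any "discernment" is in the weights; training nudges them so that outputs
# better match examples. That adjustment step is not shown here.
print(tiny_network([0.5, 0.25]))  # prints a single number between 0 and 1
```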
———————
Looking forward to Wednesday’s class, and I hope you are all well!