Tag Archives: video games

Deus Ex, a video game developer, his art, and reality

The topics of human enhancement and human augmentation have been featured here a number of times from a number of vantage points, including that of a video game series with some thoughtful story lines known under the Deus Ex banner. (My August 18, 2011 posting, August 30, 2011 posting, and Sept. 1, 2016 posting are three that mention Deus Ex in the title, but there may be others where the game is noted in the body of the post.)

A March 19, 2021 posting by Timothy Geigner for Techdirt offers a fuller but still brief description of the games along with a surprising declaration (it’s too real) by the game’s creator (Note: Links have been removed),

The Deus Ex franchise has found its way onto Techdirt’s pages a couple of times in the past. If you’re not familiar with the series, it’s a cyberpunk-ish take on the near future with broad themes around human augmentation, and the weaving of broad and famous conspiracy theories. That perhaps makes it somewhat ironic that several of our posts dealing with the franchise have to do with mass media outlets getting confused into thinking its augmentation stories were real life, or the conspiracy theories that centered around leaks for the original game’s sequel were true. The conspiracy theories woven into the original Deus Ex storyline were of the grand variety: takeover of government by biomedical companies pushing a vaccine for a sickness it created, the illuminati, FEMA [US Federal Emergency Management Agency] takeovers, AI-driven surveillance of the public, etc.

And it’s the fact that such conspiracy-driven thinking today led Warren Spector, the creator of the series, to recently state that he probably wouldn’t have created the game today if given the chance. [See pull quote below]

Deus Ex was originally released in 2000 but took place in an alternate 2052 where many of the real world conspiracy theories have come true. The plot included references to vaccinations, black helicopters, FEMA, and ECHELON amongst others, some of which have connotations to real-life events. Spector said, “Interestingly, I’m not sure I’d make Deus Ex today. The conspiracy theories we wrote about are now part of the real world. I don’t want to support that.”

… I’d like to focus on how clearly this illustrates the artistic nature of video games. The desire, or not, to create certain kinds of art due to the reflection such art receives from the broader society is exactly the kind of thing artists operating in other artforms have to deal with. Art imitates life, yes, but in the case of speculative fiction like this, it appears that life can also imitate art. Spector notes that seeing what has happened in the world since Deus Ex was first released in 2000 has had a profound effect on him as an artist. [See pull quote below]

Earlier, Spector had commented on how he was “constantly amazed at how accurate our view of the world ended up being. Frankly it freaks me out a bit.” Some of the conspiracy theories that didn’t end up in the game were those surrounding Denver Airport because they were considered “too silly to include in the game.” These include theories about secret tunnels, connections to aliens and Nazi secret societies, and hidden messages within the airport’s artwork. Spector is now incredulous that they’re “something people actually believe.”

As far back as an Oct. 18, 2013 posting, Geigner was writing about a UK newspaper that confused Deus Ex with reality,

… I bring you the British tabloid, The Sun, and their amazing story about an augmented mechanical eyeball that, if associated material is to be believed, allows you to see through walls, color-codes friends and enemies, and permits telescopic zoom. Here’s the reference from The Sun.

Oops. See, part of the reason that Sarif Industries’ cybernetic implants are still in their infancy is that the company doesn’t exist. Sarif Industries is a fictitious company from a cyberpunk video game, Deus Ex, set in a future Detroit. …

There’s more about Spector’s latest comments at the 2021 Game Developers Conference in a March 15, 2021 article by Riley MacLeod for Kotaku. There’s more about Warren Spector here. I always thought Deus Ex was developed by the Canadian company Eidos Montréal and, after reading the company’s Wikipedia entry, it seems I may have been only partially correct.

Getting back to Deus Ex being ‘too real’, it seems to me that the line between science fiction and reality is increasingly frayed.

Promoting video games for the pursuit of science

An Oct. 6, 2016 essay by Scott Horowitz and James Bardwell for The Conversation (h/t Oct. 6, 2016 news item on Nanowerk) makes the case for more video gaming projects designed to advance science. From The Conversation’s Oct. 6, 2016 essay,

In Foldit, players attempt to figure out the detailed three-dimensional structure of proteins by manipulating a simulated protein displayed on their computer screen. They must observe various constraints based in the real world, such as the order of amino acids and how close to each other their biochemical properties permit them to get. In academic research, these tasks are typically performed by trained experts.

Thousands of people – with and without scientific training – play Foldit regularly. Sure, they’re having fun, but are they really contributing to science in ways experts don’t already? To answer this question – to find out how much we can learn by having nonexperts play scientific games – we recently set up a Foldit competition between gamers, undergraduate students and professional scientists. The amateur gamers did better than the professional scientists managed using their usual software.

This suggests that scientific games like Foldit can truly be valuable resources for biochemistry research while simultaneously providing enjoyable recreation. More widely, it shows the promise that crowdsourcing to gamers (or “gamesourcing”) could offer to many fields of study.
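(An aside from me: the essay doesn’t spell out Foldit’s scoring function, but the flavour of those real-world constraints is easy to sketch. For instance, atoms can’t overlap, so a score might penalise pairs that get too close. Here’s a toy version in Python; the 3 Å threshold and the squared-violation penalty are my illustrative choices, not Foldit’s actual rules.)

```python
import numpy as np

def clash_penalty(coords: np.ndarray, min_dist: float = 3.0) -> float:
    """Sum of squared violations for atom pairs closer than a minimum
    contact distance (in angstroms). Real Foldit scoring is far richer
    (hydrogen bonds, burial of hydrophobic side chains, etc.); this is
    only one ingredient, for illustration."""
    diffs = coords[:, None, :] - coords[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    i, j = np.triu_indices(len(coords), k=1)       # count each pair once
    violations = np.clip(min_dist - dists[i, j], 0.0, None)
    return float((violations ** 2).sum())

rng = np.random.default_rng(3)
loose = rng.random((20, 3)) * 30.0  # 20 'atoms' spread through a 30 A box
tight = rng.random((20, 3)) * 3.0   # the same count crammed together: clashes
print(clash_penalty(loose), clash_penalty(tight))  # low score vs. high score
```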

Horowitz and Bardwell (both crystallographers) created their own game,

We teach an undergraduate class that includes a section on how biochemists can determine what proteins look like.

When we gave an electron density map to our students and had them move the amino acids around with a mouse and keyboard and fold the protein into the map, students loved it – some so much they found themselves ignoring their other homework in favor of our puzzle. As the students worked on the assignment, we found the questions they raised became increasingly sophisticated, delving deeply into the underlying biochemistry of the protein.

In the end, 10 percent of the class actually managed to improve on the structure that had been previously solved by professional crystallographers. They tweaked the pieces so they fit better than the professionals had been able to. Most likely, since 60 students were working on it separately, some of them managed to fix a number of small errors that had been missed by the original crystallographers. This outcome reminded us of the game Foldit.
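(Another aside: Horowitz and Bardwell don’t say which statistics they used to compare structures, but a standard way to score how well a model ‘fits the map’ is a real-space correlation coefficient between the experimental electron density and the density calculated from the model. A minimal sketch, assuming both maps arrive as same-shaped grids; the function name and toy data are mine.)

```python
import numpy as np

def map_correlation(observed: np.ndarray, model: np.ndarray) -> float:
    """Pearson correlation between an experimental density map and a map
    calculated from a candidate model. Near 1.0 means the model explains
    the map well; near 0 means a poor fit."""
    obs = observed.ravel() - observed.mean()
    mod = model.ravel() - model.mean()
    return float(obs @ mod / (np.linalg.norm(obs) * np.linalg.norm(mod)))

# Toy demonstration with synthetic 16 x 16 x 16 density grids.
rng = np.random.default_rng(0)
true_map = rng.random((16, 16, 16))
good_model = true_map + 0.05 * rng.standard_normal(true_map.shape)  # close fit
poor_model = rng.random((16, 16, 16))                               # unrelated
print(f"good model fit: {map_correlation(true_map, good_model):.3f}")
print(f"poor model fit: {map_correlation(true_map, poor_model):.3f}")
```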

They then ran a competition between their students, two trained crystallographers, and some Foldit players,

We gave students a new crystallography assignment, and told them they would be competing against Foldit players to produce the best structure. We also got two trained crystallographers to compete using the software they’d be familiar with, as well as several automated software packages that crystallographers often use. The race was on!

Amateurs outdo professionals

The students attacked the assignment vigorously, as did the Foldit players. As before, the students learned how proteins are put together through shaping these protein structures by hand. Moreover, both groups appeared to take pride in their role in pioneering new science.

At the end of the competition, we analyzed all the structures from all the participants. We calculated statistics about the competing structures that told us how correct each participant was in their solution to the puzzle. The results ranged from very poor structures that didn’t fit the map at all to exemplary solutions.

The best structure came from a group of nine Foldit players who worked collaboratively to come up with a spectacular protein structure. Their structure turned out to be even better than the structures from the two trained professionals.

Students and Foldit players alike were eager to master difficult concepts because it was fun. The results they came up with gave us useful scientific results that can really improve biochemistry.

I first wrote about Foldit in an August 6, 2010 posting (scroll down about 50% of the way).

Deep learning and some history from the Swiss National Science Foundation (SNSF)

A June 27, 2016 news item on phys.org provides a measured analysis of deep learning and its current state of development (from a Swiss perspective),

In March 2016, the world Go champion Lee Sedol lost 1-4 against the artificial intelligence AlphaGo. For many, this was yet another defeat for humanity at the hands of the machines. Indeed, the success of the AlphaGo software was forged in an area of artificial intelligence that has seen huge progress over the last decade. Deep learning, as it’s called, uses artificial neural networks to process algorithmic calculations. This software architecture therefore mimics biological neural networks.

Much of the progress in deep learning is thanks to the work of Jürgen Schmidhuber, director of the IDSIA (Istituto Dalle Molle di Studi sull’Intelligenza Artificiale) which is located in the suburbs of Lugano. The IDSIA doctoral student Shane Legg and a group of former colleagues went on to found DeepMind, the startup acquired by Google in early 2014 for USD 500 million. The DeepMind algorithms eventually wound up in AlphaGo.

“Schmidhuber is one of the best at deep learning,” says Boi Faltings of the EPFL Artificial Intelligence Lab. “He never let go of the need to keep working at it.” According to Stéphane Marchand-Maillet of the University of Geneva computing department, “he’s been in the race since the very beginning.”

A June 27, 2016 SNSF news release (first published as a story in Horizons no. 109 June 2016) by Fabien Goubet, which originated the news item, goes on to provide a brief history,

The real strength of deep learning is pattern recognition, and winning at Go is just an illustration of this, albeit a rather resounding one. Elsewhere, and for some years now, we have seen it applied to an entire spectrum of areas, such as visual and vocal recognition, online translation tools and smartphone personal assistants. One underlying principle of machine learning is that algorithms must first be trained using copious examples. Naturally, this has been helped by the deluge of user-generated content spawned by smartphones and web 2.0, stretching from Facebook photo comments to official translations published on the Internet. By feeding a machine thousands of accurately tagged images of cats, for example, it learns first to recognise those cats and later any image of a cat, including those it hasn’t been fed.

Deep learning isn’t new; it just needed modern computers to come of age. As far back as the early 1950s, biologists tried to lay out formal principles to explain the working of the brain’s cells. In 1957, the psychologist Frank Rosenblatt of the Cornell Aeronautical Laboratory published a numerical model based on these concepts, thereby creating the very first artificial neural network. Once integrated into a computer, it learned to recognise rudimentary images.

“This network only contained eight neurones organised in a single layer. It could only recognise simple characters”, says Claude Touzet of the Adaptive and Integrative Neuroscience Laboratory of Aix-Marseille University. “It wasn’t until 1985 that we saw the second generation of artificial neural networks featuring multiple layers and much greater performance”. This breakthrough was made simultaneously by three researchers: Yann LeCun in Paris, Geoffrey Hinton in Toronto and Terrence Sejnowski in Baltimore.

Byte-size learning

In multilayer networks, each layer learns to recognise the precise visual characteristics of a shape. The deeper the layer, the more abstract the characteristics. With cat photos, the first layer analyses pixel colour, and the following layer recognises the general form of the cat. This structural design can support calculations being made upon thousands of layers, and it was this aspect of the architecture that gave rise to the name ‘deep learning’.

Marchand-Maillet explains: “Each artificial neurone is assigned an input value, which it computes using a mathematical function, only firing if the output exceeds a pre-defined threshold”. In this way, it reproduces the behaviour of real neurones, which only fire and transmit information when the input signal (the potential difference across the entire neural circuit) reaches a certain level. In the artificial model, the results of a single layer are weighted, added up and then sent as the input signal to the following layer, which processes that input using different functions, and so on and so forth.

For example, if a system is trained with great quantities of photos of apples and watermelons, it will progressively learn to distinguish them on the basis of diameter, says Marchand-Maillet. If it cannot decide (e.g., when processing a picture of a tiny watermelon), the subsequent layers take over by analysing the colours or textures of the fruit in the photo, and so on. In this way, every step in the process further refines the assessment.
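(An aside from me: the weighted-sum-and-threshold behaviour Marchand-Maillet describes is easy to see in code. Below is a minimal two-layer forward pass in Python/NumPy; the layer sizes, the apple/watermelon framing and the ReLU standing in for the firing threshold are my illustrative choices.)

```python
import numpy as np

def layer(inputs: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """One layer: weight the inputs, sum them, then apply a firing rule.
    ReLU (zero below threshold, pass-through above) stands in for the
    'only fire if the output exceeds a threshold' behaviour."""
    return np.maximum(0.0, weights @ inputs + bias)

rng = np.random.default_rng(1)
x = rng.random(4)                                  # e.g. four pixel-colour features
w1, b1 = rng.standard_normal((8, 4)), np.zeros(8)  # layer 1: low-level characteristics
w2, b2 = rng.standard_normal((2, 8)), np.zeros(2)  # layer 2: more abstract ones

h = layer(x, w1, b1)   # layer 1's weighted outputs become layer 2's inputs
scores = w2 @ h + b2   # final scores, e.g. 'apple' vs. 'watermelon'
print(scores)
```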

Video games to the rescue

For decades, the frontier of computing held back more complex applications, even at the cutting edge. Industry walked away, and deep learning only survived thanks to the video games sector, which eventually began producing graphics chips, or GPUs, with an unprecedented power at accessible prices: up to 6 teraflops (i.e., 6 trillion calculations per second) for a few hundred dollars. “There’s no doubt that it was this calculating power that laid the ground for the quantum leap in deep learning”, says Touzet. GPUs are also very good at parallel calculations, a useful function for executing the innumerable simultaneous operations required by neural networks.

Although image analysis is getting great results, things are more complicated for sequential data objects such as natural spoken language and video footage. This has formed part of Schmidhuber’s work since 1989, and his response has been to develop recurrent neural networks in which neurones communicate with each other in loops, feeding processed data back into the initial layers.

Such sequential data analysis is highly dependent on context and precursory data. In Lugano, networks have been instructed to memorise the order of a chain of events. Long Short Term Memory (LSTM) networks can distinguish ‘boat’ from ‘float’ by recalling the sound that preceded ‘oat’ (i.e., either ‘b’ or ‘fl’). “Recurrent neural networks are more powerful than other approaches such as the Hidden Markov models”, says Schmidhuber, who also notes that Google Voice integrated LSTMs in 2015. “With looped networks, the number of layers is potentially infinite”, says Faltings [?].
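(To make the ‘loop’ concrete: a recurrent cell feeds its own output back in as input at the next time step, which is what lets the network remember whether ‘b’ or ‘fl’ preceded ‘oat’. A minimal sketch of a plain recurrent cell in Python/NumPy follows; it is far simpler than the gated LSTM Schmidhuber’s group actually developed, and the sizes are my illustrative choices.)

```python
import numpy as np

def rnn_step(x_t, h_prev, w_x, w_h, b):
    """One step of a plain recurrent cell: the new hidden state mixes the
    current input with the previous hidden state, so earlier items in a
    sequence can influence how later ones are interpreted."""
    return np.tanh(w_x @ x_t + w_h @ h_prev + b)

rng = np.random.default_rng(2)
w_x = rng.standard_normal((8, 3))   # input-to-hidden weights
w_h = rng.standard_normal((8, 8))   # the loop: hidden state fed back in
b = np.zeros(8)

h = np.zeros(8)                     # memory starts empty
for x_t in rng.random((5, 3)):      # a toy five-step input sequence
    h = rnn_step(x_t, h, w_x, w_h, b)
print(h)                            # final state reflects the whole sequence
```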

For Schmidhuber, deep learning is just one aspect of artificial intelligence; the real thing will lead to “the most important change in the history of our civilisation”. But Marchand-Maillet sees deep learning as “a bit of hype, leading us to believe that artificial intelligence can learn anything provided there’s data. But it’s still an open question as to whether deep learning can really be applied to every last domain”.

It’s nice to get an historical perspective and eye-opening to realize that scientists have been working on these concepts since the 1950s.

Scented video games: a nanotechnology project in Europe

Ten years ago, when I was working on a master’s degree (creative writing and new media), I was part of a group presentation on multimedia and, to prepare, started a conversation about scent as part of a multimedia experience. Our group leader was somewhat outraged. He’d led international multimedia projects and, as far as he was concerned, the ‘scent’ discussion was a waste of time when we were trying to prepare a major presentation.

He was right and wrong. I think you’re supposed to have these discussions when you’re learning and exploring ideas but, in 2006, there wasn’t much work of that type to discuss. It seems things may be changing according to a May 21, 2016 news item on Nanowerk (Note: A link has been removed),

Controlled odour emission could transform video games and television viewing experiences and benefit industries such as pest control and medicine [emphasis mine]. The NANOSMELL project aims to switch smells on and off by tagging artificial odorants with nanoparticles exposed to electromagnetic field.

I wonder if the medicinal possibilities include nanotechnology-enabled aroma therapy?

Getting back to the news, a May 10, 2016 European Commission press release, which originated the news item, expands on the theme,

The ‘smellyvision’ – a TV that offers olfactory as well as visual stimulation – has been a science fiction staple for years. However, realising this concept has proved difficult given the sheer complexity of how smell works and the technical challenges of emitting odours on demand.

NANOSMELL will specifically address these two challenges by developing artificial smells that can be switched on and off remotely. This would be achieved by tagging specific DNA-based artificial odorants – chemical compounds that give off smells – with nanoparticles that respond to external electromagnetic fields.

With the ability to remotely control these artificial odours, the project team would then be able to examine exactly how olfactory receptors respond. Sensory imaging to investigate the patterns of neural activity and behavioural tests will be carried out in animals.

The project would next apply artificial odorants to the human olfactory system and measure perceptions by switching artificial smells on and off. Researchers will also assess whether artificial odorants have a role to play in wound healing by placing olfactory receptors in skin.

The researchers aim to develop controllable odour-emitting components that will further understanding of smell and open the door to novel odour-emitting applications in fields ranging from entertainment to medicine.

Project details

  • Project acronym: NanoSmell
  • Participants: Israel (Coordinator), Spain, Germany, Switzerland
  • Project Reference N° 662629
  • Total cost: € 3 979 069
  • EU contribution: € 3 979 069
  • Duration: September 2015 – September 2019

You can find more information on the European Commission’s NANOSMELL project page.

Steering cockroaches in the lab and in your backyard—cutting-edge neuroscience

In this piece I’m mashing together two items, both involving cockroaches and neuroscience and, in one case, disaster recovery. The first item concerns research at the North Carolina State University where video game techniques are being used to control cockroaches. From the June 25, 2013 news item on ScienceDaily,

North Carolina State University researchers are using video game technology to remotely control cockroaches on autopilot, with a computer steering the cockroach through a controlled environment. The researchers are using the technology to track how roaches respond to the remote control, with the goal of developing ways that roaches on autopilot can be used to map dynamic environments — such as collapsed buildings.

The researchers have incorporated Microsoft’s motion-sensing Kinect system into an electronic interface developed at NC State that can remotely control cockroaches. The researchers plug in a digitally plotted path for the roach, and use Kinect to identify and track the insect’s progress. The program then uses the Kinect tracking data to automatically steer the roach along the desired path.

The June 25, 2013 North Carolina State University news release, which originated the news item, reveals more details,

The program also uses Kinect to collect data on how the roaches respond to the electrical impulses from the remote-control interface. This data will help the researchers fine-tune the steering parameters needed to control the roaches more precisely.

“Our goal is to be able to guide these roaches as efficiently as possible, and our work with Kinect is helping us do that,” says Dr. Alper Bozkurt, an assistant professor of electrical and computer engineering at NC State and co-author of a paper on the work.

“We want to build on this program, incorporating mapping and radio frequency techniques that will allow us to use a small group of cockroaches to explore and map disaster sites,” Bozkurt says. “The autopilot program would control the roaches, sending them on the most efficient routes to provide rescuers with a comprehensive view of the situation.”

The roaches would also be equipped with sensors, such as microphones, to detect survivors in collapsed buildings or other disaster areas. “We may even be able to attach small speakers, which would allow rescuers to communicate with anyone who is trapped,” Bozkurt says.

Bozkurt’s team had previously developed the technology that would allow users to steer cockroaches remotely, but the use of Kinect to develop an autopilot program and track the precise response of roaches to electrical impulses is new.

The interface that controls the roach is wired to the roach’s antennae and cerci. The cerci are sensory organs on the roach’s abdomen, which are normally used to detect movement in the air that could indicate a predator is approaching – causing the roach to scurry away. But the researchers use the wires attached to the cerci to spur the roach into motion. The wires attached to the antennae send small charges that trick the roach into thinking the antennae are in contact with a barrier and steering them in the opposite direction.
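The news release doesn’t publish the autopilot code, but the control loop it describes (track the roach with Kinect, compare where it’s heading with the plotted path, pulse an antenna to correct) can be sketched. Everything below is my hypothetical reconstruction; the NC State interface isn’t public, so the names, units and tolerance are illustrative.

```python
import math

def steer_to_waypoint(position, heading_deg, waypoint, tolerance_deg=15.0):
    """Pick which antenna to pulse so the roach turns toward the next
    waypoint. A pulse makes the roach 'feel' a barrier on that side, so
    it turns the opposite way: pulse the right antenna to steer left."""
    dx, dy = waypoint[0] - position[0], waypoint[1] - position[1]
    desired_deg = math.degrees(math.atan2(dy, dx))
    # Signed heading error folded into [-180, 180); positive = turn left.
    error = (desired_deg - heading_deg + 180.0) % 360.0 - 180.0
    if abs(error) < tolerance_deg:
        return None                                  # on course: no pulse
    return "right_antenna" if error > 0 else "left_antenna"

# Toy check: facing along +x, a waypoint ahead-left means a left turn,
# which calls for a pulse to the right antenna.
print(steer_to_waypoint((0.0, 0.0), 0.0, (10.0, 0.0)))   # None
print(steer_to_waypoint((0.0, 0.0), 0.0, (10.0, 10.0)))  # right_antenna
```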

Meanwhile for those of us without laboratories, there’s the RoboRoach Kickstarter project,

Our Roboroach is an innovative marriage of behavioral neuroscience and neural engineering. Cockroaches use the antennas on their head to navigate the world around them. When these antennas touch a wall, the cockroach turns away from the wall. The antenna of a cockroach contains neurons that are sensitive to touch and smell.

The backpack we invented communicates directly to the [cockroach’s] neurons via small electrical pulses. The cockroach undergoes a short surgery (under anesthesia) in which wires are placed inside the antenna. Once it recovers, a backpack is temporarily placed on its back.

When you send the command from your mobile phone, the backpack sends pulses to the antenna, which causes the neurons to fire, which causes the roach to think there is a wall on one side. The result? The roach turns! Microstimulation is the same neurotechnology that is used to treat Parkinson’s Disease and is also used in Cochlear Implants.

This product is not a toy, but a tool to learn about how our brains work. Using the RoboRoach, you will be able to discover a number of interesting things about nature:

Neural control of Behaviour: First and foremost you will see in real time how the brain responds to sensory stimuli.

Learning and Memory: After a few minutes the cockroach will stop responding to the RoboRoach microstimulation. Why? The brain learns and adapts. That is what brains are designed to do. You can measure the time to adaptation for various stimulation frequencies [a measurement sketch follows this list].

Adaptation and Habituation: After placing the cockroach back in its home cage, how long does it take for him to respond again? Does he adapt to the stimuli more quickly?

Stimuli Selection: What range of frequencies works for causing neurons to fire? With this tool, you will be able to select the range of stimulation to see what works best for your prep. Is it the same that is used by medical doctors stimulating human neurons? You will find out.

Effect of Randomness: For the first time ever… we will be adding a “random” mode to our stimulus patterns. We, as humans, can adapt easily to periodic noises (the hum of a refrigerator can be ignored, for example). So perhaps the reason for adaptation is our stimulus is periodic. Now you can select random mode and see if the RoboRoach adapts as quickly… or at all!
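Since the pitch invites exactly this kind of measurement, here’s a minimal Python harness for timing adaptation at a given stimulation frequency. The `stimulate` and `roach_turned` hooks are placeholders you’d wire to your own setup; nothing here is Backyard Brains’ actual API.

```python
import time

def time_to_adaptation(stimulate, roach_turned, freq_hz, timeout_s=120.0):
    """Repeatedly pulse at freq_hz and time how long the roach keeps
    turning in response. `stimulate` and `roach_turned` are placeholder
    hooks for your own hardware and observation, not a real API."""
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        stimulate(freq_hz)
        if not roach_turned():
            return time.monotonic() - start  # seconds until it ignored us
        time.sleep(1.0 / freq_hz)
    return timeout_s  # never adapted within the timeout

# Dry run with stand-ins: a 'roach' that stops responding after 5 pulses.
responses = iter([True] * 5 + [False])
print(time_to_adaptation(lambda hz: None, lambda: next(responses), freq_hz=10.0))
```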

Backyard Brains (mentioned here in my March 28, 2012 posting* about neurons, dance, and do-it-yourself neuroscience; another mashup), the organization initiating this Kickstarter campaign, has 13 days left to make its goal of $10,000 (as of today, June 26, 2013 at 10:00 am PDT, the project has received $9,774 in pledges).

Pledges can range from $5 to $500 with incentives ranging from a mention on their website to delivery of RoboRoach Kits (complete with cockroaches, only within US borders).

This particular version of the RoboRoach project was introduced by Greg Gage at TEDGlobal 2013. Here’s what Karen Eng had to say about the presentation in her June 12, 2013 posting on the TED [technology, entertainment, design] blog,

Talking as fast and fervently as a circus busker, TED Fellow Greg Gage introduces the world to RoboRoach — a kit that allows you [to] create a cockroach cyborg and control its movements via an iPhone app and “the world’s first commercially available cyborg in the history of mankind.”

“I’m a neuroscientist,” says Gage, “and that means I had to go to grad school for five years just to ask questions about the brain.” This is because the equipment involved is so expensive and complex that it’s only available in university research labs, accessible to PhD candidates and researchers. But other branches of science don’t have this problem — “You don’t have to get a PhD in astronomy to get a telescope and study the sky.”

Yet one in five of us will be diagnosed with a neurological disorder — for which we have no cures. We need more people educated in neuroscience to investigate these diseases. That’s why Gage and his partners at Backyard Brains are developing affordable tools that allow educators to teach electrophysiology from university down to the fifth grade level.

As he speaks, he and his partner, Tim Marzullo, release a large South American cockroach wearing an electronic backpack — which sends an electrical current directly into the cockroach’s antenna nerves — onto the table on stage. A line of green spikes appear, accompanied by a sound like rain on a tent or popcorn popping. “The common currency of the brain are the spikes in the neurons,” Gage explains. “These are the neurons that are inside of the antenna, but that’s also what your brain sounds like. Your thoughts, your hopes, your dreams, all encoded into these spikes. People, this is reality right here — the spikes are everything you know!” As Greg’s partner swipes his finger across his iPhone, the RoboRoach swerves left and right, sometimes erratically going in a full confused circle.

So why do this? “This is the exact same technology that’s used to treat Parkinson’s disease and make cochlear implants for deaf people. If we can get these tools into hands of kids, we can start the neurological revolution.”

After Gage’s talk, Chris Anderson asks about the ethics of using the cockroaches for these purposes. Gage explains that this is microstimulation, not a pain response — the evidence is that the roach adapts quickly to the stimulation. (In fact, some high school students have discovered that they can control the rate of adaptation in an unusual way — by playing music to the roaches over their iPods.) After the experiment, he says, the cockroaches are released to go back to do what cockroaches normally do. So don’t worry — no animals were irretrievably harmed in the making of this TED talk.

Anya Kamenetz in her June 7, 2013 article for Fast Company about the then upcoming presentation also mentions insect welfare,

Attaching the electronic “backpack” to an unwitting arthropod is not for the squeamish. You must sand down the top of the critter’s head in order to attach a plug, “Exactly like the Matrix,” says Backyard Brains cofounder Greg Gage. Once installed, the system relays electrical impulses over a Bluetooth connection from your phone to the cockroach’s brain, via its antennae. …

Gage claims that he has scientific proof that neither the surgery nor the stimulation hurts the roaches. The proof, according to Gage, is that the stimulation stops working after a little while as the roaches apparently decide to ignore it.

Kamenetz goes on to note that this project has already led to a discovery. High school students in New York City found that cockroaches did not habituate to randomized electrical signals as quickly as they did to steady signals. This discovery could have implications for treatment of diseases such as Parkinson’s.

The issue of animal use/welfare vis-à-vis scientific experiments is not an easy one and I can understand why Gage might be eager to dismiss any suggestions that the cockroaches are being hurt. Given how hard it is to ignore pain, I am willing to accept Gage’s dismissal of the issue until such time as he is proven wrong. (BTW, I am curious as to how one would know if a cockroach is experiencing pain.)

I have one more thought for the road. I wonder whether the researchers at North Carolina State University are aware of the RoboRoach work and are able to integrate some of those findings into their own research (and vice versa).

*’March 28, 2013′ corrected to ‘March 28, 2012’ on Oct. 9, 2017.

NASA releases a game

Who knew it was World Space Week last week, Oct. 4-10, 2011? I certainly didn’t but then I don’t work for any of the space agencies. I’m not sure if they’re commemorating the end of space week or not but NASA (US National Aeronautics and Space Administration) released an interactive video game at roughly that time according to an Oct. 12, 2011 news item on physorg.com. From the news item,

NASA has released an interactive, educational video game called NetworKing that depicts how the Space Communications and Navigation (SCaN) network operates. …

To successfully construct fast and efficient communication networks, players must first establish command stations around the world and accept clients conducting space missions, such as satellites and space telescopes. Resources are earned throughout the game as players continue to acquire more clients. Players can strategically use accumulated resources to enhance and increase their networks’ capabilities.

Players with the most integrated communications networks will have the ability to acquire more complex clients, such as the International Space Station, Hubble Space Telescope and the Kepler mission.

This sounds like it could be a very exciting game. Here’s a bit more about it and the rest of the NASA 3D Resources website where the game resides,

NetworKing is available to the public for play on the NASA 3D Resources website. Players can access the game using an Internet browser. It can be downloaded and run on both a PC and Macintosh operating system. To play the NetworKing game, visit: http://www.nasa.go … es/scan.html

In conjunction with NetworKing, the 3D Resources website also links visitors to the Station Spacewalk Interactive Game and the SCaN Interactive Demo that demonstrate the interaction between SCaN’s ground-and-space facilities and NASA spacecraft.

Interestingly, they also offer an opportunity for anyone who wants to create their own game using free 3D models from NASA. From the website NetworKing page,

Would you like to create your own video game? Visit http://www.nasa.gov/multimedia/3d_resources/ to download free 3D models from NASA.

I was not able to discover what skill set you need or what age range is considered suitable for this game. Have fun figuring it all out!