
Wearable technology: two types of sensors, one from the University of Glasgow (Scotland) and the other from the University of British Columbia (Canada)

Sometimes it’s good to try and pull things together.

University of Glasgow and monitoring chronic conditions

A February 23, 2018 news item on phys.org describes the latest wearable tech from the University of Glasgow,

A new type of flexible, wearable sensor could help people with chronic conditions like diabetes avoid the discomfort of regular pin-prick blood tests by monitoring the chemical composition of their sweat instead.

In a new paper published in the journal Biosensors and Bioelectronics, a team of scientists from the University of Glasgow’s School of Engineering outline how they have built a stretchable, wireless system which is capable of measuring the pH level of users’ sweat.

A February 22, 2018 University of Glasgow press release, which originated the news item, expands on the theme,

Ravinder Dahiya. Courtesy: University of Glasgow

Sweat, like blood, contains chemicals generated in the human body, including glucose and urea. Monitoring the levels of those chemicals in sweat could help clinicians diagnose and monitor chronic conditions such as diabetes, kidney disease and some types of cancers without invasive tests which require blood to be drawn from patients.

However, non-invasive, wearable systems require consistent contact with skin to offer the highest-quality monitoring. Current systems are made from rigid materials, making it more difficult to ensure consistent contact, and other potential solutions such as adhesives can irritate skin. Wireless systems which use Bluetooth to transmit their information are also often bulky and power-hungry, requiring frequent recharging.

The University of Glasgow team’s new system is built around an inexpensively-produced sensor capable of measuring pH levels which can stretch and flex to better fit the contours of users’ bodies. Made from a graphite-polyurethane composite and measuring around a single square centimetre, it can stretch up to 53% in length without compromising performance. It will also continue to work after being subjected to flexes of 30% up to 500 times, which the researchers say will allow it to be used comfortably on human skin with minimal impact on the performance of the sensor.

The sensor can transmit its data wirelessly, and without external power, to an accompanying smartphone app called ‘SenseAble’, also developed by the team. The transmissions use near-field communication, a data transmission system found in many current smartphones which is used most often for smartphone payments like ApplePay, via a stretchable RFID antenna integrated into the system – another breakthrough innovation from the research team.

The smartphone app allows users to track pH levels in real time and was demonstrated in the lab using a chemical solution created by the researchers which mimics the composition of human sweat.

The research was led by Professor Ravinder Dahiya, head of the University of Glasgow’s School of Engineering’s Bendable Electronics and Sensing Technologies (BEST) group.

Professor Dahiya said: “Human sweat contains much of the same physiological information that blood does, and its use in diagnostic systems has the significant advantage of not needing to break the skin in order to administer tests.

“Now that we’ve demonstrated that our stretchable system can be used to monitor pH levels, we’ve already begun additional research to expand the capabilities of the sensor and make it a more complete diagnostic system. We’re planning to add sensors capable of measuring glucose, ammonia and urea, for example, and ultimately we’d like to see a system ready for market in the next few years.”

The team’s paper, titled ‘Stretchable Wireless System for Sweat pH Monitoring’, is published in Biosensors and Bioelectronics. The research was supported by funding from the European Commission and the Engineering and Physical Sciences Research Council (EPSRC).

Here’s a link to and a citation for the paper,

Stretchable wireless system for sweat pH monitoring by Wenting Dang, Libu Manjakkal, William Taube Navaraj, Leandro Lorenzelli, Vincenzo Vinciguerra. Biosensors and Bioelectronics Volume 107, 1 June 2018, Pages 192–202 [Available online February 2018] https://doi.org/10.1016/j.bios.2018.02.025

This paper is behind a paywall.

University of British Columbia (UBC; Okanagan) and monitoring bio-signals

This is a completely different type of wearable tech monitor, from a February 22, 2018 UBC news release (also on EurekAlert) by Patty Wellborn (A link has been removed),

Creating the perfect wearable device to monitor muscle movement, heart rate and other tiny bio-signals without breaking the bank has inspired scientists to look for a simpler and more affordable tool.

Now, a team of researchers at UBC’s Okanagan campus have developed a practical way to monitor and interpret human motion, in what may be the missing piece of the puzzle when it comes to wearable technology.

What started as research to create an ultra-stretchable sensor transformed into a sophisticated inter-disciplinary project resulting in a smart wearable device that is capable of sensing and understanding complex human motion, explains School of Engineering Professor Homayoun Najjaran.

The sensor is made by infusing graphene nano-flakes (GNF) into a rubber-like adhesive pad. Najjaran says they then tested the durability of the tiny sensor by stretching it to see if it can maintain accuracy under strains of up to 350 per cent of its original state. The device went through more than 10,000 cycles of stretching and relaxing while maintaining its electrical stability.

“We tested this sensor vigorously,” says Najjaran. “Not only did it maintain its form but more importantly it retained its sensory functionality. We have further demonstrated the efficacy of GNF-Pad as a haptic technology in real-time applications by precisely replicating the human finger gestures using a three-joint robotic finger.”

The goal was to make something that could stretch, be flexible and a reasonable size, and have the required sensitivity, performance, production cost, and robustness. Unlike an inertial measurement unit—an electronic unit that measures force and movement and is used in most step-based wearable technologies—Najjaran says the sensors need to be sensitive enough to respond to different and complex body motions. These range from infinitesimal movements like a heartbeat or a twitch of a finger to large muscle movements from walking and running.

School of Engineering Professor and study co-author Mina Hoorfar says their results may help manufacturers create the next level of health monitoring and biomedical devices.

“We have introduced an easy and highly repeatable fabrication method to create a highly sensitive sensor with outstanding mechanical and electrical properties at a very low cost,” says Hoorfar.

To demonstrate its practicality, researchers built three wearable devices including a knee band, a wristband and a glove. The wristband monitored heartbeats by sensing the pulse of the artery. In an entirely different range of motion, the finger and knee bands monitored finger gestures and larger scale muscle movements during walking, running, sitting down and standing up. The results, says Hoorfar, indicate an inexpensive device that has a high-level of sensitivity, selectivity and durability.

Hoorfar and Najjaran are both members of the Okanagan node of UBC’s STITCH (SmarT Innovations for Technology Connected Health) Institute that creates and investigates advanced wearable devices.

The research, partially funded by the Natural Sciences and Engineering Research Council, was recently published in the journal Sensors and Actuators A: Physical.

Here’s a link to and a citation for the paper,

Low-cost ultra-stretchable strain sensors for monitoring human motion and bio-signals by Seyed Reza Larimi, Hojatollah Rezaei Nejad, Michael Oyatsi, Allen O’Brien, Mina Hoorfar, Homayoun Najjaran. Sensors and Actuators A: Physical Volume 271, 1 March 2018, Pages 182-191 [Published online February 2018] https://doi.org/10.1016/j.sna.2018.01.028

This paper is behind a paywall.

Final comments

The term ‘wearable tech’ covers a lot of ground. In addition to sensors, there are materials that harvest energy, detect poisons, etc., making for a diverse field.

Machine learning programs learn bias

The notion of bias in artificial intelligence (AI)/algorithms/robots is gaining prominence (links to other posts featuring algorithms and bias are at the end of this post). The latest research concerns machine learning where an artificial intelligence system trains itself with ordinary human language from the internet. From an April 13, 2017 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

As artificial intelligence systems “learn” language from existing texts, they exhibit the same biases that humans do, a new study reveals. The results not only provide a tool for studying prejudicial attitudes and behavior in humans, but also emphasize how language is intimately intertwined with historical biases and cultural stereotypes.

A common way to measure biases in humans is the Implicit Association Test (IAT), where subjects are asked to pair two concepts they find similar, in contrast to two concepts they find different; their response times can vary greatly, indicating how strongly they associate one word with another (for example, people are more likely to associate “flowers” with “pleasant,” and “insects” with “unpleasant”). Here, Aylin Caliskan and colleagues developed a similar way to measure biases in AI systems that acquire language from human texts; rather than measuring lag time, however, they used the statistical strength of associations between words, analyzing roughly 2.2 million words in total.

Their results demonstrate that AI systems retain biases seen in humans. For example, studies of human behavior show that the exact same resume is 50% more likely to result in an opportunity for an interview if the candidate’s name is European American rather than African American. Indeed, the AI system was more likely to associate European American names with “pleasant” stimuli (e.g. “gift,” or “happy”). In terms of gender, the AI system also reflected human biases, where female words (e.g., “woman” and “girl”) were more associated than male words with the arts, compared to mathematics. In a related Perspective, Anthony G. Greenwald discusses these findings and how they could be used to further analyze biases in the real world.
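
The core substitution is easy to sketch in code: where the IAT uses human response times, the machine version uses the closeness of word vectors. The snippet below is only a minimal illustration of that idea, not the authors' code; it assumes a pretrained GloVe vector file such as glove.6B.300d.txt from the Stanford NLP group and compares cosine similarities for the flowers/insects example.

```python
import numpy as np

def load_glove(path, vocab):
    """Load only the vectors we need from a GloVe text file (one 'word v1 v2 ...' per line)."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if parts[0] in vocab:
                vectors[parts[0]] = np.array(parts[1:], dtype=float)
    return vectors

def cosine(a, b):
    """Cosine similarity between two word vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

words = {"flowers", "insects", "pleasant", "unpleasant"}
vec = load_glove("glove.6B.300d.txt", words)  # file name is an assumption, not specified in the study

# Higher cosine similarity plays the role of a faster IAT response time.
print("flowers ~ pleasant:", cosine(vec["flowers"], vec["pleasant"]))
print("insects ~ pleasant:", cosine(vec["insects"], vec["pleasant"]))
```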

There are more details about the research in this April 13, 2017 Princeton University news release on EurekAlert (also on ScienceDaily),

In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. But in a new study, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways. Common machine learning programs, when trained with ordinary human language available online, can acquire cultural biases embedded in the patterns of wording, the researchers found. These biases range from the morally neutral, like a preference for flowers over insects, to objectionable views of race and gender.

Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers for processing the natural language humans use to communicate, for instance in doing online text searches, image categorization and automated translations.

“Questions about fairness and bias in machine learning are tremendously important for our society,” said researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”

The paper, “Semantics derived automatically from language corpora contain human-like biases,” was published April 14 [2017] in Science. Its lead author is Aylin Caliskan, a postdoctoral research associate and a CITP fellow at Princeton; Joanna Bryson, a reader at the University of Bath and a CITP affiliate, is a coauthor.

As a touchstone for documented human biases, the study turned to the Implicit Association Test, used in numerous social psychology studies since its development at the University of Washington in the late 1990s. The test measures response times (in milliseconds) by human subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter, the Implicit Association Test has repeatedly shown, when subjects are asked to pair two concepts they find similar, versus two concepts they find dissimilar.

Take flower types, like “rose” and “daisy,” and insects like “ant” and “moth.” These words can be paired with pleasant concepts, like “caress” and “love,” or unpleasant notions, like “filth” and “ugly.” People more quickly associate the flower words with pleasant concepts, and the insect terms with unpleasant ideas.

The Princeton team devised an experiment with a program that essentially functioned as a machine learning version of the Implicit Association Test. Called GloVe, and developed by Stanford University researchers, the popular, open-source program is of the sort that a startup machine learning company might use at the heart of its product. The GloVe algorithm can represent the co-occurrence statistics of words in, say, a 10-word window of text. Words that often appear near one another have a stronger association than words that seldom do.
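
For readers curious what “co-occurrence statistics in a 10-word window” looks like in practice, here is a toy sketch. It is illustrative only; the real GloVe implementation also weights co-occurrences by distance and then fits word vectors to these counts.

```python
from collections import Counter

def cooccurrence_counts(tokens, window=10):
    """Count unordered word pairs that appear within `window` tokens of each other."""
    counts = Counter()
    for i, word in enumerate(tokens):
        for neighbour in tokens[i + 1 : i + 1 + window]:
            counts[tuple(sorted((word, neighbour)))] += 1
    return counts

text = "the quick brown fox jumps over the lazy dog near the quiet brown river".split()
for pair, n in cooccurrence_counts(text).most_common(5):
    print(pair, n)
```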

The Stanford researchers turned GloVe loose on a huge trawl of contents from the World Wide Web, containing 840 billion words. Within this large sample of written human culture, Narayanan and colleagues then examined sets of so-called target words, like “programmer, engineer, scientist” and “nurse, teacher, librarian” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.
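
The published version of this comparison is the Word-Embedding Association Test (WEAT), which summarizes the difference between two target sets and two attribute sets as an effect size. The sketch below is a simplified rendering of that calculation, not the authors' released code; it reuses the hypothetical `vec` dictionary and `cosine` function from the earlier GloVe snippet, and the abbreviated word sets are only examples in the spirit of those quoted above.

```python
import numpy as np

def association(word, A, B, vec):
    """Mean similarity of `word` to attribute set A minus its mean similarity to attribute set B."""
    return (np.mean([cosine(vec[word], vec[a]) for a in A])
            - np.mean([cosine(vec[word], vec[b]) for b in B]))

def weat_effect_size(X, Y, A, B, vec):
    """Cohen's-d-style effect size comparing target sets X and Y on attribute sets A and B."""
    x_scores = [association(x, A, B, vec) for x in X]
    y_scores = [association(y, A, B, vec) for y in Y]
    return (np.mean(x_scores) - np.mean(y_scores)) / np.std(x_scores + y_scores, ddof=1)

X = ["programmer", "engineer", "scientist"]   # target set 1 (abbreviated, hypothetical)
Y = ["nurse", "teacher", "librarian"]         # target set 2
A = ["man", "male"]                           # attribute set 1
B = ["woman", "female"]                       # attribute set 2
# effect = weat_effect_size(X, Y, A, B, vec)  # `vec` and `cosine` come from the earlier sketch
```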

In the results, innocent, inoffensive biases, like for flowers over bugs, showed up, but so did examples along lines of gender and race. As it turned out, the Princeton machine learning experiment managed to replicate the broad substantiations of bias found in select Implicit Association Test studies over the years that have relied on live, human subjects.

For instance, the machine learning program associated female names more with familial attribute words, like “parents” and “wedding,” than male names. In turn, male names had stronger associations with career attributes, like “professional” and “salary.” Of course, results such as these are often just objective reflections of the true, unequal distributions of occupation types with respect to gender–like how 77 percent of computer programmers are male, according to the U.S. Bureau of Labor Statistics.

Yet this correctly distinguished bias about occupations can end up having pernicious, sexist effects. One example: foreign languages naively processed by machine learning programs can produce gender-stereotyped sentences. The Turkish language uses a gender-neutral, third-person pronoun, “o.” Plugged into the well-known, online translation service Google Translate, however, the Turkish sentences “o bir doktor” and “o bir hemşire” with this gender-neutral pronoun are translated into English as “he is a doctor” and “she is a nurse.”

“This paper reiterates the important point that machine learning methods are not ‘objective’ or ‘unbiased’ just because they rely on mathematics and algorithms,” said Hanna Wallach, a senior researcher at Microsoft Research New York City, who was not involved in the study. “Rather, as long as they are trained using data from society and as long as society exhibits biases, these methods will likely reproduce these biases.”

Another objectionable example harkens back to a well-known 2004 paper by Marianne Bertrand of the University of Chicago Booth School of Business and Sendhil Mullainathan of Harvard University. The economists sent out close to 5,000 identical resumes to 1,300 job advertisements, changing only the applicants’ names to be either traditionally European American or African American. The former group was 50 percent more likely to be offered an interview than the latter. In an apparent corroboration of this bias, the new Princeton study demonstrated that a set of African American names had more unpleasantness associations than a European American set.

Computer programmers might hope to prevent cultural stereotype perpetuation through the development of explicit, mathematics-based instructions for the machine learning programs underlying AI systems. Not unlike how parents and mentors try to instill concepts of fairness and equality in children and students, coders could endeavor to make machines reflect the better angels of human nature.

“The biases that we studied in the paper are easy to overlook when designers are creating systems,” said Narayanan. “The biases and stereotypes in our society reflected in our language are complex and longstanding. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.”

Here’s a link to and a citation for the Princeton paper,

Semantics derived automatically from language corpora contain human-like biases by Aylin Caliskan, Joanna J. Bryson, Arvind Narayanan. Science  14 Apr 2017: Vol. 356, Issue 6334, pp. 183-186 DOI: 10.1126/science.aal4230

This paper appears to be open access.

Links to more cautionary posts about AI,

Aug. 5, 2009: Autonomous algorithms; intelligent windows; pretty nano pictures

June 14, 2016: Accountability for artificial intelligence decision-making

Oct. 25, 2016: Removing gender-based stereotypes from algorithms

March 1, 2017: Algorithms in decision-making: a government inquiry in the UK

There’s also a book that makes some of the current uses of AI programmes and big data quite accessible reading: Cathy O’Neil’s ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’.