
A customized cruise experience with wearable technology (and decreased personal agency?)

The days when you went cruising to ‘get away from it all’ seem to have passed (if they ever really existed) with the introduction of wearable technology that registers your every preference and makes life easier, according to Cliff Kuang’s Oct. 19, 2017 article for Fast Company,

This month [October 2017], the 141,000-ton Regal Princess will push out to sea after a nine-figure revamp of mind-boggling scale. Passengers won’t be greeted by new restaurants, swimming pools, or onboard activities, but will instead step into a future augured by the likes of Netflix and Uber, where nearly everything is on demand and personally tailored. An ambitious new customization platform has been woven into the ship’s 19 passenger decks: some 7,000 onboard sensors and 4,000 “guest portals” (door-access panels and touch-screen TVs), all of them connected by 75 miles of internal cabling. As the Carnival-owned ship cruises to Nassau, Bahamas, and Grand Turk, its 3,500 passengers will have the option of carrying a quarter-size device, called the Ocean Medallion, which can be slipped into a pocket or worn on the wrist and is synced with a companion app.

The platform will provide a new level of service for passengers; the onboard sensors record their tastes and respond to their movements, and the app guides them around the ship and toward activities aligned with their preferences. Carnival plans to roll out the platform to another seven ships by January 2019. Eventually, the Ocean Medallion could be opening doors, ordering drinks, and scheduling activities for passengers on all 102 of Carnival’s vessels across 10 cruise lines, from the mass-market Princess ships to the legendary ocean liners of Cunard.

Kuang goes on to explain the reasoning behind this innovation,

The Ocean Medallion is Carnival’s attempt to address a problem that’s become increasingly vexing to the $35.5 billion cruise industry. Driven by economics, ships have exploded in size: In 1996, Carnival Destiny was the world’s largest cruise ship, carrying 2,600 passengers. Today, Royal Caribbean’s MS Harmony of the Seas carries up to 6,780 passengers and 2,300 crew. Larger ships expend less fuel per passenger; the money saved can then go to adding more amenities—which, in turn, are geared to attracting as many types of people as possible. Today on a typical ship you can do practically anything—from attending violin concertos to bungee jumping. And that’s just onboard. Most of a cruise is spent in port, where each day there are dozens of experiences available. This avalanche of choice can bury a passenger. It has also made personalized service harder to deliver. …

Kuang also wrote this brief description of how the technology works from the passenger’s perspective in an Oct. 19, 2017 item for Fast Company,

1. Pre-trip

On the web or on the app, you can book experiences, log your tastes and interests, and line up your days. That data powers the recommendations you’ll see. The Ocean Medallion arrives by mail and becomes the key to ship access.

2. Stateroom

When you draw near, your cabin-room door unlocks without swiping. The room’s unique 43-inch TV, which doubles as a touch screen, offers a range of Carnival’s bespoke travel shows. Whatever you watch is fed into your excursion suggestions.

3. Food

When you order something, sensors detect where you are, allowing your server to find you. Your allergies and preferences are also tracked, and shape the choices you’re offered. In all, the back-end data has 45,000 allergens tagged and manages 250,000 drink combinations.

4. Activities

The right algorithms can go beyond suggesting wines based on previous orders. Carnival is creating a massive semantic database, so if you like pricey reds, you’re more apt to be guided to a violin concerto than a limbo competition. Your onboard choices—the casino, the gym, the pool—inform your excursion recommendations.
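As an aside, the preference-to-activity matching Kuang describes can be sketched as a simple tag-overlap recommender. Everything here is invented for illustration (the activity names, the tags, the guest profile); Carnival’s actual semantic database is far larger and not public.

```python
# Toy sketch of preference-based activity recommendation.
# All tags and activities are hypothetical.

def recommend(guest_tags, activities):
    """Rank activities by how many tags they share with the guest's profile."""
    scored = []
    for name, tags in activities.items():
        overlap = len(guest_tags & tags)
        if overlap:
            scored.append((overlap, name))
    # Highest overlap first; ties broken alphabetically for stable output.
    return [name for overlap, name in sorted(scored, key=lambda s: (-s[0], s[1]))]

activities = {
    "violin concerto": {"classical", "refined", "quiet"},
    "limbo competition": {"party", "loud", "dance"},
    "wine tasting": {"refined", "gourmet", "quiet"},
}

# A guest whose onboard choices suggest 'pricey reds' tastes:
guest = {"refined", "classical", "quiet"}
print(recommend(guest, activities))
```

A real system would weight tags by confidence and feed in behavioural signals (what you watched, where you lingered), but the basic shape — profile in, ranked activities out — is the same.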

In Kuang’s Oct. 19, 2017 article he notes that the cruise ship line is putting a lot of effort into retraining its staff and emphasizing the ‘soft’ skills that aren’t going to be found in this iteration of the technology. No mention is made of whether there will be reductions in the number of staff members on this cruise ship, nor of the possibility that ‘soft’ skills may in the future be incorporated into this technological marvel.

Personalization/customization is increasingly everywhere

How do you feel about customized news feeds? As it turns out, this is not a rhetorical question as Adrienne LaFrance notes in her Oct. 19, 2017 article for The Atlantic (Note: Links have been removed),

Today, a Google search for news runs through the same algorithmic filtration system as any other Google search: A person’s individual search history, geographic location, and other demographic information affects what Google shows you. Exactly how your search results differ from any other person’s is a mystery, however. Not even the computer scientists who developed the algorithm could precisely reverse engineer it, given the fact that the same result can be achieved through numerous paths, and that ranking factors—deciding which results show up first—are constantly changing, as are the algorithms themselves.

We now get our news in real time, on demand, tailored to our interests, across multiple platforms, without knowing just how much is actually personalized. It was technology companies like Google and Facebook, not traditional newsrooms, that made it so. But news organizations are increasingly betting that offering personalized content can help them draw audiences to their sites—and keep them coming back.

Personalization extends beyond how and where news organizations meet their readers. Already, smartphone users can subscribe to push notifications for the specific coverage areas that interest them. On Facebook, users can decide—to some extent—which organizations’ stories they would like to appear in their news feeds. At the same time, devices and platforms that use machine learning to get to know their users will increasingly play a role in shaping ultra-personalized news products. Meanwhile, voice-activated artificially intelligent devices, such as Google Home and Amazon Echo, are poised to redefine the relationship between news consumers and the news [emphasis mine].

While news personalization can help people manage information overload by making individuals’ news diets unique, it also threatens to incite filter bubbles and, in turn, bias [emphasis mine]. This “creates a bit of an echo chamber,” says Judith Donath, author of The Social Machine: Designs for Living Online and a researcher affiliated with Harvard University’s Berkman Klein Center for Internet and Society. “You get news that is designed to be palatable to you. It feeds into people’s appetite of expecting the news to be entertaining … [and] the desire to have news that’s reinforcing your beliefs, as opposed to teaching you about what’s happening in the world and helping you predict the future better.”

Still, algorithms have a place in responsible journalism. “An algorithm actually is the modern editorial tool,” says Tamar Charney, the managing editor of NPR One, the organization’s customizable mobile-listening app. A handcrafted hub for audio content from both local and national programs as well as podcasts from sources other than NPR, NPR One employs an algorithm to help populate users’ streams with content that is likely to interest them. But Charney assures there’s still a human hand involved: “The whole editorial vision of NPR One was to take the best of what humans do and take the best of what algorithms do and marry them together.” [emphasis mine]
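The “marriage” Charney describes — human editorial judgment plus algorithmic personalization — can be sketched as a stream builder where editors pin must-hear stories and an algorithm orders the rest by predicted interest. The story names and scores below are hypothetical; this is not NPR One’s actual ranking logic.

```python
# Hybrid human + algorithm stream: editorial picks lead,
# the remainder is sorted by a (hypothetical) interest score.

def build_stream(stories, editorial_picks, interest_scores):
    pinned = [s for s in editorial_picks if s in stories]
    rest = sorted(
        (s for s in stories if s not in editorial_picks),
        key=lambda s: interest_scores.get(s, 0.0),
        reverse=True,
    )
    return pinned + rest

stories = ["local flood report", "jazz history podcast", "election recap", "tech review"]
editorial_picks = ["election recap"]  # human judgment: everyone should hear this
interest_scores = {"jazz history podcast": 0.9, "tech review": 0.6, "local flood report": 0.4}

print(build_stream(stories, editorial_picks, interest_scores))
```

The design choice worth noticing: the editorial layer overrides the algorithm rather than merely feeding into it, which is one way to keep “the best of what humans do” from being optimized away.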

The skimming and diving Charney describes sounds almost exactly like how Apple and Google approach their distributed-content platforms. With Apple News, users can decide which outlets and topics they are most interested in seeing, with Siri offering suggestions as the algorithm gets better at understanding your preferences. Siri now has help from Safari. The personal assistant can now detect browser history and suggest news items based on what someone’s been looking at—for example, if someone is searching Safari for Reykjavík-related travel information, they will then see Iceland-related news on Apple News. But the For You view of Apple News isn’t 100 percent customizable, as it still spotlights top stories of the day, and trending stories that are popular with other users, alongside those curated just for you.

Similarly, with Google’s latest update to Google News, readers can scan fixed headlines, customize sidebars on the page to their core interests and location—and, of course, search. The latest redesign of Google News makes it look newsier than ever, and adds to many of the personalization features Google first introduced in 2010. There’s also a place where you can preprogram your own interests into the algorithm.

Google says this isn’t an attempt to supplant news organizations, nor is it inspired by them. The design is rather an embodiment of Google’s original ethos, the product manager for Google News Anand Paka says: “Just due to the deluge of information, users do want ways to control information overload. In other words, why should I read the news that I don’t care about?” [emphasis mine]

Meanwhile, in May [2017?], Google briefly tested a personalized search filter that would dip into its trove of data about users with personal Google and Gmail accounts and include results exclusively from their emails, photos, calendar items, and other personal data related to their query. [emphasis mine] The “personal” tab was supposedly “just an experiment,” a Google spokesperson said, and the option was temporarily removed, but seems to have rolled back out for many users as of August [2017?].

Now, Google, in seeking to settle a class-action lawsuit alleging that scanning emails to offer targeted ads amounts to illegal wiretapping, is promising that for the next three years it won’t use the content of its users’ emails to serve up targeted ads in Gmail. The move, which will go into effect at an unspecified date, doesn’t mean users won’t see ads, however. Google will continue to collect data from users’ search histories, YouTube, and Chrome browsing habits, and other activity.

The fear that personalization will encourage filter bubbles by narrowing the selection of stories is a valid one, especially considering that the average internet user or news consumer might not even be aware of such efforts. Elia Powers, an assistant professor of journalism and news media at Towson University in Maryland, studied the awareness of news personalization among students after he noticed those in his own classes didn’t seem to realize the extent to which Facebook and Google customized users’ results. “My sense is that they didn’t really understand … the role that people that were curating the algorithms [had], how influential that was. And they also didn’t understand that they could play a pretty active role on Facebook in telling Facebook what kinds of news they want them to show and how to prioritize [content] on Google,” he says.

The results of Powers’s study, which was published in Digital Journalism in February [2017], showed that the majority of students had no idea that algorithms were filtering the news content they saw on Facebook and Google. When asked if Facebook shows every news item, posted by organizations or people, in a user’s newsfeed, only 24 percent of those surveyed were aware that Facebook prioritizes certain posts and hides others. Similarly, only a quarter of respondents said Google search results would be different for two different people entering the same search terms at the same time. [emphasis mine; Note: Respondents in this study were students.]

This, of course, has implications beyond the classroom, says Powers: “People as news consumers need to be aware of what decisions are being made [for them], before they even open their news sites, by algorithms and the people behind them, and also be able to understand how they can counter the effects or maybe even turn off personalization or make tweaks to their feeds or their news sites so they take a more active role in actually seeing what they want to see in their feeds.”

On Google and Facebook, the algorithm that determines what you see is invisible. With voice-activated assistants, the algorithm suddenly has a persona. “We are being trained to have a relationship with the AI,” says Amy Webb, founder of the Future Today Institute and an adjunct professor at New York University Stern School of Business. “This is so much more catastrophically horrible for news organizations than the internet. At least with the internet, I have options. The voice ecosystem is not built that way. It’s being built so I just get the information I need in a pleasing way.”

LaFrance’s article is thoughtful and well worth reading in its entirety. Now, onto some commentary.

Loss of personal agency

I have been concerned for some time about the increasingly dull results I get from a Google search, and while I realize the company has been gathering information about me via my searches, supposedly in service of giving me better searches, I had no idea how deeply the company can mine for personal data. It makes me wonder what would happen if Google and Facebook attempted a merger.

More cogently, I rather resent the search engines and artificial intelligence agents (e.g. Facebook bots) which have usurped my role as the arbiter of what interests me, in short, my increasing loss of personal agency.

I’m also deeply suspicious of what these companies are going to do with my data. Will it be used to manipulate me in some way? Presumably, the data will be sold and used for some purpose. In the US, electoral data has already been married with consumer data, as Brent Bambury notes in an Oct. 13, 2017 article for his CBC (Canadian Broadcasting Corporation) Radio show,

How much of your personal information circulates in the free-market ether of metadata? It could be more than you imagine, and it might be enough to let others change the way you vote.

A data firm that specializes in creating psychological profiles of voters claims to have up to 5,000 data points on 220 million Americans. Cambridge Analytica has deep ties to the American right and was hired by the campaigns of Ben Carson, Ted Cruz and Donald Trump.

During the U.S. election, CNN called them “Donald Trump’s mind readers” and his secret weapon.

David Carroll is a Professor at the Parsons School of Design in New York City. He is one of the millions of Americans profiled by Cambridge Analytica and he’s taking legal action to find out where the company gets its masses of data and how they use it to create their vaunted psychographic profiles of voters.

On Day 6 [Bambury’s CBC radio programme], he explained why that’s important.

“They claim to have figured out how to project our voting behavior based on our consumer behavior. So it’s important for citizens to be able to understand this because it would affect our ability to understand how we’re being targeted by campaigns and how the messages that we’re seeing on Facebook and television are being directed at us to manipulate us.” [emphasis mine]

The parent company of Cambridge Analytica, SCL Group, is a U.K.-based data operation with global ties to military and political activities. David Carroll says the potential for sharing personal data internationally is a cause for concern.

“It’s the first time that this kind of data is being collected and transferred across geographic boundaries,” he says.

But that also gives Carroll an opening for legal action. An individual has more rights to access their personal information in the U.K., so that’s where he’s launching his lawsuit.

Reports link Michael Flynn, briefly Trump’s National Security Adviser, to SCL Group and indicate that former White House strategist Steve Bannon is a board member of Cambridge Analytica. Billionaire Robert Mercer, who has underwritten Bannon’s Breitbart operations and is a major Trump donor, also has a significant stake in Cambridge Analytica.

In the world of data, Mercer’s credentials are impeccable.

“He is an important contributor to the field of artificial intelligence,” says David Carroll.

“His work at IBM is seminal and really important in terms of the foundational ideas that go into big data analytics, so the relationship between AI and big data analytics. …

Bambury’s piece offers a lot more, including embedded videos, than I’ve included in that excerpt, but I also wanted to include some material from Carole Cadwalladr’s Oct. 1, 2017 Guardian article about Carroll and his legal fight in the UK,

“There are so many disturbing aspects to this. One of the things that really troubles me is how the company can buy anonymous data completely legally from all these different sources, but as soon as it attaches it to voter files, you are re-identified. It means that every privacy policy we have ignored in our use of technology is a broken promise. It would be one thing if this information stayed in the US, if it was an American company and it only did voter data stuff.”

But, he [Carroll] argues, “it’s not just a US company and it’s not just a civilian company”. Instead, he says, it has ties with the military through SCL – “and it doesn’t just do voter targeting”. Carroll has provided information to the Senate intelligence committee and believes that the disclosures mandated by a British court could provide evidence helpful to investigators.

Frank Pasquale, a law professor at the University of Maryland, author of The Black Box Society and a leading expert on big data and the law, called the case a “watershed moment”.

“It really is a David and Goliath fight and I think it will be the model for other citizens’ actions against other big corporations. I think we will look back and see it as a really significant case in terms of the future of algorithmic accountability and data protection. …

Nobody is discussing personal agency directly, but if you’re only being exposed to certain kinds of messages, then your personal agency has been taken from you. Admittedly, we don’t have complete personal agency in our lives, but AI, along with the data gathering done online and, increasingly, through wearable and smart technology, adds another layer of control to your life, and it is largely invisible. After all, the students in Elia Powers’ study didn’t realize their news feeds were being pre-curated.

Beating tactical experts in combat simulation—AI with the processing power of a Raspberry Pi

It looks like one day combat may come down to who has the best artificial intelligence (AI) if a June 27, 2016 University of Cincinnati news release (also on EurekAlert) by M. B. Reilly is to be believed (Note: Links have been removed),

Artificial intelligence (AI) developed by a University of Cincinnati doctoral graduate was recently assessed by subject-matter expert and retired United States Air Force Colonel Gene Lee — who holds extensive aerial combat experience as an instructor and Air Battle Manager with considerable fighter aircraft expertise — in a high-fidelity air combat simulator.

The artificial intelligence, dubbed ALPHA, was the victor in that simulated scenario, and according to Lee, is “the most aggressive, responsive, dynamic and credible AI I’ve seen to date.”

Details on ALPHA – a significant breakthrough in the application of what’s called genetic-fuzzy systems – are published in the most recent issue of the Journal of Defense Management, as this application is specifically designed for use with Unmanned Combat Aerial Vehicles (UCAVs) in simulated air-combat missions for research purposes.

The tools used to create ALPHA as well as the ALPHA project have been developed by Psibernetix, Inc., recently founded by UC College of Engineering and Applied Science 2015 doctoral graduate Nick Ernest, now president and CEO of the firm; as well as David Carroll, programming lead, Psibernetix, Inc.; with supporting technologies and research from Gene Lee; Kelly Cohen, UC aerospace professor; Tim Arnett, UC aerospace doctoral student; and Air Force Research Laboratory sponsors.

The news release goes on to provide an overview of ALPHA’s air combat fighting and strategy skills,

ALPHA is currently viewed as a research tool for manned and unmanned teaming in a simulation environment. In its earliest iterations, ALPHA consistently outperformed a baseline computer program previously used by the Air Force Research Lab for research.  In other words, it defeated other AI opponents.

In fact, it was only after early iterations of ALPHA bested other computer program opponents that Lee then took to manual controls against a more mature version of ALPHA last October. Not only was Lee not able to score a kill against ALPHA after repeated attempts, he was shot out of the air every time during protracted engagements in the simulator.

Since that first human vs. ALPHA encounter in the simulator, this AI has repeatedly bested other experts as well, and is even able to win out against these human experts when its (the ALPHA-controlled) aircraft are deliberately handicapped in terms of speed, turning, missile capability and sensors.

Lee, who has been flying in simulators against AI opponents since the early 1980s, said of that first encounter against ALPHA, “I was surprised at how aware and reactive it was. It seemed to be aware of my intentions and reacting instantly to my changes in flight and my missile deployment. It knew how to defeat the shot I was taking. It moved instantly between defensive and offensive actions as needed.”

He added that with most AIs, “an experienced pilot can beat up on it (the AI) if you know what you’re doing. Sure, you might have gotten shot down once in a while by an AI program when you, as a pilot, were trying something new, but, until now, an AI opponent simply could not keep up with anything like the real pressure and pace of combat-like scenarios.”

But, now, it’s been Lee, who has trained with thousands of U.S. Air Force pilots, flown in several fighter aircraft and graduated from the U.S. Fighter Weapons School (the equivalent of earning an advanced degree in air combat tactics and strategy), as well as other pilots who have been feeling pressured by ALPHA.

And, anymore [sic], when Lee flies against ALPHA in hours-long sessions that mimic real missions, “I go home feeling washed out. I’m tired, drained and mentally exhausted. This may be artificial intelligence, but it represents a real challenge.”

New goals have been set for ALPHA according to the news release,

Explained Ernest, “ALPHA is already a deadly opponent to face in these simulated environments. The goal is to continue developing ALPHA, to push and extend its capabilities, and perform additional testing against other trained pilots. Fidelity also needs to be increased, which will come in the form of even more realistic aerodynamic and sensor models. ALPHA is fully able to accommodate these additions, and we at Psibernetix look forward to continuing development.”

In the long term, teaming artificial intelligence with U.S. air capabilities will represent a revolutionary leap. Air combat as it is performed today by human pilots is a highly dynamic application of aerospace physics, skill, art, and intuition to maneuver a fighter aircraft and missiles against adversaries, all moving at very high speeds. After all, today’s fighters close in on each other at speeds in excess of 1,500 miles per hour while flying at altitudes above 40,000 feet. Microseconds matter, and the cost for a mistake is very high.

Eventually, ALPHA aims to lessen the likelihood of mistakes since its operations already occur significantly faster than do those of other language-based consumer product programming. In fact, ALPHA can take in the entirety of sensor data, organize it, create a complete mapping of a combat scenario and make or change combat decisions for a flight of four fighter aircraft in less than a millisecond. Basically, the AI is so fast that it could consider and coordinate the best tactical plan and precise responses, within a dynamic environment, over 250 times faster than ALPHA’s human opponents could blink.

So it’s likely that future air combat, requiring reaction times that surpass human capabilities, will integrate AI wingmen – Unmanned Combat Aerial Vehicles (UCAVs) – capable of performing air combat and teamed with manned aircraft wherein an onboard battle management system would be able to process situational awareness, determine reactions, select tactics, manage weapons use and more. So, AI like ALPHA could simultaneously evade dozens of hostile missiles, take accurate shots at multiple targets, coordinate actions of squad mates, and record and learn from observations of enemy tactics and capabilities.

UC’s Cohen added, “ALPHA would be an extremely easy AI to cooperate with and have as a teammate. ALPHA could continuously determine the optimal ways to perform tasks commanded by its manned wingman, as well as provide tactical and situational advice to the rest of its flight.”

Happily, insight is provided into the technical aspects (from the news release),

It would normally be expected that an artificial intelligence with the learning and performance capabilities of ALPHA, applicable to incredibly complex problems, would require a super computer in order to operate.

However, ALPHA and its algorithms require no more than the computing power available in a low-budget PC in order to run in real time and quickly react and respond to uncertainty and random events or scenarios.

According to a lead engineer for autonomy at AFRL, “ALPHA shows incredible potential, with a combination of high performance and low computational cost that is a critical enabling capability for complex coordinated operations by teams of unmanned aircraft.”

Ernest began working with UC engineering faculty member Cohen to resolve that computing-power challenge about three years ago while a doctoral student. (Ernest also earned his UC undergraduate degree in aerospace engineering and engineering mechanics in 2011 and his UC master’s, also in aerospace engineering and engineering mechanics, in 2012.)

They tackled the problem using language-based control (vs. numeric based) and using what’s called a “Genetic Fuzzy Tree” (GFT) system, a subtype of what’s known as fuzzy logic algorithms.

States UC’s Cohen, “Genetic fuzzy systems have been shown to have high performance, and a problem with four or five inputs can be solved handily. However, boost that to a hundred inputs, and no computing system on planet Earth could currently solve the processing challenge involved – unless that challenge and all those inputs are broken down into a cascade of sub decisions.”

That’s where the Genetic Fuzzy Tree system and Cohen and Ernest’s years’ worth of work come in.

According to Ernest, “The easiest way I can describe the Genetic Fuzzy Tree system is that it’s more like how humans approach problems.  Take for example a football receiver evaluating how to adjust what he does based upon the cornerback covering him. The receiver doesn’t think to himself: ‘During this season, this cornerback covering me has had three interceptions, 12 average return yards after interceptions, two forced fumbles, a 4.35 second 40-yard dash, 73 tackles, 14 assisted tackles, only one pass interference, and five passes defended, is 28 years old, and it’s currently 12 minutes into the third quarter, and he has seen exactly 8 minutes and 25.3 seconds of playtime.’”

That receiver – rather than standing still on the line of scrimmage before the play trying to remember all of the different specific statistics and what they mean individually and combined to how he should change his performance – would just consider the cornerback as ‘really good.’

The cornerback’s historic capability wouldn’t be the only variable. Specifically, his relative height and relative speed should likely be considered as well. So, the receiver’s control decision might be as fast and simple as: ‘This cornerback is really good, a lot taller than me, but I am faster.’

At the very basic level, that’s the concept involved in terms of the distributed computing power that’s the foundation of a Genetic Fuzzy Tree system wherein, otherwise, scenarios/decision making would require too high a number of rules if done by a single controller.

Added Ernest, “Only considering the relevant variables for each sub-decision is key for us to complete complex tasks as humans. So, it makes sense to have the AI do the same thing.”

In this case, the programming involved breaking up the complex challenges and problems represented in aerial fighter deployment into many sub-decisions, thereby significantly reducing the required “space” or burden for good solutions. The branches or subdivisions of this decision-making tree consist of high-level tactics, firing, evasion and defensiveness.

That’s the “tree” part of the term “Genetic Fuzzy Tree” system.
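The cascade of sub-decisions described above can be illustrated with a bare-bones sketch: a top-level branch routes to small sub-controllers, each of which sees only the inputs relevant to it. The variables, thresholds, and action names below are invented for illustration and are not ALPHA’s actual logic.

```python
# Hypothetical sub-controllers: each considers only its relevant variables,
# echoing the tactics/firing/evasion/defensiveness subdivisions.

def evade(missile_distance):
    # Evasion sub-decision: only the missile's distance matters here.
    return "hard break" if missile_distance < 2.0 else "defensive turn"

def attack(target_range, weapon_ready):
    # Firing sub-decision: only range and weapon state matter here.
    if not weapon_ready:
        return "close distance"
    return "fire" if target_range < 5.0 else "pursue"

def top_level(threat_detected, **state):
    # High-level branch: defensiveness vs. firing; route to a sub-controller.
    if threat_detected:
        return evade(state["missile_distance"])
    return attack(state["target_range"], state["weapon_ready"])

print(top_level(True, missile_distance=1.5))                  # evasion branch
print(top_level(False, target_range=4.0, weapon_ready=True))  # firing branch
```

The point of the structure is that no single controller ever has to weigh every variable at once — which is what keeps the rule count, and hence the computational cost, manageable.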

Programming that’s language based, genetic and generational

Most AI programming uses numeric-based control and provides very precise parameters for operations. In other words, there’s not a lot of leeway for any improvement or contextual decision making on the part of the programming.

The AI algorithms that Ernest and his team ultimately developed are language based, with if/then scenarios and rules able to encompass hundreds to thousands of variables. This language-based control or fuzzy logic, while much less about complex mathematics, can be verified and validated.

Another benefit of this linguistic control is the ease with which expert knowledge can be imparted to the system. For instance, Lee worked with Psibernetix to provide tactical and maneuverability advice which was directly plugged into ALPHA. (That “plugging in” occurs via inputs into a fuzzy logic controller. Those inputs consist of defined terms, e.g., close vs. far in distance to a target; if/then rules related to the terms; and inputs of other rules or specifications.)
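A toy fuzzy controller in the spirit of that description: the input is a linguistic term (“close” vs. “far”) with gradual membership rather than a hard cutoff, and if/then rules combine into an output. The membership shape and rule outputs below are made up for illustration, not taken from ALPHA.

```python
# Minimal fuzzy-logic sketch: linguistic terms with gradual membership,
# combined by if/then rules. All numbers are hypothetical.

def mu_close(distance, near=1.0, far=6.0):
    """Membership in 'close': 1 below `near`, 0 above `far`, linear between."""
    if distance <= near:
        return 1.0
    if distance >= far:
        return 0.0
    return (far - distance) / (far - near)

def aggressiveness(distance):
    close = mu_close(distance)
    far = 1.0 - close
    # Rules: IF close THEN aggressive (1.0); IF far THEN cautious (0.2).
    # Defuzzify with a weighted average of the rule outputs.
    return (close * 1.0 + far * 0.2) / (close + far)

print(aggressiveness(1.0))  # fully 'close'
print(aggressiveness(6.0))  # fully 'far'
```

An expert’s advice (“when you’re close, press the attack”) maps directly onto a rule like the ones above, which is why knowledge can be “plugged in” without retraining from scratch.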

Finally, the ALPHA programming is generational. It can be improved from one generation to the next, from one version to the next. In fact, the current version of ALPHA is only that – the current version. Subsequent versions are expected to perform significantly better.

Again, from UC’s Cohen, “In a lot of ways, it’s no different than when air combat began in W.W. I. At first, there were a whole bunch of pilots. Those who survived to the end of the war were the aces. Only in this case, we’re talking about code.”

To reach its current performance level, ALPHA’s training has occurred on a $500 consumer-grade PC. This training process started with numerous and random versions of ALPHA. These automatically generated versions of ALPHA proved themselves against a manually tuned version of ALPHA. The successful strings of code are then “bred” with each other, favoring the stronger, or highest performance versions. In other words, only the best-performing code is used in subsequent generations. Eventually, one version of ALPHA rises to the top in terms of performance, and that’s the one that is utilized.

This is the “genetic” part of the “Genetic Fuzzy Tree” system.
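The breed-and-select process the release describes is a standard genetic algorithm, and it can be sketched in a few lines. The fitness function below is a stand-in toy objective (prefer all-ones genomes); in ALPHA’s case, the “genomes” are fuzzy-rule parameters and fitness is scored in combat simulation.

```python
import random

# Minimal genetic-algorithm sketch: random candidates compete, the
# fittest half survives, and children are bred via crossover + mutation.

def fitness(genome):
    # Toy objective standing in for simulated combat performance.
    return sum(genome)

def breed(a, b, mutation_rate=0.05):
    # Single-point crossover, then flip each bit with small probability.
    cut = random.randrange(1, len(a))
    child = a[:cut] + b[cut:]
    return [1 - g if random.random() < mutation_rate else g for g in child]

def evolve(pop_size=20, genome_len=12, generations=30, seed=42):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # keep the strongest half unchanged
        children = [breed(random.choice(parents), random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("best fitness:", fitness(best), "of", 12)
```

Because the strongest candidates survive each generation intact, fitness never regresses — which mirrors the release’s point that each “generation” of ALPHA is expected to perform better than the last.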

Said Cohen, “All of these aspects are combined, the tree cascade, the language-based programming and the generations. In terms of emulating human reasoning, I feel this is to unmanned aerial vehicles what the IBM/Deep Blue vs. Kasparov was to chess.”

Here’s a link to and a citation for the paper,

Genetic Fuzzy based Artificial Intelligence for Unmanned Combat Aerial Vehicle Control in Simulated Air Combat Missions by Nicholas Ernest, David Carroll, Corey Schumacher, Matthew Clark, Kelly Cohen, and Gene Lee. Journal of Defense Management 6:144. doi:10.4172/2167-0374.1000144. Published March 22, 2016.

This is an open access paper.

Segue

The University of Cincinnati’s president, Santa Ono, recently accepted a job as president of the University of British Columbia (UBC), which is located in the region where I live. Nassif Ghoussoub, professor of mathematics at UBC, writes about Ono and his new appointment in a June 13, 2016 posting on his blog (Note: A link has been removed),

By the time you read this, UBC communications will already have issued the mandatory press release [the official announcement was made June 13, 2016] describing Santa Ono’s numerous qualifications for the job, including that he is a Canuck in the US, born in Vancouver, McGill PhD, a highly accomplished medical researcher, who is the President of the University of Cincinnati.

So, I shall focus here on what UBC communications may not be enclined [sic] to tell you, yet may be quite consequential for UBC’s future direction. After all, life experiences, gender, race, class, and character are what shape leadership.

President Ono seems to have had battles with mental illness, and have been courageous enough to deal with it and to publicly disclose it –as recently as May 24 [2016]– so as to destigmatize struggles that many people go through. It is interesting to note the two events that led the president to have suicidal thoughts: …

The post is well worth reading if you have any interest in Ono, UBC, and/or insight into some of the struggles even some of the most accomplished academics can encounter.

‘Feeling’ the power; thermoelectric device converts body heat to electricity

From time to time I read about these energy-harvesting technologies designed to take advantage of the fact that human bodies produce heat, which can be converted into electricity to power devices such as mobile (cell) phones. I love the idea but I’ve been waiting over four years now for something to get to market. It appears my wait is going to continue despite this encouraging Feb. 22, 2012 news item on physorg.com,

Never get stranded with a dead cell phone again. A promising new technology called Power Felt, a thermoelectric device that converts body heat into an electrical current, soon could create enough juice to make another call simply by touching it.

Developed by researchers in the Center for Nanotechnology and Molecular Materials at Wake Forest University [located in North Carolina], Power Felt is comprised of tiny carbon nanotubes locked up in flexible plastic fibers and made to feel like fabric. The technology uses temperature differences – room temperature versus body temperature, for instance – to create a charge.

Cost has prevented thermoelectrics from being used more widely in consumer products. Standard thermoelectric devices use a much more efficient compound called bismuth telluride to turn heat into power in products including mobile refrigerators and CPU coolers, but researchers say it can cost $1,000 per kilogram. Like silicon, they liken Power Felt’s affordability to demand in volume and think someday it could cost only $1 to add to a cell phone cover.

Currently, 72 stacked layers in the fabric yield about 140 nanowatts of power. The team is evaluating several ways to add more nanotube layers and make them even thinner to boost the power output.

Although there’s more work to do before Power Felt is ready for market, Hewitt [Corey Hewitt] says, “I imagine being able to make a jacket with a completely thermoelectric inside liner that gathers warmth from body heat, while the exterior remains cold from the outside temperature. If the Power Felt is efficient enough, you could potentially power an iPod, which would be great for distance runners. It’s definitely within reach.”

Wake Forest is in talks with investors to produce Power Felt commercially.
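A quick back-of-the-envelope check on the figures quoted above suggests why the researchers are still stacking layers: 72 layers yielding about 140 nanowatts is a long way from running a music player. The ~1 mW device draw assumed here is illustrative, not a measured figure, and real scaling would not be perfectly linear:

```python
# Figures from the news item: 72 stacked layers yield ~140 nanowatts.
layers = 72
power_nw = 140.0
per_layer_nw = power_nw / layers           # ~1.9 nW per layer

# Assumed (hypothetical) draw for a small portable device: 1 milliwatt.
target_mw = 1.0
target_nw = target_mw * 1_000_000          # 1 mW = 1,000,000 nW

layers_needed = target_nw / per_layer_nw
print(round(layers_needed))                # on the order of 500,000 layers
```

That gap is consistent with the team’s stated focus on adding more, and thinner, nanotube layers to boost output.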

This work is being done under the auspices of David Carroll, director of Wake Forest University’s Center for Nanotechnology and Molecular Materials. I did find information about the industrial partners involved in the research, from the Carroll Research Group webpage,

The “Power Fabrics” project has several industrial partners:

FiberCell Inc. Winston-Salem NC
NanotechLabs Inc. Yadkinville NC
Sineurop Inc. Stuttgart Germany

I find the mention of industrial partners and investors promising.