Tag Archives: Twitter

Sifting through Twitter with Olympus, a computer cluster of more than 600 nodes and one of the Top 500 fastest supercomputers in the world.

Here are two (seemingly) contradictory pieces of information: (1) the US Library of Congress takes over 24 hours to complete a single search of tweets archived from 2006 to 2010, according to my Jan. 16, 2013 posting, and (2) Court (Courtney) Corley, a data scientist at the US Dept. of Energy’s Pacific Northwest National Laboratory (PNNL), has a system (SALSA: SociAL Sensor Analytics) that analyzes billions of tweets in seconds. It’s a little hard to make sense out of these two very different perspectives on accessing data from tweets.

The news from Corley and the PNNL is more recent and, before I speculate further, here’s a bit more about Corley’s work, from the June 6, 2013 PNNL news release (also on EurekAlert),

If you think keeping up with what’s happening via Twitter, Facebook and other social media is like drinking from a fire hose, multiply that by 7 billion – and you’ll have a sense of what Court Corley wakes up to every morning.

Corley, a data scientist at the Department of Energy’s Pacific Northwest National Laboratory, has created a powerful digital system capable of analyzing billions of tweets and other social media messages in just seconds, in an effort to discover patterns and make sense of all the information. His social media analysis tool, dubbed “SALSA” (SociAL Sensor Analytics), combined with extensive know-how – and a fair degree of chutzpah – allows someone like Corley to try to grasp it all.

“The world is equipped with human sensors – more than 7 billion and counting. It’s by far the most extensive sensor network on the planet. What can we learn by paying attention?” Corley said.

Among the payoffs Corley envisions are emergency responders who receive crucial early information about natural disasters such as tornadoes; a tool that public health advocates can use to better protect people’s health; and information about social unrest that could help nations protect their citizens. But finding those jewels amidst the effluent of digital minutia is a challenge.

“The task we all face is separating out the trivia, the useless information we all are blasted with every day, from the really good stuff that helps us live better lives. There’s a lot of noise, but there’s some very valuable information too.”

I was getting a little worried when I saw the bit about separating useless information from the good stuff since that can be a very personal choice. Thankfully, this followed,

One person’s digital trash is another’s digital treasure. For example, people known in social media circles as “Beliebers,” named after entertainer Justin Bieber, covet inconsequential tidbits about Justin Bieber, while “non-Beliebers” send that data straight to the recycle bin.

The amount of data is mind-bending. In social media posted just in the single year ending Aug. 31, 2012, each hour on average witnessed:

  • 30 million comments
  • 25 million search queries
  • 98,000 new tweets
  • 3.8 million blog views
  • 4.5 million event invites
  • 7.1 million photos uploaded
  • 5.5 million status updates
  • The equivalent of 453 years of video watched

Several firms routinely sift posts on LinkedIn, Facebook, Twitter, YouTube and other social media, then analyze the data to see what’s trending. These efforts usually require a great deal of software and a lot of person-hours devoted specifically to using that application. It’s what Corley terms a manual approach.

Corley is out to change that, by creating a systematic, science-based, and automated approach for understanding patterns around events found in social media.

It’s not so simple as scanning tweets. Indeed, if Corley were to sit down and read each of the more than 20 billion entries in his data set from just a two-year period, it would take him more than 3,500 years if he spent just 5 seconds on each entry. If he hired 1 million helpers, it would take more than a day.

But it takes less than 10 seconds when he relies on PNNL’s Institutional Computing resource, drawing on a computer cluster with more than 600 nodes named Olympus, which is among the Top 500 fastest supercomputers in the world.
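For the curious, the release’s arithmetic checks out. Here’s a quick back-of-the-envelope version in Python (the 20-billion-entry and 5-second figures come straight from the release; “more than 20 billion” is why the release rounds up to 3,500 years):

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365      # 31,536,000 seconds

entries = 20_000_000_000                   # "more than 20 billion entries"
seconds_each = 5                           # reading time per entry

total_seconds = entries * seconds_each
print(total_seconds / SECONDS_PER_YEAR)    # ~3,171 years for a single reader

helpers = 1_000_000
print(total_seconds / helpers / 3600)      # ~27.8 hours -- "more than a day"
```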

“We are using the institutional computing horsepower of PNNL to analyze one of the richest data sets ever available to researchers,” Corley said.

At the same time that his team is creating the computing resources to undertake the task, Corley is constructing a theory for how to analyze the data. He and his colleagues are determining baseline activity, culling the data to find routine patterns, and looking for patterns that indicate something out of the ordinary. Data might include how often a topic is the subject of social media, who is putting out the messages, and how often.
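The release doesn’t spell out how SALSA decides what’s “out of the ordinary,” but the baseline-then-outlier idea it describes can be sketched very simply. Here’s a minimal, purely illustrative example (the counts and the three-sigma threshold are my inventions, not anything from PNNL): compute a rolling baseline of hourly mentions for a topic and flag hours that deviate sharply from it.

```python
from statistics import mean, stdev

# Hourly mention counts for one topic (invented numbers for illustration).
counts = [120, 130, 118, 125, 122, 131, 119, 640, 127, 124]

WINDOW = 5        # hours of history used as the baseline
THRESHOLD = 3.0   # flag anything more than 3 standard deviations above baseline

for hour in range(WINDOW, len(counts)):
    history = counts[hour - WINDOW:hour]
    mu, sigma = mean(history), stdev(history)
    if sigma and (counts[hour] - mu) / sigma > THRESHOLD:
        print(f"hour {hour}: {counts[hour]} mentions vs. baseline of {mu:.0f} -- out of the ordinary")
```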

Corley notes additional challenges posed by social media. His programs analyze data in more than 60 languages, for instance. And social media users have developed a lexicon of their own and often don’t use traditional language. A post such as “aw my avalanna wristband @Avalanna @justinbieber rip angel pic.twitter.com/yldGVV7GHk” poses a challenge to people and computers alike.

Nevertheless, Corley’s program is accurate much more often than not, catching the spirit of a social media comment accurately in more than three out of every four instances, and accurately detecting patterns in social media more than 90 percent of the time.

Corley’s educational background may explain the interest in emergency responders and health crises mentioned in the early part of the news release (from Corley’s PNNL webpage),

B.S. Computer Science from University of North Texas; M.S. Computer Science from University of North Texas; Ph.D. Computer Science and Engineering from University of North Texas; M.P.H (expected 2013) Public Health from University of Washington.

The reference to public health and emergency response is further developed, from the news release,

Much of the work so far has been around public health. According to media reports in China, the current H7N9 flu situation in China was highlighted on Sina Weibo, a China-based social media platform, weeks before it was recognized by government officials. And Corley’s work with the social media working group of the International Society for Disease Surveillance focuses on the use of social media for effective public health interventions.

In collaboration with the Infectious Disease Society of America and Immunizations 4 Public Health, he has focused on the early identification of emerging immunization safety concerns.

“If you want to understand the concerns of parents about vaccines, you’re never going to have the time to go out there and read hundreds of thousands, perhaps millions of tweets about those questions or concerns,” Corley said. “By creating a system that can capture trends in just a few minutes, and observe shifts in opinion minute to minute, you can stay in front of the issue, for instance, by letting physicians in certain areas know how to customize the educational materials they provide to parents of young children.”

Corley has looked closely at reaction to the vaccine that protects against HPV, which causes cervical cancer. The first vaccine was approved in 2006, when he was a graduate student, and his doctoral thesis focused on an analysis of social media messages connected to HPV. He found that creators of messages that named a specific drug company were less likely to be positive about the vaccine than others who did not mention any company by name.

Other potential applications include helping emergency responders react more efficiently to disasters like tornadoes, or identifying patterns that might indicate coming social unrest or even something as specific as a riot after a soccer game. More than a dozen college students or recent graduates are working with Corley to look at questions like these and others.

As to why the US Library of Congress requires 24 hours to search one term in their archived tweets while Corley and the PNNL require seconds to sift through two years of tweets, only two possibilities come to mind: (1) Corley is doing a stripped-down version of an archival search, so his searches are not comparable to the Library of Congress searches, or (2) Corley and the PNNL have far superior technology.

Tweet your nano

Researchers at the University of Wisconsin-Madison have published a study titled, “Tweeting nano: how public discourses about nanotechnology develop in social media environments,” which analyzes, for the first time, nanotechnology discourse on Twitter. From the University of Wisconsin-Madison’s Life Sciences Communication research webpage,

The study, “Tweeting nano: how public discourses about nanotechnology develop in social media environments,” mapped social media traffic about nanotechnology, finding that Twitter traffic expressing opinion about nanotechnology is more likely to originate from states with a federally-funded National Nanotechnology Initiative center or network than states without such centers.

Runge [Kristin K. Runge, doctoral student] and her co-authors used computational linguistic software to analyze a census of all English-language nanotechnology-related tweets expressing opinion posted on Twitter over one calendar year. In addition to mapping tweets by state, the team coded sentiment along two axes: certain vs. uncertain, and optimistic-neutral-pessimistic. They found 55% of nanotechnology-related opinions expressed certainty, 41% expressed pessimistic outlooks and 32% expressed neutral outlooks.

In addition to shedding light on how social media is used in communicating about an emerging technology, this study is believed to be the first published study to use a census of social media messages rather than a sample.

“We likely wouldn’t have captured these results if we had to rely on a sample rather than a complete census,” said Runge. “That would have been unfortunate, because the distinct geographic origins of the tweets and the tendency toward certainty in opinion expression will be useful in helping us understand how key online influencers are shaping the conversation around nanotechnology.”
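In case the two-axis coding scheme is hard to picture, here’s a toy version of the tallying step (the coded tweets below are invented; the real study used computational linguistic software to assign the labels):

```python
from collections import Counter

# Each opinion tweet coded along two axes (axes per the study; data invented).
coded_tweets = [
    {"certainty": "certain",   "outlook": "pessimistic"},
    {"certainty": "certain",   "outlook": "neutral"},
    {"certainty": "uncertain", "outlook": "optimistic"},
    {"certainty": "certain",   "outlook": "pessimistic"},
]

for axis in ("certainty", "outlook"):
    tally = Counter(tweet[axis] for tweet in coded_tweets)
    total = sum(tally.values())
    print(axis, {label: f"{100 * n / total:.0f}%" for label, n in tally.items()})
```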

It’s not obvious from this notice or the title of the study but it is stated clearly in the study that the focus is the world of US nano, not the English language world of nano. After reading the study (very quickly), I can say it’s interesting and, hopefully, will stimulate more work about public opinion that takes social media into account. (I’d love to know how they limited their study to US tweets only and how they determined the region that spawned the tweet.)

The one thing which puzzles me is that they don’t mention retweets (RTs) specifically. Did they consider only original tweets? If not, did they take into account the possibility that someone might RT an item that does not reflect their own opinion? I occasionally RT something that doesn’t reflect my opinion, when there isn’t sufficient space to include a comment indicating otherwise, because I want to promote discussion, and that doesn’t necessarily take place on Twitter or in Twitter’s public space. This leads to another question: did the researchers include direct messages in their study? Unfortunately, there’s no mention of any of this in the two sections of the conclusion (Discussion and Implications for future research).
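Screening out retweets is, at least mechanically, not hard if you have the raw tweets, which makes the omission more puzzling. A rough sketch (the dictionary layout here is hypothetical; in Twitter’s API metadata, native retweets carry a retweeted_status field, while classic manual retweets are only detectable by the “RT @” convention):

```python
def is_retweet(tweet: dict) -> bool:
    """Heuristic check: API metadata first, then the classic 'RT @' convention."""
    if tweet.get("retweeted_status"):                  # native retweet metadata
        return True
    return tweet["text"].lstrip().startswith("RT @")   # manual/classic retweet

tweets = [
    {"text": "Nanotech will change medicine."},
    {"text": "RT @someone: Nanotech will change medicine."},
]
originals = [t for t in tweets if not is_retweet(t)]
print(len(originals))   # 1 -- only the original tweet survives the filter
```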

For those who would like to see the research for themselves (Note: The article is behind a paywall),

Tweeting nano: how public discourses about nanotechnology develop in social media environments by Kristin K. Runge, Sara K. Yeo, Michael Cacciatore, Dietram A. Scheufele, Dominique Brossard, Michael Xenos, Ashley Anderson, Doo-hun Choi, Jiyoun Kim, Nan Li, Xuan Liang, Maria Stubbings, and Leona Yi-Fan Su. Journal of Nanoparticle Research: An Interdisciplinary Forum for Nanoscale Science and Technology, Springer. DOI: 10.1007/s11051-012-1381-8. Published online Jan. 4, 2013.

It’s no surprise to see Dietram Scheufele and Dominique Brossard, who are both located at the University of Wisconsin-Madison and publish steadily on the topic of nanotechnology and public opinion, listed as authors.

Researching tweets (the Twitter kind)

The US Library of Congress, in April 2010, made a deal with Twitter (the microblogging service where people chat, or tweet, in 140 characters) to acquire all the tweets from 2006 to April 2010 and, from April 2010 onward, all subsequent tweets (the deal is mentioned in my April 15, 2010 posting [scroll down about 60% of the way]). Reading between the lines of the Library of Congress’ Jan. 2013 update/white paper, the job has proved even harder than they originally anticipated,

In April, 2010, the Library of Congress and Twitter signed an agreement providing the Library the public tweets from the company’s inception through the date of the agreement, an archive of tweets from 2006 through April, 2010. Additionally, the Library and Twitter agreed that Twitter would provide all public tweets on an ongoing basis under the same terms. The Library’s first objectives were to acquire and preserve the 2006-10 archive; to establish a secure, sustainable process for receiving and preserving a daily, ongoing stream of tweets through the present day; and to create a structure for organizing the entire archive by date. This month, all those objectives will be completed. To date, the Library has an archive of approximately 170 billion tweets.

The Library’s focus now is on confronting and working around the technology challenges to making the archive accessible to researchers and policymakers in a comprehensive, useful way. It is clear that technology to allow for scholarship access to large data sets is lagging behind technology for creating and distributing such data. Even the private sector has not yet implemented cost-effective commercial solutions because of the complexity and resource requirements of such a task. The Library is now pursuing partnerships with the private sector to allow some limited access capability in our reading rooms. These efforts are ongoing and a priority for the Library. (p. 1)

David Bruggeman in his Jan. 15, 2013 posting about this Library of Congress ‘Twitter project’ provides some mind-boggling numbers,

… That [170 billion tweets] represents the archive Twitter had at the time of the agreement (covering 2006-early 2010) and 150 billion tweets in the subsequent months (the Library receives over half a million new tweets each day, and that number continues to rise).  The archive includes the tweets and relevant metadata for a total of nearly 67 terabytes of data.
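Taken together, those two figures imply a surprisingly small per-tweet footprint, as a quick division shows:

```python
tweets = 170_000_000_000        # tweets archived to date
terabytes = 67                  # tweets plus metadata

print(terabytes * 10**12 / tweets)   # ~394 bytes per tweet, metadata included
```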

Gayle Osterberg, Director of Communications for the Library of Congress writes in a Jan. 4, 2013 posting on the Library of Congress blog about ‘tweet’ archive research issues,

Although the Library has been building and stabilizing the archive and has not yet offered researchers access, we have nevertheless received approximately 400 inquiries from researchers all over the world. Some broad topics of interest expressed by researchers run from patterns in the rise of citizen journalism and elected officials’ communications to tracking vaccination rates and predicting stock market activity.

The white paper/update offers a couple of specific examples of requests,

Some examples of the types of requests the Library has received indicate how researchers might use this archive to inform future scholarship:

* A master’s student is interested in understanding the role of citizens in disruptive events. The student is focusing on real-time micro-blogging of terrorist attacks. The questions focus on the timeliness and accuracy of tweets during specified events.

* A post-doctoral researcher is looking at the language used to spread information about charities’ activities and solicitations via social media during and immediately following natural disasters. The questions focus on audience targets and effectiveness. (p. 4)

At least one of the reasons no one has received access to the tweets is that a single search of the archived (2006-2010) tweets alone would take 24 hours,

The Library has assessed existing software and hardware solutions that divide and simultaneously search large data sets to reduce search time, so-called “distributed and parallel computing”. To achieve a significant reduction of search time, however, would require an extensive infrastructure of hundreds if not thousands of servers. This is cost prohibitive and impractical for a public institution.

Some private companies offer access to historic tweets but they are not the free, indexed and searchable access that would be of most value to legislative researchers and scholars.

It is clear that technology to allow for scholarship access to large data sets is not nearly as advanced as the technology for creating and distributing that data. Even the private sector has not yet implemented cost-effective commercial solutions because of the complexity and resource requirements of such a task. (p. 4)
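“Distributed and parallel computing” here just means splitting the archive into pieces and searching them all at once. A toy, single-machine sketch of the idea (the shard layout and search term are invented; the Library’s point is that doing this at 170-billion-tweet scale takes hundreds if not thousands of servers):

```python
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def search_shard(path: Path, term: str) -> int:
    """Count matching tweets in one shard of the archive (one tweet per line)."""
    with open(path, encoding="utf-8") as f:
        return sum(1 for line in f if term in line)

def parallel_search(shard_dir: str, term: str) -> int:
    shards = sorted(Path(shard_dir).glob("*.txt"))
    with ProcessPoolExecutor() as pool:                 # one worker per CPU core
        return sum(pool.map(search_shard, shards, [term] * len(shards)))

if __name__ == "__main__":
    print(parallel_search("tweet_shards/", "vaccine"))  # hypothetical shard directory
```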

David Bruggeman goes on to suggest that, in an attempt to make the tweets searchable and more easily accessible, all this information could end up behind a paywall (Note: A link has been removed),

I’m reminded of how Ancestry.com managed to get exclusive access to Census records.  While the Bureau benefitted from getting the records digitized, having this taxpayer-paid information controlled by a private company is problematic.

As a Canuck and someone who tweets (@frogheart), I’m not sure how I feel about having my tweets archived by the US Library of Congress in the first place, let alone the possibility I might have to pay for access to my old tweets.

FrogHeart’s 2012, a selective roundup of my international online colleagues, and other bits

This blog will be five years old in April 2013 and, sometime in January or February, the 2000th post will be published.

Statistics-wise it’s been a tumultuous year for FrogHeart with ups and downs, thankfully ending on an up note. According to my AW stats, I started with 54,920 visits in January (a bit of an increase over December 2011). The numbers rose right through to March 2012, when the blog registered 68,360 visits, and then the numbers fell and continued to fall. At the low point, this blog registered 45,972 visits in June 2012, then rose and fell through to Oct. 2012, when the visits rose to 54,520. November 2012 was better with 66,854 visits and in December 2012 the blog will have received over 75,000 visits. (ETA Ja.2.13: This blog registered 81,036 visits in December 2012 and an annual total of 681,055 visits.) Since I have no idea why the numbers fell or why they rose again, I have absolutely no idea what 2013 will bring in terms of statistics (the webalizer numbers reflect similar trends).

Interestingly, and for the first time since I activated the AW statistics package in Feb. 2009, the US ceased to be the primary source for visitors. As of April 2012, the British surged ahead for several months until November 2012, when the US regained the top spot, only to lose it to China in December 2012.

Favourite topics according to the top 10 key terms included: nanocrystalline cellulose for Jan. – Oct. 2012, when for the first time in almost three years the topic fell out of the top 10; Jackson Pollock and physics, which also popped up in the top 10 in various months throughout the year; Clipperton Island (a sci/art project), which has made intermittent appearances; SPAUN (Semantic Pointer Architecture Unified Network, a project at the University of Waterloo), which has made the top 10 in the two months since it was announced; weirdly, frogheart.ca, which has appeared in the top 10 these last few months; and the Lycurgus Cup, nanosilver, and literary tattoos, which also made appearances in the top 10 in various months throughout the year, while the memristor and Québec nanotechnology made appearances in the fall.

Webalizer tells a similar but not identical story. The numbers started with 83,133 visits in January 2012, rising to a dizzying height of 119,217 in March. These statistics fell too, but July 2012 was another six-figure month with 101,087 visits, and then it was down to five figures again until Oct. 2012 with 108,266 visits and 136,161 visits in November 2012. The December 2012 visits number appears to be dipping down slightly, with 130,198 visits counted to 5:10 am PST, Dec. 31, 2012. (ETA Ja.2.13: In December 2012, 133,351 visits were tallied with an annual total of 1,660,771 visits.)

Thanks to my international colleagues who inspire and keep me apprised of the latest information on nanotechnology and other emerging technologies:

  • Pasco Phronesis, owned by David Bruggeman, focuses more on science policy and science communication (via popular media) than on emerging technology per se but David provides excellent analysis and a keen eye for the international scene. He kindly dropped by frogheart.ca some months ago to challenge my take on science and censorship in Canada and I have not finished my response. I’ve posted part 1 in the comments but have yet to get to part 2. His latest posting on Dec. 30, 2012 features this title, For Better Science And Technology Policing, Don’t Forget The Archiving.
  • Nanoclast is on the IEEE (Institute of Electrical and Electronics Engineers) website and features Dexter Johnson’s writing on nanotechnology government initiatives, technical breakthroughs, and, occasionally, important personalities within the field. I notice Dexter, who’s always thoughtful and thought-provoking, has cut back to a weekly posting. I encourage you to read his work as he fills in an important gap in a lot of nanotechnology reporting with his intimate understanding of the technology itself.  Dexter’s Dec. 20, 2012 posting (the latest) is titled, Nanoparticle Coated Lens Converts Light into Sound for Precise Non-invasive Surgery.
  • Insight (formerly TNTlog) is Tim Harper’s (CEO of Cientifica) blog; it features an international perspective (with a strong focus on the UK scene) on emerging technologies and the business of science. His writing style is quite lively (at times, trenchant) and it reflects his long experience with nanotechnology and other emerging technologies. I don’t know how he finds the time and here’s his latest, a Dec. 4, 2012 posting titled, Is Printable Graphene The Key To Widespread Applications?
  • 2020 Science is Dr. Andrew Maynard’s (director of the University of Michigan’s Risk Science Center) more or less personal blog. An expert on nanotechnology (he was the Chief Science Adviser for the Project on Emerging Nanotechnologies, located in Washington, DC), Andrew writes extensively about risk, uncertainty, nanotechnology, and the joys of science. Over time his blog has evolved to include the occasional homemade but science-oriented video, courtesy of one of his children. I usually check Andrew’s blog when there’s an online nanotechnology kerfuffle as he usually has the inside scoop. His latest posting on Dec. 23, 2012 features this title, On the benefits of wearing a hat while dancing naked, and other insights into the science of risk.
  • Andrew also produces and manages the Mind the Science Gap blog, which is a project encouraging MA students in the University of Michigan’s Public Health Program to write. Andrew has posted a summary of the last semester’s triumphs titled, Looking back at another semester of Mind The Science Gap.
  • NanoWiki is, strictly speaking, not a blog but the authors provide the best compilation of stories on nanotechnology issues and controversies that I have found yet. Here’s how they describe their work, “NanoWiki tracks the evolution of paradigms and discoveries in nanoscience and nanotechnology field, annotates and disseminates them, giving an overall view and feeds the essential public debate on nanotechnology and its practical applications.” There are also Spanish, Catalan, and mobile versions of NanoWiki. Their latest posting, dated Dec. 29, 2012, Nanotechnology shows we can innovate without economic growth, features some nanotechnology books.
  • In April 2012, I was contacted by Dorothée Browaeys about a French blog, Le Meilleur Des Nanomondes. Unfortunately, there doesn’t seem to have been much action there since Feb. 2010 but I’m delighted to hear from my European colleagues and hope to hear more from them.

Sadly, there was only one interview here this year but I think they call these things ‘a big get’ as the interview was with Vanessa Clive, who manages the nanotechnology portfolio at Industry Canada. I did try to get an interview with Dr. Marie D’Iorio, the new Executive Director of Canada’s National Institute of Nanotechnology (NINT; BTW, the National Research Council has a brand new website and, since the NINT is a National Research Council agency, so does the NINT), and experienced the same success I had with her predecessor, Dr. Nils Petersen.

I attended two conferences this year. The first was the S.NET (Society for the Study of Nanoscience and Emerging Technologies) 2012 meeting in Enschede, Holland, where I presented on my work on memristors, artificial brains, and pop culture. The second was in Calgary, where I moderated a panel I’d organized on the topic of Canada’s science culture and policy for the 2012 Canadian Science Policy Conference.

There are a few items of note which appeared on the Canadian science scene. ScienceOnlineVancouver emerged in April 2012. From the About page,

ScienceOnlineVancouver is a monthly discussion series exploring how online communication and social media impact current scientific research and how the general public learns about it. ScienceOnlineVancouver is an ongoing discussion about online science, including science communication and available research tools, not a lecture series where scientists talk about their work. Follow the conversation on Twitter at @ScioVan, hashtag is #SoVan.

The concept of these monthly meetings originated in New York with SoNYC @S_O_NYC, brought to life by Lou Woodley (@LouWoodley, Communities Specialist at Nature.com) and John Timmer (@j_timmer, Science Editor at Ars Technica). With the success of that discussion series, participation in Scio2012, and the 2012 annual meeting of the AAAS in Vancouver, Catherine Anderson, Sarah Chow, and Peter Newbury were inspired to bring it closer to home, leading to the beginning of ScienceOnlineVancouver.

ScienceOnlineVancouver is part of the ScienceOnlineNOW community that includes ScienceOnlineBayArea, @sciobayarea and ScienceOnlineSeattle, @scioSEA. Thanks to Brian Glanz of the Open Science Federation and SciFund Challenge and thanks to Science World for a great venue.

I have mentioned Beakerhead, the arts/engineering festival coming up in Calgary, a few times but haven’t had occasion to mention Science Rendezvous before. That festival started in Toronto in 2008 and became a national festival in 2012 (?). Their About page doesn’t describe the genesis of the ‘national’ aspect of this festival as clearly as I would like. They seem to be behind with their planning, as there’s no mention of the 2013 festival, which should be coming up in May.

The Twitter (@frogheart) feed continues to grow in both directions (followed and following), albeit slowly. I have to give special props to @carlacap, @cientifica, & @timharper for their mentions, retweets, and more.

As for 2013, there are likely to be some changes here; I haven’t yet decided what changes but I will keep you posted. Have a lovely new year and I wish you all the best in 2013.

Mice crash ScienceOnline Vancouver’s May 2012 event at Science World

The second ScienceOnline Vancouver event (a May 15, 2012 event mentioned in my May 14, 2012 posting, which has links to the speakers’ blogs and also mentions a few then-upcoming science events [May 22 and May 29, 2012]), with Eric Michael Johnson and Raul Pacheco-Vega discussing how to use social media effectively, went well.

I can see the organizers refined their approach: the integration of technology (livestreaming, tweeting, etc.) with a live event was smoother than at the last one, and the transition from listening to the speakers to participating in discussion was smoother too.

Both Johnson and Pacheco-Vega highlighted how their use of social media has enhanced professional and personal connections and/or opened up new opportunities. For example, Johnson was asked to do a cover story for Times Higher Education (a UK publication) that started with a tweet he wrote about bonobos (a primate found only in the Congo and the field of study for one of his degrees). After years of blogging, Johnson’s efforts were recognized in other ways as well; his blog is now part of the Scientific American blogging network. Also present at the May event, but in the audience, was another local scientist and Scientific American blogger, Dr. Carin Bondar, who has also had opportunities open up as a consequence of social media. (BTW, she’s auditioning to be a TED speaker soon. I’m not sure which of the major TEDs, but she has expressed her excitement about this on Twitter [#SoVan].)

Pacheco-Vega focused more heavily on Twitter, Pinterest (which consolidates your various social media efforts on a ‘bulletin or pin’ board), and timely.is (software that allows you to schedule your tweets and analyze the best timing for releasing them during the day), and he offered tips and suggestions for other tools. (He maintains two identities online, a professional one and a personal one.) He also offered some insight into the nature of the doubts many scientists have about engaging in social media: lack of time, why bother?, how does this help me professionally?, this is going to hurt me professionally, etc.

There were fewer people (about half the number they had at the April 2012 event), resulting in a crowd of about 30. Happily, they had a liquor licence this time, so libations were available.

As for the mice (or perhaps one very active mouse excited by the liquor licence), I had several sightings. Hopefully, Science World will have addressed the problem before the next ScienceOnline Vancouver event.

It’s going to be interesting to see how this evolves. To this point, I like the direction they’re taking.

Ask a museum curator today, Sept. 1, 2010

There’s a special Twitter event today called Ask a Curator. From the article on the Fast Company website by Jenara Nerenberg,

“The inspiration is really a frustration I guess; in the museum world you have a movement towards more open and engaging museums which is often referred to as Museum 2.0 — the idea that a museum can evolve and get better by interacting and involving the public,” says [Jim] Richardson [Sumo design company], whose design firm works mostly in the arts and creative sciences.

“In too many institutions social media is seen only as a marketing tool, and people like curators don’t seem to be given the chance or want to use this kind of digital tool to engage with the public. With Ask a Curator we are, on mass, taking Twitter out of the marketing department and putting it in the hands of curators, and at the same time giving the public the chance to hear about interesting subjects from these passionate individuals,” he continues.

Richardson has pulled together all kinds of museums (art, science, health, police, etc.) from around the world for today. You can visit the Ask a Curator website here or, if you want to view the conversations, you can go here. You do have to be a member (or join) to participate in the conversations.

ETA: Took a quick look at the convo; not too excited right now but maybe it will pick up. I was surprised at the amount of obscenity, but then I wasn’t expecting to see any on this type of feed. Who knew?

FrogHeart now tweets as well as ribbits

I have a couple of announcements. First, I finally took the plunge and started a Twitter feed here. Once I figure out how to put some sort of click-through, or even the feed itself, onto this site I will do it.

This brings me to my second announcement: I don’t expect to be publishing on this blog at the same vigorous pace that I’ve maintained for some months now. I think it’s time for a break and I want to make a few background (i.e., readers may not notice anything much) changes to this site.

Happy summer!

Interacting with stories and/or with data

A researcher, Ivo Swartjes, at the University of Twente in The Netherlands is developing a means of allowing viewers to enter into a story (via avatar) and affect the plotline, in what seems like a combination of what you’d see in Second Life and in gaming. The project also brings to mind The Diamond Age by Neal Stephenson, with its intelligent nanotechnology-enabled book, along with Stephenson’s latest publishing project, Mongoliad (which I blogged about here).

The article about Swartjes’ project on physorg.com by Rianne Wanders goes on to note,

The ‘Virtual Storyteller’, developed by Ivo Swartjes of the University of Twente, is a computer-controlled system that generates stories automatically. Soon it will be possible for you as a player to take on the role of a character and ‘step inside’ the story, which then unfolds on the basis of what you as a player do. In the gaming world there are already ‘branching storylines’ in which the gamer can influence the development of a story, but Swartjes’ new system goes a step further. [emphasis mine] The world of the story is populated with various virtual figures, each with their own emotions, plans and goals. ‘Rules’ drawn up in advance determine the characters’ behaviour, and the story comes about as the different characters interact.

There’s a video with the article if you want to see this project for yourself.
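The article doesn’t publish Swartjes’ rule format, so the following is only a toy illustration of the general idea: characters carry emotions and goals, ‘rules’ drawn up in advance map a character’s state to an action, and the ‘story’ emerges from repeatedly applying them (everything below is invented for illustration):

```python
import random

class Character:
    def __init__(self, name: str, goal: str, mood: str):
        self.name, self.goal, self.mood = name, goal, mood

# 'Rules' drawn up in advance: (condition, action) pairs tried in order each turn.
RULES = [
    (lambda c: c.mood == "angry",  lambda c: f"{c.name} storms off to pursue '{c.goal}' alone."),
    (lambda c: c.mood == "afraid", lambda c: f"{c.name} hides, abandoning '{c.goal}' for now."),
    (lambda c: True,               lambda c: f"{c.name} takes a small step toward '{c.goal}'."),
]

def story_step(cast: list) -> None:
    for character in cast:
        action = next(act for cond, act in RULES if cond(character))
        print(action(character))
        character.mood = random.choice(["calm", "angry", "afraid"])  # interactions shift emotions

cast = [Character("Knight", "find the dragon", "calm"),
        Character("Villager", "protect the farm", "afraid")]
for _ in range(3):   # three turns of an emergent, unscripted plot
    story_step(cast)
```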

On another related front, Cliff Kuang, in an article (The Genius Behind Minority Report’s Interfaces Resurfaces, With Mind-blowing New Tech) on the Fast Company site, describes a new human-computer interface. This story provides a contrast to the one about the ‘Virtual Storyteller’ because this time you don’t have to become an avatar to interact with the content. From the article,

It’s a cliche to say that Minority Report-style interfaces are just around the corner. But not when John Underkoffler [founder of Oblong Industries] is involved. As tech advisor on the film, he was the guy whose work actually inspired the interfaces that Tom Cruise used. The real-life system he’s been developing, called g-speak, is unbelievable.

Oblong hasn’t previously revealed most of the features you see in the latter half of the video [available on the article’s web page or on YouTube], including the ability to zoom in and fly through a virtual, 3-D image environment (6:30); the ability to navigate an SQL database in 3-D (8:40); the gestural wand that lets you manipulate and disassemble 3-D models (10:00); and the stunning movie-editing system, called Tamper (11:00).

Do go see the video. At one point, Underkoffler (who was speaking at the February 2010 TED) drags data from the big screen in front of him onto a table set up on the stage where he’s speaking.

Perhaps most shocking (at least for me) was the information that this interface is already in use commercially (probably in a limited way).

These developments and many others suggest that the printed word’s primacy is seriously on the wane, something I first heard 20 years ago. Oftentimes when ideas about how technology will affect us are discussed, there’s a kind of hysterical reaction which is remarkably similar across at least two centuries. Dave Bruggeman at his Pasco Phronesis blog has a posting about the similarities between Twitter and 19th century diaries,

Lee Humphreys, a Cornell University communications professor, has reviewed several 18th and 19th century diaries as background to her ongoing work in classifying Twitter output (H/T Futurity). These were relatively small journals, necessitating short messages. And those messages bear a resemblance to the kinds of Twitter messages that focus on what people are doing (as opposed to the messages where people are reacting to things).

Dave goes on to recommend The Shock of the Old: Technology and Global History since 1900 by David Edgerton as an antidote to our general ignorance (from the book’s web page),

Edgerton offers a startling new and fresh way of thinking about the history of technology, radically revising our ideas about the interaction of technology and society in the past and in the present.

I’d also recommend Carolyn Marvin’s book, When Old Technologies Were New, where she discusses the introduction of telecommunications technology and includes the electric light with those then-new technologies (the telegraph and telephone). She includes cautionary commentary from the newspapers, magazines, and books of the day which is remarkably similar to what’s available in our contemporary media environment.

Adding a little more fuel is Stephen Hume, in a June 12, 2010 article about Shakespeare for the Vancouver Sun, who asks,

But is the Bard relevant in an age of atom bombs; a world of instant communication gratified by movies based on comic books, sex-saturated graphic novels, gory video games, the television soaps and the hip tsunami of fan fiction that swashes around the Internet?

[and answers]

So, the Bard may be stereotyped as the bane of high school students, symbol of snooty, barely comprehensible language, disparaged as sexist, racist, anti-Semitic, representative of an age in which men wore tights and silly codpieces to inflate their egos, but Shakespeare trumps his critics by remaining unassailably popular.

His plays have been performed on every continent in every major language. He’s been produced as classic opera in China; as traditional kabuki in Japan. He’s been enthusiastically embraced and sparked an artistic renaissance in South Asia. In St. Petersburg, Russia, there can be a dozen Shakespeare plays running simultaneously. Shakespeare festivals occur in Austria, Belgium, Finland, Portugal, Sweden and Turkey, to list but a few.

Yes to Pasco Phronesis, David Edgerton, Carolyn Marvin, and Stephen Hume: I agree that we have much in common with our ancestors, but there are also some profound and subtle differences not easily articulated. I suspect that if time travel were possible and we could visit Shakespeare’s time, we would find that the basic human experience doesn’t change that much but that we would be hard-pressed to fit into that society, as our ideas wouldn’t just be outlandish, they would be unthinkable. I mean literally unthinkable.

As Walter Ong noted in his book, Orality and Literacy, the concept of a certain type of list is a product of literacy. Have you ever done one of those tests where you pick out the item that doesn’t belong on the list? Try: hammer, saw, nails, tree. The correct answer, as anybody knows, is tree, since it’s not a tool. However, someone from an oral culture would view the exclusion of the tree as crazy, since you need both tools and wood to build something, and clearly the tree provides wood. (I’ll see if I can find the citation in Ong’s book, as he provides research to prove his point.) A list is a particular way of organizing information and thinking about it.

nanoAlberta tweets

Joel Burford at nanoAlberta contacted me yesterday with the information that nanoAlberta can now be followed on Twitter the usual way (having a Twitter account of your own and ‘following them’) or by RSS feed if you don’t have an account.

http://twitter.com/nanoalberta

Here’s a sampling of their latest tweets,

# Handling nanomaterials? Check out the GoodNanoGuide: http://tinyurl.com/2c5n5ej about 19 hours ago via web

# Dr. Paul Burrows in Edmonton, Alberta speaking about nanotechnology on Citytv: http://www.youtube.com/watch?v=uk-iASnf9nY about 19 hours ago via web

# state of the art electron microscopy product development centre established at the NINT. http://tinyurl.com/2b8rah5 about 19 hours ago via web

# Gary Albach and Dr. Paul Burrows speak on the Global News “Morning Edition” : http://tinyurl.com/2g56b26 about 19 hours ago via web

Dr. Wei Lu, the memristor, and the cat brain; military surveillance takes a Star Trek: Next Generation turn with a medieval twist; archiving tweets; patents and innovation

Last week I featured the ‘memristor’ story, mentioning that much of the latest excitement was set off by Dr. Wei Lu’s work at the University of Michigan (U-M). While HP Labs was the center for much of the interest, it was Dr. Lu’s work (published in Nano Letters, which is available behind a paywall) that provoked the renewed interest. Thanks to this news item on Nanowerk, I’ve now found more details about Dr. Lu and his team’s work,

U-M computer engineer Wei Lu has taken a step toward developing this revolutionary type of machine that could be capable of learning and recognizing, as well as making more complex decisions and performing more tasks simultaneously than conventional computers can.

Lu previously built a “memristor,” a device that replaces a traditional transistor and acts like a biological synapse, remembering past voltages it was subjected to. Now, he has demonstrated that this memristor can connect conventional circuits and support a process that is the basis for memory and learning in biological systems.

Here’s where it gets interesting,

In a conventional computer, logic and memory functions are located at different parts of the circuit and each computing unit is only connected to a handful of neighbors in the circuit. As a result, conventional computers execute code in a linear fashion, line by line, Lu said. They are excellent at performing relatively simple tasks with limited variables.

But a brain can perform many operations simultaneously, or in parallel. That’s how we can recognize a face in an instant, but even a supercomputer would take much, much longer and consume much more energy in doing so.

So far, Lu has connected two electronic circuits with one memristor. He has demonstrated that this system is capable of a memory and learning process called “spike timing dependent plasticity.” This type of plasticity refers to the ability of connections between neurons to become stronger based on when they are stimulated in relation to each other. Spike timing dependent plasticity is thought to be the basis for memory and learning in mammalian brains.

“We show that we can use voltage timing to gradually increase or decrease the electrical conductance in this memristor-based system. In our brains, similar changes in synapse conductance essentially give rise to long term memory,” Lu said.
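For readers who want the gist of spike timing dependent plasticity in concrete form, here’s the standard textbook rule (a sketch of the general principle only, not of Lu’s device physics; the constants are typical illustrative values): fire the ‘pre’ side shortly before the ‘post’ side and the connection strengthens; reverse the order and it weakens.

```python
import math

A_PLUS, A_MINUS = 0.10, 0.12   # learning rates for strengthening / weakening
TAU = 20.0                     # time constant in milliseconds

def stdp_delta(dt_ms: float) -> float:
    """Conductance change: dt > 0 means pre fired before post; dt < 0 the reverse."""
    if dt_ms > 0:
        return A_PLUS * math.exp(-dt_ms / TAU)    # potentiation (strengthen)
    return -A_MINUS * math.exp(dt_ms / TAU)       # depression (weaken)

for dt in (5, 20, -5, -20):
    print(f"dt = {dt:+} ms -> delta conductance = {stdp_delta(dt):+.4f}")
```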

Do visit Nanowerk for the full explanation provided by Dr. Lu, if you’re so inclined. In one of my earlier posts about this I speculated that this work was being funded by DARPA (Defense Advanced Research Projects Agency), which is part of the US Dept. of Defense. Happily, I found this at the end of today’s news item,

Lu said an electronic analog of a cat brain would be able to think intelligently at the cat level. For example, if the task were to find the shortest route from the front door to the sofa in a house full of furniture, and the computer knows only the shape of the sofa, a conventional machine could accomplish this. But if you moved the sofa, it wouldn’t realize the adjustment and find a new path. That’s what engineers hope the cat brain computer would be capable of. The project’s major funder, the Defense Advanced Research Projects Agency [emphasis mine], isn’t interested in sofas. But this illustrates the type of learning the machine is being designed for.

I previously mentioned the story here on April 8, 2010 and provided links that led to other aspects of the story as I and others have covered it.

Military surveillance

Named after Argos Panoptes, the sentry with 100 eyes from Greek mythology, the Panoptes platform is being extended with two new applications announced by researchers in a news item on Azonano,

Researchers are expanding new miniature camera technology for military and security uses so soldiers can track combatants in dark caves or urban alleys, and security officials can unobtrusively identify a subject from an iris scan.

The two new surveillance applications both build on “Panoptes,” a platform technology developed under a project led by Marc Christensen at Southern Methodist University in Dallas. The Department of Defense is funding development of the technology’s first two extension applications with a $1.6 million grant.

The following image, which accompanies the article at the Southern Methodist University (SMU) website, features an individual who suggests a combination of the Geordi character in Star Trek: The Next Generation, with his ‘sensing visor’, and a medieval knight in full armour wearing his helmet with the visor down.

Soldier wearing helmet with hi-res "eyes" courtesy of Southern Methodist University Research

From the article on the SMU site,

“The Panoptes technology is sufficiently mature that it can now leave our lab, and we’re finding lots of applications for it,” said ‘Marc’ Christensen [project leader], an expert in computational imaging and optical interconnections. “This new money will allow us to explore Panoptes’ use for non-cooperative iris recognition systems for Homeland Security and other defense applications. And it will allow us to enhance the camera system to make it capable of active illumination so it can travel into dark places — like caves and urban areas.”

Well, there’s nothing like some non-cooperative iris scanning. In fact, you won’t know that the scanning is taking place if they’re successful with their newest research, which suggests the panopticon, a concept from Jeremy Bentham in the 18th century about prison surveillance that takes place without the prisoners being aware of the surveillance (Wikipedia essay here).

Archiving tweets

The US Library of Congress has just announced that it will be saving (archiving) all the ‘tweets’ that have been sent since Twitter launched four years ago. From the news item on physorg.com,

“Library to acquire ENTIRE Twitter archive — ALL public tweets, ever, since March 2006!” the Washington-based library, the world’s largest, announced in a message on its Twitter account at Twitter.com/librarycongress.

“That’s a LOT of tweets, by the way: Twitter processes more than 50 million tweets every day, with the total numbering in the billions,” Matt Raymond of the Library of Congress added in a blog post.

Raymond highlighted the “scholarly and research implications” of acquiring the micro-blogging service’s archive.

He said the messages being archived include the first-ever “tweet,” sent by Twitter co-founder Jack Dorsey, and the one that ran on Barack Obama’s Twitter feed when he was elected president.

Meanwhile, Google made an announcement about another Twitter-related development, Google Replay, their real-time search function, which will give you data about the specific tweets made on a particular date. Dave Bruggeman at the Pasco Phronesis blog offers more information and a link to the beta version of Google Replay.

Patents and innovation

I find it interesting that countries and international organizations use the number of patents filed as one indicator of scientific progress while studies indicate that the opposite may be true. This news item on Science Daily strongly suggests that there are some significant problems with the current system. From the news item,

As single-gene tests give way to multi-gene or even whole-genome scans, exclusive patent rights could slow promising new technologies and business models for genetic testing even further, the Duke [Institute for Genome Sciences and Policy] researchers say.

The findings emerge from a series of case studies that examined genetic risk testing for 10 clinical conditions, including breast and colon cancer, cystic fibrosis and hearing loss. …

In seven of the conditions, exclusive licenses have been a source of controversy. But in no case was the holder of exclusive patent rights the first to market with a test.

“That finding suggests that while exclusive licenses have proven valuable for developing drugs and biologics that might not otherwise be developed, in the world of gene testing they are mainly a tool for clearing the field of competition [emphasis mine], and that is a sure-fire way to irritate your customers, both doctors and patients,” said Robert Cook-Deegan, director of the IGSP Center for Genome Ethics, Law & Policy.

This isn’t an argument against the entire patenting system but rather the use of exclusive licenses.