We all agree (or seem to) that the old models or status quo are not the answer anymore. How do we help people deconstruct so they can reconstruct new models?
To know, but not to understand
One of the edges in the context of big data is asking the right questions. There are many implications for media, distribution, curation, and business. Data is free; what you do with it will cost you, though. There is no free ride.
So far, no single simple explanation or solution has connected with reality.
The three stories that caught my eye this week are:
The Atlantic publishes David Weinberger on science and big data, from which I extracted the title for this post. Like all other kinds of data, scientific data has increased exponentially, thanks to the changed economics of deletion and sharing and the fact that computers have gotten smarter:
The problem -- or at least the change -- is that we humans cannot understand systems even as complex as that of a simple cell.
[...] Models this complex -- whether of cellular biology, the weather, the economy, even highway traffic -- often fail us, because the world is more complex than our models can capture. But sometimes they can predict accurately how the system will behave. At their most complex these are sciences of emergence and complexity, studying properties of systems that cannot be seen by looking only at the parts, and cannot be well predicted except by looking at what happens.
[...] We can climb the ladder of complexity from party games to humans with the single intent of getting outside of a burning building, to phenomena with many more people with much more diverse and changing motivations, such as markets. We can model these and perhaps know how they work without understanding them. They are so complex that only our artificial brains can manage the amount of data and the number of interactions involved.
The article, which is an excerpt of Weinberger's larger discussion on the topic, concludes that model-based knowing has many well-documented difficulties, especially when we are attempting to predict real-world events subject to the vagaries of history. Is this a new form of knowing?
Technology Review says the way people copy each other's linguistic style reveals their pecking order. The father of this idea is Jon Kleinberg, a computer scientist now at Cornell University in Ithaca. To find the answer, the researchers analyzed types of text in which the writers have specific goals in mind:
[...] editorial discussions between Wikipedia editors (a key point in this work is that the conversations cannot be idle chatter; something must be at stake in the discussion).
Wikipedia editors are divided between those who are administrators, and so have greater access to online articles, and non-administrators who do not have such access. Clearly, the admins have more power than the non-admins.
By looking at the changes in linguistic style that occur when people make the transition from non-admin to admin roles, Kleinberg and co cleverly show that the pattern of linguistic co-ordination changes too. Admins become less likely to co-ordinate with others. At the same time, lower ranking individuals become more likely to co-ordinate with admins.
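The coordination effect described above can be made concrete with a toy calculation: how much more likely is a reply to echo a function word when the first speaker used one, compared with the baseline rate? This is a minimal sketch under my own assumptions — the tiny function-word list and the sample exchanges are invented for illustration, not the researchers' actual data or method:

```python
# Toy sketch of linguistic style coordination: does B echo A's
# function words more often than B's baseline rate would predict?
# The marker list and exchanges below are illustrative assumptions.
FUNCTION_WORDS = {"i", "we", "you", "the", "a", "an", "of", "in", "on"}

def uses_marker(utterance, markers):
    """True if the utterance contains any of the marker words."""
    return bool(set(utterance.lower().split()) & markers)

def coordination(exchanges, markers=FUNCTION_WORDS):
    """exchanges: list of (initial_utterance, reply) pairs.
    Returns P(reply uses marker | initial used one) - P(reply uses marker)."""
    replies_after_marker = [r for u, r in exchanges if uses_marker(u, markers)]
    all_replies = [r for _, r in exchanges]
    if not replies_after_marker or not all_replies:
        return 0.0
    p_cond = sum(uses_marker(r, markers) for r in replies_after_marker) / len(replies_after_marker)
    p_base = sum(uses_marker(r, markers) for r in all_replies) / len(all_replies)
    return p_cond - p_base

exchanges = [
    ("We should merge the drafts", "I think we can merge them"),
    ("Delete spam now", "Done"),
    ("The sources look weak", "The citations need work"),
]
score = coordination(exchanges)  # positive score = reply style tracks the initiator
```

A positive score means the replier mirrors the initiator's style more than chance; the research result, roughly, is that this asymmetry tracks who holds power in the conversation.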
Of course, this continues to be a topic of huge interest in social media: a new meaning of marching words into battle?
There's a race going on to find a common model of trust. Welcome to the reputation economy in CNBC magazine looks at the repercussions of what people say in social media on their brand equity. Rachel Botsman, author of What's Mine Is Yours: The Rise of Collaborative Consumption, is quoted:
Every day billions of dollars ride on the decisions we make about firms and people, whether it is job recruitment, marketing campaigns, flat-renting, swap exchanges and so on. In making those judgement calls, we place great faith in our own intuitions and those of our immediate social circles.
We rely, in other words, on a random accumulation of localised knowledge about people, their backgrounds and various behavioural signals. What if we could pool all those circles of wisdom together and extract a common currency for evaluating everyone's levels of expertise, social resonance and, above all, such critical attributes as trustworthiness? Well, that race is now on.
A number of tech start-ups are working to bring a 'reputation graph' to life. Can we build products to change the way people hire, form teams, and start companies? Botsman is the first to admit that far more research is needed to ensure that recommendation engines can keep motivating good behaviours while quickly weeding out the bad.
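Botsman's "common currency" idea — pooling localized circles of wisdom into one score — can be sketched as a simple weighted aggregation. This is a toy illustration under my own assumptions; the signal names and weights are invented, not any start-up's actual model:

```python
# Toy sketch of pooling localized trust signals into one reputation score.
# Signal names and weights below are illustrative assumptions only.
def reputation_score(signals, weights):
    """signals: dict of 0-1 ratings from different circles.
    weights: relative importance of each circle.
    Returns a weighted average in [0, 1]."""
    total_w = sum(weights[k] for k in signals)
    if total_w == 0:
        return 0.0
    return sum(signals[k] * weights[k] for k in signals) / total_w

weights = {"peer_reviews": 0.5, "transaction_history": 0.3, "social_vouches": 0.2}
signals = {"peer_reviews": 0.9, "transaction_history": 0.8, "social_vouches": 0.6}
score = reputation_score(signals, weights)  # one pooled number from three circles
```

Even this toy version makes Botsman's caveat visible: the weights are a design choice, and getting them wrong rewards the wrong behaviours — which is exactly why she says far more research is needed.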
Trust needs to be there. It's not going away, according to Damian Kimmelman, a New Yorker who interned on Wall Street as a risk-assessment analyst before moving to London and starting Duedil.
New models it is. We've pretty much mined the old ones. And we do need to trade our way out of this mess... from new models of thinking to new ways of doing.
Are we doing new models, though, or just doing old models another way?
Follow the discussion over on Conversation Agent Google+ Page.
Have a great weekend everyone.