Tuesday, February 25, 2014

Screenification, the Digital Self, and Robots

Read at:

An Interview with Futurist Brian David Johnson

Oct 25th 2013 at 7:17pm
Interested in peering into the future of technology?  Read science fiction and talk to 13-year-olds, advises Brian David Johnson, Intel’s first and only futurist. Tasked with mapping out how people will interact with computers and technology 10 to 15 years from now, Johnson sees a world where digital data lives a secret life, computational intelligence vanishes like a ghost in everyday objects, and the future of fear becomes something that we can not only manage but perhaps even master.
Intel® Software Adrenaline (ISA) recently spoke with Brian David Johnson to get insight into these and other topics.
ISA: Computers are getting smaller, to the point where they are essentially disappearing. You have called this the Ghost of Computing. In practical terms, how far is this likely to go in the next ten years?
BDJ: As we approach the year 2020, the [physical] size of meaningful computational power begins to approach zero. Intel just announced 14nm manufacturing processes for chips. And my architect friends see us getting to 5nm, which is just 12 atoms across. That means we can essentially turn anything into a computer.
Today we live in a world with devices all around us—that's where computation lives. But when we live in a world where the components of a chip are 12 atoms across and you have the ability to turn anything into a computer, then that means computation will reside outside of devices. It'll be in our buildings, our cities, our cars, and our infrastructure.
From a practical standpoint, architecturally, that looks very different. A world where we're surrounded by computational intelligence—and where computing becomes a ghost—looks very different from today's world of discrete devices, with its familiar network protocols and security models. I think we will then find computation in places where you would never have expected to find it before, possibly in our bodies, possibly in the soil, and possibly all around us.
I look for areas where computation would be both mundane and a bit challenging. Computation in dirt, for instance, might be mundane but being able to monitor environmental conditions, soil, and crops offers huge benefits. I’m doing a lot of work on the future of farming.
ISA: Related to the previous question, what is the ultimate boundary—or interface—between humans and technology?  Does technology literally become a part of us, or vice-versa, in the near future?
BDJ: We're in the midst of a world where we're seeing the “screenification” of computing; that is, if you have a screen, you have a computer. It’s something that I wrote about in my book, Screen Future, where, in looking out to 2015, the world around us becomes a world of screens. How do you interact with those pieces of technology and those screens?
Sometimes it will be typing; certainly that won’t go away for a while. Sometimes it will be touch or voice. And then sometimes it will simply involve living with the device. This is where it gets really exciting. Having the device on your body or in your pocket becomes an interface as the device begins to learn about you, and you learn about it. Actual life itself becomes the mode of interaction.
As you look out a little further, we can begin thinking about non-screen-based input and output. For example, we already hear a lot of talk about wearables, which I think is the first instantiation of non-screen-based input-output modality.
It's very exciting, though we're still in the very early days. But when the majority of computing lives outside of our devices, it also means that computation will live outside of screens.
ISA: You have said that data will likely “take on a life of its own” without human involvement. What are some of the potential pitfalls of this scenario, and what are some of the proactive steps that we can take to ensure that, on balance, the benefits outweigh the drawbacks?
BDJ: The secret life of data is something that I've been talking and writing about for quite some time. The idea is that data takes on a life of its own and does things. In the past year, we've begun to see that the general public is starting to develop awareness that data does have a secret life. Certainly, with a lot in the news media about government agencies and corporations, people are beginning to understand that they have a “second self.”
This second self is digital and it lives online; it's a collection of data about ourselves. That is the first big step—for people to understand that there’s a version of them that lives in the digital world. Sometimes it's accurate, and sometimes it's not.
The pitfalls are the same as those in the physical world. And the ways we avoid these pitfalls include some of the same things we do in the physical world. Because we now have a second self that is digital, and we have data that could go off and do things, we need to start asking what’s acceptable with our online second self. What should it share about me and what should it learn about others?
I think we can also begin to apply a bit of social science to our second self. And we can begin to understand the complexity and the subtleties of that data. At that point, we can begin to have conversations in our communities, in our businesses, and in our media about what is acceptable.
ISA: Even without big data, nearly everyone appears to be facing information overload. How can technology help us wrap our arms (and minds) around this overabundance and help keep information useful?
BDJ: This relates to my book Vintage Tomorrows, which is about steampunks. Cultural historian James H. Carrott and I went on a journey to understand what steampunks could teach us about the future. We learned that people want a different relationship with technology; people want the technology to have a sense of humanity. They want their technology to know them.
This is incredibly important when it comes to information overload. If your technology knows you—literally knows you as John or knows me as Brian—then it knows what I'm interested in and what I'm not interested in. But it also knows my schedule. It knows what times of the day I am busy and could be overloaded, and what times of the day I'm bored and that perhaps overload wouldn't be a bad thing, so bring it on.
The smartphone is the perfect example. A typical smartphone has more computational power than what got America to the moon and back. So if it's that smart, why can't it know me? Why can't it understand who I am as an individual, what my day is like, how I'm feeling, and what my schedule is like?  Once we have that, then the device could be smart enough to help filter all of this information and help prioritize it for us.
ISA: Privacy and security can sometimes appear at odds, though both are unquestionably important. How do you see technology helping to balance these needs?
BDJ: The thing I find very interesting about privacy and security—and this comes out of the work we've done around the Future of Fear—is that people talk about privacy and security as if it were a physical thing that exists in the world. It’s not a thing. Security and privacy are legal, cultural, and social constructs. And they’re not binary.
Instead, security and privacy are incredibly complex and nuanced, and different all over the world. It's different in the United States than it is in the European Union than it is in China. We need to understand and embrace that complexity. If we understand it as being an incredibly complex mix, it allows us to more fully understand what’s acceptable and what’s not acceptable. And it shows us places where we need to do more investigation.
ISA: You mentioned in the past that you are a big fan of science fiction. What is the practical role of science fiction to a futurist?
BDJ: I am a huge science-fiction nerd and an unabashed geek. I'm a science-fiction author, but also a huge fan. As a futurist, I use science fiction as a development tool for the products we're creating at Intel. The 2019 specification that I finished last year, for example, contains two science-fiction stories that can actually become a way for us to understand the human impact of the technology that we're developing.
Any good science-fiction story is about people, so a science-fiction story based on science fact gives you the playground to imagine both the positive and negative sides. This allows us to map out the futures that we want—what we want technology to do—along with what we want to avoid.
I think science fiction is also an incredibly powerful tool for sharing technology. I’m not a synthetic biologist, but I could read a science-fiction story—based on science fact—about synthetic biology and then have a meaningful conversation about that future.
Gene Roddenberry is a great example. Roddenberry didn't create Star Trek to simply show cool gadgets and interplanetary travel. He created it because he saw a future that was radically different. Then he built his technology to support it. Roddenberry created Star Trek with the intent to show a very different society. And that might actually lead us to create not only that society, but hopefully also the technology that supports it. 
ISA: A good deal of modern science fiction could be labeled as dystopian with technology running amok and humanity at risk. And, of course, the scenarios are not entirely implausible. Why do you feel the future could, or will, be more positive?
BDJ: Well, I'm an optimist. Probably one of the most radical things that I've ever done as a futurist is declare myself to be an optimist. It turns out that people prefer futurists to be pessimistic, but I don't subscribe to that. The future is made every day by the actions of people, so if we’re going to build our future, why would we build a bad one? Let's build an awesome future. In that way, you have to be an optimist.
With that said, I don't shy away from the darker areas of where we might go. Dystopian science fiction and dystopian visions are incredibly important. They are possibly even more important than optimistic visions of science fiction because they give us a language to discuss the future. They give us a language to discuss things that we want to avoid.
ISA: Independent of dystopian fiction, machines continue to replace humans in many endeavors including, increasingly, intelligent activities. You address this in your Future of Fear work. What should we fear, and what should we not fear?
BDJ: Machines do not replace humans. Let's be clear: a machine is a system or device that does the work of human beings, but it will not replace them. Human beings are amazing and complicated. We can do things that machines will never do. We have emotional intelligence and the ability to be creative. Machines don't do that, and they won't for a very long time.
Our machines are wonderful extensions of ourselves. And when we make these machines, we actually imbue them with our humanity. We imbue them with our ethics, with our morals, and with our hopes of the future. For me, technology and machines are an extension of our humanity. And if we design them correctly, they can allow us to be not only more human, but also better humans.
ISA: It can be argued that technology changes what we do—and even who we are—as humans. Are we becoming more like the machines that serve us, or are we making machines more in our image?
BDJ: The wonderful thing about humans is that we are adaptive and have a wonderful relationship with technology and tools. It frees us to make more tools that change us even further, which then allows us to continue to innovate. And this change isn't a bad thing.
For example, research last year from Columbia University showed that humans are offloading their memory to the Internet. We now remember less, and actually depend on the Internet to remember more. Of course, human beings have been offloading their oral histories for some time—to something called books.  And that's a very good thing. That brought about the Enlightenment. We need to understand that there's no single state of humans and humanity. Humans are constantly evolving, changing, and adapting, and that's a very good thing.
Ultimately, the morality and ethics come from us. Machines don’t get to make those decisions. We do, and we program the boundaries. Machines can act autonomously, but we are doing the programming.
ISA: It has been said that while nearly everyone uses the Internet, young people tend to live in the Internet. Does the future belong to young people?  Put another way, how do we continue to make technology more accessible to a broader range of people?
BDJ: I get questions all the time about how to prepare for the future. What do we need to do? I usually suggest that they get a 13-year-old mentor. The future belongs to young people. I believe it’s my job to be the toolmaker for the future, to enable those young minds to create the future.
For me, it's a beautiful thing that you have an entire generation who has never known a time where there wasn't an Internet. I remember the time before the Internet, and it was cold and dark and boring. The Internet is awesome. And it's not just the Internet; it’s the machines, the microcomputers, and the cell phones that allow us to connect to people and information. It's a wonderful time to be alive.
This is the world that I'm working on with 21st Century Robots, a project I recently launched. Imagine a world where robots aren’t something scary, aren’t something that live in a lab, or that can only be made by people who have a lot of money. Imagine if you only ever knew a time when anyone could build, print, and program their own robot. Imagine that.
It's our job to work with young people. I spend most of my time now talking to young people more than old people; I consider myself one of those old people. I believe it’s our job to create tools for them, to ask them what kind of future they want—and what kind of future they want to avoid—so that we can help them build it.