Today's new usages will be tomorrow's businesses.
A press review of useful technology trends and developments.
http://theitwatcher.fr/.
Read at: http://www.itespresso.fr/microsoft-travaille-interfaces-homme-machine-futur-74722.html Microsoft discusses its ten-year plans for user interfaces. The project consists of combining the capabilities of Kinect and multi-touch screens to control the machine through gestures and voice.
Microsoft shares its vision of the future of human/machine interfaces and admits to working on technologies that could have come straight out of the film Minority Report.
Remember: in that science-fiction film, Tom Cruise impressed viewers with his dexterity and his ability to communicate with the machine. Every gesture made on a holographic screen, every word spoken in front of it, was interpreted and led to an even more futuristic result: seeing the future.
Rest assured, Microsoft is not there yet, but thanks to its Kinect motion sensor and the screens produced by Perceptive Pixel (acquired by the group in 2012), the Redmond firm now envisions human/machine interfaces almost as intuitive as those in the film.
In a note published on Microsoft's website and quoted by TechWeek Europe, Michael Mott, a lead manager of the Xbox ecosystem (from which the Kinect sensor comes), describes the combination this way: "It brings together the best of what is happening in natural user interfaces and in large multi-touch displays. It lets us see whether one plus one can equal three for the user."
Indeed, even if for Microsoft it is simply a matter of combining the potential of two technologies, the firm believes the result could be an interface of unmatched ergonomics, capable of revolutionizing communication between humans and computers. Microsoft now refers to the project as the "Holy Grail" of natural user interfaces (NUI).
Microsoft imagines a futuristic human-machine interface driven entirely by gestures and voice. Photo credit: Microsoft
Read at: http://business.lesechos.fr/entrepreneurs/juridique/4099551-ventes-a-distance-14-jours-pour-annuler-un-achat-62862.php
By Laurence Le Goff, journalist | 17/04/2014. Consumers will soon have 14 days (instead of the current 7) to exercise their right of withdrawal after a purchase concluded at a distance, following telephone canvassing, or off-premises.
From 14 June 2014, for a sale concluded at a distance, following telephone canvassing or off-premises (for example during door-to-door selling), the withdrawal period available to the consumer will rise from 7 to 14 days. The period runs from the day the contract is concluded for service contracts, and from the day the consumer receives the goods for sales contracts. It is extended by 12 months if the information about the right of withdrawal was not provided.
The consumer must notify the professional of the decision to withdraw by sending a withdrawal form or any other unambiguous statement expressing the wish to withdraw. The consumer does not have to justify the decision, and bears no costs other than the direct costs of returning the goods (unless the professional agrees to cover them, or failed to inform the consumer that those costs are the consumer's responsibility), plus any extra charges if the consumer expressly chose a delivery method more expensive than the standard delivery offered by the professional. The goods must be sent back or returned no later than 14 days after the withdrawal decision is communicated.
For its part, the professional must reimburse the consumer within 14 days at most of being informed of the consumer's decision to withdraw. For contracts for the sale of goods, unless it offers to collect the goods itself, the professional may defer the refund until the goods have been recovered or until the consumer has provided proof of shipping them, whichever comes first.
Beyond that deadline, the sums due are automatically increased:
- by the legal interest rate if the refund is made no more than 10 days after the deadlines above;
- by 5% if the delay is between 10 and 20 days;
- by 10% if the delay is between 20 and 30 days;
- by 20% if the delay is between 30 and 60 days;
- by 50% between 60 and 90 days;
- and by 5 additional points per further month of delay, up to the price of the product, then at the legal interest rate.
Note: professional buyers with no more than five employees will benefit from the right of withdrawal for off-premises sales whose purpose falls outside their main field of activity. At present, they have no right of withdrawal when the sales contract relates directly to their activity.
Tecnalia is now demonstrating a new home system which is able to detect the first symptoms of neuro-degenerative illnesses.
Coping with ageing populations is certainly going to be a key challenge in the near future, as indicated by the first report on the Silver Economy in France, on which L’Atelier reported last week. Among the structural adaptations that are going to be required, the report points to an increasing need for homes that are suitably equipped and adapted for senior citizens. With this in mind, the Spanish research centre Tecnalia has now come up with a design for a smart home system. The basic idea is to be able to make an early diagnosis of neuro-degenerative illnesses such as Alzheimer’s disease, predicted by research by R. Brookmeyer, E. Johnson, K. Ziegler-Graham and HM Arrighi at Johns Hopkins University in Maryland in the United States to affect one person in 85 by 2050, i.e. four times the number of sufferers in 2005. Tecnalia’s system is designed to spot changes in the behaviour of the householder, which may be the first symptoms of a neuro-degenerative illness.
Connected home monitoring system
The Tecnalia system uses a vast network of sound and other sensors to detect the presence of the occupant in the various rooms around the house – opening and closing doors, windows and drawers; turning light switches on and off; and using household appliances; plus monitoring how much TV they watch, how much time they spend lying in bed or sitting on the sofa, how frequently they turn water taps on and off, etc. Using these sensors spread throughout the house, the system is designed to quickly alert a friend or family member if there is a change in the person’s habits. This might be a change in sleeping patterns, or eating habits – if, for instance, an elderly person stops preparing and eating hot meals or becomes increasingly inactive. Such changes in an older person’s day-to-day activities are very often symptoms of disorders linked to neuro-degenerative disease and detecting such illness at an early stage of cognitive deterioration can significantly increase the chances of improving the patient’s quality of life.
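The detection idea described here, flagging a sustained deviation from a person's long-term routine, can be sketched generically. The sketch below is not Tecnalia's actual algorithm; the window sizes and threshold are illustrative. It compares a recent window of a daily activity count (e.g. kitchen-appliance uses per day) against a longer baseline:

```python
from statistics import mean, stdev

def habit_change_alert(daily_counts, baseline_days=30, recent_days=7,
                       z_threshold=2.0):
    """Flag a change in a daily activity metric when the recent average
    deviates from the long-term baseline by more than z_threshold
    sample standard deviations. Generic sketch, illustrative parameters.
    """
    if len(daily_counts) < baseline_days + recent_days:
        return False  # not enough history to establish a baseline
    baseline = daily_counts[-(baseline_days + recent_days):-recent_days]
    recent = daily_counts[-recent_days:]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        # Perfectly regular baseline: any deviation at all is a change.
        return mean(recent) != mu
    return abs(mean(recent) - mu) / sigma > z_threshold
```

A stable routine produces no alert, while a sudden sustained drop in activity (the kind of change the article describes, such as no longer preparing hot meals) would trip the threshold.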
Helping to maintain senior citizens’ independence
In the initial phase of deployment of the system, Tecnalia is focusing on retirement homes and supervised apartments, with a view to improving care and quality of the residents’ lives. However, Tecnalia quotes data from IMSERSO, the Spanish Institute for Older People and Social Services, which reveals that 70% of people over 70 still prefer to live in their own houses rather than be placed in a specialised care home. Based on this finding, the research centre is now aiming to develop technology specifically to provide for the needs of those more independent older folks. This would mean enabling senior citizens to receive assistance in their everyday activities, for example installing smart alarms and employing household robots. The prototype of the current system has taken three years to develop and has just been installed for demonstration purposes at Tecnalia’s premises at Zamudio in the Basque region.
Read at: http://www.nextinpact.com/news/86661-les-offres-internet-fr-next-inpact-devoile-son-comparateur-doffres-fai.htm
The comparison tool that lays down the law!
When Free Mobile arrived on the market, we set out to compare all the "low cost" mobile plans, which showed many differences despite fairly similar prices. That gave rise to Tous les forfaits. Now that the same war is starting on the fixed-line internet front, we have decided to help you see things more clearly here too, with a new tool: Les offres internet.fr.
The hardest part of covering a complex subject day after day is finding a simple way to present it to readers who just want the right information: the information that helps them understand and choose. When the mobile operators launched an all-out war in preparation for Free Mobile's arrival, they all put new brands and new offers on the market, often at fairly similar prices but with sometimes very different benefits, which tended to change constantly.
The Tous les forfaits adventure was only the beginning
While we initially published a permanent, regularly updated article listing all these offers, that quickly stopped being enough. So in January 2012 we decided to launch a mobile plan comparator built on a few rules. Its data comes from the editorial team's analysis of the standardized information sheets (FIS, fiches d'informations standardisées) of every operator on the market, and its filters are the ones we judge most relevant. The goal is to highlight useful information, not just price; the sorting system gives no advantage to any particular brand, which can only communicate through advertising space, with no preferential placement in the displayed order (despite numerous requests to that effect).
For two years we have listened to your requests and improved the tool, and while we are constantly preparing new features for it, we decided to take the time to give it a little brother: Les offres internet.fr. Recent movements in the fixed-line internet market have shown that this ground, too, is at the center of an all-out war, with the same problem: on the surface everyone lines up at the same price, but in the details the offers differ widely.
A tool for comparing fixed-line internet subscriptions
We therefore reused the well-received structure of Tous les forfaits, adapted the design, and built a new dedicated database. All filters are specific to fixed-line internet offers across three technologies: ADSL, VDSL and fiber, which make up the bulk of today's market. Today we are releasing the first public beta of the site, for which we deliberately made a few restrictive choices:
We only list offers up to triple play
We do not take into account pricing tied to a classic Orange telephone subscription
By default we display the cheapest price for a given offer
The goal here is to offer a service that is as simple as possible, one that does not get lost in an unmanageable jumble of prices and that shows you the price matching your request. So if an option such as TV is selected, only the plans that offer it are displayed, and its cost is included in the displayed price. You can also see the 12- or 24-month "all-inclusive" price, which accounts for all ancillary costs: options, cost of the box, termination fees, etc. This lets you genuinely compare the total price of one offer against another.
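The "all-inclusive" arithmetic described above is easy to make explicit. A minimal sketch, assuming a simple flat-rate pricing model (parameter names are illustrative, not the site's actual data model):

```python
def total_cost(monthly_price: float, months: int, box_fee: float = 0.0,
               setup_fee: float = 0.0, option_prices: tuple = (),
               termination_fee: float = 0.0) -> float:
    """'All-inclusive' total for an ISP offer over `months` months:
    base subscription plus selected options every month, plus one-off
    costs (box, setup, early-termination fee from a previous provider).
    """
    monthly = monthly_price + sum(option_prices)
    return monthly * months + box_fee + setup_fee + termination_fee

# A 30 EUR/month plan with a 5 EUR TV option, a 19 EUR box fee and
# 49 EUR of termination fees, over 12 months:
print(total_cost(30.0, 12, box_fee=19.0, option_prices=(5.0,),
                 termination_fee=49.0))  # → 488.0
```

Comparing two offers then reduces to comparing these totals rather than headline monthly prices, which is the point of the tool.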
A site that will evolve in the coming months, based on your requests
Of course, some improvements will arrive fairly quickly, while others will take a bit longer, notably the integration of quad-play offers or full details on the various boxes. As with Tous les forfaits, our goal is to build a tool that remains useful to our readers over the long term, by listening to their requests and needs. So do not hesitate to share your feedback in the comments. Note also that we have not yet chosen to integrate an eligibility-check tool, since some ISPs require that their own be used. We already have some ideas on this point, but they will only be implemented once we find a solution that meets our criteria.
Our aim is above all to inform you about offers, their conditions, options and cost. For each plan you can display additional information via the arrow at the end of the row, via the plan's detail sheet, or via the comparison tool, which will help you work out which ISP best matches your needs. If you ever spot an error, you can report it by sending a message to the editorial team via the dedicated button. Note that we can only reply if you leave an email address.
Feel free to follow the social media accounts dedicated to this service:
A lire sur: http://www.ecommercetimes.com/edpick/80315.html
By Jeff Kagan
E-Commerce Times
04/17/14 12:07 PM PT
We are in the very early stages of an ultra high-speed Internet revolution that will benefit everyone: carriers, cities, companies and customers. Cities want it because they see it as a way to attract companies. That means they increase their tax base and have a strong growth economy. Companies want it because they see this as a competitive advantage, at least for a while, until everyone has it.
Want to watch a new tech race? Keep your eyes on the new 1 Gbps ultra high-speed Internet race. Over the next few years, this will continue to grow and become one of the hottest races around. So who will the leaders be? Today, entrants like Google, AT&T, C Spire and CenturyLink already have started their race for the gold.
First, it's important to understand this race, so let's pull back the camera. We can see that this race actually has been running for quite a long while. It is not new. Every year, local telephone companies like AT&T, Verizon and CenturyLink continue to increase Internet speeds. So do cable television companies like Comcast, Time Warner Cable and Cox.
However, as fast as these speeds are -- and they are extremely fast -- Google wanted more. Google pointed to other countries where Internet speeds were even faster. Google wanted to speed up the process. So in typical Google fashion, it entered the race in Kansas City with its 1 Gbps service and challenged the existing providers.
Trend in the Making
Google did not do anything the others weren't already moving toward. It just turned up the competitive heat. Remember, the other competitors have huge national infrastructures to maintain. They each spend many billions of dollars upgrading their networks and increasing their speeds. All of that takes much more time than rolling out service to just one city, like Google did.
Kansas City was a success. Customers loved it. The media loved to write about it. Kansas City gained a competitive advantage, and suddenly many other cities wanted to be next on the list.
Over the last few quarters, we have seen a handful of other big-time companies jump into the ultra high-speed race:
AT&T announced it would bring GigaPower to its first ultra high-speed city, Austin, Texas;
CenturyLink announced its first ultra high-speed service in the Las Vegas area; and
C Spire jumped into the race by offering its Fiber to the Home ultra-fast Internet service in several Mississippi cities to start.
However, many competitors -- including Verizon, Comcast, Time Warner Cable, Cox and others -- have been silent. Will they eventually join the ultra high-speed race? I would hope so, since this is the future.
However, some companies are leaders and others are followers. In this case, the leaders are Google, AT&T, CenturyLink and C Spire. The others fall into the follower category -- hopefully, anyway.
The excitement is far from over. A few weeks ago, Google announced expansion of its Google Fiber to a few other cities. That news started getting lots more cities interested. Last week, AT&T announced its moves in North Carolina. It will roll out its ultra high-speed U-verse Internet service with GigaPower to six communities in the Research Triangle and Piedmont Triad regions of North Carolina. It will begin as soon as it gets final approval.
What's next? AT&T CEO Randall Stephenson said they would roll out this service to markets around the country. What this says to me is stay tuned, there is much more to come from AT&T. I would say AT&T looks like it is about ready to put the pedal to the metal on growth in this area.
So today it looks like AT&T and Google are the two largest and most aggressive players in this new race.
Here is a nagging Google question. Will it stay in this game as a player? I don't know. Initially I thought it wanted to use Kansas City as a showpiece to help jump-start the industry to a much faster speed. However it is now expanding.
Will Google Fiber stay in the competitive game to keep others building faster, or will it jump out at some point? We'll have to wait and see.
I also watched how Mississippi cities did their research, wrote their proposals, and created compelling arguments to win the first cities in the C Spire region for ultra high-speed service.
What this says to me is ultra high speed, 1 Gigabit Internet service is going to be one of the strong growth engines going forward.
Which Companies Will Catch the Wave?
Cities want it because they see it as a way to attract companies. That means they increase their tax base and have a strong growth economy. Companies want it because they see this as a competitive advantage, at least for a while, until everyone has it -- that will take years. Consumers want it, because they will be attracted to ultra high-speed cities for work and as great places to live and raise their families.
The first stage of this new growth opportunity looks like it will be individual companies moving into individual market areas. I don't yet see multiple operators competing in the same space. It will likely be this way for several years, until at least the first wave of cities has one ultra high-speed provider. As the future unfolds, we will see more companies moving into each market space.
That could mean prices for this service will start out higher. However, we can't blame companies for trying to recover their very high build-out expenses. This is the way it has always worked over time. Eventually, as competition grows, prices will come down.
Do you remember how expensive cellular phone service was 20 years ago? I predict the same thing here.
It looks like we are in the very early stages of an ultra high-speed Internet revolution that will benefit everyone: carriers, cities, companies and customers. It seems like everyone will win in this new environment.
Everyone who is a player will win, anyway. That's why sooner or later I see every service provider moving into this ultra high-speed race. It will be interesting to watch other companies jump in and join the fray -- and it also will be interesting to watch the companies that don't.
E-Commerce Times columnist Jeff Kagan is a technology industry analyst and consultant who enjoys sharing his colorful perspectives on the changing industry he's been watching for 25 years. Email him at jeff@jeffKAGAN.com.
Read at: http://www.technewsworld.com/edpick/80316.html By Richard Adhikari
TechNewsWorld 04/17/14 1:08 PM PT
Americans seem more eager to imagine distant future possibilities like time travel than to engage in those just around the corner, like driverless cars, suggests new research from Pew. Many of the futuristic ideas that Americans hold dear -- like flying personal cars a la Jetsons -- are relatively antique concepts that were dazzling New York World's Fair visitors 50 years ago.
The pace of technological change is getting faster, and many Americans are optimistic about the results, although a sizable minority are concerned, Pew Research has found.
Nearly 60 percent think technological and scientific advances will make life in the future better, but 30 percent fear they will make life worse than it is today.
Sparking the most anxiety: designer babies; robots as primary caregivers for the ailing and elderly; personal and commercial drones flying U.S. skies; and implants or other wearable devices (such as Google Glass) that constantly give users information about the world around them.
"In many ways, this is a classic American story. We are generally optimistic about our ability to overcome obstacles and for things to work out well in the end, even as we envision many challenges to overcome on our way to getting there," Alex Smith, senior researcher at the Pew Research Center, told TechNewsWorld.
However, what the results show is that "we really have no idea what is coming and that there is a significant number of people that will resist the changes if they violate their privacy or religious beliefs," contended Rob Enderle, principal analyst at the Enderle Group -- or if they believe the changes will make them obsolete.
Princeton Data Source interviewed 1,001 people aged 18 and above throughout the United States Feb. 13-18 for the study, in English and Spanish, over landlines and cellphones.
Some Highlights of the Survey
More than 80 percent of the respondents expected that replacement human organs would be custom-grown in a lab within the next 50 years.
Computers will be able to create art that can't be distinguished from that produced by humans, 51 percent of the respondents expected.
While 50 percent said they didn't want to ride in a driverless car, 48 percent said they did.
More than 70 percent would not want to get a brain implant to improve their memory or mental capacity, but 26 percent would.
Forget about teleporting any time soon -- only about 40 percent of respondents expected scientists would develop the technology to teleport objects within the next half century.
However, many would like to experience a future much like the one envisioned in the cartoon series The Jetsons, complete with flying cars and personal spacecraft.
They also would like time travel to become a reality -- and, despite the general opposition to creating designer babies, they desired health improvements that would extend human life or cure major diseases.
Groundhog Day Was Just the Beginning
"The thing that struck me about the report was how, in many ways, the visions people have of the technological future are surprisingly static," Patrick McCray, a professor in the history department of the University of California at Santa Barbara, told TechNewsWorld.
"If you put forth some of these technological possibilities to people at the 1964 World's Fair -- space colonization, smarter computers, driverless cars -- they wouldn't have been puzzling over them," McCray continued. "GM had an entire exhibit at the fair about driverless cars called 'Futurama.'"
As for computers generating art, Harold Cohen has been doing that for decades, McCray pointed out.
Differentiating Science From Technology
Science and technology are conflated throughout the survey, and they are "absolutely not the same thing," said McCray, who teaches undergraduate classes on the history of science and the history of technology.
Scientific research does not automatically lead to advances in technology, he pointed out.
The World Outside of Boffintopia
The survey ignored the deeper implications of changes in science and technology, McCray observed.
"There's very little thought given to what these changes mean in terms of societal implications," he explained. "Often, when people are asked to speculate as to what they think the world is going to be like, they very often default to thinking about science and technology, and very rarely do they think about social or political or economic changes."
The 30 percent or so of respondents who felt scientific and technological advancements would make things worse in the future constitute "a big group," Enderle told TechNewsWorld. They "will likely resist change for a variety of reasons," including religious dogma.
Richard Adhikari has written about high-tech for leading industry publications since the 1990s and wonders where it's all leading to. Will implanted RFID chips in humans be the Mark of the Beast? Will nanotech solve our coming food crisis? Does Sturgeon's Law still hold true?
In case you haven't heard, the Internet of Things is going to be a really big deal. At least, that's the prediction from the folks at IDC. At their Directions conference last month, a good portion of the sessions were devoted to discussion of Internet of Things projects, whereas there was barely a whisper at last year's show.
For the uninitiated, the objective of Internet of Things is to connect just about every electronic device we interact with on a daily basis to the Internet. People in the world of big data analytics are excited because this could deliver a massive trove of data to feed into predictive models. IDC analysts think this is one reason why the Internet of Things concept is set to explode.
In this edition of Talking Data, TechTarget editors Ed Burns and Jack Vaughan recap IDC Directions and the reasons why analysts are hopeful when it comes to the Internet of Things. There are certainly some hurdles to jump before we start seeing smart refrigerators and toasters, but there are signs of life. For example, Google made a big splash recently when it bought smart thermostat manufacturer Nest. Take a listen to the podcast to hear why IDC analysts are starting to become more bullish on Internet of Things projects.
Read at: http://www.relationclient-infos.com/info_article/m/9547/les-consommateurs-du-monde-entier-utilisent-de-plus-en-plus-le-mobile-pour-enrichir-leur-experience-dachat-en-magasin.html
09/04/2014. The 3rd annual Connected Shopping Experience Barometer (Baromètre de l'Expérience Marchande Connectée), produced by digital and technology agency DigitasLBi Paris, has released its figures. Unsurprisingly, m-commerce is growing, and so is showrooming.
Is this really a surprise? Smartphones influence our daily lives more and more. Half of the respondents to the study, conducted in 12 countries*, say smartphone use has changed the way they shop; 37% say so in France. 34% of smartphone owners made a purchase on their smartphone in the past three months, and 72% of smartphone users say they use theirs inside a store (75% in France, in line with 2013 among smartphone owners). China is far ahead in m-commerce, with 76% of smartphone owners having made a mobile purchase in the past three months, versus an average of 35% in the United States, the United Kingdom and Germany. The share of French m-buyers is 20%, close to our neighbors in the Netherlands (18%) and Belgium (15%).
19% of consumers admit to having already left a store after checking their smartphone there to compare prices or consult reviews and other information (showrooming); this is the case for 16% of French consumers, and 29% would consider it, mainly in consumer electronics, home appliances and cultural goods. Price is the key factor: nearly half of the French consumers surveyed said a price difference of at least 5% would make them leave the store; if the price difference reached 10%, 84% of French consumers would leave (88% worldwide, versus 82% of French consumers in 2013).
Faced with this, physical stores must position themselves to respond to these growing behaviors in a difficult economic climate. While the store is losing influence, it nevertheless remains a fundamental point of contact. It is the third most-used source of information for consumers worldwide (14%). In France, the internet is the main reference for product research; nevertheless, the store remains the preferred point of contact for 10% of French consumers (4th place, versus 2nd place in 2013).
Digital technology and innovation are the keys to this transformation: 42% of French consumers say they have already used multimedia in-store shopping aids (40% in 2013), and 74% of French consumers think in-store salespeople would be more effective if equipped with tablets carrying product information. 74% of French consumers would be encouraged to visit a store to use a mobile loyalty card, 60% would make the trip to try a "magic mirror", and 62% would be willing to use tools letting them check in-store stock availability in real time on their mobile phone.
For Vincent Druguet, deputy managing director of DigitasLBi and head of its center of excellence for connected commerce: "This study demonstrates the need to converge the benefits of the physical store and of e-commerce. Fifteen years of e-commerce use have changed consumers' expectations and behaviors. Stores must integrate the benefits of e-commerce in-store by bringing value to consumers. An integrated (on & off) approach to retail makes it possible to offer consumers an optimal shopping experience. Consumers can then shop with the same retailer across a wide range of digital and physical touchpoints, with the same ease of purchase and no break in the experience. This is what we have called Responsive Retail."
(*) The study, entitled "Connected Commerce: A Snapshot of the Modern Shopper", examined consumer shopping habits and trends across 12 countries in DigitasLBi's global network: France, Belgium, China, Denmark, Germany, Italy, the Netherlands, Singapore, Spain, Sweden, the United Kingdom and the USA.
Read at: http://spectrum.ieee.org/automaton/robotics/military-robots/repurposed-military-drones-mobile-wireless-hotspots-in-new-darpa-project/
By Evan Ackerman
Photo: The National Guard via Flickr
Launch of an RQ-7 Shadow UAV at Volk Field in Juneau County, Wisc., in 2010.
The military is pouring a huge amount of resources into unmanned systems like UAVs. Every year, drones get fancier and more capable, which means that there's an increasing number of slightly less fancy and slightly less capable drones gathering dust and feeling lonely in a hangar somewhere. The U.S. Defense Advanced Research Projects Agency (DARPA) has an idea of what these drones might be good for: not delivering weapons, not surveillance, but instead providing mobile high-speed network connectivity for deployed troops.
DARPA’s Mobile Hotspots program will develop a "reliable, on-demand capability for establishing long-range, high-capacity reachback that is organic to tactical units." In the practical sense, this means building a pod crammed with networking equipment that can fit on the wing of a spare RQ-7 Shadow UAV. Inside the pod are steerable millimeter-wave antennae that act as a relay, providing a local wireless network with a 1 Gb/s capacity.
In March, Phase 2 of the Mobile Hotspots program granted funding to several private companies to integrate the necessary technology into both UAV pods and associated ground vehicles. This phase will conclude with a demonstration of all of the pieces working together, while the final phase should showcase a mature, deployable system of multiple RQ-7 Shadow UAVs providing a robust mobile network.
Drones are certainly one way to go if you need to deploy a temporary wireless network over a large area flexibly and quickly, but systems like these aren't efficient for long-term use. For a more permanent solution, options like blimps or HALE (High Altitude Long Endurance) UAVs powered by the sun are likely a better way to go until infrastructure on the ground can be established. On the other hand, ground infrastructure is expensive to build and maintain and difficult to upgrade, which makes it worth asking if there might come a point at which it would make sense to replace things like cell towers with autonomous aerial relays even in developed areas.
Read at: http://spectrum.ieee.org/energy/environment/google-earth-engine-brings-big-data-to-environmental-activism
By Eliza Strickland
When a tree falls in the forest these days, it doesn’t just make a sound—it causes a computer program to generate an alert that’s sent out to activists, researchers, and environmental policymakers around the planet. An online tool to map deforestation is applying big-data processing techniques to massive troves of satellite imagery, and in the process it is making possible a new kind of environmental activism.
The tool, Global Forest Watch, was launched by the World Resources Institute in February to provide monitoring of deforestation around the world. Users can explore the global map to see trends since the year 2000 and can zoom in to examine forest clearing at a resolution of 30 meters. The tropical zones of the map are refreshed every 16 days, frequent enough to track deforestation hot spots in places like Indonesia and Brazil. Users can also sign up for alerts, which are generated when the system detects signs of illegal logging or slash-and-burn agriculture in the tropics.
The site is powered by Google Earth Engine, which crunches image data drawn from several NASA and U.S. Geological Survey (USGS) satellites. Google is developing this platform to host petabytes of Earth science data and to give researchers a straightforward way to use it. “They log on, access all the data, and run their own algorithms,” explains David Thau, the senior developer advocate for Google Earth Engine. Thau and his colleagues work with scientists to develop useful analysis functions, and then they “get out of the way,” he says, and let researchers conduct their investigations. Google Earth Engine is currently available to thousands of research partners, and the company plans a general release down the line.
Global Forest Watch is the result of a convergence of projects. The World Resources Institute’s Data Lab had been working on a forest-clearing-alert system for the tropics based on data from MODIS (Moderate Resolution Imaging Spectroradiometer), instruments that ride aboard NASA’s Terra and Aqua satellites. Meanwhile, Matthew Hansen, a professor of geographical sciences at the University of Maryland, had been collaborating with Google Earth Engine on a global map of deforestation; his project used images from the Landsat satellites operated by NASA and the USGS. Both data sets are now used to create Global Forest Watch; MODIS provides better temporal resolution, while Landsat provides exemplary spatial resolution.
The researchers’ algorithms create the site’s dramatic map of forest loss using the satellites’ visible light and infrared data. Each pixel of satellite imagery is characterized by both its color and its infrared signature, and the algorithms then compare the data for that pixel across time to detect changes. A switch from green to brown, for example, is a bad sign. Hansen pioneered this technique in his earlier research on land use in the Congo Basin, where the ground was very often obscured by clouds. Rather than throw out the cloudy images, Hansen developed ways to create composite pictures using many days’ worth of images. “We learned how to work pixel by pixel,” he says.
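The article does not publish the researchers' code, but the pixel-by-pixel comparison it describes can be sketched with a standard vegetation index. The following is a minimal illustration in Python/NumPy, not the Global Forest Watch algorithm: the band values, the NDVI-style index, and the drop threshold are all assumptions made for the sake of the example.

```python
# Illustrative sketch of pixel-wise forest-loss detection: compare a
# vegetation index (NDVI) between two acquisition dates and flag pixels
# whose value fell sharply. All numbers here are invented.
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index, computed per pixel."""
    return (nir - red) / (nir + red + 1e-9)

def loss_mask(red_t0, nir_t0, red_t1, nir_t1, drop_threshold=0.3):
    """Mark pixels whose NDVI dropped by more than the threshold."""
    return (ndvi(red_t0, nir_t0) - ndvi(red_t1, nir_t1)) > drop_threshold

# Toy 2x2 scene: the top-left pixel goes from dense vegetation
# (low red, high near-infrared) to bare soil between the two dates.
red_t0 = np.array([[0.05, 0.05], [0.05, 0.05]])
nir_t0 = np.array([[0.60, 0.60], [0.60, 0.60]])
red_t1 = np.array([[0.40, 0.05], [0.05, 0.05]])
nir_t1 = np.array([[0.45, 0.60], [0.60, 0.60]])

mask = loss_mask(red_t0, nir_t0, red_t1, nir_t1)
print(mask)  # only the cleared top-left pixel is flagged
```

A production system would, as the article notes, composite many cloudy acquisitions per pixel before making such a comparison, rather than trusting any two individual images.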
When Landsat data became freely available in 2008, Hansen worked with Google Earth Engine to apply his model globally, looking at 143 billion pixels of 30 meters each. By tracking the pixels over months and years, the model corrects for seasonal changes to forests and can distinguish between crops and woodlands. The collaborators published their results last November, revealing a net loss of 1.5 million square kilometers of forest between 2000 and 2012. Those calculations, the researchers noted, took 1 million CPU-core hours on 10 000 computers.
Thau says that Google Earth Engine aims to take the pain out of this big-data research. In typical cloud computing, he says, researchers have to manage the distribution of their computing tasks across the network. With Earth Engine, however, researchers simply use a programming interface to enter their queries, which get “parallelized” automatically, Thau says.
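As a loose local analogy for that "write the query, let the platform parallelize it" model (this is not the actual Google Earth Engine API), the sketch below uses only Python's standard library: the analyst writes the per-tile statistic and the executor decides how the tiles are distributed. The tiles and the cloud-fraction statistic are invented for illustration.

```python
# Loose analogy for automatic parallelization: the analyst supplies a
# per-tile function; the executor handles distribution of the work.
# Tile data and the "cloud fraction" statistic are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def cloud_fraction(tile):
    """Fraction of pixels in a tile flagged as cloudy (value above 0.5)."""
    return sum(1 for px in tile if px > 0.5) / len(tile)

tiles = [
    [0.1, 0.9, 0.2, 0.8],   # half cloudy
    [0.0, 0.1, 0.2, 0.3],   # clear
    [0.9, 0.8, 0.7, 0.6],   # fully cloudy
]

with ThreadPoolExecutor() as pool:
    fractions = list(pool.map(cloud_fraction, tiles))

print(fractions)  # [0.5, 0.0, 1.0]
```

The point of the analogy is the division of labor: the researcher's code mentions no workers, queues, or machines, which is what Thau describes Earth Engine doing at petabyte scale.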
By creating the public-facing Global Forest Watch website, the World Resources Institute aims to give the public access to all that big data too. Dan Hammer, chief data scientist at the organization’s Data Lab, says he expects that government agencies, businesses, researchers, and advocacy groups will use the site to get a better picture of forest management.
At the environmental action group Rainforest Action Network, agribusiness campaigner Gemma Tillack says the new tool may be particularly useful in Indonesia, where rain forests are falling to make way for palm oil plantations. Her group is asking 20 big food corporations to guarantee that the palm oil used in their products is being grown on legal, sustainable plantations. Some companies are establishing responsible palm oil procurement policies, she says, and Global Forest Watch could help them implement those policies. “They need to find out where the palm oil they’re buying is coming from, and then they’ll need to monitor the actions of their supply-chain partners,” she says.
Time will tell if the site will make any difference in the seemingly inexorable advance of bulldozers, but Hammer is optimistic. “I’m consistently surprised by how much open access to data can fix things in the world,” he says. Many policy decisions are delayed by arguments over conflicting information, Hammer notes, but the objective data provided by Global Forest Watch has the potential to eliminate such confusion. “That would let people get to the hard questions about what should be going on in the forests rather than what is going on,” he says.
A correction was made to this article on 16 April 2014.