Tag Archives: technology

Paparazzi drones (coffee delivery upgrade optional)

The Wall Street Journal reports on the inevitable migration of UAV drone technology into non-military spheres of life:

Personal drones aren’t yet plying U.S. flyways. But an arms race is building among people looking to track celebrities, unfaithful lovers or even wildlife. Some organizations would like them for emergency operations in areas hit by natural disasters. Several efforts to develop personal drones are scheduled for completion in the next year.

“If the Israelis can use them to find terrorists, certainly a husband is going to be able to track a wife who goes out at 11 o’clock at night and follow her,” said New York divorce lawyer Raoul Felder.

Drones now are associated with the unmanned Predator craft the Central Intelligence Agency uses to fire Hellfire missiles at militants in Pakistan’s tribal areas. But the essential technology is increasingly available beyond military circles, and spreading fast. An unmanned aircraft that can fly a predetermined route costs a few hundred bucks to build and can be operated by iPhone.

That’s pretty cheap and accessible; club together with a few neighbours, sketch out a rota, pay the kids pocket-money for manning a few shifts a week. Top marks to Randall “FuturePundit” Parker for this bit of close-range speculation:

The ability of surveillance drones to record high-res images could be combined with a wireless link to a criminal face matching computer server. So convicted rapists and muggers could be identified. Crowd sourcing becomes a real possibility. Many different personally owned drones could (along with cameras mounted in cars and outside of stores and houses) all pass info to servers that could then track the movement of known dangerous people (why they are out on the street is another subject). Also, after a crime is committed as soon as, say, a victim of rape or robbery reports the crime all recent drone feed logs in the vicinity could be scoured to identify possible suspects and start tracking them. Neighborhood watches could signal people to all send out their drones to do a massive sweep of the area.

I can imagine flying drones being sent off to a drug store to land on the roof to be loaded with a drug prescription or other light item. The energy costs would probably be lower than the energy costs of driving a car to the store. Wouldn’t work for a large grocery load. But would work for trips to get smaller items.

A bigger flying drone operated by, say, Starbucks or 7/11 could deliver coffee to a number of houses on a route. Or how about drones that deliver newspapers? A delivery truck could drive along with a flat bed where the drones lift off and deliver newspapers down side streets. Reduced labor costs, faster delivery.

Lots of potential apps there… each of them with their own potential shortcomings, exploit opportunities and failure consequences. (The intimidatory power of police drones will be somewhat negated when the rough neighbourhoods they’re intended to patrol can field their own jerry-built squadron of flying camera platforms; who will watch the watchmen, indeed. Won’t be long before some geek firebrand starts mounting Gauss weapons and scramblers on them, either, so plenty of potential for an escalating robot turf war between governors and governed; the street finds its own use for yadda yadda yadda.)
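Parker's crowd-sourced matching idea is, at its core, nearest-neighbour search over face embeddings. A minimal Python sketch of that matching step, with the loud caveat that the watchlist names, the embedding vectors, and the 0.95 threshold are all invented for illustration (a real system would get its embeddings from a trained face-recognition model, not three-element tuples):

```python
import math

# Hypothetical watchlist: name -> face-embedding vector (made-up numbers).
WATCHLIST = {
    "suspect_a": (0.9, 0.1, 0.4),
    "suspect_b": (0.2, 0.8, 0.5),
}

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def match_face(embedding, threshold=0.95):
    """Return the best-matching watchlist name, or None if no candidate
    clears the similarity threshold."""
    best_name, best_score = None, threshold
    for name, ref in WATCHLIST.items():
        score = cosine(embedding, ref)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# A drone frame whose (hypothetical) embedding sits close to suspect_a's:
print(match_face((0.88, 0.12, 0.41)))  # → suspect_a
print(match_face((0.0, 0.0, 1.0)))     # → None (nobody on the watchlist)
```

The "crowd sourcing" part is then just many drones and fixed cameras pushing their embeddings to a shared server running something like this loop at scale.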

Definitely a potential plank in David Brin’s “Transparent Society” platform, too; the participatory panopticon becomes a lot more powerful when your cameras can move in more than one or two dimensions. And a perfect excuse to dig up one of Anders Sandberg’s classic near-future hazard signs from 2006:

Ubiquitous surveillance hazard sign

Projected success for holographic telepresence

The Guardian strikes back with another sci-fi pop-culture reference in a new-tech article; this time the holographic projections from Star Wars: A New Hope get the nod as the “just like that” exemplar of new research from the University of Arizona:

Until now, scientists have been able to create holograms that display static 3D images, but creating video has not been easy. Two years ago, Peyghambarian’s team demonstrated a device that was able to refresh a holographic image once every few minutes – it took around three minutes to produce a single-colour image, followed by a minute to erase that image before a new one could be written into its place.

In his latest project, Peyghambarian’s team reduced that image refresh time to two seconds. They also showed it was possible to use full colour and demonstrated parallax, whereby people looking at the image from different angles will see different views of the image, just as if they were looking at the original object.
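For scale, a rough back-of-envelope on those quoted figures (treating the earlier device's "~three minutes to write plus a minute to erase" as one four-minute frame cycle):

```python
# Back-of-envelope: how far the reported refresh times are from video rate.
old_refresh_s = 3 * 60 + 60   # earlier device: ~3 min write + 1 min erase per frame
new_refresh_s = 2             # latest device: one frame every two seconds
video_rate_hz = 30            # conventional video

print(1 / new_refresh_s)                       # 0.5 Hz effective frame rate
print((1 / new_refresh_s) / (1 / old_refresh_s))  # 120x speed-up over the old device
print(video_rate_hz / (1 / new_refresh_s))        # still 60x short of video rate
```

So a two-orders-of-magnitude jump in two years, with roughly another two orders of magnitude to go before holographic "video" earns the name.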

Note, however, this is not a true 3D hologram:

Whereas the image of Princess Leia in Star Wars is projected in three-dimensional space, the new technology uses a 2D screen to create the illusion of 3D. At the heart of Peyghambarian’s system is his team’s invention of a new type of plastic known as a photorefractive polymer. The material, which is used to make the screen, allows the researchers to record and erase images quickly.

Naturally enough, the predicted market for this technology is telepresence for business meetings… which is the very same market that was meant to have made videophones ubiquitous by now. Given the amount of hardware and expense involved in this holographic telepresence set-up, I figure videotelephonics and/or metaverse meetings will get taken up much more quickly, if at all.

Still kinda cool, though.

The chip in Murcheson’s eye

The Guardian reports on a successful cyborg vision implant procedure; bonus points for the industry-standard soundbite disclaimer:

“The visual results they were able to achieve were, up until now, thought to be in the realms of science fiction,” said MacLaren.

The guy must read some pretty strict Mundane SF if he thinks this represents the apogee of artificial vision acuity as portrayed in science fiction…

A man left blind by a devastating eye disease has been able to read letters, tell the time and identify a cup and saucer on a table after surgeons fitted him with an electronic chip to restore his vision.

Whoa.

Snark aside, it’s actually a pretty impressive step along the path to full-on artificial vision.

Miikka Terho, 46, began losing his eyesight as a teenager and was completely blind when he joined a pilot study to test the experimental eye chip at the University of Tübingen in Germany.

[…]

“I’ve been completely blind in the central area for about 10 years. I had no reading ability and no way of recognising anybody any more. When the chip was first turned on, I just saw flashes and flickering. It didn’t make any sense. But in a matter of hours, everything started to get clearer and clearer,” Terho said.

“When I looked at people for the first time, they looked like ghosts. I knew it was a person, but they were hazy. Then things got sharper.

“It was such a good feeling to be able to focus on something, to see something right there, and maybe even reach out and grab it. I wasn’t able to identify what was in front of me on the street, but I knew when something was there, so I didn’t walk into it,” he added.

Interesting to note it took a while for the guy to start making sense of the input; neuroplasticity in action, maybe? Or just long-dormant visual centres slowly reopening for business? Whichever it is, it’s nice to find a story where technology is demonstrably improving people’s lives.

Quantum computing for dummies

Heard people talking about quantum computing, but not really sure you understand what they mean? Well, you’re far from alone (as the late great Richard Feynman once said, “I think I can safely say that nobody understands quantum mechanics”), but why let that stop you from trying to get a layman’s grasp of the basic ideas?

That, one assumes, is the spirit in which this brief introduction to quantum computing at Silicon.com has been written [via Slashdot]… though I’m in no position to comment on how accurate or useful it is. Input from passing physicists is, as always, more than welcome. 🙂

Hang on, what’s quantum entanglement when it’s at home?

I was afraid you were going to ask. Quantum entanglement is the point where scientists typically abandon all hope of being understood because the thing being described really does defy the classical logic we’re used to.

An object is said to become quantumly entangled when its state cannot be described without also referring to the state of another object or objects, because they have become intrinsically linked, or correlated.

No physical link is required however – entanglement can occur between objects that are separated in space, even miles apart – prompting Albert Einstein to famously dub it “spooky action at a distance”.

The correlation between entangled objects might mean that if the spin state of two electrons is entangled, their spin states will be opposites – one will be up, one down. Entangled photons could also share opposing polarisation of their waveforms – one being horizontal, the other vertical, say. This shared state means that a change applied to one entangled object is instantly reflected by its correlated fellows – hence the massive parallel potential of a quantum computer.

Accuracy aside, what’s interesting to me is seeing this sort of bluffer’s guide in a venue like Silicon.com, which is more of a business organ than a tech one. Prepping the Valley VCs for upcoming investment decisions, perhaps?
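The same-axis anti-correlation the piece describes can be caricatured in a few lines of Python. Loud caveat: this is a purely classical sketch that reproduces only the statistics quoted above (opposite outcomes, each side individually 50/50); it cannot reproduce the correlations across different measurement axes that violate Bell inequalities and make entanglement genuinely non-classical:

```python
import random

def measure_singlet_pair():
    """Measure both electrons of a spin-singlet pair along the same axis.
    Each outcome is individually random, but the two are always opposite."""
    a = random.choice(("up", "down"))
    b = "down" if a == "up" else "up"
    return a, b

trials = [measure_singlet_pair() for _ in range(10_000)]
ups = sum(1 for a, _ in trials if a == "up")

assert all(a != b for a, b in trials)  # perfect anti-correlation, every trial
print(f"{ups / len(trials):.0%} of first-electron measurements came up 'up'")
```

Each side alone looks like a fair coin; only the joint record reveals the correlation, which is also why entanglement can't be used to signal faster than light.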

Implanted obsolescence

We privileged early-adopter types are increasingly accustomed to our technology becoming obsolete… but what happens when the technology in question is actually a physically-embedded part of you? Suddenly your upgrade path is a little trickier than hopping on a Boris-Bike and going to your nearest Apple store. Tim Maly points out the risky side of early-adopter human augmentation tech:

On the ground, the realities of the only brain-mounted interface I know of – cochlear implants – are brutal. Here’s a taste: You can’t hear music. For a sense of what that’s like, try these demos. The terrifying truth is that once you’ve signed up for one kind of enhancement (say, the 16 electrode surgery) it’s very hard to upgrade, even if Moore’s law ends up applying to electrode counts and the fidelity of hearing tech.

If you are an early adopter for this kind of thing, the only thing we can say for sure about it is that it’ll be slow and out of date very soon. Unless they find a way to make easily-reversible surgery, your best strategy is to wait for the interface that’s whatever the brain-linkage equivalent is to 300dpi, full colour, high refresh screens.

[…]

Medical advancements demand sacrifices. Someone needs to wear the interim devices. Desperation is one avenue for adoption. Artificial hearts are still incomplete and dicey half-measures, keeping people alive while they wait for a transplant or their heart heals. This is where advances in transplants and prosthetics find their volunteers and their motivation for progress. It’s difficult to envision a therapeutic brain implant – they are almost by definition augmentations.

An avenue to irreversible early adoption is arenas where short term enhancement is all that’s required. The military leaps to mind. With enlistment times measured in a few short years, rapid obsolescence of implants doesn’t matter as much; they can just pull virgin recruits and give them the newest, latest. If this seems unlikely, consider that with the right mix of rhetoric about duty and financial incentives, you can get people to do almost anything including join an organization where they will be professionally shot at.

Picture burnt-out veterans of the Af-Pak drone wars haunting the shells of long-deserted strip-malls, sporting rusty cranial jacks for which no one makes the proprietary plugs or software any longer… you can probably torrent some cracked warez that’ll run on your ageing wetware, but who knows what else is gonna be zipped into that self-installing .deb?

Meanwhile, Adam Rothstein brings a bit of Marxist critique to the same issue, and points out that the same problems apply to external augmentations:

It is easy to envision these uncanny lapses between classes occurring when we start fusing bodies with machines, because to imply that our bodies can easily be obsolete machines threatens a certain humanist concept of our bodies as a unifying quality to our species. But we don’t have to start invading the body to find differences that affect our ability to stratify ourselves into classes. If the equilibriums of the relations of production can develop a rift between first and third world without personal technology, between upper class and lower class both before, and as we start to use computers to identify ourselves as class members, why would one not also occur between “cutting-edge” and “deprecated” classes as technology becomes more “personal”–magnetizing that one kernel social structure not yet susceptible to fracture and evolution? At what point will our devices themselves reinforce the equilibriums of choice they themselves provide, by being the motive force for separating individuals into groups? If not by lasting only as long as their minimal service contracts in a planned obsolescence that intensifies the slope of device turnover, then by active means? An app only for the iPhone 8, that can detect models of the iPhone 5 and below–letting you know that you’ve wandered into an area with a “less than savory technological element?” When will emergency services only guarantee that they can respond to data transponder calls, and not voice requests? The local watchman has been phased out, in favor of centrally dispatched patrols that require phones to access. Isn’t it only a matter of time before central dispatch is phased out for distributed drone network policing? The ability to use a computer is a requirement for many jobs. When will the ability to data uplink hands-free be a requirement?

Insert unevenly-distributed-future aphorism here.