Tag Archives: warfare

The ten rules of infowar

By now you’re probably all familiar with the notion of 4th Generation Warfare, even if the name doesn’t ring a bell: it’s network vs. nation-state, the sort of seemingly unwinnable cluster-f*ck that keeps sucking up money and sending back bodybags from Afghanistan. But what about 5th Generation Warfare?

5GW is the next step along: network vs. network. It’s the sort of war that’s happening right now in the media channels and websites of the United States: an information war, largely bloodless but savagely partisan, driven by irreconcilable ideologies. It’s politics, in other words; politics in a networked age. According to Umair Haque, there are ten rules to learn if you want to win.

He’s directing them at the Obama Administration in light of the public drubbing they’re getting from the obfuscatory tactics of the opposition, but the rules probably apply just as well anywhere, from the world stage to the office where you work. Here are a couple of samples, but you’ll want to read them all:

5. Darwinian counterattacks. What happens after a networked offense? A counter-attack: the remaining nodes link up, share resources, and then launch a portfolio of different counterattacks. The fittest ones — those most threatening to the enemy — survive. It’s like what hedge funds do, except it’s not lame. To enable a Darwinian counter-attack, you’ve got to offer suggestions, tools, and methods for a range of potential counterattacks.

6. Hack your enemy’s weapons. In a 3G or 4G war, you can’t hack the enemy’s guns, bombs, or knives. In a 5G war, you can hack the enemy’s information weapons — and that’s an often explosively powerful tactic. “Death Panels”? Call them “Life Panels” instead, explain that old Republican Senators already benefit from them — and enjoy your rise to the top of Google.

[via Global Guerrillas]

Eyes in the sky: ubiquitous real-time aerial surveillance

Watchkeeper - British unmanned aerial vehicle

The United States Army has seen a lot of success with airborne surveillance systems in recent years, and that success has whetted its appetite for more. Wired’s Danger Room blog takes a look at the current state of the art as well as the latest ground surveillance specifications DARPA is bandying around to potential contractors:

In February we reported on Darpa’s Autonomous Real-time Ground Ubiquitous Surveillance – Imaging System (ARGUS-IS), a 1.8 gigapixel flying eye which will be mounted in a 500-pound pod carried by a Predator or A160 Hummingbird robocopter. The ARGUS-IS makes for an impressive camera, with the resolution and processing power to track a large number of separate items including “dismounts” — people on foot — over a wide area, as well as “a real-time moving target indicator for vehicles throughout the entire field of view in real-time.”

But ARGUS-IS is already looking old. Now the Army is asking for something even more powerful. In a new request for solicitations, it outlined the concept for a novel visible/infrared sensor that will cover a much larger area on the ground — with much higher resolution.

The sensor is required to be lightweight with low power consumption and to have significantly lower operating costs compared to existing systems, and must be able to operate from small aircraft, either manned or unmanned. In terms of specifics, the Army is looking for 2.3 gigapixels running at two frames per second. By my reckoning, this suggests continuous coverage of an area of around sixty-two square miles at 0.3m resolution with a single sensor. That’s quite a step up from Angel Fire, which covers a tenth of the area at much lower resolution.
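That reckoning is easy to sanity-check with a naive pixel budget (a sketch: it assumes every pixel maps cleanly onto one 0.3m ground cell, with no overlap between sensor sub-arrays and no stabilisation margin, so treat it as an upper bound):

```python
# Naive pixel-budget estimate of ground coverage for a gigapixel sensor.
# Assumes one pixel per ground cell; real systems lose usable area to
# sub-sensor overlap and stabilisation margins, so this is an upper bound.
SQ_M_PER_SQ_MILE = 1609.344 ** 2  # metres per mile, squared (~2.59e6)

def coverage_sq_miles(gigapixels: float, ground_res_m: float) -> float:
    pixels = gigapixels * 1e9
    return pixels * ground_res_m ** 2 / SQ_M_PER_SQ_MILE

print(round(coverage_sq_miles(2.3, 0.3)))  # ~80 sq miles before overlap losses
```

The quoted sixty-two square miles sits comfortably inside that bound once overlap between sub-sensors is accounted for.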

That’s a lot of detail, for sure. And we can probably assume that the bulk of the aircraft carrying the hardware will in fact be unmanned; The Guardian reports that the US is now training more drone operators than bomber and fighter pilots combined.

Three years ago, the service was able to fly just 12 drones at a time; now it can fly more than 50. At a trade conference outside Washington last week, military contractors presented a future vision in which pilotless drones serve as fighters, bombers and transports, even automatic mini-drones which attack in swarms.

Five thousand robotic vehicles and drones are deployed in Iraq and Afghanistan. By 2015, the Pentagon’s $230bn (£140bn) arms procurement programme Future Combat Systems expects 15% of America’s armed forces to be robotic. A recent study ‘The Unmanned Aircraft System Flight Plan 2020-2047’ predicted a boom in drone funding to $55bn by 2020 with the greatest changes coming in the 2040s.

“The capability provided by the unmanned aircraft is game-changing,” said General Norton Schwartz, the air force chief of staff. “We can have eyes 24/7 on our adversaries.”

The article has a jaw-dropping closer, too:

In Wired for War, author Pete Singer speculates the machines are harbingers of a new era of “cost-free war”.

“It’s an historic change,” said Singer. “Going to war has meant the same thing for 5,000 years. Now going to war means sitting in front of a computer screen for 12 hours. Then you go home and talk to your kids about their homework.”

Yeah, cost-free war! Awesome! Well, it’s not cost-free for the brown people caught in the crossfire, but hey, it’s hard to care about them so much when they’re just pixels on a screen, AMIRITE? [main story via NextBigFuture; image by skuds]

Terrorist strategy as an auto-immune response

Alex at the Yorkshire Ranter reviews The Accidental Guerrilla by David Kilcullen and discusses how the strategy behind Al-Qa’ida-inspired terrorism can be thought of in the same terms as an auto-immune disease:

Specifically, auto-immune war is a strategy, but its tactical implementation is the creation of false positive responses. Security obsession gums up the economy with inefficiencies. Terrorism terrorises the public; security theatre keeps them that way. As Kilcullen points out, every day, millions of travellers are systematically reminded of terrorism by government security precautions. Profiling measures subject entire communities to indignity and waste endless hours of police time. Vast sums of money are spent on counterproductive equipment programs and unlikely techno-fixes. National identity cards and monster databases are the specific symptoms of this pathology in the UK, just as idiotic militarism is in the US.

It is the best description I have come across of how terrorism actually works as a method of warfare. Readers might also be interested in Wasp by Eric Frank Russell, which deals with terrorism in a practical and humorous fashion.

[image from Dagfinn Ilmari Mannsåker on flickr]

And I, for one, welcome our new robot scientists

Robots are ideal for doing human tasks that are repetitive, like screwing lids on cosmetic bottles or welding car panels… and now making scientific discoveries. Aberystwyth University’s “Adam” machine is “the first automated system to complete the cycle from hypothesis, to experiment, to reformulated hypothesis without human intervention”.

The demonstration of autonomous science breaks major ground. Researchers have been automating portions of the scientific process for decades, using robotic laboratory instruments to screen for drugs and sequence genomes, but humans are usually responsible for forming the hypotheses and designing the experiments themselves. After the experiments are complete, the humans must exert themselves again to draw conclusions.

[snip]

They armed Adam with a model of yeast metabolism and a database of genes and proteins involved in metabolism in other species. Then they set the mechanical beast loose, only intervening to remove waste or replace consumed solutions. […]

Adam sought out gaps in the metabolism model, specifically orphan enzymes, which scientists think exist, but which haven’t been linked to any parent genes. After selecting a desirable orphan, Adam scoured the database for similar enzymes in other organisms, along with the corresponding genes. Using this information, it hypothesized that similar genes in the yeast genome may code for the orphan enzyme.

The process might sound simple — and indeed, similar “scientific discovery” algorithms already exist — but Adam was only getting started. Still chugging along on its own, it designed experiments to test its hypotheses, and performed them using a fully automated array of centrifuges, incubators, pipettes, and growth analyzers.

After analyzing the data and running follow-up experiments — it can design and initiate over a thousand new experiments each day — Adam had uncovered three genes that together coded for an orphan enzyme. King’s group confirmed the novel findings by hand.
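Stripped of the robotics, the loop described above is conceptually tiny. Here’s a toy sketch of the hypothesise-experiment-refine cycle (every class, function, enzyme and gene name below is a hypothetical stand-in, not Adam’s actual software or data):

```python
# Toy sketch of an autonomous science loop: find a gap in the model (an
# orphan enzyme), hypothesise candidate genes from a cross-species lookup,
# "test" each hypothesis, and fold confirmed links back into the model.
class MetabolismModel:
    def __init__(self, known_links):
        # known_links: {enzyme: [genes]}; enzymes with no genes are orphans
        self.links = dict(known_links)

    def orphan_enzymes(self):
        return [e for e, genes in self.links.items() if not genes]

    def link(self, enzyme, gene):
        self.links[enzyme].append(gene)

def autonomous_cycle(model, similar_genes, run_experiment):
    """One hypothesis -> experiment -> refinement pass, no human in the loop."""
    confirmed = {}
    for orphan in model.orphan_enzymes():
        for gene in similar_genes(orphan):       # hypothesis generation
            if run_experiment(gene, orphan):     # automated assay
                model.link(orphan, gene)         # model refinement
                confirmed.setdefault(orphan, []).append(gene)
    return confirmed

# Toy run: one orphan enzyme, a lookup proposing three candidate genes,
# and an "assay" that confirms two of them.
model = MetabolismModel({"2A5P": []})
proposed = {"2A5P": ["YGR043C", "YLR354C", "YDR127W"]}
hits = {("YGR043C", "2A5P"), ("YLR354C", "2A5P")}
found = autonomous_cycle(model,
                         lambda e: proposed[e],
                         lambda g, e: (g, e) in hits)
print(found)  # {'2A5P': ['YGR043C', 'YLR354C']}
```

The real achievement, of course, is closing that loop through physical centrifuges and incubators rather than a lookup table.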

Score one for the Singularitarians – autonomous systems that can follow the scientific method without supervision would surely be a component of an emergent self-improving artificial intelligence, if I understand the theory correctly. [image by jurvetson]

And why not outsource our more tedious scientific tasks to robot underlings? After all, we’ve been fairly unhesitating in our rush to do the same with warfare… no matter how ethically blurred an idea that may be: