Tag Archives: security

Fabber viruses

Among the obligatory swathe of spoof posts for 1st April this year was one from 3D printing outfit Shapeways, who claimed to have fallen victim to the first proof-of-concept virus for fabricators [via Fabbaloo].

The best spoofs always have an element of truth, or at least truthiness. While Shapeways have fabricated this particular incident (arf!), its believability hinges on the fact that 3D printing is a networked technology, and that everything can and will be hacked.

Sven Johnson has already sent back reports from an imperfect future regarding 3D spam, which is likely to become as ubiquitous as it is for email and fax machines (which some people really do still use, apparently). But is there any scope for piggybacking illegal or exploitative content on legitimate 3D design files, in some form of steganography? I don’t know enough about viruses or 3D design software to be certain, but my guess would be that if someone can think of a way to make a fast buck from it, it’s going to happen eventually.
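Just to illustrate how much slack space a 3D design file offers, here’s a toy Python sketch – entirely hypothetical, and nothing to do with any real exploit – that hides a payload in the per-triangle attribute fields of a binary STL file, two bytes that most slicing software typically ignores:

```python
import struct

def make_stl(triangles):
    """Build a minimal binary STL: an 80-byte header, a uint32 triangle
    count, then 12 float32s plus a 2-byte attribute field per triangle."""
    out = b"stego demo".ljust(80, b"\x00")
    out += struct.pack("<I", len(triangles))
    for tri in triangles:
        out += struct.pack("<12f", *tri) + b"\x00\x00"
    return out

def embed(stl, payload):
    """Hide payload bytes in the per-triangle attribute fields,
    leaving the geometry itself untouched."""
    count = struct.unpack_from("<I", stl, 80)[0]
    if len(payload) > count * 2:
        raise ValueError("model too small for payload")
    out = bytearray(stl)
    padded = payload.ljust(count * 2, b"\x00")
    for i in range(count):
        offset = 84 + i * 50 + 48  # attribute field of triangle i
        out[offset:offset + 2] = padded[2 * i:2 * i + 2]
    return bytes(out)

def extract(stl):
    """Recover whatever was stashed in the attribute fields."""
    count = struct.unpack_from("<I", stl, 80)[0]
    data = b"".join(
        stl[84 + i * 50 + 48:84 + i * 50 + 50] for i in range(count)
    )
    return data.rstrip(b"\x00")
```

The file still opens and prints exactly as before; any format with fields that renderers don’t inspect is a potential hiding place. Whether anyone can make a fast buck from it is, as I say, the real question.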

Merry Christmas; I got you a panopticon

Two quick links; I’ll leave you to do the math yourself. First up – ‘smart’ CCTV system learns to spot suspicious behaviour with a little help from its human operators:

… a next-generation CCTV system, called Samurai, which is capable of identifying and tracking individuals that act suspiciously in crowded public spaces. It uses algorithms to profile people’s behaviour, learning about how people usually behave in the environments where it is deployed. It can also take changes in lighting conditions into account, enabling it to track people as they move from one camera’s viewing field to another.

[…]

Samurai is designed to issue alerts when it detects behaviour that differs from the norm, and adjusts its reasoning based on feedback. So an operator might reassure the system that the person with a mop appearing to loiter in a busy thoroughfare is no threat. When another person with a mop exhibits similar behaviour, it will remember that this is not a situation that needs flagging up.
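The learn-from-feedback loop described there is conceptually simple; here’s a deliberately crude Python sketch – my own invention, nothing to do with the actual Samurai algorithms – that flags events far from anything previously judged normal, and lets an operator’s dismissal teach it otherwise:

```python
import math

class FeedbackAnomalyDetector:
    """Toy version of the learn-from-the-operator loop: flag events
    unlike anything seen as normal, and let a human reclassify false
    alarms so that similar events pass unremarked next time."""

    def __init__(self, threshold=1.0):
        self.normal = []          # feature vectors judged unremarkable
        self.threshold = threshold

    def observe(self, event):
        """Return True if the event should be flagged as suspicious."""
        if not self.normal:
            self.normal.append(event)
            return False
        nearest = min(math.dist(event, n) for n in self.normal)
        if nearest > self.threshold:
            return True            # unlike anything seen before
        self.normal.append(event)  # close to normal: absorb it
        return False

    def dismiss(self, event):
        """Operator feedback: 'the person with the mop is no threat'."""
        self.normal.append(event)
```

The real systems doubtless use something far fancier than nearest-neighbour distances, but the shape of the loop – machine flags, human corrects, machine remembers – is the same.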

And secondly – a facial recognition door lock system retailing for under UK£300.

… can store and register up to 500 faces thanks to an internal dual sensor and two cameras. This, claims the manufacturer, “allows it to establish an incredible facial recognition algorithm in a fraction of a second”. Importantly, the system also works at night. A 3.5 inch screen and touch keypad are also included.

The system can also be used to record attendance in an office. There’s a USB and Ethernet port so that managers can download or keep track of who arrives and leaves the office when.

I have the sudden urge to talk at length to people about the findings of the Stanford Prison Experiment.

The Hail Mary Cloud: slow but steady brute-force password-guessing botnet

Did you hear about the recent exploit of jailbroken jesusPhones? Yeah, the Rick-rolling one (though that wasn’t strictly the original exploit, rather some Australian script-kiddie’s repurposing of a Dutch exploit from earlier in the month); to sum it all up in a sentence, bad things can happen to your hardware if you install software without changing the default password. As a sensible and experienced web denizen, you knew that already, of course.

But when you set or change a password, you’d better make the effort to think up a good one. Countless studies have shown how easy it is for black-hat types to guess the most common passwords (or alternatively social-engineer them out of you), but the ease of guessing looks set to increase rapidly, thanks to something one free software geek from Norway is calling the Hail Mary Cloud. [image by Anna Gay]

Yeah, I know, the pop-culture reference is a bit obscure, so I’ll sum it up for you: the Hail Mary Cloud is essentially a brute-force password-guessing botnet that has been scraping away at SSH daemons in recent months. A Mechanical Turk method of botnet expansion, in other words; why wait for someone to click on a spam email link when you can prise open a back-door on a webserver somewhere? [via SlashDot]

Each attempt in theory has monumental odds against succeeding, but occasionally the guess will be right and they have scored a login. As far as we know, this is at least the third round of password guessing from the Hail Mary Cloud, but there could have been earlier rounds that escaped our attention.

The fact that we see the Hail Mary Cloud keeping up the guessing is a strong indicator that there are a lot of guessable passwords and possibly badly maintained systems out there, and that even against the very long odds they are succeeding often enough in their attempts to gain a foothold somewhere that it is worth keeping up the efforts. For one thing, the cost of using other people’s equipment is likely to be quite low.

There are a lot of things about the Hail Mary Cloud and its overseers that we do not know. People who responded to the earlier articles with reports of similar activity also reported pretty consistently something like a sixty to seventy percent match in hosts making the attempts.

With 1767 hosts in the current sample it is likely that we have a cloud of at least several thousand, and most likely no single guessing host in the cloud ever gets around to contacting every host in the target list. The busier your SSH daemon is with normal traffic, the harder it will be to detect the footprint of Hail Mary activity, and likely a lot of this goes undetected.

If you’re worried, you’re thinking right – even the most complex of passwords can be guessed if you’ve got enough processor cycles (and available attempts) to spare. If the Hail Mary Cloud grows big enough, the era of the password as an even partially effective security method may be over… so start genning up on public key encryption now and avoid the rush.
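If you run a server of your own and want to know whether the cloud has come scraping at your door, the signature to look for is many different hosts each making only a handful of failed attempts – the opposite of a single noisy attacker. A rough Python sketch (it assumes standard OpenSSH “Failed password” log lines, and the thresholds are plucked from the air):

```python
import re
from collections import defaultdict

# Matches common OpenSSH failure entries, e.g.
#   "Failed password for root from 10.0.0.1 port 22 ssh2"
#   "Failed password for invalid user admin from 192.0.2.9 port 22 ssh2"
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def hail_mary_suspects(log_lines, min_hosts=5, max_per_host=3):
    """Return usernames targeted by many distinct hosts, each making
    only a few attempts -- the slow, distributed guessing pattern."""
    attempts = defaultdict(lambda: defaultdict(int))  # user -> host -> count
    for line in log_lines:
        m = FAILED.search(line)
        if m:
            user, host = m.groups()
            attempts[user][host] += 1
    return {
        user: sorted(hosts)
        for user, hosts in attempts.items()
        if len(hosts) >= min_hosts and max(hosts.values()) <= max_per_host
    }
```

A lone host hammering one account will trip any off-the-shelf rate limiter; it’s the low-and-slow distributed pattern above that slips under the radar, which is rather the point of the whole exercise.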

Private security forces on the rise in Detroit

Detroit is arguably the reluctant poster-child for the bleeding edge of economic decline in the US, and as such it’s the place to watch to see how things might begin developing elsewhere. Which means that as the police – stretched by underfunding and escalating workload – concentrate their attentions elsewhere, we might start seeing an increase in private security firms patrolling some neighbourhoods. [via GlobalGuerillas; image by jessicareader]

Detroit’s problems come chiefly from its huge number of vacant foreclosed properties, which act as a magnet for criminality. The residents of those neighbourhoods who’ve managed to hold on to their homes (despite their plunge in value) are keen to see that their value doesn’t drop further, and so they’re willing to pay for surveillance and a visible presence – up to $30 per month, apparently.

But the line between surveillance and enforcement is a thin one, and as new security outfits proliferate, it’s not a stretch to imagine persons of less than scrupulous morals getting in on the only booming business in a broken town. And then it’s a short step to checkpoints at the barricades between neighbourhoods, armed patrols, CCTV saturation as only ever previously seen in the happy-go-lucky UK… sure, it’s a pessimistic scenario, but I wouldn’t say unrealistically so.

More likely is that the potential threat of such an outcome will allow the gated community business model to proliferate. After all, property is cheap in the worst affected areas… so you go in, you buy up a few streets, repair the broken buildings, install security infrastructure, and let out the properties with privacy and law enforcement served as a side-dish, paid for as part of the rent or mortgage payments. Hello, burbclaves.

Secure your privacy: tell everyone everything

What if the best way to protect against identity theft was not to hide the fingerprints of your digital daily life, but to expose them to public scrutiny? It sounds like an Orwellian contradiction, but Alex Pentland of MIT’s Human Dynamics Lab believes that allowing limited access to logs of our electronic activities is actually much safer than relying on passwords or keys which can be phished or stolen. [image by hyku]

“You are what you do and who you do it with,” says Pentland. Researchers and corporations have realised the potential of such data mining, he points out. “It is already happening and it is time for people to get a stake.”

If people gain control of their own personal data mines, rather than allowing them to be built and held by corporations, they could use them not only to prove who they are but also to inform smart recommendation systems, Pentland says.

He recognises that allowing even limited access to detailed logs of your actions may seem scary. But he argues it is safer than relying on key-like codes and numbers, which are vulnerable to theft or forgery.

If I understand my cryptographic principles correctly (and I may well not, so do put me straight in the comments if I’m wrong), Pentland is proposing something a little bit like a public key verification system. Perhaps in this case “your best defence is a good offence”… the sort of thing that could easily be combined with some sort of reputation-based currency like whuffie? And hey, he’s advising we take our data back from the corporations that already scrape at it when we’re not watching. Makes sense, right?
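For the unfamiliar, the core trick of public key systems is that anyone can verify a claim using the published half of a key, while only the holder of the private half can produce it. A toy Python demonstration using textbook RSA with laughably small primes – fine for building intuition, utterly useless for actual security:

```python
# Textbook RSA with tiny primes, purely to show the asymmetry:
# the private exponent d signs, the public pair (e, n) verifies.
p, q = 61, 53
n = p * q        # 3233 -- published
e = 17           # public exponent -- published
d = 2753         # private exponent: e * d == 1 (mod (p-1)*(q-1))

def sign(message_digest):
    """Only the holder of d can produce this value."""
    return pow(message_digest, d, n)

def verify(message_digest, signature):
    """Anyone with the public (e, n) can check the claim."""
    return pow(signature, e, n) == message_digest
```

The appeal for identity systems is that nothing secret ever needs to travel to the verifier – which is exactly what makes handing the whole apparatus to a central body feel so unnecessary.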

“It is not feasible for a single organisation to own all this rich identity information,” Pentland says. What he envisages instead is the creation of a central body, supported by a combination of cellphone networks, banks and government bodies.

That bank could provide “slices” of data to third parties that want to check a person’s identity. That information could be much like that required to verify high-level security clearance in government, says Pentland.
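Mechanically, the “slices” idea is simple enough to sketch – here’s a toy Python version (the field names and records are invented for illustration) in which the data store releases only the fields a given verifier is entitled to see:

```python
# A personal data store that hands third parties a filtered "slice"
# of the activity log rather than the whole thing. All field names
# and records below are made up for the sake of the example.
PERSONAL_LOG = [
    {"when": "2009-11-02", "where": "london", "with": "alice", "spent": 42},
    {"when": "2009-11-03", "where": "detroit", "with": "bob", "spent": 17},
]

def slice_for(permitted_fields, log):
    """Return only the permitted fields from each record."""
    return [
        {k: rec[k] for k in permitted_fields if k in rec}
        for rec in log
    ]
```

The hard part, of course, isn’t the filtering – it’s who gets to decide which fields are “permitted”, which is precisely where Pentland’s central body comes in.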

Uh-oh… suddenly I’m not so keen on this idea, at least in the way Pentland is thinking about it. A peer-to-peer system, fine, I’m down with that… but handing the reins of identity verification over to banks and quangos, after having already admitted that private corporations are prone to abusing the crumbs of data we drop behind us all the time? That’s got to be a step sideways, if not backwards. Pentland has thought about ways to monetise the system, too:

An individual could also allow their data to be used by services like apps on their smartphone to provide personalised recommendations such as restaurant suggestions or driving directions. This has the potential to be much more powerful than the recommender systems built into services like Netflix and iTunes, and would help familiarise users with the value of the approach, says Pentland.

Pentland’s carrot seems to be much the same as the one dangled by the people behind Phorm: “if you’ve nothing to hide, there’s nothing to fear, and we’ll even be able to recommend you stuff that you’re more likely to want to buy!” Maybe I’m just being paranoid; I remain convinced that a certain degree of personal transparency is not only a societal good but a useful tool for personal security, but something about this particular formulation smells very bad indeed.