Dark Fiction Magazine: monthly moody audio fiction

Paul Raven @ 29-10-2010

What’s better than plugging the launches of exciting new genre fiction websites? Plugging the launches of exciting new genre fiction websites put together by people who are a) awesome and b) your friends, that’s what. So, press release time:

Beginning Oct 31st (Halloween), Dark Fiction Magazine will be launching a monthly magazine of audio short stories. This is a free service designed to promote genre short fiction to an audience of podcast and radio listeners. A cross between an audio book, an anthology and a podcast, Dark Fiction Magazine is designed to take the enjoyment of short genre fiction in a new and exciting direction.

Dark Fiction Magazine publishes at least four short stories a month: a mix of award-winning shorts and brand new stories from both established genre authors and emerging writers. Each episode will have a monthly theme and feature complementary tales from the three main genres – science fiction, fantasy and horror.

Dark Fiction Magazine’s first episode, themed The Darkness Descends, will feature four fantastical stories:

  • ‘Maybe Then I’ll Fade Away’ by Joseph D’Lacey (exclusive to Dark Fiction Magazine)
  • ‘Pumpkin Night’ by Gary McMahon
  • ‘Do You See?’ by Sarah Pinborough (awarded the 2009 British Fantasy Society Short Story Award)
  • ‘Perhaps The Last’ by Conrad Williams

Which is a pretty good way to start, I’d say. And there’s more good stuff heading down the pike:

Lined up for future episodes are Pat Cadigan, Cory Doctorow, Jon Courtenay Grimwood, Ramsey Campbell, Rob Shearman, Kim Lakin-Smith, Ian Whates, Lauren Beukes, Mark Morris, Adam Nevill, Gareth L Powell, Jeremy C Shipp, Adam Christopher, and Jennifer Williams, among others.

Sweeeeeet… bravo to Del and Sharon, who’ve been grafting away at this project between their day-jobs for ages now. Go get yourself an ear-full of genre fiction on Sunday night, why don’t you?

[ Disclosure: I have been invited to do some narration for DFM. Don’t let that put you off, though. 🙂 ]


Bill Gibson reads from Zero History

Paul Raven @ 26-10-2010

Had an email from some nice people at KQED, who wanted us all to know that they’ve a fifteen-minute audio chunk of William Gibson reading an excerpt from his latest novel Zero History. And best of all for us click-lazy webhounds, there’s a nifty little embedded player for it which I’m gonna drop right here:

Enjoy!


Dunesteef – podcast genre fiction zine

Paul Raven @ 10-03-2010

Here’s a heads-up for podcast fans from the Futurismic mailbag – Dunesteef is an audio fiction magazine that mainly deals in material with SF/F/H tropes, and they’ve just run a version of Jason Stoddard’s “Willpower”.

Looks like they’re knocking out about ten stories per quarter, which is pretty respectable… so those of you with the (enviable) spare time in which to listen to great stories read aloud should probably add it to your podcast aggregator, RSS reader or preferred software of equivalent function. 🙂


Software that learns to recognise faces and voices like a child

Paul Raven @ 01-12-2009

A computer scientist at the University of Pennsylvania has decided to mimic the way children learn to recognise faces and voices in order to speed up the artificial learning curve of intelligent systems:

Using novel learning algorithms that combine audio, video, and text streams, Taskar and his research team are teaching computers to recognize faces and voices in videos. Their system recognizes when someone in the video or audio mentions a name, whether he or she is talking about himself or herself, or whether he or she is talking about someone in the third person. It then maps that correspondence between names and faces and names and voices.

“An intelligent system needs to understand more than just visual input, and more than just language input or audio or speech. It needs to integrate everything in order to really make any progress,” Taskar says.

The information Taskar’s team feeds into the system is free training data harvested from the Internet. Attempts to teach computers visual recognition in the pre-Internet age were hampered in large part by a lack of training content. Today, Taskar says, the Internet provides a “massive digitization of knowledge.” People post videos, comments, blogs, music, and critiques about their favorite things and interests.

Hah! And they said YouTube would never do any real good! Taskar’s computer seems destined for a life of increasing frustration with irresolvable plot lines, though, as they’re training it by showing it episodes of Lost:

As Taskar’s team feeds more data about Lost into the computer—such as video clips, scripts, or blogs—the system improves at identifying people in the video. If, for example, a clip contains footage of characters Kate and Ana Lucia, after being taught, the computer will recognize their faces.

“The algorithm is learning this from what people say, or from screenplays as well,” Taskar adds. “The screenplay doesn’t tell you who is who, but it tells you there’s a scene with [two characters] talking to each other.”
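Taskar’s actual system is far more sophisticated, but the core trick he describes above, turning weak scene-level supervision (“the screenplay says these two characters are in this scene”) into name-to-face labels, can be sketched as a toy co-occurrence counter. All the names, cluster IDs and data below are made up for illustration:

```python
from collections import defaultdict

def associate_names_to_faces(scenes):
    """Given scenes as (names mentioned in the script, face clusters
    detected in the matching clip), count name/face co-occurrences and
    assign each name to its most frequently co-occurring face cluster."""
    counts = defaultdict(lambda: defaultdict(int))
    for names, faces in scenes:
        for name in names:
            for face in faces:
                counts[name][face] += 1
    # Pick the face cluster each name co-occurred with most often.
    return {name: max(face_counts, key=face_counts.get)
            for name, face_counts in counts.items()}

# Hypothetical training data: an ambiguous two-person scene plus two
# solo scenes that let the counts disambiguate who is who.
scenes = [
    ({"Kate", "Ana Lucia"}, {"face_0", "face_1"}),
    ({"Kate"}, {"face_0"}),
    ({"Ana Lucia"}, {"face_1"}),
]
print(associate_names_to_faces(scenes))
# → {'Kate': 'face_0', 'Ana Lucia': 'face_1'}
```

The two-person scene alone can’t tell the characters apart; it’s the solo scenes that tip the counts, which is roughly why feeding the system more clips and scripts improves its identifications.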

Taskar says the information the research has produced can be helpful in many ways, particularly in searching videos for content. Currently, if a father is searching for a photo of his daughter playing with the family dog in his gigabytes of photos and videos on his hard drive, unless the photo is tagged “daughter playing with dog,” chances are he isn’t going to be able to find it.

Well, that’s your consumer-level pitch, sure, but the system will be too large and ungainly (and expensive) for Joe Average for a long time. Taskar should probably talk to the UK government… that panoply of CCTV cameras keeps growing, and it costs big money to hire people to watch their output. And what could possibly go wrong with putting an automated recognition system in charge of crime prevention? [image by bixentro]


Next-gen hearing aids have iPod jacks

Paul Raven @ 13-08-2009

File under “elective implant technology that I don’t need but really wish I could afford anyway”: the bone-anchored next-generation hearing aid with audio jack input options.

Older-style hearing aids amplify all sounds, making it almost impossible for wearers to hear conversations in noisy environments. They also interfere with frequencies used by mobile and fixed phones and often emit high-pitched whistling sounds. But the newer processors, costing about $6000 each, shut out background noise, giving users up to 25 per cent better hearing, and can be attached directly to MP3 music players or wireless headsets for talking on the phone, Cochlear’s territory manager, Katrina Martin, said.

They were useful for people with congenitally blocked middle ears, chronic infections that had eaten away tiny bones in the middle ear used for sound conduction, or babies born with closed ear canals, she said.

The processors must be removed for showers or swimming but can last up to 15 years.

Once you’ve got that basic hardware installed, the sky’s the limit for crazy bolt-ons and extras. Real-time digital signal processing, on-board recording and playback… the first person to write an open-source filter for screening out people on public transport talking loudly into their phones is going to be very popular. [via BoingBoing; image by jessicafm]

