Tuesday, 3 November 2009

4 Ideas for Building Pirate Havens in #birmingham #bigdebate


Here are my reflections on The Big Debate event. Speaker-wise, Charles Leadbetter and Toby Barnes were great, but David Harris must have thought he was speaking at an advertising event circa 2000. Far too long was spent on the speakers talking, though, and not enough on the rest of us having some structured discussion. The final "ideas" list could have been produced in the first 10 minutes and contained no hard actions.

But I don't want to spend this post griping, I want to contribute. So here are my "Top 4" actions (sorry, couldn't get to 5!) that I'd like to see taken as part of an "invite the pirates" initiative, and which would certainly help the pirates like myself who are already here. The first three should be doable quickly and cheaply; the fourth might take a bit longer but can at least be projectised.

1. Open up the local HEI student cafe areas to SMEs, making them co-working spaces where collaboration can naturally happen.

I'm lucky that I spend a lot of time visiting regional and other HEIs. Most have nice cafes full of students working in ad-hoc groups, with wi-fi and power. The Saltire Centre at Glasgow Caledonian University even has lightweight igloos and tents that students can drop over the tables to make ad-hoc project spaces. Whilst my meetings usually involve staff rather than students, there is a) a nice buzz about being around students, and b) from screens and books and posters you pick up a bit about what is going on.

So why don't we "formalise" this? Why not let SMEs make use of these same spaces, so that we can build informal links with the students - growing their awareness of us and of real business, and our awareness of their individual and collective capability. And we might even agree to give the odd guest lecture for the privilege!

2. Ensure that every HEI has an RSS feed or web service giving a) key details of each member of their research/development staff and b) details of each paper/report produced by the HEI

One area that we really struggle with is finding out who in our local HEIs is expert in the areas we are interested in. Over the last 5 years we've been getting there, but it takes a lot of effort and I'm sure our picture is not 100% complete. The Index Voucher scheme has been useful - quite apart from the cash - in finding out who's who, but we need something better.

So why doesn't every HEI clearly advertise an RSS feed or web service where I can track by keyword any paper, project, report, activity or person in the areas that interest me? We're not talking about a portal - something they have to maintain separately or I have to go to separately - but a feed straight out of what one would hope would be existing systems, and straight into my existing RSS reader.
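
To show how little would be needed on the consuming side, here's a minimal Perl sketch (the feed URL and keyword list are made-up illustrations, not a real HEI service) of filtering such a feed down to just the items I care about:

```perl
#!/usr/bin/perl
# Minimal sketch: filter a (hypothetical) HEI research RSS feed by keyword.
use strict;
use warnings;
use LWP::Simple;
use XML::RSS;

my $feed_url = 'http://www.example-hei.ac.uk/research/all.rss';   # placeholder
my @keywords = ('virtual world', 'artificial intelligence', 'visualisation');

my $content = get($feed_url) or die "Couldn't fetch $feed_url\n";
my $rss = XML::RSS->new;
$rss->parse($content);

foreach my $item (@{ $rss->{items} }) {
    my $text = lc(($item->{title} || '') . ' ' . ($item->{description} || ''));
    foreach my $kw (@keywords) {
        if (index($text, lc $kw) >= 0) {
            print "$item->{title}\n  $item->{link}\n";
            last;                 # one match is enough to keep the item
        }
    }
}
```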

3. Every company should post an RSS feed of student project ideas

The flipside of 2. When I visit HEIs I'm often asked if I have ideas for student projects - but chances are that I have my ideas at a different time from when they want the projects. And the lecturers appear to be crying out for real commercial projects. So every SME should post ideas on their web site as they occur to them, make them available on an RSS feed, and let the HEIs aggregate and search them as needed. Again no portal, but a great mash-up of project ideas. The benefits should be obvious - students and lecturers get real ideas and commercial input, companies get interesting work done for free.

We could even extend the model to embrace freelancers and bounty projects (Pirates again), and maybe even peer-to-peer B2B collaboration.

This is one we can do for ourselves, and the Daden project feed will go live by the end of the week.
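
For anyone wondering how much work such a feed actually is, here's a rough sketch (the channel details and the sample idea are invented for illustration, not the actual Daden feed) of generating one with Perl's XML::RSS:

```perl
#!/usr/bin/perl
# Sketch: publish an SME's student project ideas as an RSS 2.0 feed.
# Channel details and the sample item are illustrative only.
use strict;
use warnings;
use XML::RSS;

my $rss = XML::RSS->new(version => '2.0');
$rss->channel(
    title       => 'Example SME - Student Project Ideas',
    link        => 'http://www.example-sme.co.uk/projects/',
    description => 'Project ideas we would love a student team to take on',
);

# In practice the ideas would come from a database or a simple text file.
$rss->add_item(
    title       => 'Visualise a live public data feed in a virtual world',
    link        => 'http://www.example-sme.co.uk/projects/1',
    description => 'Take an open data feed and render it as 3D objects in '
                 . 'Second Life or OpenSim, updating in near real time.',
);

print $rss->as_string;
```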

4. Create urban (and rural) based hubs

The recent debate about the Digital District had a lot of people calling for the emphasis to be city-wide, not just Digbeth. Talking to people in the Birmingham Library Service, there's a lot of discussion about how neighbourhood libraries are integrated with the new Central Library. With others there's been talk about how we can make better use of school premises - particularly as they get revamped for the 21st Century under Building Schools for the Future. And Moseley now has the Moseley Exchange, and I've always been a great fan of the old Telecottage concept - no longer as a tech hub but as a co-working hub. So whilst I see the merit of a focussed physical centre for our Digital efforts (and Digbeth is nicely on the #50 bus route), I can also see the benefit in finding a way to federate that District out, through a series of hubs based on existing infrastructure, into the urban villages and neighbourhoods of Birmingham, and into the market towns and rural villages of the West Midlands.


***Imported from old blog***

Tuesday, 27 October 2009

Loebner 2009 Video

Great video report from Erwin Van Lun of the 2009 Loebner Prize. Having read (and analysed) the transcripts, it's hardly surprising that none of the entries came close to winning. I'm looking closely at entering in 2010.


***Imported from old blog***

Where will NASA send its astronauts next?

Nice summary by New Scientist of the Augustine report and what NASA's priority should be. The scores work out as:

  • 1: Status Quo: -15
  • 2: ISS and Minimum Moon: -5
  • 3: Moon but no ISS: -1
  • 4: Moon and ISS: 4 - 5
  • 5: Deep Space (Moon, asteroids, Mars): 7 - 9

Full scoring (PDF)


***Imported from old blog***

Friday, 2 October 2009

Virtually Useless - A Response #fote09 #fote09vw


Seeing as I am debating SL and Virtual Worlds with @AJCann on a panel at FOTE09 today, I thought I ought to do a point-by-point response to his infamous blog post - as he does do a pretty good job of summarising typical objections to Second Life.

Badly designed user interface.

Fair point, I'm always accidentally closing windows I had open, and users often miss controls because they've slid off the bottom of the screen. But:

  • SL has a new user interface coming and it's very different; based on the demo I saw from Tom Hale (LL CPO) a few months ago it will make a BIG difference to usability. We should see it in the next few months.
  • Increasingly we're developing HUDs when we develop projects in SL, so the HUD actually becomes the primary user interface (after "in-world"), rather than all the SL menu choices. That means you can tailor the HUD and UI to exactly what the user needs for that exercise (whilst keeping to common conventions), significantly increasing usability.
  • If you really don't like the LL interface then, since the client is open source, you can write your own. Many people use browsers like Emerald and Impromptu and Katherine's excellent work. And there's Snowglobe if you want a more "official" open source browser. In fact one of the things that we've been talking about is creating a "training" client which rips out almost every existing SL menu option, since most students don't need them, and leaves you to use HUDs to "skin" the client to what you want. Forterra's Olive takes this very slimmed-down client approach and it works well for ordinary, non-power users.

Ports and Security

Got us there. Helping commercial and educational clients to get SL through their firewall is always a major headache, and LL know it, but I can't see a solution any time soon. However you can always move to OpenSim and put that behind your firewall, or use LL's own Project Nebraska. But yes, it's not an issue that some other virtual worlds have (although most commercial ones are designed for behind-firewall operation anyway), and it is one that LL ultimately needs to crack. Then again, I remember the first days of the web when my employer (a FTSE100 plc) would only let us have the web on standalone PCs with their own dial-up connection to the Internet and no LAN connectivity, for "security reasons".

New Client Versions

The days of monthly mandatory updates are long gone. I think there have only been two major releases (1.22 and 1.23) in the past year, and I don't think either was mandatory (unless you wanted to access Adult content!). Mind you, I'm never again going to complain about downloading the SL client and its PC requirements after downloading the 1.2GB Blue Mars virtual world client and discovering it failed to run on my laptop (which happily runs two SL Windlight sessions).

Lack of Imagination

Yep - we appear to spend half our lives at Daden persuading clients NOT to build a duplicate copy of their RL campus. Yes, put something in there to "anchor" the experience (for Southampton University it was actually the stream through the campus), but otherwise just build the spaces you need for the sort of training and teaching you want to do. You can even use Holodeck approaches to repurpose a space instantly (as at Coventry University), or use the same simulation space for multiple subjects (a street scene we did for UFI worked for both customer training and basic communications skills). We are, though, still only seeing quite "normal" uses of SL. We talk about a spectrum that runs something like the following:

  • Level 1 - Using the virtual world purely as a remote communications/meeting tool, just using the VOIP and IM chat features, PowerPoint slides and maybe a whiteboard.
  • Level 2 - Using the virtual world to create simulations which would be just too costly to stage in RL - whether it's a paramedic street scene, a moon landing, the inside of a jet engine or strands of DNA hundreds of metres high.
  • Level 3 - "Not Possible In Real Life": doing stuff that you can ONLY do in a virtual world, usually because in the real world we have things like gravity, or the inability to create 3D holographic projections or perfect robots.

I'd guess that about 50% of educational projects are at Level 1, 45% at Level 2 and only 5% at Level 3. But this isn't an issue with SL; it's about people's lack of familiarity with the possibilities of the system - and that familiarity will come, and with it more creative applications.

Cost

There can only be two real commercial reasons for using virtual worlds:

  • They let you deliver the same level of education for a lower overall cost
  • They let you deliver better education for the same overall cost

The commercial sector appears to be better at tracking ROI for virtual world education than the academic sector. The recent Serious Virtual Worlds conference at the Serious Games Institute in Coventry had a whole host of commercial ROI studies - things like the Highways Agency saving around £65,000 for each day of training they did in the virtual world instead of in RL. Even in academia there are some good quantitative studies emerging - one from Imperial College (presented at ReLive, I think) showed that for their "operating theatre induction", students who undertook the induction in Second Life not only did better than those who had the traditional lecture-based induction, but also did better than those who had their induction in a real operating theatre.

As a "firm of developer building stuff" (and probably having worked in SL with more UK educational institutions than any other company) you could expect us to take exception to the idea that universities are "sub-contracting pedagogy" and "can't use the tools". We always work closely with the front-line educators, and the closer we are to the educators then usually the better the project. We understand some of the pedagogic principles, but there is no way that we manage that side - these are joint projects. Also almost every project we have done has involved skills transfer - so at the end of the project the university staff (and even some students) can manage, maintain and develop the system. Indeed one of the students from the original Coventry project now runs her of SL development business.

And of course through the support of JISC we have been able to create tools like PIVOTE that make creating and maintaining virtual world exercises even easier, and release them as licence-free open-source back to the community. PIVOTE has now had over 120 downloads of its server code, and we host over 20 institutions on our PIVOTE hosting service, and we've had emails from institutional users as far away as Canada and Argentina!

Online Identity

It is a fascinating debate about whether students should have a different identity in SL, or their RL identity. But SL hasn't forced you down the different-identity route for a couple of years now. Through the Registration API you (or we!) can create a set of SL registration pages on your own intranet. You ask students just to enter their real-world first/last name, and then automatically concatenate them to create the SL account name, with a common bespoke surname. So for instance, as David Burden I could have an "office" account of DavidBurden Daden (eat your heart out, Lindens!). You could even bypass the sign-up process completely and just fire your whole student database at the RegAPI and auto-create all the accounts - and bypass any issues about creating multiple accounts from the same IP address! And with OpenSim the possibilities are even greater.
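
As a rough sketch of that flow (NB: the endpoint URL and form field names below are placeholders for illustration - they are not Linden Lab's actual RegAPI calls), the concatenation step really is that simple:

```perl
#!/usr/bin/perl
# Sketch of turning real-world names into SL account names and firing them
# at a registration endpoint. The URL and field names are placeholders,
# NOT the real RegAPI.
use strict;
use warnings;
use LWP::UserAgent;

my $bespoke_surname = 'Daden';                                    # common last name
my $reg_endpoint    = 'https://example.org/regapi/create_user';   # placeholder

sub sl_first_name {
    my ($first, $last) = @_;
    # "David" + "Burden" becomes the SL first name "DavidBurden"
    return ucfirst(lc $first) . ucfirst(lc $last);
}

my @students = ( ['David', 'Burden'], ['Jane', 'Smith'] );   # e.g. from a student database

my $ua = LWP::UserAgent->new;
foreach my $s (@students) {
    my $sl_first  = sl_first_name(@$s);
    my $response  = $ua->post($reg_endpoint, {
        first_name => $sl_first,
        last_name  => $bespoke_surname,
    });
    printf "%s %s: %s\n", $sl_first, $bespoke_surname,
        $response->is_success ? 'created' : 'failed (' . $response->status_line . ')';
}
```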


And if you really want your own face on your avatar there are several third-party applications which will let you do it. BUT the results are generally awful. I still think that a well-crafted SL avatar looks far better than any of the virtual worlds that try to do "real image" faces. Image is important though - the QWAQ business meeting virtual world started out with the "head in a jar" model (actually more like "head on a Lego minifig") and vehemently defended their decision to avoid "real" avatars. But they have now introduced real avatars, and in the recent publicity shots to accompany their re-branding as Telepace (an equally awful name, but for different reasons) only about 1 avatar in 10 is a minifig.

Data Portability

Does he mean Data or 3D Objects?

For 3D objects, yes, it's been the case that what's built in SL stays in SL, but that's mainly because the prim-based model was needed to make the build process an in-world experience that could be learnt in minutes - and that is why SL has been such a creative place, and I for one would rather have creativity than portability - at least at first. But SL is growing up. In OpenSim we're already seeing import and export tools emerging, and some can be used with SL itself. And in 2010 we've been promised mesh import into SL itself - I've seen the demo and it looks awesome. So we might have the best of both worlds - high-definition builds being done in 3D Studio Max and imported into SL, but nice simple build tools still in SL itself. And remember that none of the other virtual worlds let you export buildings - you create them outside the platform in Collada or 3DS or whatever and then bring them into the world, and re-import if you want to change anything.

I'm actually more concerned with data, and here the trick is to keep as much code and ALL data on the web, and treat SL purely as a user interface. LSL is a SCRIPTING language - you aren't meant to be building big applications with it. Use robust tools on web servers (like Java or C# or even Perl) to create your applications and store your data, and then use the excellent llHTTPRequest call and the new HTTP-In event to move commands and data as required between SL and the web. We've used that approach for years and found it pretty much 100% reliable. That is how we built PIVOTE and our chatbots, and Datascape and our Navigator web browser. And of course the by-product of this is that your "application" is now a web service, so you can potentially create interfaces for ANY virtual world, or for the web, or for Flash or even an iPhone.
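
To make the pattern concrete, here's a toy sketch of the web-server half (a plain CGI script; the parameter names and the pipe-delimited command format are just illustrative conventions, not how PIVOTE or our other tools actually talk):

```perl
#!/usr/bin/perl
# Toy web-side "application engine" for an SL object to call via llHTTPRequest.
# The op/avatar parameters and the pipe-delimited replies are illustrative.
use strict;
use warnings;
use CGI;

my $q      = CGI->new;
my $op     = $q->param('op')     || 'status';
my $avatar = $q->param('avatar') || 'unknown';

# All application state lives here on the web server; SL is just the UI.
my %handlers = (
    status => sub { "OK|$avatar" },
    next   => sub { "SAY|Hello $avatar, here is your next task" },
);

my $reply = exists $handlers{$op} ? $handlers{$op}->() : "ERROR|unknown op $op";

print $q->header(-type => 'text/plain');
print $reply;
```

The in-world script then just fires an llHTTPRequest at this URL and parses the reply in its http_response event - a few dozen lines of LSL at most.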

Sleaze

I hate to have to say it yet again, but have you seen some of the sites on the web? We don't stop using the web because of the porn sites - we just avoid them. The same in SL - and it's even easier now that adult content has its own continent. In fact we are increasingly seeing our clients build not only on private islands, but also with their own registration pages (see above) and their own orientation - so their users just aren't exposed to sleaze. You can take things even further: there's a switch we can throw in the RegAPI which means the student accounts can't even leave the university island - making things safer still. And for the ultimate level of safety you can even customise the client as described above so there aren't any search or map functions that might show students content that you want to hide from them, even if they can't get to it.

There are however two remaining issues, both of which we've flagged with Linden Lab and hope to progress with them:

  • Even if you create an account through the RegAPI you have to maintain it (like changing the password) through the main SL web site. This might not have any sleaze, but its focus is undoubtedly on dancing and partying, not studying. So we've asked LL if ultimately the RegAPI can be extended to cover account management functions (so we can do client-branded account management pages), and if in the meantime they can put up a "neutral" skin for the account management pages on the SL@Work/SLGrid microsite that we can deep-link to - and maybe even "hide" in a client-branded frame.
  • The separation of the Teen and Main Grids causes educators (and others) real problems, and with the move of Adult content off of the main grid it is almost an anachronism. Other similar virtual worlds don't have this distinction (Active Worlds is probably closest to SL in many ways, and has run far longer than SL with purely a voluntary flagging of sims as PG/15/18). Whilst we hope that ultimately LL merges the two grids, there is in the meantime a lot of scope to let private island owners use the RegAPI to lock accounts to their sim and to manage their own users - regardless of age.

Other Points

AJCann does recognise that virtual worlds have their merits - in particular for carbon reduction. He doesn't, though, think that they make good conferencing tools; he prefers Skype and Elluminate. But they are different types of tools for different uses. Yes, I use Skype a lot, mainly for 1:1 calls; in group calls it suffers the same problems as any audio conference (all disembodied voices), and you need additional apps if you want to start sharing presentations, documents and the like. With virtual worlds you get a far better sense of the participants, of it being a shared event, you can share information and 3D models (the next SL iteration will have a fully usable web browser including Google Docs sharing), and anecdotal evidence suggests that just by having a virtual location for the meeting you remember the content far better.

Google Lively - well, that died. The idea of a simpler 2.5D virtual world is fine, and can work alongside a richer environment like SL. We've been playing with Metaplace and really enjoy it. More importantly, we can take our applications like PIVOTE and chatbots and drop them straight into Metaplace - so Metaplace and SL users are accessing the same applications and the same data. Others have already done SL-to-Metaplace chat bridges, and with the new Media API it is conceivable that you could have Metaplace running on a screen inside SL!

World of Warcraft - as a game I find it boring (even my kids do), but lots of people like it, and yes, I can see that for some requirements it's an interesting space to teach and learn in. But it's NOT a true virtual world. You can't create content or applications; you can only bend the game so far.

Conclusions

OK I've been writing this since Birmingham and we're almost at Euston so time to finish off!

The feeling I get reading AJCann's post is that he is just not aware of what SL can really do nowadays. It sounds like he hasn't been really USING it much recently, and that he's only been exposed to some relatively basic "Level 1" educational builds. Yes, we have a long way to go with how we use virtual worlds, but to my mind, and to many people I speak to, SL may have its faults but it's a far better implementation of the sort of virtual world we want to end up with than ANYTHING else out there. Interestingly, none of the new virtual worlds that have launched in the last 5 years have tried to do what SL did - create a completely user-malleable virtual world. That's a tough thing to do (which is why SL used to have the stability issues it did) - but it's the RIGHT thing to do. And with the growing OpenSim community to push Second Life even further forward, the future can only be bright.

I'm often asked "is Second Life going to be 'the one', or will it be replaced by something else?". I don't know if SL will be the one, but I am sure that when we reach the "final" virtual world solution, the 3D equivalent of the web, we will see more of Second Life in its spirit and design than any other virtual world.




***Imported from old blog***

Saturday, 12 September 2009

Augustine Commission Summary Report

augustine_summary.jpg

The Augustine Commission has released its summary report on the future of US manned spaceflight. The key points appear to be:

  • Finish Space Shuttle ops as planned in 2011
  • Extend the ISS to 2020
  • Abandon Ares 1 and probably Ares V to develop a new Ares V Lite mid-range booster
  • Hand man-to-LEO operations over to private contractors
  • Shift the emphasis from Moon then Mars to "flexibility", which may include both, plus L5, near earth asteroids and even Phobos and Deimos, but no dates

Looking to commercial contractors to do the lift to orbit is a nice step - but:

  • The assumption still appears to be that the contractors will use rockets, and not any form of re-usable system (a super SpaceShipOne)
  • Although there is some talk of in-orbit refueling for lunar/mars injection, it appears to be discounted in favour of the single lift approach
  • Although Orion (the new crew capsule - a big Apollo CM) is seen as possibly too big and in need of a trim (everything is "lite" in this report), there still appears to be an acceptance of the "Apollo" model, rather than going for a spacecraft that only travels orbit-to-orbit and never has to re-enter

Be interesting to see what extra detail/decisions there are in the final version.


***Imported from old blog***

Thursday, 3 September 2009

A 2010 - 2260 Future History Timeline

I've started work on a pet Futurology project called Five by Five - five themes over the next five half-centuries.

As a first element I've just updated my future timeline - mainly focussed on space exploration and AI development. Other issues like poverty, the environment and politics will be added as part of the larger project.

BTW the wiki that the timeline is on is one of the reasons there are fewer posts to the blog - the wiki is a better tool for some of the writing I'm doing at the moment - and easier to use from the iPhone!


***Imported from old blog***

Friday, 26 June 2009

Virtual Worlds Roadmap - 2 Years Ago

road map chart

In response to one of the comments on my Semantic Web post I dug up the roadmap diagram I did 2 years ago. I still think the areas and steps are pretty valid, but we've not made as much progress in the last 2 years as I'd thought we would. Really little has changed since then, although signs are that by 2010 we might actually start ticking some of the things off. I'll try and find time to update the whole document this came from.


***Imported from old blog***

Wednesday, 17 June 2009

Digital Britain Report - Underwhelmed

Wordle from Guardian

Underwhelmed by the Digital Britain report, but still a pity I won't be able to make the regional launch at the ICC today. The Wordle above nicely summarises the issues:

- dominance of Government
- Radio far stronger than TV or Internet (talk about aiming low)
- Ditto spectrum
- No mention of web 2.0 brands, technologies or principles

Overall it looks more like an "old media" report than a "new media" report, failing to grasp the possibly seismic change in communications that high speed broadband and mobile broadband could bring, and the changes in consumption (and production) patterns already happening.

Perhaps a better approach would have been a 3 part one:
- tidying up the old guard (radio, spectrum, BBC, Channel 4)
- rapid build-out of a next-gen infrastructure
- exploiting that infrastructure


***Imported from old blog***

Wednesday, 13 May 2009

Toward Semantic Virtual Worlds - A Thinkpiece


One of the things that has struck me over the last few weeks in discussions about where virtual worlds are going is that we are in danger of making the same mistake made by application development and the web, by concentrating on the wrong thing. With virtual worlds, and the moves towards interoperability and standards, there is a real opportunity to get things right first time.

The "mistake" is that we tend to concentrate on the visual. It's only natural, it's probably our most powerful sense and the one that most of us would least wish be without - and hardly a surprising one for virtual worlds!

Since the first computer read-out and green-screen VDU we have developed computer applications and their user interfaces as a single entity. Even with the move to client-server this did not change - one application, one user interface. However, with the arrival of the web, and of mobile phones, things began to change a bit. Users wanted access to the same application (be it a CRM system or Facebook) regardless of the device they were using. In fact almost 10 years ago I had a slide in my Aseriti slide pack (selling utility billing systems) that called for "Different Users, Different Needs", showing a user-interface-less application engine surrounded by optimised user interfaces for mobile, consumer, web and call-centre users.

The development of the mash-up culture has pushed this even further. Once we replace applications with user-interface-less application engines, and then create user interfaces (and even other application engines) which talk to the application through an agreed API (typically a web service), we can unleash a whole new level of creativity in how we create applications.

The web unfortunately made a similar mistake - hardly surprising since it was based around HTML, but disappointing given Sir Tim Berners-Lee's own original vision, and that of Alan Kay and the Dynabook. HTML is mostly about marking up the display, not the content. David Burden means nothing more than the characters D-a-v-i-d-%20-B-u-r-d-e-n displayed in bold type. If you search for "David Burden" on Google you'll find lots of the same characters in the same order, but you'll have to guess that they actually refer to different people.

The "solution" of course is the Semantic Web - championed by Sir Tim Berners-Lee. But trying to retrofit it to the Petabytes of text strings that make up most of the web is an enormous challenge. Formats like RSS, and even Twitter hashtags, begin to give us some sort access to a semantic web, but the true semantic web languages of RDF and OWL (which at Daden we are using to give our chatbots semantic understanding) are woefully under-used. Even less used are things like Info URIs - agreed semantic codes, like an ISBN number, that say that info:people/davidburden/515121 is me, and not the CIO of the Post Office. If every mention of me on the web was marked up semantically then finding all references to me on the web becomes trivial. It's good to see that Google is beginning to introduce aspects of semantics into its search results, but without the original content being semantically marked up its only a small step - the mistake has already been made.

So what's all this got to do with virtual worlds? Almost any initial assessment of a virtual world starts with how good it looks - how good are the textures, the avatars, the sky, water and shadows. After that it's about functionality - how naturally does the avatar move, how can you interact with objects, can you view the web or MS Office - and about deployment issues (does it run on a low-spec PC, can it run behind the firewall, can we protect children). There is active debate at the moment about standards in virtual worlds - Collada, X3D etc - and whether virtual worlds should be downloads or browser-based (and this itself offers a spectrum of solutions, as pointed out by Babbage Linden at the recent Apply Serious Games conference).

But to me all this is missing the point. Virtual worlds are NOT about what they look like, but about what's in them.

Let's not repeat the mistake of application development and the web. Let's start thinking about virtual worlds in terms of how we semantically mark up their content, and then treat the display issue as a second order problem. The virtual world is not HOW you see it, it's WHAT you see (or more precisely what you sense and interact with).

Some examples. These are all based around Second Life, since with libsecondlife/libomv we can actually get access to the underlying object model (which is as close to a semantic virtual world model as you can get).


  • With OpenSim you not only have a different application sharing the same object model as SL, but also different clients using different graphics engines to render "SL" in subtly different ways.

  • We have been working with the University of Birmingham to use their expertise in robotics to help create autonomous avatars in SL. The University uses a standard robot simulation application to visualise and model physical world spaces and test robot software, before downloading the code to the physical-world robots. To work in SL they've taken the SL object/scene description and dynamically fed it to the bot modelling tool - so SL "appears" as a wireframe model in the simulation application just as their physical world spaces do.

  • On my iPhone I have Sparkle, a great little app which lets me log my avatar into SL. No graphics yet, just text (and not even a list of nearby avatars) but adding a radar scan of nearby people - and objects - would be almost trivial, and adding a 2D birds-eye view of the locale only a little harder. Even a 2.5D "Habbo" rendering of SL would not be impossible.

  • We've already played around with using LSL sensor data and libomv to generate live radar maps in web browsers - why not push this a bit further and use Unity, X3D or similar to "re-create" Second Life in the browser - it won't look "identical", but in reality it's all just bits anyway.


Four situations, four different ways of rendering the Second Life "semantics".

Our own work on PIVOTE shows another approach to this problem. By creating the structure and content of a training exercise away from the visualisation tool we are free to then deploy the exercise onto the web, or iPhone or virtual world of our choice without having to change the semantic information or the learning pedagogy. If that semantic model could be extended to include the virtual world itself, then we would have a true write once - play anywhere training system.

One final issue, which our bots particularly suffer from, is that having access to objects is no real guarantee of having access to true semantics. I might create a plywood cube in Second Life and call it a chair, a snake, or anything. The bot cannot guarantee that reading an object's name will tell it what the object is. To be truly semantic, any future editing system should ideally force us to put accurate semantics on the objects we create - and in particular their place in the ontology of the world. Then even if we can't recreate a "chair" as the specified collection of prims or polygons we can substitute our own.

So this is my challenge to the virtual world community. Stop thinking of virtual worlds (and especially the future of virtual worlds) in terms of how they are rendered, and concentrate on their object models and the underlying semantics. I have every confidence that the progressive increase in PC power and bandwidth - and the existing capabilities of computer games - will mean that the look and feel of virtual worlds will come on just fine. And those of us deploying virtual worlds into enterprises will find that with wider adoption and real business need/demand will come the solution to all our problems of firewalls and user account controls (just as it did when the web first arrived in enterprises). These are (almost) trivial problems. If we want to create truly usable and powerful virtual spaces (and I hesitate even to use the word virtual) then we should be focussing on the semantics of the spaces and the objects within them. That way we will avoid the problems of applications and the web. We will know what the objects in our world are - we only have to decide how to render them.


***Imported from old blog***

Friday, 17 April 2009

Return to the Third Imperium


After a break of almost 15 years I've decided that it's time to return to Traveller. This has been driven by a number of things, including getting the TravellerMap into our Mapscape hub in Second Life (a very Traveller space), the growing (if small) Traveller scene in Second Life, and the general resurgence in Traveller resulting from the new Mongoose Publications edition of the rules and supplements.

In re-embracing Traveller I'm struck by the fact that yet another lynchpin of my youth has become a "fantasy". Just as the Third World War which I trained and wargamed for will now not happen as advertised (central Germany, circa 1985), so too will the future of Traveller, or any "Star Wars" type SF, not happen. I am convinced that AIs, or simpler personality constructs, and their gleisner robot selves will be the first "earthkind" to reach the stars. If we ever get there as our biological selves then we'll find local space already populated by our digital selves and their brethren. This goes far beyond the Cyberpunk of William Gibson et al, and is still best captured by Greg Egan's books - although just how you'd make an SF RPG of them I've no idea (but it is an idea).

There certainly remain a couple of issues with Traveller, in all its incarnations. The first is the whole take on AI, wetware, cyberpunk and robotics, which are generally frowned upon (or banned) in the setting and are the areas most overtaken by real tech development. But at least an IMTU (In My Traveller Universe) approach can incorporate those without too much damage to the canon. The bigger issue is 3D space. The Traveller Universe is a 2D map, and I love that map. But nowadays it seems just too unrealistic - particularly when I can use SL to visualise 3D volumes of space. Traveller 2300AD was a great product, and in some ways it would be great to have its version of real, 3D space grow into the accepted Traveller Known Space. But for now I'll accept the 2D map - even if I play around with my own morphing of the 2D data onto a possible 3D mapping.

And why Traveller and not another SF RPG? I love its depth of history, the detail of that history, the community effort, the fact that the only FTL is by jump, the fact that it *was* a believable future - if a little Imperial.

So which rule set and milieu to use? I "grew up" in CT, and I remember where I was when the Fifth Frontier War broke out (travelling through Minneapolis), so for me the setting has to be the classic one, circa 1100-1110. Rules-wise I always preferred MT: I love the task system (and still use its principles in other games), and the amount of background detail it had. The fractured Imperium plot-line I can take or leave. TNE was just too different and messy, T4 OK (and I wrote a lot for it), T20 so-so (no great fan of D20), and I still haven't got T5. GURPS Traveller is a sore point as I wrote a whole supplement for them which was cancelled at the last moment and I never got paid. Reading the reviews it sounds like Mongoose Traveller may be good - with improved character generation and a task system - so I'll probably buy that and, if it's as good as it sounds, use that as my base.

Then there's the question of style. I probably haven't played a face-to-face game of Traveller since I left school - it's never been the most popular of RPGs in the UK, and a lot of us just get enjoyment from the setting and literature. But virtual worlds like Second Life offer the opportunity for virtual role-play: not just meeting up with Travellers, but acting out adventures in "real" Scouts and at "real" starports. One day I can see an entire OpenSim grid for Traveller (or a more generic SF setting), and I'm happy to help build it. But I once also had fun playing real-time Traveller - where you play a solo adventure but everything happens in real time - so if your character makes a 7-day jump you spend 7 real days waiting for them to reappear. I don't think I want to constrain myself to such a literal linkage, but the idea of using Traveller to effectively drive a long-term narrative does appeal. In fact the other driver to get back into Traveller is my daughter's interest in what she calls "role-playing" - collaborative interactive fictions created on discussion board sites like Envision Free. Why not play Traveller more like that?

So I think I'll take a blog approach. My character, Corro Moseley, of course, will adventure across Known Space, initially from Gushemege (where I was HIWG Sector Analyst and so built most of it!) through Vland to the Spinward Marches, and then into Zhodani space, and who knows on to the Galactic core (or Longbow?). I'll record his adventures on the Gushemege blog, using random encounters a lot to spark adventure ideas, and pre-published adventures where they fit.

Of course another driver for this is to try and automate Traveller. If we do get that OpenSim nirvana I don't want to spend the whole time rolling dice and looking up tables, I want to play it like a fully immersive MMORPG. So the whole exercise will give me an excuse to track down the current web-based software and information support for Traveller, make extensive use of (and add to) the Traveller Wikia, and maybe create new resources too. I also want to be able to do a lot of this in dead time, so that means finding/creating resources which will work on my iPhone or netbook.

So that's enough of a brain dump about what I want to do, and why. Let's hope I can now make the time to do it.

Follow the evolving adventure on my Traveller site - http://www.converj.com/sites/gushemege/



***Imported from old blog***

Thursday, 9 April 2009

Academic Earth - Video lectures from the world's top scholars

http://www.academicearth.org/

Neat site and looks like good content. Yet another challenge/opportunity for University 2.0?


***Imported from old blog***

Thursday, 2 April 2009

New Armed Robot Groomed for War

http://blog.wired.com/defense/2007/10/tt-tt.html

Two years old, but as a weapons platform for close infantry support you can really see the uses (and dangers).

Also from http://www.mcclatchydc.com/251/story/64779.html

"Autonomous armed robotic systems probably will be operating by 2020, according to John Pike, an expert on defense and intelligence matters and the director of the security Web site GlobalSecurity.org in Washington."


This prospect alarms experts, who fear that machines will be unable to distinguish between legitimate targets and civilians in a war zone.

"We are sleepwalking into a brave new world where robots decide who, where and when to kill," said Noel Sharkey, an expert on robotics and artificial intelligence at the University of Sheffield, England.

Human operators thousands of miles away in Nevada, using satellite communications, control the current generation of missile-firing robotic aircraft, known as Predators and Reapers. Armed ground robots, such as the Army's Modular Advanced Armed Robotic System, also require a human decision-maker before they shoot.

As of now, about 5,000 lethal and nonlethal robots are deployed in Iraq and Afghanistan. Besides targeting Taliban and al Qaida leaders, they perform surveillance, disarm roadside bombs, ferry supplies and carry out other military tasks. So far, none of these machines is autonomous; all are under human control.


***Imported from old blog***

Friday, 27 March 2009

#brum #buses times mashup in SL, phone and proxy

livebusstops.jpg

Yesterday I spent the afternoon doing a demo of displaying live bus times in SL ready for a council Transport Summit. In doing so I discovered all sorts of council resources I wasn't aware of. Others might be, but in case you too aren't, here they are, along with info about how we did the mash-up, and a proxy you can use for your own mashup (until we get asked to close it down!).

The first site is DigiTV. This was designed for people to access council services from a set-top box, but it also works really well from an iPhone (or any large web phone) as it's all simple menus and chunky links. From here you can get to council info on what's on, travel information, GP services, job centres, reporting problems etc. If you drill down on the Transport links you can get to every bus stop in the city and the next bus times - either timetable or real-time derived. And if you break out of the frame (http://www.digitv.gov.uk/digitv/cds/Birmingham/Netgem/home.html) you can bookmark individual bus stops on your phone - I finally get what the Finns had a decade ago - the ability not to walk out of my door till I know I can catch a bus!

The other site is http://netwm.mobi/. This gets you into the same live bus data but through a very simple HTML layout - great for older phones, and also does simple Google Maps of the bus stops in a given area.

To get a single bus stop the URL is simply http://netwm.mobi/departureboard?atcoCode=43002200505 - where the atcoCode is the ID of the stop - just look at the URL to see it.

In order to do the mashup we just wanted the bus times as simple text - not HTML or layout. So we wrote a simple Perl proxy that is given an atcoCode, calls the URL above, then strips out all the HTML and gives you the data in cod XML or plain text (and we're working on RSS). To use it just call:

http://www.daden-cs1.co.uk/cgi-bin/sl/twmbustimes.pl?op=text&atco=43002200505

Guess where I catch the bus! Just change the atco code to the bus stop you want, and change op=text to op=html to see it in very simple HTML, or op=sl to see it in cod XML.

Happy to let anyone play with the proxy for further mash-ups - and it would be great to know if you use it.
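
If you're curious how little there is to the proxy, the approach is roughly this (a simplified sketch, not the production script - the HTML stripping in particular is deliberately naive):

```perl
#!/usr/bin/perl
# Simplified sketch of the bus-times proxy: fetch the departure board page,
# strip the HTML and hand back plain text (or wrap it in "cod XML").
# Not the production code - the HTML handling here is deliberately naive.
use strict;
use warnings;
use CGI;
use LWP::Simple;

my $q    = CGI->new;
my $atco = $q->param('atco') || '43002200505';
my $op   = $q->param('op')   || 'text';

my $page = get("http://netwm.mobi/departureboard?atcoCode=$atco") || '';

$page =~ s/<script.*?<\/script>//gis;   # drop any scripts wholesale
$page =~ s/<[^>]+>/ /g;                 # strip the remaining tags
$page =~ s/\s+/ /g;                     # collapse whitespace

print $q->header(-type => 'text/plain');
if ($op eq 'sl') {
    print qq{<bustimes atco="$atco">$page</bustimes>};
} else {
    print $page;
}
```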

And before anyone asks why anyone would go all the way into SL just to check their bus time - that is NOT the intention of the demo. It's just part of a broader demo of how virtual worlds can be used to mash up a wide variety of data in new ways, to give city (and business) managers and planners new ways of looking at and sharing information - and help build the national and global Birmingham profile for digital innovation whilst we're at it!


***Imported from old blog***

Wednesday, 25 March 2009

Milky Way Transit Authority

http://arbesman.net/milkyway/

Not bad, but it could have been done better - and anyway, haven't they decided that the Milky Way is a barred spiral now?


***Imported from old blog***

Friday, 13 March 2009

World Builder - stunning short film - tech enabled love story


World Builder from Bruce Branit on Vimeo.


Stunning video - SL meets Star Trek Holodeck meets RL love story

Now if only SL building worked like this - but it's another classic case of it being easier to virtualise us (as avatars) than to try and virtualise the things around us.


***Imported from old blog***

Thursday, 12 March 2009

BBC Radio 7's Planet B - cult classic in the making?

The BBC's Planet B is growing on me - characters hunting through a series of virtual worlds for a friend, and dealing with "virals", avatars which are not being driven by humans, and "rogues" - virals which have become sentient. With the vagaries of iPlayer you just need to make sure you catch all the episodes before they die - I think I missed the first 1 or 2, and heard some out of order, but am now trying to go back through in the right sequence.

Could become a cult classic.


***Imported from old blog***

Monday, 9 March 2009

Wolfram Blog : Wolfram|Alpha Is Coming!

http://blog.wolfram.com/2009/03/05/wolframalpha-is-coming/

This could be interesting - from the guy who created Mathematica - a plain language factual question answering system.

"It's going to be a website: www.wolframalpha.com. With one simple input field that gives access to a huge system, with trillions of pieces of curated data and millions of lines of algorithms.

We're all working very hard right now to get Wolfram|Alpha ready to go live.

I think it's going to be pretty exciting. A new paradigm for using computers and the web.

That almost gets us to what people thought computers would be able to do 50 years ago!"


***Imported from old blog***

Friday, 20 February 2009

Millennium Development Goals - 2008 Progress Chart

mdg_progress_2008.jpg

Neat graphic of where we are against the Millennium Development Goals - click image for link to original PDF


***Imported from old blog***

Monday, 16 February 2009

The Mom Song

Never seen this before, but it had the family in stitches.


***Imported from old blog***

Thursday, 12 February 2009

Creating a MegaBrain with TheBrain Mindmaps

http://blog.thebrain.com/megabrain/

I have Kurzweil's TheBrain on my laptop and have used it a bit, but not to this extent!


***Imported from old blog***

Saturday, 17 January 2009

FIST - Future Infantry Soldier Technology

And this is where the infantry are going - FIST - Future Infantry Soldier Technology, with a whole mix of networked technologies so soldiers can have a HUD showing where their team-mates are (and their health/ammo state), where the enemy is, and the view through their gun sight - useful for shooting round corners. The radio data system even sounds like the sort of self-routing AX.25 system I used to demo at the School of Signals in the 80s.

And here's the link to the similar US Future Force Warrior programme on Wikipedia, and a good vendor site on the project. And integrating AI into the system.



***Imported from old blog***

Friday, 16 January 2009

Taranis Unmanned Combat Air Vehicle (UCAV) Demonstrator

http://www.airforce-technology.com/projects/tanaris/

The future of combat aircraft?

"BAE Systems said Monday Sept. 24 2007 that work has started on the physical build of the Taranis airframe - a 124 million pound ($US 251 million dollars / Euros 177 million) ) unmanned combat aerial vehicle demonstrator aimed at helping Britain's MOD (Ministry of Defence) determine the future balance of assets within the Armed Forces. Taranis will help inform the MOD's approach to the future capabilities needed for deep target attack and intelligence, surveillance, target acquisition and reconnaissance (ISTAR). Ground testing of Taranis is scheduled to begin in early 2009, with the first flight trials due to take place in 2010". - Daylife

Apparently the Taranis is already being blamed for UFO sightings in the UK


***Imported from old blog***

Humaniti 2100

I must try and get back into the habit of a Friday afternoon blog post. So here's something I've been putting together for a while on my Palm - it is unashamedly direct and does not particularly hedge its bets - where's the fun in doing that?

I still reckon that Greg Egan (in Diaspora) has probably got the most accurate view of how humanity may 'evolve' over the next few centuries. Inspired by him here's my take on where we could be by 2100.

NATURALS

Naturals are those who have refused any sort of body mod or digital existence. In 2100 there will be a frighteningly large number of people living (even in poverty) for whom this may be the only option. Increasingly though this is a moral choice. However general health advances mean that life expectancy is 100+, with good quality of life to 90+ for those in the developed world.

AUGMENTED

The Augmented are those who have taken advantage of the transformative technologies of genetic and nano engineering and digital/cyber mods, but who see their 'self' as purely their organic mind. If they use 'scapes and virtual worlds through avatars it is for specific, non-persistent purposes. Augmentation itself may range from slight (e.g. just regular use of an avatar or life-logging systems) to extreme (bio-electronic cyber-systems).

MULTIPLES

Multiples are those of organic descent who have created digital copies of themselves which exist persistently (and probably in multiple instances) in 'scapes across the globe, the planets and (by 2200) the stars. I still think that 'uploading' a mind from the brain could be in the near-impossible category. Instead the first 'copies' will come from explicit teaching/programming, quickly followed by automated learning from email and social media, and ultimately by eavesdropping on everything we do from birth - our lifelog becomes us.

The real challenge for multiples is the re-integration of learnt experiences. Copies can easily just copy data, but what about the organic prime? Again I think that uploading memories is probably a no-no, but in-silico memory accessed through some sort of personal agent or augmented reality (or even brain-jack) would seem achievable.

Freed of a corporeal existence the copies can explore the stars in starship borne 'scapes, and even be beamed from star to star at the speed of light (and bear in mind that since the Multiples sense of time is dependent only on processor clock speed the journey to Alpha Centauri could pass in seconds or millennia - again see Egan, this time in Permutation City).

But perhaps the most telling feature of Multiples is that they can be immortal - so whilst your organic atom based self may die your digital Multiples can live forever - perhaps we might even call them ghosts.

And if we can create simple Multiples now (and I think we can) then it means that we can create simple (multiple) immortality right now - and just think of the moral and ethical issues that raises.

(and if you doubt this whole section take a look at this recent DoD requirement)

DIGITALS

Digitals (who also normally exist as Multiples) are personality constructs that are not derived from an organic, living source. At their most basic, and in current tech terms, these could be virtual receptionists or game NPCs, but very shortly we'll be able to create autonomous, self-motivated avatars - the fore-runners of true digitals. We might also create Digitals from historic personalities, and we could even use software DNA to allow Digitals to breed and evolve (for what a baby Digital might experience read the opening of Egan's Diaspora). The key point is that within the virtual space Digitals are not differentiable from Multiples or the avatars of the Augmented.

GLEISNERS/ANDROIDS

But Multiples and Digitals need not be confined to virtual spaces. Once we have a digital self controlling an avatar body there is no reason why we can't have the same self controlling a robot body. Indeed in building our own AI engine we created a Sensor and Action mark-up language to isolate the AI 'brain' from the embodiment technology. So the same AI that controls a Second Life avatar could also control, and live through, an AIBO or ASIMO. Fast forward 50-100 years and our 'human-level' Multiples and Digitals can walk the atom-based physical worlds in human (or non-human) bodies (what Egan calls Gleisner robots). And whilst embodied Digitals may sound weird, just think what it would be like, as the human root of a Multiple, to shake your own hand.

When I first wrote this I entitled it Humaniti 2200 - but I've now put it to 2100. For either date though it's a matter of degree - particularly for the 'organic' element. There is also though the potential of a 'singularity' effect; once we have one fully fledged Digital we could rapidly clone or evolve it (or it could clone or evolve itself) so that the Digital population could go from single figures to thousands to millions in years or decades.

So how might the percentages or numbers go? This is probably not even a guess, but it's always useful to have some strawman figures.

Type / Date   2009   2015   2020   2050    2100   2150     2200
Naturals      99%    95%    90%    80%     50%    30%      10%
Enhanced      1%     5%     10%    20%     50%    70%      90%
Multiples     0      100s   10k+   100k+   10m+   1bn      2bn
Digitals      0      10s    50s+   100s+   10k+   1m++     100m++
Gleisners     0      1s     10s+   10s+    1k+    100k++   1m++

Figures for Naturals and Enhanced are % of the biological human population. + means +/- 1 order of magnitude, ++ means +/- 2 orders of magnitude.

And if you doubt the 2015 figures I fully expect Daden to be running at least 1 Multiple (probably me!), 1 Digital (an enhanced Halo) and even a Gleisner (Halo in some 2015 equivalent of a Femsapien).


But the real message is that we can already create Multiples and Digitals, and even Gleisners - they just aren't very good yet! But it means there has been a shift. This is no longer a question of when or if or can, but just one of how good, and how fast will we improve.



***Imported from old blog***

Monday, 12 January 2009

OSD09-H03: Virtual Dialogue Application for Families of Deployed Service Members

Virtual Dialogue Application for Families of Deployed Service Members OSD09-H03

Interesting DoD RFP for a chatbot for absent servicemen's families. It's too late for us to respond directly (and the project overall is probably a bit too big for us and US-based), but it would sure be something interesting to contribute to, and I'm sure our AAML/ASML, extended AIML and SL technology would play in nicely.

Question remains though - do you have to pass the Turing Test as part of the project?


***Imported from old blog***

Thursday, 8 January 2009

Procedural's CityEngine

cityscape

http://www.procedural.com/cityengine/features.html

Great parametric driven city modelling tool. Also exports to Collada.


***Imported from old blog***

Geograph British Isles - photograph every grid square!

http://www.geograph.org.uk/

Must take part in this. One image per grid square in the UK. Some nice empty spaces in mid-wales!



***Imported from old blog***

Digital Urban: Tilt-Shift Barcelona


Tilt-Shift Barcelona from joja on Vimeo.

http://digitalurban.blogspot.com/2009/01/tilt-shift-barcelona.html

***Imported from old blog***

Map of SL in 2004 and 2008

SLMaps-08-04.jpg

***Imported from old blog***

Wednesday, 7 January 2009

NOAA's Second Earth and US EPA transport builds in SL

snapshot_secondearth.png

Every so often you follow a link to a great location in SL, and then stumble on something else (almost) as amazing.

Followed the link from New World Notes to NOAA's stunning Second Earth. I've seen a sculptie Earth before, but this one is huge and has live data (and satellites) on it. A stunning build, and it makes me want to renew our efforts to find a (UK) sponsor for a 1-6 sim scale UK model. Next time I visit I must put my spacesuit on too!

Looking at the local map I saw some ship and aircraft shapes on an island to the East (this is all around Scilands - where else!). What I found was a whole sim of very high quality, full-size transport, from NOAA exploration ships (and a wonderful rusting hulk) to trains and Concorde and even an Imperial shuttle. The island is owned by the EPA, so I'd love to know what they have in mind! Note the parcels are private, but you can hover over and cam in to see the detail.

snapshot_epa.png

Stunning places, and the potential applications enormous. I'd challenge anyone to go visit them and then not see just how important virtual worlds are going to become!



***Imported from old blog***

Semantic Info via Google

http://www.readwriteweb.com/archives/google_semantic_data.php


***Imported from old blog***

Tuesday, 6 January 2009

Just posted a new video of our Halo robotar in SL

I've just posted a new video to the Daden YouTube channel showing our Halo automated avatar being put through her paces. This is a recording of the demo we gave at the BCS AI Special Interest Group conference in December, and includes a demo of the emotion work we have been doing with the University of Wolverhampton.


***Imported from old blog***

Monday, 5 January 2009