Archive for April, 2012

Supercomputers and the Future

April 19, 2012

Wednesday the 18th of April marked 100 days to the greatest show on earth, along with the promise of even more superlatives, as a direct consequence of the Olympic motto: “Faster, Higher, Stronger”. It certainly made an auspicious date for an event, held at the House of Lords, on the future of Supercomputers. 

Figure: The House of Lords

The event was The Second Lorraine King Memorial Lecture, sponsored by Kevin Cahill, FBCS CITP (author of “Who Owns Britain” and “Who Owns the World”), and superbly hosted by the Lord Laird and Computer Weekly. The main topic of debate centred on whether Supercomputers were merely “prestige objects or crucial tools in science and industry”.

Figure: (L-R) Kevin Cahill, Prof. Meuer and Lord Laird

The lecture, delivered by Supercomputer expert Prof. Dr. Hans Werner Meuer (see CV), was most illuminating, and I gathered, among other things, that the UK ranked 4th in the Top500 list of Supercomputer-using countries, and that France was the only European country with any capability to manufacture Supercomputers. Clearly more needs to be done by the likes of the UK or Germany to remain competitive in the Supercomputing stakes, which raised the question (as posed later by an attendee) of whether these machines are nothing more than objects of geopolitical prestige, superiority and / or bragging rights (e.g. my Supercomputer is faster than yours, so Nyah-nyah, nyah-nyah nyah-nyah! Or perhaps Na na, na, na, naa! – apologies to the Kaiser Chiefs).

In any case, several things stood out for me at this rather well attended event, including:

  • The definition of a Supercomputer remains based on the most powerful or fastest computers at any given point in time, e.g. Apple’s iPad 2 is two-thirds as powerful as the Cray-2 Supercomputer from 1986. The typical measure of speed and power is sheer numerical processing power (i.e. not data crunching), using the Linpack test (see the sketch just after this list)
  • According to a paper by the event’s sponsor, Kevin Cahill, the Supercomputer sector is the fastest growing niche in the world of technology, currently worth some $25 billion. Japan, China and the USA currently hold the lead in the highly ego-driven world of Supercomputing, but there is an acute shortage of the skills and applications required to make the most of these amazing machines
  • Typical applications of Supercomputing include university research, medicine (e.g. the Human Genome Project), geophysics, global weather and climate research, and transport / logistics. It is used in various industries, e.g. aerospace, energy, finance and defence. More recent applications, and aspirations, include bio-realistic simulations (e.g. the Blue Brain Project) and a shift towards data crunching in order to model and tackle challenges in such areas as Social Networks and Big Data.
  • The future of Supercomputers is to move past the Petaflop machines of today to Exaflop-capable machines (a thousand-fold increase) by 2018. The next International Supercomputing Conference takes place June 17–21 in Hamburg, Germany, and promises to cover big data, alternative architectures for data crunching, Exascale computing, energy efficiency, technology limits and Cloud computing for HPC, among other things.
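
To make the Linpack measure concrete, here is a minimal back-of-envelope sketch in Python; this is my own illustration using NumPy, not the official HPL benchmark code. It times the solution of a dense random linear system and converts the standard Linpack operation count into a FLOP/s figure:

```python
# Toy Linpack-style measurement (illustrative only, not the real HPL benchmark).
import time
import numpy as np

n = 2000                                   # matrix size; real runs use vastly larger systems
A = np.random.rand(n, n)                   # dense random coefficient matrix
b = np.random.rand(n)                      # right-hand side

start = time.perf_counter()
x = np.linalg.solve(A, b)                  # LU factorisation plus triangular solves
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3 + 2.0 * n**2    # classic Linpack operation count for Ax = b
print(f"~{flops / elapsed / 1e9:.2f} GFLOP/s solving a {n}x{n} dense system")
```

By that yardstick, today’s Petaflop machines sit several orders of magnitude beyond what a typical personal computer manages on this toy version.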
Figure: The Future of Supercomputing (Source: http://www.isc-events.com/slides/london)

Overall, this was an excellent event in a most impressive venue, and the attendees got a chance to weigh in with various opinions, questions and comments, to which the good Professor did his best to respond (including inviting everyone to Hamburg in June to come and see for themselves!). Perhaps the most poignant takeaway of the evening, in my opinion, was Lord Laird’s challenge to the computing industry about a certain lack of visibility, and the need for us to become more vocal in expressing our wishes, concerns and desires to those in power, or at least to those with the responsibility to hold Government to account. As he eloquently put it (paraphrasing slightly): “If we don’t know who you are, or what it is you want, then that is entirely your own fault!”

Publishers vs. eBook Price Fix vs. Copyright

April 17, 2012

Recent developments in the world of publishing clearly demonstrate, yet again, that the primary objective of the content industry is to make a tidy profit. Nothing wrong with that, if you ask me; however, it usually turns into a rather sticky mess when that pursuit is clouded by accusations of skulduggery, conspiracy and outright price fixing.

I refer to a recent lawsuit filed by the US Justice Department against Apple and 5 major book publishers, over allegations of conspiracy, collusion and price fixing. According to this article from the Wall Street Journal, it could change the course of a rapidly expanding eBook publishing industry. But how so, you ask?

Well, it is really down to opposing business models, i.e. the so-called agency versus wholesale approaches to eBook pricing. On one hand, an agent such as Apple lets publishers set their own prices and takes a cut (in this case 30%) of sales on its iBooks platform. On the other hand, under the wholesale model the retailer (e.g. Amazon or Barnes and Noble) sets the price for eBooks and can effectively apply discounts as it wishes (even if that means selling eBooks at a loss). Obviously, the latter scenario leaves publishers with less control over prices, and consequently profits, hence the opportunity to take advantage of a more favourable option could not fail to be attractive.
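
To put rough numbers on the difference, here is a minimal sketch of the per-copy economics; all the prices below are hypothetical figures of my own (only the 30% agency cut comes from the case), purely to illustrate who controls the price under each model:

```python
# Hypothetical per-copy eBook economics; every price below is an illustrative
# assumption of mine, except the 30% agency commission mentioned in the post.

list_price = 12.99        # agency model: the publisher sets the retail price
agent_cut = 0.30          # the agent (e.g. Apple) keeps 30% of each sale

wholesale_price = 7.00    # wholesale model: what the retailer pays the publisher
retail_price = 5.99       # the retailer may discount freely, even below cost

agency_revenue = list_price * (1 - agent_cut)   # publisher's take per copy sold
wholesale_revenue = wholesale_price             # fixed, whatever the retail price

print(f"Agency:    publisher earns ${agency_revenue:.2f}, reader pays ${list_price:.2f}")
print(f"Wholesale: publisher earns ${wholesale_revenue:.2f}, reader pays ${retail_price:.2f}")
print(f"Retailer margin under wholesale: ${retail_price - wholesale_price:.2f} per copy (a loss-leader)")
```

The publisher may well earn more per copy under agency, as in this example, but the deeper point is control: under wholesale, the retail price (and hence the perceived value of eBooks) rests entirely in the retailer’s hands.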

However, the question remains about the value proposition for consumers, who are themselves increasingly embracing eBooks for their convenience, ease of use and, perhaps more to the point, huge potential for significantly lower prices overall. One might argue that eBooks require no paper, glue, physical stores / shelf space or any significant distribution / transport costs, and therefore really shouldn’t be priced anything close to their physical versions. Surely this quest to keep prices high can only favour publishers and their bottom lines, can’t it?

So what are the key arguments / rationale for keeping eBook prices artificially high? Perhaps the main one is the high operating costs incurred by large publishers, along with the need to maintain a powerful marketing and promotional machinery. Furthermore, it may also be argued that lower-cost eBooks somehow cannibalise the margins to be had from physical books. Whatever the case, it seems publishers stand to lose out if they don’t do something (innovative?) to counter the effects of change.

Hmm, now where have we seen this before (and how did that industry cope / survive)? Ah, yes, the music industry went through something similar, except they chose to sue those pirates and freeloaders (aka the people formerly known as customers) who supposedly ‘stole their bottom line’. However, they seem to have found other ways to complement dwindling revenue streams, e.g. via ticket sales for live performances. By the way, death may no longer prevent artistes from performing before a live audience, assuming this deceased-artist hologram idea catches on.

Luckily, the book publishing industry doesn’t have to take quite such drastic measures, especially as it has been shown time and again that new media formats and channels do not necessarily mean the complete demise of existing ones. This is arguably the perfect time for publishers to embrace even bolder / more innovative thinking, to discover complementary initiatives that will bolster an industry under threat, real or imagined. They must observe and capitalise on consumer trends and emergent user behaviours. For example, the sheer capacity, variety and anonymity (i.e. no telltale covers) of reading material to be found on the average eBook reader means that users now carry, consume and explore hitherto unthinkable (at least in public) subject matter. The current boom in the romantic erotica sub-genre, aka Mommy Porn, is an interesting case in point.

Perhaps even more fundamental is the need to seriously consider the verboten idea of evolving copyright into something much better aligned with the digital age. Unfortunately, that will be a tough sell to the publishing industry, if this report of a speech given by HarperCollins’ international chief executive at the London Book Fair is anything to go by. According to the article, “others in the book trade, including the Publishers Association” have criticised the recent Hargreaves Review of Copyright, which some feel could weaken the current copyright regime. As you may have gathered by now, I don’t subscribe to that point of view, but then I am only an author, and may not see things in quite the same light as a successful publisher might.

In many ways, this whole situation could be seen as a remix of the circumstances surrounding the birth of copyright. In 1710, the printing industry lobbied for the creation of a law to govern the rights to print or reproduce works (now known as the Statute of Anne), in order to protect its interests and those of the authors / creators of said works. Copyright is essentially an artificial system, which routinely needs a degree of manual intervention whenever a new and disruptive content technology or consumer trend emerges. That, in my opinion, is the fundamental flaw that any revision of copyright must try to address. In an age of multi-platform, multi-channel and multi-format publishing, there really is no place (or time) for manual intervention each time a new and disruptive trend, challenge or opportunity presents itself. I for one would be more than happy to attempt to demonstrate just how such a system could work (based on real copyright content), but then I would probably need a hefty six-figure advance from some far-sighted multi-publisher to make it happen. Who says there is no future for publishing?
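
For what it’s worth, here is a purely speculative sketch of the kind of thing I have in mind; none of this exists in any standard, and the record structure, field names and terms are all my own invention. The idea is simply that rights are expressed per use, with a sensible default, so a brand-new format needs no manual intervention:

```python
# Speculative sketch of a format-agnostic rights record (entirely hypothetical).
from dataclasses import dataclass, field

@dataclass
class RightsRecord:
    work_id: str
    owner: str
    grants: dict = field(default_factory=dict)    # explicit per-use terms, e.g. {"print": "exclusive"}
    default_grant: str = "licensed-with-royalty"  # fallback terms for any use not listed

    def grant_for(self, use: str) -> str:
        # A new format or channel falls through to the default terms,
        # rather than triggering a fresh round of manual (re)negotiation.
        return self.grants.get(use, self.default_grant)

record = RightsRecord("work-001", "Author X", {"print": "exclusive"})
print(record.grant_for("print"))     # -> exclusive
print(record.grant_for("hologram"))  # -> licensed-with-royalty (new format, no intervention)
```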

How Can You Measure Real Value?

April 2, 2012

It’s been a while since my last post, but then nothing much has changed, perhaps because, in real terms, a few weeks is really not that long, even in the fast-paced world of digital technology and innovation. However, it could just be proof of that old saying: “the more things change, the more they remain the same”, right?

Although, on the surface, it might not appear that much has changed, there are evident signs of continuous progress in several areas, including technology and innovation; user experience and social networking / media / business; mobility; and data of the large variety (aka Big Data). Many other experts and analysts, across various media and channels, do such a great job of observing and commenting on these topics and trends that I won’t bother trying to rehash them here.

In any case, the point I really wish to explore is that such developments, trends and indicators seem to point towards a new value exchange paradigm and / or system in the not too distant future. This notion was clearly described by Tim O’Reilly at the last Strata Conference, where he talked about a fundamental need to find better ways of “measuring the economic impact of the sharing economy”. Among other things, he asked what is, in my opinion, the key question: how do we measure the real value of sharing, particularly where traditional economic yardsticks (e.g. typical financial metrics) are no longer adequate for the task? He also described the often unmeasured benefits to be derived from the sharing economy (e.g. enriching an ecosystem of which you are part), versus the sometimes destructive impact of a profit-led, financially measured system (e.g. the contribution of global financial institutions to the current economic shambles). It would appear that, in this new paradigm, the way forward involves “creating more value than you capture”, which, somewhat counter-intuitively, actually works to your advantage.

Perhaps this paradigm shift will be most realisable (at least for the content industry) via a strategy of diversification and multi-publishing, which together increase the likelihood of traction / success for content via multiple touch-points, partnerships and hooks to end consumers. A few examples, describing real-life scenarios in e-book publishing, music licensing and movie streaming, are outlined below:

  1. E-Book Publishing: A recent post on CopyrightandTechnology.com discusses Harry Potter’s DRM-free e-book offering, which runs somewhat counter to the conventional wisdom of publishing such valuable properties only in fully DRM’ed electronic formats, for fear of piracy. However, this works for Harry Potter on many levels, especially considering how it complements and creates further opportunities for existing and future merchandising initiatives.
  2. Music Licensing: An article in the Berklee Music Business Journal examined the pros and cons of Coca-Cola’s equity stake in a music licensing startup. On the one hand, a major global consumer brand partners with a music outfit to source original musical content for its marketing campaigns; on the other hand, the artistes (often independent, unsigned and eager to be heard) get access to Coca-Cola’s global marketing might, which beats anything a record label can provide these days. Verdict: Win / Win!
  3. Streaming Movies: The key players in on-demand video streaming, e.g. Netflix, Hulu, Amazon (i.e. Prime and LoveFilm) and latterly Sky, all offer different value propositions to the consumer, but in my opinion the winner(s) will likely emerge from those willing to leverage multiple customer propositions / channels / formats (e.g. books, music, DVDs and perhaps devices).

In conclusion, it is becoming increasingly hard to ignore the trends / evidence / indicators suggesting that a move towards multiple consumer propositions (including pricing), multiple touch-points (channels / interactions) and multiple formats is rapidly gaining ground. This makes it even more imperative to find a better yardstick for measuring the real value of content, products and services, for both suppliers and consumers. It seems to me that we’re likely heading for a post-monetary value exchange and recognition system, hopefully one more in keeping with the post-global realities of a digitally connected planet. I remain optimistic, and fully convinced that money is not, and perhaps never really has been, the best yardstick for measuring true value.