Archive for the ‘Internet Business’ Category

Are NFTs the future of digital IP and the creative world, or just a remix of DRM and all its woes? (Part 3)

February 5, 2022

This is the third in a series of posts sharing some observations, opinions and conclusions from playing with this intriguing technology, which sits squarely at the intersection of digital technology, creative content and intellectual property. The topic is broken down into the following parts:

  1. What are NFTs (and the non-fungibility superpower)?
  2. What has this got to do with Intellectual Property (and content protection)?
  3. Does it mean that NFTs are like DRM remixed?
  4. How does it affect the creative industry today and in the future?
  5. Summary observations and conclusions.

More Perils of Reusing Digital Content

February 7, 2016
Some time ago I wrote an article and blog post entitled “The Perils of Reusing Digital Content”, looking at the key challenges facing users of digital content, which, thanks to the power of computing and the Internet, has become ever more easily available, transferable and modifiable. It says a lot about the age in which we live that this is still not universally perceived to be a good thing. The post also explored the Creative Commons model as a complementary alternative to a woefully inadequate and somewhat anachronistic copyright system in the digital age. Since then the situation has become even more complex and challenging, thanks to the introduction of newer technologies (e.g. IoT), more content (data, devices and channels), and novel trust / sharing mechanisms such as blockchain.


I’ve written a soon-to-be-published article about blockchain, from which the following excerpt is taken: “Blockchains essentially provide a digital trust mechanism for transactions by linking them sequentially into a cryptographically secure ledger. Blockchain applications that execute and store transactions of monetary value are known as cryptocurrencies (e.g. Bitcoin), and they have the potential to cause significant disruption to most major industries, including finance and the creative arts. For example, in the music industry, blockchain cryptocurrencies can make it economically feasible to execute true micro-transactions (i.e. to the nth degree of granularity in cost and content). There are already several initiatives using blockchain to demonstrate full transparency for music payments – e.g. British artist Imogen Heap’s collaboration with UJO Music features a prototype of her song and shows how income from any aspect of the song and music is shared transparently between the various contributors.”
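To make the excerpt above more concrete, here is a toy sketch (my own illustration, not from the article) of how a ledger can chain transactions cryptographically while splitting a micro-payment transparently between contributors. All names and figures are invented for the example:

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class Ledger:
    """A toy append-only ledger: each entry links to the hash of the previous one."""
    def __init__(self):
        self.chain = []

    def add(self, payment, splits):
        """Record a payment and its transparent split between contributors.
        `splits` maps contributor name -> share (shares must sum to 1.0)."""
        assert abs(sum(splits.values()) - 1.0) < 1e-9
        block = {
            "prev": block_hash(self.chain[-1]) if self.chain else "0" * 64,
            "payment": payment,
            "payouts": {name: round(payment * share, 8) for name, share in splits.items()},
        }
        self.chain.append(block)
        return block

    def verify(self):
        """Tampering with any earlier block breaks every later link."""
        return all(
            self.chain[i]["prev"] == block_hash(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )

ledger = Ledger()
# a £0.001 micro-payment for one stream, split transparently between contributors
ledger.add(0.001, {"writer": 0.5, "performer": 0.3, "producer": 0.2})
print(ledger.verify())  # True
```

Real blockchains add distributed consensus on top of this hash-chaining, but the transparency property is visible even here: every payout for every transaction is openly recorded and tamper-evident.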


The above scenario makes it glaringly obvious that IP protection in digital environments should focus more on content usage transparency than on merely providing evidence of, or enforcing, copying and distribution restrictions. The latter copy-and-distribute restriction model worked well in a historically analogue world with traditionally higher barriers to entry, whereas the former transparent usage capability plays directly to a strength of digital – i.e. the ability to track and record usage and remuneration transactions to any degree of granularity (e.g. by using blockchain).


Although it may sound revolutionary and possibly contrary to the goals of today’s content publishing models, in the longer term, this provides a key advantage to any publisher brave enough to consider digitising and automating their publishing business model. Make no mistake, we are drawing ever closer to the dawn of fully autonomous business models and services where a usage / transparency based IP system will better serve the needs of content owners and publishers.


In a recent post, I described a multi-publishing framework which can be used to enable easier setup and automation of the mechanisms for tracking and recording all usage transactions as well as delivering transparent remuneration for creator(s) and publisher(s). This framework could be combined with Creative Commons and blockchains to provide the right level of IP automation needed for more fluid content usage in a future that is filled with autonomous systems, services and business models.


Predicting the (near) Future

December 22, 2015
The future is always tricky to predict and, in keeping with Star Wars season, the dark side is always there to cloud everything. But as we all know in IT, the ‘Cloud’ can be pretty cool, except of course when it leaks. Last month saw the final edition of Gartner’s Symposium/ITxpo 2015 in Barcelona, and I was fortunate to attend (courtesy of my Business Unit) and bear witness to some amazing predictions about the road ahead for our beloved / beleaguered IT industry.
 
Judging from the target audience, and the number of people in attendance, it is safe to say that the future is at best unpredictable, and at worst unknowable, but Gartner’s analysts gave it a good go, making bold statements about the state of things to come within the next 5 years or so. The following are some key messages, observations and predictions I took away from the event.
 
1. CIOs are keen to see exactly what lies ahead.
Obviously. However, it does confirm to my mind that the future is highly mutable, especially given the amount of change to be navigated on the journey towards digital transformation. I say ‘towards’ because, from all indications, there is likely no real end-point or destination to the journey of digital transformation. The changes (and challenges / opportunities) just keep coming thick and fast, and at an increasing pace. For example, Gartner predicts that by 2017, 50% of IT spending will be outside of IT (it stands at 42% today), so CIOs must shift their approach from command-and-control style management to leading via influence and collaboration.
 
2. Algorithmic business is the future of digital business
A market for algorithms (i.e. snippets of code with value) will emerge, in which organisations and individuals will be able to license, exchange, sell and/or give away algorithms – hmmm, now where have we seen or heard something like that before? As a result, many organisations will need an ‘owner’ for algorithms (e.g. a Chief Data Officer), whose job it will be to create an inventory of their algorithms, classify them (i.e. private / “core biz” versus public / “non-core biz” value), and oversee / govern their use.
 
3. The next level of Smart Machines
In the impending “Post-App” era, which is likely to be ushered in by algorithms, people will rely on new virtual digital assistants (i.e. imagine Siri or Cortana on steroids) to conduct transactions on their behalf. According to Gartner, “By 2020, smart agent services will follow at least 10% of people to wherever they are, providing them with services they want and need via whatever technology is available.” Also, the relationship between machines and people will initially be cooperative, then co-dependent, and ultimately competitive, as machines start to vie for the same limited resources as people.
 
4. Platforms are the way forward (and it is bimodal all the way)
A great platform will help organisations add and remove capability ‘like velcro’. It will need to incorporate Mode 2 capability in order to: fail fast on projects / cloud / on-demand / data and insight. Organisations will start to build innovation competency, e.g. via innovation labs, in order to push the Mode 2 envelope. Platform thinking will be applied at all layers (including: delivery, talent, leadership and business model) and not just on the technology / infrastructure layer.
 
5. Adaptive, People Centric Security
The role of the Chief Security Officer will change, and good security roles will become more expansive and mission critical. In future, everyone gets hacked, even you, and if not then you’re probably not important. Security roles will need to act more like intelligence officers than policemen. Security investment models will shift from predominantly prevention-based to prevention-and-detection capabilities, as more new and unpredictable threats become manifest. Also, organisations will look to deploy People Centric Security (PCS) measures in order to cover all bases.
 
6. The holy grail of business moments and programmable business models
The economics of connections (from the increased density of connections, and creation of value, between business / people / things) will become evident, especially when organisations focus on delivering business moments to delight their customers. Firms will start to capitalise on their platforms to enable C2C interactions (i.e. customer-to-customer interactions) and allow people and things to create their own value. It will be the dawn of programmable business models.
 
7. The Digital Mesh and the role of wearables and IoT
One of the big winners in the near future will be the ‘digital mesh’, amplified by the explosion of wearables and IoT devices (and their interactions) in the digital mesh environment. Gartner predicts a huge market for wearables (e.g. 500M units sold in 2020 alone – for just a few particular items). Furthermore, barriers to entry will be lower and prices will fall as a result of increased competition, along with: more Apps, better APIs and improved power.
 
The above are just a few of the trends and observations I took from the event, and I hasten to add that it is impossible to capture over 4 days of pure content in these highlight notes; other equally notable trends and topics, such as IoT architecture, talent acquisition and CIO/CTO agendas, only receive honourable mentions. However, I noticed that topics such as blockchain were not explored as fully as might be expected at an event of this nature. Perhaps next year will see it covered in more depth – just my prediction.
In summary, the above are not necessarily earth-shattering predictions, but taken together they point the way forward to a very different experience of technology; one that is perhaps more in line with hitherto far-fetched predictions of the Singularity, as humans become more immersed and enmeshed with machines. Forget the Post-App era, this could be the beginning of a distinctly recognisable post-human era. However, as with all predictions only time will tell, and in this case, let’s see where we are this time next year. I hope you have a happy holiday / festive season wherever you are.

Governing the Internet of Things

February 28, 2015
In light of increasing coverage of the so-called “Internet of Things” (IoT), it is not surprising that sovereign governments are paying attention and introducing initiatives to try to understand and benefit from the immense promise of the IoT. Despite the hype, it is probably too early to worry about how to govern such a potential game changer – or is it?


According to Gartner’s Hype Cycle for Emerging Technologies, the Internet of Things is hovering at the peak of inflated expectations, with a horizon of some 5 – 10 years before reaching the “plateau of productivity” as an established technology – so still fairly early days, it would seem. However, that is not sufficient reason to avoid discussing governance options and implications for what is arguably the most significant technology development since the dawn of the Internet itself. To this end, I attended a recent keynote seminar on policy and technology priorities for IoT (see agenda here), and below are some of the key points I took away from the event:


1. No trillion IoT devices anytime soon – According to Ovum’s Chief Analyst, the popular vision of ‘a trillion IoT devices’ will not appear overnight, for the simple reason that it is difficult, and will take some time, to deploy all those devices in all manner of places that they need to be.


2. What data avalanche? – Although a lot of data will be generated by the IoT, it shouldn’t come as a surprise that the proportion of meaningful information will depend on the cost to generate, store and extract useful information from the petabytes of noise – there is a lot of scope for data compression. For example, the vast majority of data from, say, environment-sensing IoT devices will likely be highly repetitive and suitable for optimisation.


3. Regulatory implications – Ofcom, the UK’s communications regulator, identified four themes as most relevant to the future development of IoT: 1. Data privacy (including authorisation schemes); 2. Network security & resilience (suitable for low-end devices); 3. Spectrum (e.g. opening up the 700MHz band and other high / low frequency bands for IoT); and 4. Numbering & addressing (the need to ensure there are enough numbers and addresses for the IoT in future).


4. Standards and interoperability – these remain key to a workable, global Internet of Everything (IoE), particularly because of the need for data availability, interoperability (at device and data level), and support for dynamic networks and business models.


5. Legal implications – again the key concern is data privacy. According to Philip James (Law Firm Partner at Sheridans), in describing the chatter between IoT devices: “hyper-connected collection and usage of data is a bit like passive smoking – not everyone is aware of it”.
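On the data-compression point in item 2 above, a minimal sketch (my own illustration, with invented readings) shows how repetitive sensor output collapses under even the simplest scheme, run-length encoding:

```python
import itertools

def run_length_encode(readings):
    """Collapse runs of identical readings into (value, count) pairs."""
    return [(value, len(list(group))) for value, group in itertools.groupby(readings)]

# A hypothetical temperature sensor reporting once a minute for an hour,
# where the reading barely changes:
readings = [21.0] * 55 + [21.5] * 5
encoded = run_length_encode(readings)
print(encoded)  # [(21.0, 55), (21.5, 5)]
```

Sixty readings reduce to two pairs; real deployments would use richer schemes (delta encoding, thresholds for reporting change), but the underlying observation is the same: most of the raw IoT data stream is noise-free repetition.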


In the context of the above observations, it may be easy to ignore the elephant in the room, i.e. how to manage unintended consequences from something as intangible as the future promise of the IoT. What will happen if and when the IoT becomes semi-autonomous and self-reliant – or is that science fiction?


Well, I wouldn’t be so sure, because it all boils down to trust: trust between devices; trust in data integrity; and trust in underlying networks and connectivity. However, this is not something the Internet of today can provide easily, so some interesting ideas have started percolating around scalable trust and integrity. For example, Gurvinder Ahluwalia (IBM’s CTO for IoT and Cloud Computing) described a scenario using hitherto disruptive and notorious technologies (i.e. blockchain and BitTorrent, of Bitcoin and Pirate Bay fame respectively) to create a self-trusting environment for what he calls “democratic devices”.


The implications are astounding and much closer to the science fiction I mentioned previously. However, it is real enough when you consider that it requires a scalable, trustworthy, distributed system to verify, coordinate, and share access to the ‘Things’ on the IoT, and that key components and prototypes of such a system already exist today. This, in my opinion, is why sovereign governments are sitting up and taking notice, as should all private individuals around the world.


Copyright and Technology in 2013

November 18, 2013

Last month’s conference on copyright and technology provided plenty of food for thought from an array of speakers, organisations, viewpoints and agendas. Topics and discussions ran the gamut from the increasingly obvious (“business models are more important than technology”) to the downright bleeding-edge (“hypersonic activation of devices from outdoor displays”). There was something to take away for everyone involved. Read on for highlights.

The Mega Keynote interview: Mega’s CEO, Vikram Kumar, discussed how the new and law-abiding cloud storage service is proving attractive to professionals who want to use and pay for the space, security and privacy that Mega provides. This is a far cry from the notorious MegaUpload, and founder Kim Dotcom’s continuing troubles with charges of copyright infringement, but there are still questions about the nature of the service – e.g. the end-to-end encryption approach, which effectively makes it opaque to outside scrutiny. Read more about it here.

Anti-Piracy and the age of big data – MarkMonitor’s Thomas Sehested talked about the rise of data / content monitoring and anti-piracy services in what he describes as the data-driven media company. He also discussed the demise of content release windows, and how mass / immediate release of content across multiple channels lowers piracy, but questioned whether it is more profitable.

Hadopi and graduated response – Hadopi’s Pauline Blassel gave an honest overview of the impact of Hadopi, including evidence of some reduction in piracy (from 6M to 4M) before stabilisation. She also described how this independent public authority delivers graduated response in a variety of ways, from raising awareness to imposing penalties, focusing primarily on what is known as PUR (aka ‘Promotion des Usages Responsables’).

Automatic Content Recognition (ACR) and the 2nd screen – ACR is a core set of tools (including DRM, watermarking and fingerprinting), and the 2nd screen opportunity (at least for broadcasters) is all about keeping TV viewership and relevance in the face of tough competition for people’s time and attention. This panel session discussed monetisation of second screen applications, and the challenges of how TV is regulated, pervasive and country specific. Legal broadcast rights are aimed at protection of broadcast signals, which trigger the 2nd screen application (e.g. via ambient / STB / EPG based recognition). This begs the question of what regulation should be applied to the 2nd screen, and what rights apply – e.g. ads on TV can be replaced on the 2nd screen, but what are the implications?

Update on the Copyright Hub – The Keynote address by Sir Richard Hooper, chair of the Copyright Hub and co-author of the 2012 report on Copyright Works: Streamlining Copyright Licensing for the Digital Age, was arguably the high point of the event. He made the point that although there are issues with copyright in the digital age, the creative industries need to get off their collective backsides and streamline the licensing process before asking for a change in copyright law. He gave examples of issues with the overly complex educational licensing process and how the analogue processes are inadequate for the digital age (e.g. unique identifiers for copyright works).

Sir Richard Hooper


The primary focus of the Copyright Hub, according to Sir Richard, is to enable high volume – low value transactions, (e.g. to search, license and use copyright works legally) by individuals and SMEs. The top tier content players already have dedicated resources for such activities hence they’re not a primary target of the Copyright Hub, but they’ll also benefit by removing the need to deal with trivial requests for licensing individual items (e.g. to use popular songs for wedding videos on YouTube).

Next phase work, and other challenges, for the Copyright Hub include: enabling consumer reuse of content, architectures for federated search, machine to machine transactions, orphan works registry & mass digitisation (collective licensing), multi licensing for multimedia content, as well as the need for global licensing. Some key messages and quotes in the ensuing Q&A include:

  • “the Internet is inherently borderless and we must think global licensing, but need to walk before we can run”
  • “user-centricity is key.  People are happy not to infringe if easy / cheap to be legal”
  • “data accuracy is vital, so Copyright Hub is looking at efforts from Linked Content Coalition and Global Repertoire Database”
  • “Metadata is intrinsic to machine to Machine transactions – do you know it is a crime to strip metadata from content?”
  • “Moral rights may add to overall complexity”

As you can probably see from the above, this one day event delivered the goods and valuable insights to the audience, which included people from the creative / content industries, as well as technologists, legal practitioners, academics and government agencies. Kudos to MusicAlly, the event organiser, and to Bill Rosenblatt, (conference chair), for a job well done.

Next Stop: I’ll be discussing key issues and trends in the Digital Economy and Law at a 2-day event, organised by ACEPI, in Lisbon. Watch this space.

The Startup Kids

May 30, 2013

Digital innovation is becoming the norm for young startups these days, and the resulting shift in culture and attitude that comes along with it is now pervasive in the Silicon valleys, alleys, glens, and roundabouts of this world. However, this wasn’t always the case, and it only takes a good documentary to show just how far things have moved on from the days of Steve Jobs and Bill Gates to the current crop of digital wunderkinds.

 

BCSStartupKids


 

Early this month, I attended a screening of The Startup Kids, a documentary film about said young digital startups, courtesy of the BCS Entrepreneurs specialist group, and I wrote a review of it here. Suffice it to say that the cast of subjects interviewed in this hour-long film reads like a who’s who of young digital entrepreneurs, and included founders of such popular services as Vimeo, SoundCloud, Kiip, inDinero, Dropbox and Foodspotting, to name a few. The topics covered include what it takes to be a real digital entrepreneur (words like obsessive, passionate and workaholic come to mind), and why only the smart, flexible and incredibly lucky few ever make it all the way. All in all, it was a really good and insightful documentary.

Thanks to the BCS Entrepreneurs, and the Innovation Warehouse, for hosting this fun event, and here’s hoping for more such events in the future.

Copyright And Technology 2012 Conference

June 20, 2012

Yesterday saw the first UK edition of this annual conference, which took place at the King’s Fund venue in London. The full-day conference featured panels and expert speakers on that most interesting, challenging and potentially lucrative junction of copyright, content and technology. And, another buzzword for the ‘social’ melting pot – Social DRM!

Copyright And Technology Conference Word Cloud


The event format involved the usual keynotes and plenary sessions, during the morning segment, and a split into two streams, (covering technology and legal aspects), in the afternoon. My key take-aways include:

  1. User education on copyright content infringement is far too one-sided. According to expert copyright lawyer, Andrew Bridges, potential infringers / fans need ‘credible teachers’ with a more balanced agenda
  2. Traditional Hollywood release window is under threat (from user demand for content, here and now!)
  3. Piracy data collection / analysis is increasingly used by big content owners (e.g. Warner Bros and Harper Collins) to identify potential demand for specific content via pirate channels. An interesting question from conference chair Bill Rosenblatt, on whether content providers saw any potential for combining piracy data collection / analysis with social media buzz analysis (perhaps to help identify new market opportunities), remained mostly unanswered
  4. Media monitoring organisations can collect and analyse (with consumers’ permission) actual usage data from user computers. According to the speaker from Warner Bros, their research apparently confirms claims that HADOPI has had an impact, with a recent decline in peer-to-peer file-sharing in France.
  5. According to MarkMonitor, a high proportion of pirated ebook content is in PDF format, which some think may be a result of easy portability between devices. Also, according to the Harper Collins speaker, key motivational factors for ebook piracy include pricing, DRM and territorial restrictions.
  6. In the Technology stream, the panel on content identification (e.g. via fingerprinting vs. session based watermarking) discussed creation of content aware ecosystems using Automatic Content Recognition
  7. The term ‘Social DRM’ (a buzzword if I ever heard one) refers to the use of user information to uniquely identify digital content (and to potentially name and shame file sharers), as described by the CEO of Icontact. One attendee grilled the presenter about ways and means to crack it! Apparently, the term Social DRM was coined by Bill McCoy at Adobe (now at IDPF), and is really just watermarking content with personally identifiable information
  8. Bill Rosenblatt described LCP (Lightweight Content Protection) for ePub as sitting somewhere in the middle of the content protection continuum (i.e. between no DRM and very strong DRM). He also observed that the publishing industry’s stance on DRM is still in flux: genres such as sci-fi, romance and IT are mainly going DRM-free, whilst others, e.g. higher education, still use strong DRM to protect content
  9. Finally, my technology stream panel session on Security Challenges of Multi-Platform Content Distribution saw key contributions from experts, with multiple perspectives, from: a Security Consultant (Farncombe), DRM Provider (Nagra), Business PoV (Castlabs) and Content Provider / Owner (Sony Picture Entertainment).
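On the ‘Social DRM’ idea in item 7 above, a minimal sketch (my own illustration, with a hypothetical publisher key and buyer) shows the principle: stamp the content with buyer-identifying information plus a tamper-evident signature. As that attendee’s ‘crack it’ question implies, such a mark is trivially strippable, which is exactly its weakness:

```python
import hmac
import hashlib

SECRET = b"publisher-signing-key"  # hypothetical publisher secret

def watermark(text, buyer_id):
    """Stamp content with the buyer's identity plus a tamper-evident HMAC tag."""
    tag = hmac.new(SECRET, buyer_id.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{text}\n[Licensed to: {buyer_id} / {tag}]"

def identify(marked_text):
    """Read back the buyer identity and check the tag still matches."""
    footer = marked_text.rsplit("[Licensed to: ", 1)[1].rstrip("]")
    buyer_id, tag = footer.split(" / ")
    expected = hmac.new(SECRET, buyer_id.encode(), hashlib.sha256).hexdigest()[:16]
    return buyer_id, hmac.compare_digest(tag, expected)

marked = watermark("Chapter 1 ...", "alice@example.com")
print(identify(marked))  # ('alice@example.com', True)
```

Real schemes hide the mark steganographically throughout the content rather than appending a visible footer, but the deterrent is social rather than technical: nothing stops a determined sharer from removing the mark before distribution.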

Overall, this was a very good first outing for the Copyright and Technology conference in London. The co-organisers, GiantSteps and MusicAlly, did a great job to pull it off, despite the disappointment of a last-minute keynote cancellation by the HADOPI Secretary General. I would certainly encourage anyone interested in the opportunities and challenges of content, technology and copyright to attend this conference in future. And yes, Social DRM is my new buzzword of the month!

An IP System Fit for the 21st Century

Last week, I attended a breakfast meeting at the House of Commons to discuss and reflect on practical issues around implementing recommendations of the Hargreaves Report, as well as ways in which the IP system can be evolved to better enable the benefits from 21st Century business and technology opportunities.

UK House of Parliament


This event, organised by the Industry and Parliament Trust, featured brief talks by Professor Ian Hargreaves (author of the IP Review report & recommendations – download it here), Ben White (Head of IP at the British Library), and Nico Perez (co-founder of startup, MixCloud), plus Q&A style discussions with the attending group of politicians and business people from relevant industries. Some key observations and comments are:

  • London has the largest cluster of IP related start-ups, as well as the biggest hub for VCs, in Europe
  • There has been a lot of international interest in the Hargreaves report and recommendations (the good professor regularly gets calls from interested observers across the globe). Also, the review findings and recommendations had good traction with the UK government.
  • Digital economy versus creative economy; are they one and the same (i.e. is there and/or should there really be a difference)?
  • The larger creative industry players (e.g. publishers), and their lobbyists, are not in full agreement with the review findings and / or recommendations, and remain firmly resistant to change
  • According to one attendee, the interests of creative stakeholders (e.g. content creators) were not well represented or served by the review findings and recommendations
  • Collecting societies act like de facto monopolies, which can make life difficult for some more innovative start-ups
  • Broadcast TV players are trying to innovate and catch up with what consumers are already doing in their homes, but the current IP system is not sufficiently geared towards enabling such initiatives.

Note: Further information, comments and observations can be found in the IPT blog post about this event.

The upshot of the above points, in my opinion, is that a new / evolved IP system must be geared towards dual targets: to simplify and to facilitate the use and reuse of IP works, especially in the digital realm. Such a focus would undoubtedly go a long way towards addressing the legion of non-technological challenges faced by most innovators, entrepreneurs and investors in the creative digital industries. For example, according to an article (see: The Library of Utopia) published by MIT Technology Review, “the major problem with constructing a universal library nowadays has little to do with technology. It’s the thorny tangle of legal, commercial, and political issues that surrounds the publishing business.”

These are pretty much the same issues to be found in similar ventures within publishing and other major creative industries, e.g.: Music (think cross border licensing for the much vaunted Celestial Jukebox), or a global film and image library (e.g. a mash-up of Hulu, Netflix, Corbis and Getty Images). In all cases, technology is not the stumbling block, because the bigger challenges lie with any combination of: business strategy, commercial models, legal / political / cultural mindsets, encountered along the way.

Having said that, it can be argued that such hurdles are not sustainable, for various reasons, not least of which is that individuals (or customers, casual pirates, consumers, freetards etc. – take your pick) are already way ahead of the curve in terms of digital content / technology, and will often use it exactly as they see fit.

This means that established incumbent players in the creative industries are forever playing a reactive / catch-up game, instead of pursuing or encouraging discovery of the next big thing. As a result, most disruptive propositions will invariably have a high impact on established business models, especially if and when they harness the natural instincts of individual users. An interesting example could be the recently launched Google Drive, complete with built-in OCR capability (which will enable users to digitize and search scanned content). Could this ultimately lead to a user generated version of Google Books?

To conclude, an IP system worthy of the 21st century is an urgent necessity, but there is also pressing need to keep in mind the big picture, which is that the Internet is a global enabler / platform, therefore any new IP system must likewise be global in scope. The UK, with its wealth of creative talent, plus such efforts as the IP review and recommendations, may be in a unique position to provide some leadership on the best way forward for IP in this 21st century.

Who needs a Digital Copyright Exchange?

January 12, 2012

I was kindly invited to attend a ‘narrow table’ discussion session about the key challenges facing innovation and startups when dealing with a copyright system that is clearly not fit-for-purpose in an increasingly digital world.

This event was organised by The Coalition for a Digital Economy (Coadec) and took place yesterday evening at the TechHub, in the heart of London’s TechCity and the fabled ‘Silicon Roundabout’.

Silicon Roundabout
London’s “Silicon Roundabout”*

This session focused on teasing out the real needs (and supporting evidence thereof) for a Digital Copyright Exchange, as recommended in the Hargreaves report, which would help to address key challenges facing UK innovation and entrepreneurship in the world of digital. This is part of the diagnostic phase of an independent feasibility study led by Richard Hooper.

Attendees included entrepreneurs and start-ups (in music and other digital media) as well as participants from the publishing, legal, academic, public sector, and consulting industries. Highlights from the discussions include:

  1. Academic publishing – e.g. universities are effectively double-charged for academic works: they fund the research and provide the content free to the publisher, then pay again for the published work
  2. Costly clearance – e.g. according to one attendee, the British Library’s Sound Archive proportionally spent more time negotiating and clearing rights for the materials than on creating the archive itself.
  3. Orphan works – DCE could provide a useful mechanism for managing orphan works.
  4. Small / Medium Scale Enterprises – SMEs and startups experience the most difficulty with licensing, especially as they lack the resources and money to go through the hoops in negotiating with rights owners. E.g. the lack of a clear and comprehensive licensing system hampers start-ups in establishing their business models (this is particularly acute with music streaming services)
  5. Price versus value – Collecting societies may not have the right pricing models for music content. E.g. on-demand streams are considered more expensive than scheduled streams or downloads.
  6. Physical versus digital copyright – The old world approach of counting instances of works for remuneration does not translate well for digital copyright and new usage scenarios
  7. Rights owners are scared – they don’t wish to make the wrong decision and risk cannibalising their existing business
  8. Software Licensing – The DCE should also extend to include software and software licensing
  9. Navigation – This is a cross industry issue with copyright. A single platform approach to cover all licensing needs would be great as this would provide a single point of reference for information and guidance for users
  10. Government copyright – It was suggested that government owned IP (e.g. ordnance survey data, census, land or electoral register data) should be covered by the DCE
  11. Social Media Data – Increasing use of social media data streams for powering new applications makes it a crucial element for future services which will need addressing, sooner or later, perhaps in the DCE.

The above are only a few of the sentiments expressed on the day, and attendees were encouraged to send in their responses to the call for evidence as soon as possible.

Overall, this was a very informative session which seems to confirm something I’ve often stated: the key role of any new digital copyright mechanism should be to simplify and facilitate the use of copyright material both within and outside the digital environment. If the Digital Copyright Exchange adopted these as its guiding principles, it would go a long way towards ensuring successful outcomes and delivery of the promised benefit of over £2 Billion to the UK economy.

———-

*Note: Image adapted from – Original Image © Copyright Nigel Chadwick and licensed for reuse under this Creative Commons Licence.

The ISP Dilemma Continues

December 14, 2011

Some time ago I wrote a post about the challenges facing Internet Service Providers (ISPs) over whether they can afford to be the police of the Internet, with respect to helping find and stop persistent abuse of content, and other illegal online activities by their users. This is still a serious issue today, particularly in light of the cloud, hence the urge to revisit that post here.

The biggest challenge then was around the growing perception of ISPs as de-facto gatekeepers of the Internet, which effectively added another layer of complexity to their traditional / core business. As a result, not only do ISPs have to deal with existing and non-trivial issues (e.g. declining markets, convergent evolution via multi-play business models, and issues around increasing broadband / bandwidth consumption), they also have to contend with the fact that:

  • Content owners still want ISPs to play a more central role in preventing, detecting, monitoring and punishing illegal file sharing (e.g. via schemes like the infamous three strikes proposal).
  • Various initiatives by governments around the world, such as the UK’s Digital Economy Act, have been put in place to provide much-needed governance and teeth for monitoring and combating illegal activities, including copyright infringement.
  • There are also still signs of a lack of trust among ISP customers over service quality / charges, and potential invasions of privacy

These all add up to a severe headache for ISPs, and may be made even worse when you throw cloud services into the mix. Some of the options, or combinations thereof, that ISPs have used or considered using to deal with these key challenges include:

  • Targeted advertising schemes – preferably via opt-in models, as a way to help subsidise the cost of service; in some cases even extending to much cheaper or even “free” access in exchange for usage information, of course.
  • Industry self regulation – Still not easy to do, but one that would benefit the entire industry, and help address the pressures from content owners
  • Network Controls – Invest in better ways to track, monitor and control or “shape” network traffic, in order to deliver better quality of service, promote fair use, and support law enforcement
  • Partner with content owners – To explore new and more flexible content business models. For example, one survey found that music fans might actually prefer ISPs as their music supplier. However, the subsequent advent of cloud-based music and streaming services may have changed that landscape somewhat.

In any case, it is still advisable for ISPs to bear in mind the following three points in trying to deal with this dilemma:

  1. Do not alienate or irritate the customer – protecting the customer relationship and keeping their trust is still key to future success
  2. Resist excessive external pressures – Content owners need ISPs as much as ISPs need them, and perhaps even more so
  3. Take the initiative – ISPs should be more proactive in creating customer-pleasing, regulator-friendly propositions and business models (perhaps by working closely with consumers and content owners)

Overall, there is no easy way to slow down the natural evolution of the Internet, and cloud services, therefore ISPs need to do more to understand, evolve and embrace what is really a critical niche in the digital content ecosystem. The cloud is here for all, and it is here to stay.


Note: This post is brought to you in partnership with Intel(R) as part of the “Technology in tomorrow’s cloud & virtual desktop” series. For more information please click – HERE