Archive

Posts Tagged ‘Technology’

Copyright, Blockchain, Technology and the State of Digital Piracy

January 15, 2017
The next installment of one of my favourite conferences on copyright and technology is right around the corner, on January 24th in NYC, and as usual it promises some interesting debate, controversy and hot-off-the-press insights into the murky world of copyright business, technology and legislation. Plus, this year it also features a panel on the game-changing technology of blockchain and its myriad disruptive applications across entire industries, including copyright and the creative industries.

Thankfully, the inclusion of this panel session recognises the never-ending role of new and innovative technologies in shaping the evolution of copyright. Ever since that first mass-copy technology (i.e. the printing press) raised questions of rights ownership and due recompense for works of the mind, new technologies for replicating and sharing creative content have driven the wheel of evolution in this area. Attendees will doubtless benefit from the insight and expertise of this panel of speakers, as well as moderator and Program Chair Bill Rosenblatt, who questioned in a recent blog post the practicality, relevance and usefulness of blockchain in a B2C context for copyright. You are in for a treat.

This is a very exciting period of wholesale digital transformation, and as I have mentioned once or twice in previous articles and blog posts, the game is only just beginning for potential applications of blockchain, cryptocurrencies, smart licences and sundry trust mechanisms in the digital domain. In an age of ubiquitous content and digital access, the focus of copyright is rightfully shifting away from copying and towards the actual usage of digital content, which brings added complexity to an already complex and subjective topic. It is far too early to tell whether blockchain can provide a comprehensive answer to this challenge.
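To make that shift towards usage a little more concrete, here is a minimal, purely illustrative sketch (in Python, not tied to any real blockchain platform or smart-licence standard; every field name below is invented) of how a hash-chained ledger could record and verify usage events against a licensed work:

```python
import hashlib
import json
import time

def record_usage(chain, work_id, user_id, action):
    """Append a usage event, linking it to the previous entry by hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {
        "work_id": work_id,      # identifier of the copyright work
        "user_id": user_id,      # who used it
        "action": action,        # e.g. "stream", "remix", "embed"
        "timestamp": time.time(),
        "prev_hash": prev_hash,  # link to the previous ledger entry
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return entry

def verify(chain):
    """Check that no earlier usage record has been tampered with."""
    for i, entry in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if entry["prev_hash"] != expected_prev:
            return False
        body = {k: v for k, v in entry.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != entry["hash"]:
            return False
    return True

ledger = []
record_usage(ledger, "WORK-0001", "user42", "stream")
record_usage(ledger, "WORK-0001", "user42", "remix")
print(verify(ledger))  # True unless an earlier entry has been altered
```

The point is not the cryptography but the idea: each use of a work becomes a verifiable, tamper-evident transaction rather than an untracked copy.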

The Copyright and Technology conference series has never failed to provide thought-provoking insights and debates driven by expert speakers across multiple industries. In fact, I reconnected recently with a couple of previous speakers, Dominic Young and Chris Elkins, who are both still very active, informed and involved in the copyright and technology agenda. Dominic, ex-CEO of the UK’s Digital Catapult, is currently working on a hush-hush project that could transform the B2C transaction space. Chris is co-founder of Muso, a digital anti-piracy organisation which has successfully secured additional funding to expand its global footprint with innovative approaches to anti-piracy. For example, if you ever wondered which countries are most active in media piracy, look no further than Muso’s big-data-based state of digital piracy reports. Don’t say I never tell you anything.

In any case, I look forward to hearing attendees’ impressions of the Copyright and Technology 2017 conference, which I’m unable to attend or participate in this time, unfortunately. In the meantime, I’ll continue to spend my spare time, or whatever brain capacity I have left, on pro-bono activities that allow me to meet, mentor, coach and advise some amazing startups at the dynamic intersection of IP, business and technology. More on that in another post.

Copyright and technology: glass half full or half empty?

October 11, 2014
Following on from my last post about IP and the Digital Economy, I’d like to focus this one on the evolving role of copyright in the digital economy. What are the key recent developments, trends and challenges to be addressed, and where are the answers coming from? Read on to find out.
Where better to start than the recent Copyright and Technology 2014 London conference, where both audience and speakers included key players at the intersection of copyright, technology and the digital economy. As you can probably imagine, such a combination made for great insights and debate on the role, trends and future of copyright and digital technology. Some key takeaways include:
  • The copyright yin and technology yang – Copyright has always had to change and adapt to new and disruptive technologies (which typically impact the extant business models of the content industry), and each time it usually emerges even stronger and more flexible – the age of digital disruption is no exception. As my 5-year-old would say, “that glass is half full AND half empty”
  • UK Copyright Hub – “Simplify and facilitate” is a recurring mantra on the role of copyright in the digital economy. The UK Copyright Hub provides an exchange that is predicated on usage rights. It is a closely watched example of what is required for digital copyright and could easily become a template for the rest of the world.
  • Copyright frictions still a challenge – “Lawyers love arguing with each other”, but they, and the excruciatingly slow process of policy making, have introduced a particular friction to copyright’s digital evolution. The pace of digital change has increased while policy making has slowed down, perhaps because there are now more people at the party.
  • Time for some new stuff – Copyright takes the blame for many things (e.g. even the normal complexity of cross-border commerce). Various initiatives, including SOPA and PIPA, the Digital Economy Act, Hadopi and New Zealand’s three-strikes regime, have stalled or been drastically cut back. It really is time for new stuff.

Source: Fox Entertainment Group

  • Delaying the “time to street” – Fox described their anti-piracy efforts in relation to film release windows, in an effort to delay the “time to street” (aka pervasive piracy). These and other developments, such as fast-changing piracy business models, or the balance between privacy and piracy and the enabling technologies (e.g. Popcorn Time, anonymising proxies, cyberlockers etc.), have added more fuel to the fire.
  • Rights languages & machine-to-machine communication – Somewhat reminiscent of efforts to use big data and analytics to extract insight from structured and unstructured data sources. Think Hadoop-based rights translation and execution engines (see the sketch after this list).
  • The future of private copying – The UK’s copyright exceptions now allow individuals to make private copies of content they own. Although this may seem obvious, it has provoked fresh comments from content industry types and other observers, e.g.: when will technology remove the need for people to make private copies? And what about the issues around keeping private copies in the cloud or in cyberlockers?
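As promised above, here is a small, hypothetical sketch of what a machine-readable rights expression might look like and how another machine could evaluate it. It is not based on any particular rights language (standards such as ODRL exist for this), and every field name below is invented purely for illustration:

```python
from datetime import date

# A hypothetical machine-readable rights expression for a single work.
rights_expression = {
    "work_id": "WORK-0001",
    "permissions": [
        {"action": "display", "territories": ["GB", "IE"], "expires": "2025-12-31"},
        {"action": "print",   "territories": ["GB"],       "expires": "2024-06-30"},
    ],
}

def is_permitted(expression, action, territory, on_date):
    """Return True if the requested use is covered by the expression."""
    for perm in expression["permissions"]:
        if (perm["action"] == action
                and territory in perm["territories"]
                and on_date <= date.fromisoformat(perm["expires"])):
            return True
    return False

# A licensing system (or another machine) could then answer usage queries:
print(is_permitted(rights_expression, "display", "IE", date(2025, 1, 15)))  # True
print(is_permitted(rights_expression, "print",   "IE", date(2024, 1, 15)))  # False
```

The translation and execution engines mentioned above would, in effect, do this kind of evaluation at scale across millions of works and usage requests.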

Mutant copyright techie-lawyer

In conclusion, and in light of the above gaps between copyright law and technology, I’ve decided that I probably need to study and become a mutant copyright techie-lawyer in order to help things along – you heard it here first. Overall, this was another excellent event, with lots of food for thought, some insights and even more questions (when aren’t there?), but what I liked most was the knowledgeable mix of speakers and audience at this year’s event, and I look forward to the next one.

Business Transformation at the Open Group Conference

December 9, 2013

The last Open Group Conference in London provided an opportunity to hear about the latest developments in Health, Finance and eGovernment. It also featured major milestones for the Open Group, e.g. the successful conclusion of the Jericho Forum (on de-perimeterised security) and the rise of Platform 3.0 (aka Digital). Read on for some highlights and headlines from the event.

Open Group London

eGovernment – According to one keynote speaker, the transition towards eGovernment is reflected in growing demand for the IT industry to help implement or enable major initiatives such as open data, global tax information exchange, and an enterprise architecture (plus supporting data structures) to cover all human endeavour. The Global Risks 2013 report illustrates pressing issues to be addressed by world leaders, particularly in the G8 and G20 countries, which together represent 50% – 95% of the global economy. Some IT-enabled scenarios, such as massive disinformation and the dangers of starting “digital wildfires in a hyperconnected world”, illustrate the hurdles that need to be overcome with vital input from the IT industry. According to one attendee, “…government is just the back office for the global citizen”. Overall, these initiatives are aimed at connecting governments, enabling better information exchange, and providing much-needed support for an emerging global citizen.

Platform 3.0 – The conference provided updates on Platform 3.0 (aka the Open Group’s approach to Digital). Andy Mulholland (ex Global CTO at Capgemini) set the scene in his keynote speech by discussing the real drivers for change and their implications, the emerging role of business architecture and innovation, and the Platform 3.0 approach to Digital. Subsequent sessions summarised activities outlining key principles (and requirements) for Platform 3.0, including: the role of the IT organisation in managing digital (i.e. brokering anywhere / anytime transactions), Inside-Out vs. Outside-In approaches to interaction, and the challenge for Enterprise Architects to acquire key skills in organisational change and behaviours in order to remain relevant.

eHealth – Several sessions were dedicated to the trends and impact of technology on healthcare. Topics discussed included Big Data in healthcare and the growth in smartphone and smart-device capabilities for healthcare. Also discussed were:

  • Shrinking R&D budgets leading to collaborative efforts (e.g. Pistoiaalliance.org)
  • The explosion of health-monitoring services and offerings, e.g. self-help health websites, bio-telemetry wristbands etc.
  • Personalized Ambient Monitoring (PAM) of mentally ill patients, using multiple devices and algorithms. Apparently 1 in 4 people in the UK will experience some form of mental illness within the year.
  • Unobtrusive Smart Environment for Independent Living (USEFIL), aimed at senior citizens
  • Trends in life logging (e.g. quantified self and life slices), heading towards embedded or implanted devices (e.g. digestible RFID chips)
  • IPv6 and the ubiquity of information points – ID management for tomorrow will involve a surfeit of personal data.

However, key challenges discussed included privacy issues around the collection and storage of, and access to, personal health information. Also, who will monitor all the data gathered from sensors, monitoring and activation across the Internet of Things for healthcare?

Innovation – These sessions focused on various aspects of future technology trends and innovation. They featured speakers from KPN, IBM, Inspired and Capgemini (i.e. yours truly), discussing:

  • Smart technologies (e.g. smart grid) and interoperability constraints, plus the convergence of business and technology and the fuzzy boundaries of “outside-in” versus “inside-out” thinking
  • New technology architecture opportunities to leverage world-changing developments such as semantics, nanotechnology, 3D printing, robotics and the Internet of Things, overlaid with exponential technologies (e.g. storage / processing power / bandwidth) and the network effect
  • The effects of Mobile and Social vs. traditional MDM, plus emerging trends for incorporating new dynamic data (sentiment analysis, IoT sensors, plus deep / dark data)
  • The use of big data to enable the social enterprise, via a smarter workforce, innovation and gamification
  • A case study of Capgemini’s internal architecture and innovation work stream – illustrating key organisational trends and cross-sector innovation, plus the challenges of internal innovation and the emerging role of business model innovation and architecture

As you can probably surmise from the above, this multi-day conference was jam-packed with information, networking and learning opportunities. The Open Group’s tradition of holding events in the great cities of the world (e.g. this one took place just across the road from the UK Houses of Parliament) effectively brings the latest industry thinking and developments to your doorstep, and is highly commendable. Long may it continue!

Copyright and Technology in 2013

November 18, 2013

Last month’s conference on copyright and technology provided plenty of food for thought from an array of speakers, organisations, viewpoints and agendas. Topics and discussions ran the gamut from the increasingly obvious (“business models are more important than technology”) to the downright bleeding edge (“hypersonic activation of devices from outdoor displays”). There was something to take away for everyone involved. Read on for highlights.

The Mega Keynote interview: Mega’s CEO Vikram Kumar discussed how the new and law-abiding cloud storage service is proving attractive to professionals who want to use and pay for the space, security and privacy that Mega provides. This is a far cry from the notorious MegaUpload, and founder Kim Dotcom’s continuing troubles with charges of copyright infringement, but there are still questions about the nature of the service – e.g. the end-to-end encryption approach which effectively makes it opaque to outside scrutiny. Read more about it here.

Anti-piracy and the age of big data – MarkMonitor’s Thomas Sehested talked about the rise of data / content monitoring and anti-piracy services in what he describes as the data-driven media company. He also discussed the demise of content release windows, and how the mass, immediate release of content across multiple channels lowers piracy, while questioning whether it is more profitable.

Hadopi and graduated response – Hadopi’s Pauline Blassel gave an honest overview of the impact of Hadopi, including evidence of some reduction in piracy (from 6M down to 4M) before stabilisation. She also described how this independent public authority delivers graduated response in a variety of ways, from raising awareness to imposing penalties, focusing primarily on what is known as PUR (promotion des usages responsables, i.e. promoting responsible use).

Auto Content Recognition (ACR) and the 2nd screen – ACR is a core set of tools (including DRM, watermarking and fingerprinting), and the 2nd-screen opportunity (at least for broadcasters) is all about keeping TV viewership and relevance in the face of tough competition for people’s time and attention. This panel session discussed the monetisation of second-screen applications, and the challenge that TV regulation is pervasive and country-specific. Broadcast rights law is aimed at protecting broadcast signals, yet it is the broadcast signal that triggers the 2nd-screen application (e.g. via ambient, STB or EPG-based recognition). This raises the question of what regulation should apply to the 2nd screen, and which rights apply. For example, ads on TV can be replaced on the 2nd screen, but what are the implications?
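For illustration only, the sketch below shows the basic shape of an ACR-triggered 2nd-screen flow: capture a snippet of ambient audio, fingerprint it, look it up in a broadcaster-supplied index and choose the companion content. Real ACR systems use robust perceptual fingerprints that survive noise and compression; the plain hash and all names here are stand-ins:

```python
import hashlib

# Reference index built by the broadcaster: fingerprint -> programme segment.
# In practice the fingerprint would be a perceptual hash of the audio,
# tolerant of noise and compression; SHA-256 is only a stand-in here.
reference_index = {
    hashlib.sha256(b"theme-tune-segment").hexdigest(): ("QuizShow", "opening-titles"),
    hashlib.sha256(b"ad-break-jingle").hexdigest(): ("QuizShow", "ad-break-1"),
}

def recognise(ambient_audio: bytes):
    """Fingerprint captured audio and look it up in the reference index."""
    fingerprint = hashlib.sha256(ambient_audio).hexdigest()
    return reference_index.get(fingerprint)

def second_screen_action(ambient_audio: bytes) -> str:
    """Decide what the companion app should display for this segment."""
    match = recognise(ambient_audio)
    if match is None:
        return "show default programme guide"
    programme, segment = match
    if segment.startswith("ad-break"):
        return f"show interactive ad companion for {programme}"
    return f"show live play-along content for {programme}"

print(second_screen_action(b"theme-tune-segment"))  # play-along content
print(second_screen_action(b"unknown-audio"))       # default programme guide
```

The regulatory questions raised by the panel sit precisely in that last step: the broadcast signal is protected, but the companion content it triggers may not be covered by the same rights or rules.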

Update on the Copyright Hub – The Keynote address by Sir Richard Hooper, chair of the Copyright Hub and co-author of the 2012 report on Copyright Works: Streamlining Copyright Licensing for the Digital Age, was arguably the high point of the event. He made the point that although there are issues with copyright in the digital age, the creative industries need to get off their collective backsides and streamline the licensing process before asking for a change in copyright law. He gave examples of issues with the overly complex educational licensing process and how the analogue processes are inadequate for the digital age (e.g. unique identifiers for copyright works).

Sir Richard Hooper

The primary focus of the Copyright Hub, according to Sir Richard, is to enable high-volume, low-value transactions (e.g. to search, license and use copyright works legally) by individuals and SMEs. The top-tier content players already have dedicated resources for such activities, hence they’re not a primary target of the Copyright Hub, but they’ll also benefit because it removes the need to deal with trivial requests to license individual items (e.g. to use popular songs in wedding videos on YouTube).

Next-phase work, and other challenges, for the Copyright Hub include: enabling consumer reuse of content, architectures for federated search, machine-to-machine transactions, an orphan works registry and mass digitisation (collective licensing), multi-licensing for multimedia content, as well as the need for global licensing. Some key messages and quotes from the ensuing Q&A include:

  • “the Internet is inherently borderless and we must think global licensing, but need to walk before we can run”
  • “user-centricity is key.  People are happy not to infringe if easy / cheap to be legal”
  • “data accuracy is vital, so Copyright Hub is looking at efforts from Linked Content Coalition and Global Repertoire Database”
  • “Metadata is intrinsic to machine to Machine transactions – do you know it is a crime to strip metadata from content?”
  • “Moral rights may add to overall complexity”

As you can probably see from the above, this one-day event delivered the goods and valuable insights to the audience, which included people from the creative / content industries, as well as technologists, legal practitioners, academics and government agencies. Kudos to MusicAlly, the event organiser, and to Bill Rosenblatt (conference chair) for a job well done.

Next stop: I’ll be discussing key issues and trends in Digital Economy and Law at a two-day event, organised by ACEPI, in Lisbon. Watch this space.

The Open Group Conference

July 21, 2012

This week’s quarterly Open Group conference in Washington DC featured several thought-provoking sessions around key issues and developments of interest and concern to the IT world, including: Security, Cloud, Supply Chain, Enterprise Transformation (including Innovation), and of course Enterprise Architecture (including TOGAF and ArchiMate).

The Capitol in Washington DC

Below are some key highlights, captured from the sessions I attended (or presented), as follows:

Day 1 – Plenary session focused on Cyber Security, followed by three tracks on Supply Chain, TOGAF and SOA. Key messages included:

  • The keynote by Joel Brenner described the Internet as a “porous and insecure network” which has become critical for so many key functions (e.g. financial, communications and operations) yet remains vulnerable to abuse by friends, enemies and competitors. The best quote of the conference was: “The weakest link is not the silicon based unit on the desk, but the carbon based unit in the chair” (also tweeted and mentioned in @jfbaeur’s blog here)
  • NIST’s Dr. Don Ross spoke about a perfect storm of consumerisation (BYOD), ubiquitous connectivity and sophisticated malware, leading to an “advanced persistent threat” enabled by available expertise / resources, multiple attack vectors and footholds in infrastructure
  • MIT’s Professor Yossi Sheffi expounded on the concept of building security and resilience for competitive advantage. This, he suggested, can be done by embedding “flexibility DNA” (as exhibited in a few successful organisations) into the culture of your organisation. Key flexibility traits include:
    • Your resilience and security framework must drive, or at least feed into, “business-as-usual”
    • Continuous communication is necessary among all members of the organisation
    • Distribute the power to make decisions (especially to those closer to the operations)
    • Create a passion for your work and the mission
    • Deference to expertise, especially in times of crisis
    • Maintain conditioning for disruptions – stability is good, but the flexibility to handle change is even better
  • Capgemini’s Mats Gejneval discussed agility and enterprise architecture using Agile methods and TOGAF. He highlighted the flow from agile process -> agile architecture -> agile project delivery -> agile enterprise, and how the final outcome requires each of the preceding qualities (e.g. agile methods and faster results on their own will not deliver an agile solution or enterprise). My favourite quote, from the Q&A, was: “…remember that architects hunt in packs!”

Day 2 – Plenary session focused on Enterprise Transformation followed by four streams on Security Architecture, TOGAF Case Studies, Archimate Tutorials, and EA & Enterprise Transformation (including our session on Innovation & EA). Key Highlights include:

  • A case study on the role of open standards for enterprise transformation featured Jason Uppal (Chief Architect at QRS) describing the transformation of Toronto’s University Health Network into a dynamic and responsive organisation, by placing medical expertise and requirements above flexible, open-standards-based IT delivery.
  • A view on how to modernise services to citizens via a unified (or “single window government”) approach was provided by Robert Weisman (CEO of Build a Vision Inc). He described the process of simplifying key events (from some 1,400 down to 12 major life events) around which services could be defined and built.
  • Samira Askarova (CEO of WE Solutions Group) talked about managing enterprise transformation through transitional architectures. She likened business transformation to a chameleon, with its huge, multi-directional eyes (i.e. for long-term views), its camouflage ability (i.e. changing colours to adapt), and its deliberate gait (i.e. making changes one step at a time)
  • The tutorial session on Innovation and EA, by Corey Glickman (Capgemini’s lead for Innovation-as-a-Managed Service) and yours truly, discussed the urgent need for EA to play a vital role in bridging the gap between rapid business model innovation and rapid project delivery (via Agile). It also provided several examples, as well as a practical demonstration of the Capgemini innovation service platform, which was well received by the audience. Key takeaways include:
    • Innovation describes an accomplishment, after the fact
    • EA can bridge the gap between strategy (in the business model) and rapid project delivery (via Agile)
    • Enterprise Architecture must actively embrace innovation
    • Engage with your partners, suppliers, customers and employees – innovation is not all about technology
    • Creating a culture of innovation is key to success
    • Remember, if you are not making mistakes, you are not innovating

Day 3 – Featured three streams on Security Automation, Cloud Computing for Business, and Architecture methods and Techniques. Highlights from the Cloud stream (which I attended) include:

  • Capgemini’s Mark Skilton (co-chair of the Open Group’s Cloud Working Group) talked about the right metrics for measuring cloud computing’s ability to deliver business architecture and strategy. He discussed the complexity of Cloud and its implications for Intellectual Property, as well as the emergence of ecosystem thinking (e.g. ‘ecosystem architecture’ and ‘ecosystem metrics’) for cloud computing and applications
  • A debate about the impact of cloud computing on modern IT organisational structure raised the point that a dysfunctional relationship exists between business and IT with respect to cloud services. The conclusion (and recommendation) was that healthy companies tend to avoid buying cloud services in business silos; instead they pursue a single cloud strategy, in collaboration with IT, which remains responsible for maintenance, security and integration into the enterprise landscape
  • Prakash Rao, of the FEAC Institute, discussed Enterprise Architecture patterns for Cloud Computing. He reiterated the point made earlier about how enterprise architecture can be used to align enterprise patterns (i.e. business models) to development processes. Also that enterprise patterns enable comparison and benchmarking of cloud services in order to determine competitive advantage

 

The bullet items and observations recorded above do not do justice to the breadth and depth of the entire conference, which included networking with attendees from over 30 countries, across all key industries and sectors, plus multiple simultaneous streams, sessions and activities, many of which I could not possibly attend. Overall, this was an excellent event that did not disappoint. Further materials can be found on the Open Group website.

I would recommend the Open Group conference to any professional in IT and beyond.

Supercomputers and the Future

April 19, 2012

Wednesday the 18th of April marked 100 days to the greatest show on earth, along with the promise of even more superlatives, as a direct consequence of the Olympic motto: “Faster, Higher, Stronger”. It certainly made an auspicious date for an event, held at the House of Lords, on the future of supercomputers.

The House Of Lords

The event was The Second Lorraine King Memorial Lecture, sponsored by Kevin Cahill FBCS CITP (author of “Who Owns Britain” and “Who Owns the World”) and superbly hosted by Lord Laird and Computer Weekly. The main topic of debate centred on whether supercomputers are merely “prestige objects or crucial tools in science and industry”.

Figure: (L-R) Kevin Cahill, Prof. Meuer and Lord Laird

The lecture, delivered by supercomputer expert Prof. Dr. Hans Werner Meuer (see CV), was most illuminating, and I gathered, among other things, that the UK ranked 4th in the Top500 list of supercomputer-using countries, and that France was the only European country with any capability to manufacture supercomputers. Clearly more needs to be done by the likes of the UK or Germany to remain competitive in the supercomputing stakes, which raised the question (as posed later by an attendee) of whether these machines are anything more than objects of geopolitical prestige, superiority and / or bragging rights (e.g. my supercomputer is faster than yours, so Nyah-nyah, nyah-nyah nyah-nyah! – or perhaps Na na, na, na, naa! – apologies to the Kaiser Chiefs).

In any case, several things stood out for me at this rather well attended event, including:

  • The definition of a supercomputer remains based on the most powerful or fastest computers at any given point in time; e.g. Apple’s iPad 2 is two-thirds as powerful as the Cray-2 supercomputer from 1986. The typical measure of speed and power is based on sheer numerical processing power (i.e. not data crunching), using the Linpack test (see the quick sketch after this list)
  • According to a paper by the sponsor, Kevin Cahill, the supercomputer sector is the fastest-growing niche in the world of technology, and it is currently worth some $25 billion. Japan, China and the USA currently hold the lead in the highly ego-driven world of supercomputing, but there is an acute shortage of the skills and applications required to make the most of these amazing machines
  • Typical applications of supercomputing include: university research, medicine (e.g. the Human Genome Project), geophysics, global weather and climate research, and transport and logistics. It is used in various industries, e.g. aerospace, energy, finance and defence. More recent applications, and aspirations, include bio-realistic simulations (e.g. the Blue Brain Project), and a shift towards data crunching in order to model and tackle challenges in areas such as social networks and Big Data.
  • The future of supercomputers is to move past the petaflop machines of today to exaflop-capable machines by 2018. The next international conference on supercomputers takes place June 17-21 in Hamburg, Germany, and it promises to include topics on big data, alternative architectures for data crunching, exascale computing, energy efficiency, technology limits and cloud computing for HPC, among other things.
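As referenced in the list above, here is a back-of-the-envelope sketch of what measuring “sheer numerical processing power” looks like: time a dense matrix multiplication (the kind of linear-algebra workload the Linpack benchmark is built around) and convert it into a GFLOP/s figure. This is an illustration, not the official Linpack benchmark:

```python
import time
import numpy as np

def estimate_gflops(n: int = 2048) -> float:
    """Time an n x n matrix multiply and estimate GFLOP/s."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    start = time.perf_counter()
    c = a @ b                    # dense matrix multiplication
    elapsed = time.perf_counter() - start
    flops = 2 * n ** 3           # multiply-adds in a dense n x n matmul
    return flops / elapsed / 1e9

print(f"~{estimate_gflops():.1f} GFLOP/s on this machine")
```

Scale that idea up by many orders of magnitude, across hundreds of thousands of cores, and you have the basis on which the Top500 list ranks machines.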

The Future of Supercomputing (Source: http://www.isc-events.com/slides/london)

Overall, this was an excellent event in a most impressive venue, and the attendees got a chance to weigh in with various opinions, questions and comments, to which the good Professor did his best to respond (including inviting everyone to Hamburg in June to come and see for themselves!). Perhaps the most poignant takeaway of the evening, in my opinion, was the challenge by Lord Laird to the computing industry about a certain lack of visibility, and the need for us to become more vocal in expressing our wishes, concerns and desires to those in power, or at least to those with the responsibility to hold Government to account. As he eloquently put it (paraphrasing slightly), “If we don’t know who you are, or what it is you want, then that is entirely your own fault!”

Publishing, Intellectual Property and Private Equity: A Tale of Three Events

March 3, 2011

It’s not often one gets an opportunity to attend three compelling events in one evening, but as luck would have it, the stars were aligned and I managed to do just that in a mad scramble from one venue to the next. Such are the benefits of living and working in a great city like London, but less so were the thorny issues under debate at each of the three events.

It took a minute to digest and process the various messages from these events, but as promised / tweeted, below are three key points, takeaways or opinions:

1. Publishers must embrace multi-platform models as business-as-usual (Publishing Expo 2011)

It was standing room only at the Multi-Publishing & Digital Strategies Theatre in a packed final session on “the future of multi-platform publishing”. According to one of the speakers, “the bleeding edge of multi-publishing model is one third print, one third digital, and one third live events.”


Publishing Expo 2011: Future of Multi-Publishing

My Comment – Never mind multi-platform, it sounds more like a multi-model approach will be necessary for the entire creative industry, in my opinion.


2. But how do you value Intellectual Property? (IP For Innovation And Growth)

This has to be one of the thorniest questions for IP, because consistent and intelligent valuation of IP is at best confusing, and at worst non-existent. IP is really just an economic mechanism, so a fundamental attribute should be the ability to establish an agreed value for the property in question, but this presents a severe problem because current valuations are highly subjective and always dependent on the buyer’s or seller’s point of view. Throw in the ability to effortlessly copy and distribute works via digital technology, and you get the somewhat muddy picture.

IP Panel at the RSA

The RSA: IP For Innovation and Growth

My Comment – There is a clear opportunity here to create a dynamic and transparent IP valuation model or approach, which can produce the right valuation for IP, based on the buyer / seller relationship and context


3. And does a cash economy make IP any less relevant? (Private Equity Africa)

Apparently, it’s all about cash in Africa, which leads me to wonder if and how global IP will work in a cash economy. This event does not immediately appear to have much in common with the others on IP or the creative industry, and one of the speakers even said afterwards that he considered Intellectual Property in Africa to be, and I quote, “nothing more than intellectual masturbation”. However, when you think of the thriving industry and market for music and filmed entertainment (e.g. Nigeria’s Nollywood), it is easy to see how IP can provide an important boost to developing economies. Therefore, even if there is little point in enforcing IP rights locally, all developing economies should be interested and involved in any discussion relating to global IP rights and digital distribution / piracy.

Private Equity Africa

My Comment – when it comes to content and IP, it is a level playing field as all jurisdictions and stakeholders struggle with the impact of digital technology

Overall, one clear trend I can see emerging from the above is that such tough questions and issues will need even tougher answers and resolutions. For example, they may well point to the same underlying problem – i.e. a flawed and inflexible concept of economic value – but perhaps that is rightly the subject of another blog and blogger.

So what will it be; my ecosystem or yours?

February 12, 2011

It seems to me that anywhere you go these days, there’s bound to be someone dropping that term like it’s going out of fashion. You’ll hear them talk about this ecosystem, or that ecosystem, usually in reference to any number of things from consumer products, business models, IT systems or even personal social networks (I kid you not). So just what is an ecosystem, really?


For one thing, it is an over-used / overloaded term which, according to the Oxford English Dictionary, refers to a biological system comprising all organisms that live in, and interact with, a particular physical environment. This definition is consistent with others from a variety of both lexical and semantic sources, and as a result, one can only conclude that ecosystem is used, by most non-scientific types, as a metaphor to describe similar complex systems.

As buzzwords go, the term “ecosystem” has been around for a while, yet for some reason its use (and abuse) appears to have gained traction with a much wider variety of people, professions and circumstances. For example, it is no longer unusual to hear it from the lips of economists, technologists, consultants, media folk, and even start-ups and their VCs. In a recent panel session at a music/tech seminar, it seemed that each panelist used “ecosystem”, in different contexts and meanings, to answer a single question! Surely it must be time to stop and call an amnesty on such indiscriminate use of the term.

To be fair, there is a certain attraction to using such a rich metaphor to describe certain things, and this perhaps reflects a rather complex, information-rich and often confusing electronic age. The ecosystem concept communicates this complexity rather eloquently, comprising as it does, such intricate components as: environments, niches, food chains, roles, relationships (e.g. specialists, generalists, predators, prey, symbiosis or parasitism), and an idea of balance and equilibrium. As a result, one can easily see a similarity and applicability to modern businesses, (e.g. high-tech or financial systems), which themselves also have a complex set of interacting entities and components including: value chains, webs & networks; IT systems; information flows & controls; as well as various business and revenue models (complete with predators, prey, and mutants with emergent skills e.g. in Internet, social network, or Cloud technologies).

However, there are limitations to the ecosystem metaphor, and perhaps not everything can or should be described in terms of an ecosystem. For example, it is extremely difficult to find anything like true balance or equilibrium in areas such as high technology, business, politics or global economics and finance (don’t even get me started). Furthermore, new and emerging patterns of complex digital interaction, usage and convergence are not yet fully understood, and this is particularly true for content, context, rights and entitlements (e.g. individual privacy). To my mind, this is a clear indication that even complex metaphors like ecosystems may not be rich enough to properly describe the evolutionary fusion of human beings, digital technology and our physical environment; the emergence of Augmented Reality applications is a case in point.

In conclusion, ecosystem is an over-loaded term that is increasingly used by people in business, technology and other fields, to describe complexity. It works well to a large extent, but indiscriminate and uninformed use can only add further confusion and FUD to an already complex situation. It may well be that as people, technology and environment continue to evolve / converge we’re going to need even richer metaphors to describe it all. So next time someone says ecosystem, you might do well to ask: “…my ecosystem or yours?”

Copyright and Technology 2010 Conference

June 23, 2010

Last week’s event on copyright and technology has led me to the conclusion that a long-overdue dialogue is slowly taking place between two vital groups in the digital content economy, i.e. the legal and technology stakeholders. However, it also raised some questions about likely winners and losers in the evolution of a digital content ecosystem.

This inaugural conference took place in New York City, and I was lucky enough to be invited to moderate a panel session on the role and future of DRM and other content protection technologies that inhabit the interface between copyright and technology. Below are some key messages from this event:

  • DRM is not quite dead. If anything it is alive and well, outside of “permanent Internet music downloads”, according to event chairman Bill Rosenblatt in his opening address, which was also a master class on the trajectory of challenges and developments in the battle between copyright and digital technology.
  • “Sopranos level” commercial piracy, as operated by sophisticated, profit-oriented criminal organizations (i.e. not your ordinary file-sharing individual), has become the key focus of attention and anti-piracy efforts by major content owners. According to Viacom’s Stanley Pierre-Louis, organizations like Viacom are making every effort to find the “right balance to take advantage of new platforms whilst protecting IP”.
  • Innovative approaches are critical for video content monetization – For example, Ads are video too, and Google’s Shalini Govil-Pai described how more brands are now using YouTube to ‘prove’ their ads before putting them out via more expensive broadcast TV channels
  • Interoperability is vital. And initiatives like the Digital Entertainment Content Ecosystem (DECE) will help “provide users with a choice of platforms”, according to Mitch Singer (CTO for Sony Pictures Entertainment). However, one notable absentee from this 50+ strong consortium is Apple which operates its own closed content ecosystem.
  • “You can’t monetize what you can’t identify”, therefore correct content identification is a critical element in any monetization effort. Technologies like fingerprinting and watermarking both play a large part in making this happen, but there’s still more work to be done, and implementation can be tough (see the sketch after this list).
  • Progressive Response (aka 3 Strikes) and ISP level monitoring may be flawed – A telling question from speaker Gary Greenstein, (IP lawyer from Wilson Sonsini Goodrich & Rosati) was: “What if you connect to your own music collection over an ISP, would that not be a false positive for copyright infringement”?
  • Rights Management is still a major headache – An observation from my panel session was that although DRM still has a role and future (even if by another name) in digital content monetization, by far the bigger issue for content owners remains the challenge of inadequate rights management for content. This is an area that is actively being addressed by companies like Teradata, SAP and Capgemini which together can deliver even more innovative solutions for IP rights management
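To give a feel for how watermarking differs from fingerprinting (the watermark is embedded into the content itself, rather than derived from it), here is a toy sketch that hides an owner identifier in the least-significant bits of audio samples and reads it back. Production watermarking is far more sophisticated, since it must survive compression, re-encoding and deliberate attack, so treat this purely as a conceptual illustration:

```python
import numpy as np

def embed_watermark(samples: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bits in the least-significant bit of 16-bit samples."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    marked = samples.copy()
    marked[:len(bits)] = (marked[:len(bits)] & ~1) | bits  # overwrite LSBs
    return marked

def extract_watermark(samples: np.ndarray, n_bytes: int) -> bytes:
    """Read the hidden identifier back out of the first n_bytes * 8 samples."""
    bits = (samples[:n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

# One second of fake 16-bit audio at 44.1 kHz, marked with a hypothetical owner ID.
audio = np.random.randint(-32768, 32767, size=44100, dtype=np.int16)
marked = embed_watermark(audio, b"OWNER-0042")
print(extract_watermark(marked, len(b"OWNER-0042")))  # b'OWNER-0042'
```

A fingerprint, by contrast, would be computed from the audio as broadcast and matched against a reference database, with no need to modify the content at all.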

However, there is still a lot of work to be done before copyright and technology can claim to work well together (see my session intro slides). One attendee’s poignant observation highlighted the relatively limited availability of legal content (perhaps as a result of cumbersome content rights models, infrastructure issues or outdated release-window models), versus the widely available but illegal / pirated copies that can be found online, sometimes even before the commercial release of the product!

In conclusion, this was a very useful and timely conference, given the high level of engagement and interaction (including the customary Twitter commentary) between audience and speakers right from the start. It could have done with mixing up the legal and technology streams a bit more, but the overall feedback was positive, and I suspect many attendees, and the entire content industry, would benefit from more of this type of event and dialogue in the future. Hopefully the next one might even be held right here in London – aka the birthplace of modern copyright!

Copyright and Technology

June 15, 2010

These two mostly work hand in hand, especially as a change in one typically brings about a change in the other – think P2P file sharing and the DMCA or the Digital Economy Act. So why is this the case, and will it ever change?

It is hard to think of a scenario where copyright is not tightly related to technology, for the simple reason that even the act of copying (at least at any economically significant scale) is highly dependent on having access to an appropriate copy technology. Interestingly, the UK’s Statute of Anne, which is widely regarded as the first fully fledged copyright law, came into being after the introduction of print duplication technology, aka the printing press. Given where we are today, with ubiquitous and massively available digital duplication and dissemination technologies, copyright has become even more intertwined with everyday technology; from early music players to broadcast receivers and shiny new mobile / media / communication devices. As a result, it has become even more urgent to find ways to make copyright and technology work better together.

These and other pressing topics will be up for debate at the inaugural (and eponymous) Copyright and Technology 2010 conference, which takes place this Thursday in New York City, and yours truly will be there to participate and hopefully get some indication of where things are heading in the near to mid term. The two tracks of this conference are divided equally between the technology and legal aspects of copyright, which should make for some interesting cross-fertilisation of ideas and potential insight into the future of copyright. Watch this space.