Archive
Becoming Salesforce: Beyond Cloud Services
I’ve always maintained (here and here) that a tradition for innovation trumps a mere culture of innovation, hands down. This was clearly demonstrated at a recent boot camp for new joiners to Salesforce in San Francisco. Judging by the frenetic pace of a week-long immersion in all things Salesforce, the work involved in introducing and maintaining the Salesforce ‘Ohana’ culture of innovation is a relentless and never-ending pursuit that is worthy of any tradition.
By all accounts this was a ‘mega’ boot camp event, comprising over 250 new hires from many different countries and regions. Below are my top three takeaways from the event:
1: Ohana and Value Alignment
Salesforce believes passionately in giving back to the local community and included a day-one agenda item for attendees to undertake pro bono work for some of the local charities. After a couple of hours of physical labour, one starts to realise just how seriously Salesforce takes the 1-1-1 pledge (i.e. to contribute one percent of employee time / resources / products to help local communities via charity, education and other worthy causes). As if that wasn’t enough, Chief Adoption Officer Polly Sumner later brought the point home with a passionate talk about how each employee must make it a mission to define their purpose and actively pursue it by aligning with company values and recording it as individual annual objectives. The result: a committed workforce that is empowered to make meaningful and positive contributions as part of their day job and career aspirations. Given such a culture, it is not surprising why and how customer success is the ultimate raison d’être for Salesforce.
2: Change is rapid and constant
Several speakers, over the course of the event, took pains to emphasize the need to adapt and adopt a fast-paced mentality in order to survive and thrive in Salesforce. With three major (as in all the bells and whistles) releases each year, the Salesforce platform and clouds are constantly evolving to become ever faster, smarter and more personalised with each new release. The latest offerings in Analytics (Wave), user experience (Lightning) and Internet of Things (IoT Cloud) are merely a foretaste of what is likely to manifest on such a dynamic platform. If you are inclined to wonder how or why I can say these things, then look no further than the amazing level of talent gathered at the event. Every background was represented, from ex-marines to rocket scientists, ex-McKinsey, Deloitte, IBM and Capgemini consultants, plus key talent from competitors such as Oracle, Microsoft and SAP. The Force is strong in the Ohana.
3: Awesome is more than just a word
I must have counted over one hundred separate utterances of the word ‘awesome’ (including two completely unforced instances by yours truly), but suffice it to say I have yet to come across any organisation where employees seem to be in such awe of their own, er, ‘awesomeness’, for lack of a better word. As part of the boot camp, we were also introduced to all the Salesforce clouds, i.e.: Sales, Service, Marketing, Apps, Community, Analytics and IoT Clouds. What is truly impressive is how they all integrate and work together or separately as per customer requirements. A typical customer pitch kicks off with the inevitable Safe Harbour statement and a thank you to the customer, followed by a description of the new technology, new business and new philanthropic models espoused by Salesforce and how that could be made to work better for the customer. It is indeed a brave new world for cloud services.
Overall, the boot camp delivered an unabashed experience of the Salesforce Ohana culture and, given the number of attendees at this event, there definitely is a strong demand for more talented people with the right experience and mindset to join such a fast growing organisation. Finally, and by all indications, Salesforce is certainly showing the hallmarks of a company with a clear tradition for innovation that is deeply rooted in its values. Long may it continue, and I can’t wait to see what’s next on the ever changing horizon. Mahalo!
Governing the Internet of Things
The Open Group Conference
This week’s quarterly Open Group conference in Washington DC featured several thought-provoking sessions around key issues / developments of interest and concern to the IT world, including: Security, Cloud, Supply Chain, Enterprise Transformation (including Innovation), and of course Enterprise Architecture (including TOGAF and Archimate).

The Capitol in Washington DC
Below are some key highlights, captured from the sessions I attended (or presented), as follows:
Day 1 – Plenary session focused on Cyber Security, followed by three tracks on Supply Chain, TOGAF and SOA. Key messages included:
- Keynote speaker Joel Brenner described the Internet as a “porous and insecure network” which has become critical for so many key functions (e.g. financial, communications and operations) yet remains vulnerable to abuse by friends, enemies and competitors. Best quote of the conference: “The weakest link is not the silicon based unit on the desk, but the carbon based unit in the chair” (also tweeted and mentioned in @jfbaeur’s blog here)
- NIST’s Dr. Don Ross spoke about a perfect storm of consumerisation (BYOD), ubiquitous connectivity and sophisticated malware, leading to an “advanced persistent threat” enabled by available expertise / resources, multiple attack vectors and footholds in infrastructure
- MIT’s Professor Yossi Sheffi expounded on the concept of building security and resilience for competitive advantage. This, he suggested, can be done by embedding “flexibility DNA” (as exhibited in a few successful organisations) into the culture of your organisation. Key flexibility traits include:
- Your resilience and security framework must drive, or at least feed into, “business-as-usual”
- Continuous communication is necessary among all members of the organisation
- Distribute the power to make decisions (especially to those closer to the operations)
- Create a passion for your work and the mission
- Deference to expertise, especially in times of crisis
- Maintain conditioning for disruptions – stability is good, but the flexibility to handle change is even better
- Capgemini’s Mats Gejneval discussed agility and enterprise architecture using Agile methods and TOGAF. He highlighted the relationship flow between: agile process -> agile architecture -> agile project delivery -> agile enterprise, and how the latter outcome requires each of the preceding qualities (e.g. agile methods and faster results on their own will not deliver agile solutions or an agile enterprise). My favourite quote, during the Q/A, was: “…remember that architects hunt in packs!”
Day 2 – Plenary session focused on Enterprise Transformation followed by four streams on Security Architecture, TOGAF Case Studies, Archimate Tutorials, and EA & Enterprise Transformation (including our session on Innovation & EA). Key Highlights include:
- A case study on the role of open standards for enterprise transformation featured Jason Uppal (Chief Architect at QRS), describing the transformation of Toronto’s University Health Network into a dynamic and responsive organisation by placing medical expertise and requirements above flexible, open-standards-based IT delivery.
- A view on how to modernise service to citizens via a unified (or “single window government”) approach was provided by Robert Weisman (CEO of Build a Vision Inc). He described the process to simplify key events (from 1400 down to 12 major life events) around which the services could be defined and built.
- Samira Askarova (CEO of WE Solutions Group) talked about managing enterprise transformation through transitional architectures. She likened business transformation to a chameleon, with its huge, multi-directional eyes (i.e. for long term views), its camouflage ability (i.e. changing colours to adapt), and its deliberate gait (i.e. making changes one step at a time).
- The tutorial session on Innovation and EA, by Corey Glickman (Capgemini’s lead for Innovation-as-a-Managed Service) and yours truly, discussed the urgent need for EA to play a vital role in bridging the gap between rapid business model innovation and rapid project delivery (via Agile). It also provided several examples, as well as a practical demonstration of the Capgemini innovation service platform, which was well received by the audience. Key takeaways include:
- Innovation describes an accomplishment, after the fact
- EA can bridge the gap between strategy (in the business model) and rapid project delivery (via Agile)
- Enterprise Architecture must actively embrace innovation
- Engage with your partners, suppliers, customers and employees – innovation is not all about technology
- Creating a culture of innovation is key to success
- Remember, if you are not making mistakes, you are not innovating
Day 3 – Featured three streams on Security Automation, Cloud Computing for Business, and Architecture methods and Techniques. Highlights from the Cloud stream (which I attended) include:
- Capgemini’s Mark Skilton (Co-chair of the Open Group’s Cloud Working Group) talked about the right metrics for measuring cloud computing’s ability to deliver business architecture and strategy. He discussed the complexity of Cloud and implications for Intellectual Property, as well as the emergence of ecosystem thinking (e.g. ‘ecosystem architecture’ and ‘ecosystem metrics’) for cloud computing and applications
- A debate about the impact of cloud computing on modern IT organisational structure raised the point that a dysfunctional relationship exists between business and IT with respect to cloud services. The conclusion (and recommendation) was that healthy companies tend to avoid buying cloud services in business silos; instead, they pursue a single cloud strategy in collaboration with IT, which remains responsible for maintenance, security and integration into the enterprise landscape
- Prakash Rao, of the FEAC Institute, discussed Enterprise Architecture patterns for Cloud Computing. He reiterated the point made earlier about how enterprise architecture can be used to align enterprise patterns (i.e. business models) to development processes. He also noted that enterprise patterns enable comparison and benchmarking of cloud services in order to determine competitive advantage
The bullet items and observations recorded above do not do justice to the breadth and depth of the entire conference, which included networking with attendees from over 30 countries, across all key industries / sectors, plus multiple simultaneous streams, sessions and activities, many of which I could not possibly attend. Overall, this was an excellent event that did not disappoint. Further materials can be found on the Open Group website, including:
- Event website: http://www.opengroup.org/dc2012
- Live Streams – http://new.livestream.com/opengroup
- Archimate 2.0 Specification (free download) – http://www.opengroup.org/archimate/
- Photo Contest – https://www.facebook.com/theopengroup
I would recommend the Open Group conference to any professional in IT and beyond.
Cloud and emerging economies
With the Earth’s population hovering at the seven billion mark, there is a pressing need for bigger, better, faster, and preferably cheaper, sources and versions of almost everything (e.g. food, energy and even computing power). This need is just as acute in the emerging economies of Africa, South Asia and Latin America, which must rely on more creative and innovative ways to achieve their objectives. Enter the cloud.
Although many so-called emerging economies may lack certain key infrastructure essentials, such as constant power supply, high bandwidth connectivity and landline coverage, the rapid expansion and penetration of mobile technology (and infrastructure), as well as novel approaches to power management, have helped to create opportunities for entrepreneurs and operators to provide Internet based services to the populace. Furthermore, the lack of pre-existing infrastructure that would otherwise require interfacing and integration is advantageous, and has contributed to what is often described as the leapfrog effect.
The upshot of this is that mobile technology and the Internet combine to create an opportunity for accelerated growth and development in emerging economies. Other factors include: a younger demographic; dysfunctional institutions; a global economy in shambles; an expanding middle class; plus a diaspora of educated and skilled professionals who are increasingly tempted to return and contribute to the further development of these markets. A recent Sunday Times Magazine article (note: subscription required) pointed out that six of the ten fastest growing economies in the last decade were African.
In light of the above, it is easy to see how cloud and emerging economies can align to mutual benefit, not least because they are relatively more flexible and unencumbered by legacy considerations for pre-existing infrastructure and / or an aging population of baby boomers. However, the reality is that much care needs to be taken in order to reach the full potential of such alignment. Recently, a friend and colleague with much experience working across Europe, Africa, Middle East and the Caribbean, described a trajectory and framework for cloud technology adoption which encompassed: 1 – localised exploitation (via mobile / enterprise systems); 2 – Business Process Re-engineering (requiring business analysis / change management expertise); 3 – B2B interconnectivity between businesses (at local and global levels). In addition, global tech companies are already getting in on the action, and you can’t turn a corner without bumping into various initiatives from the likes of: Google, IBM, Microsoft, Cisco, and SAP, to name but a few. Also there is a lot of technology investment activity from Private Equity and Hedge Funds.
In any case, the immediate questions and decisions faced by emerging economies with respect to cloud include: information governance (where is the data located?); data centres (location and hosting options); security (emerging threats and vulnerabilities); and new and smart applications (designed to work within the limitations and specific circumstances of particular markets). Once again, it will require more innovative and creative approaches to attain the promise of the mobile / cloud enabled leapfrog effect. It really is an exciting time for emerging markets.
Big Data, Cloud, Social and Mobility == Super Disruption

Cloud, big data, social and mobility
Over the course of this blogging campaign I have focused mostly on cloud and certain relevant aspects (e.g. content, security, access and Intellectual Property), but the fact remains that other equally profound developments, such as big data, social and mobile computing, also provide significant challenges and opportunities for both consumers and the enterprise. Gartner predicts that the above four forces will combine to transform the IT landscape in 2012, and I couldn’t agree more. In my opinion, this will probably go much further than the IT landscape, since such a potent combination can easily transform entire industries as well.
In 2011, the impact of social media and mobility meant that many organisations sought ways to engage better with their customers using social media and mobile technologies. Various organisations, ranging from consumer products to the public sector, also actively looked for ways to manage and leverage increasingly large amounts of ‘big data’ and valuable content, sometimes in ways that almost rivalled traditional content industries. Think of the publishing, broadcast and, of course, social media footprint in your organisation today, and compare it to just three years ago.
So what does each of the aforementioned forces portend for industries in 2012, and what are the early signs or indicators of disruption? My imaginary crystal ball has misted over slightly, but the following are some key trends to watch for the coming year:
- Big Data – According to Cisco’s Visual Networking Index (VNI), there will be more networked devices than people on earth by year-end 2011. With so many networked devices, and a related prediction that this number will double to over two devices per person by 2015, this is a clear indicator of the trajectory of growth for Big Data over the next few years.
- Cloud – Cloud service providers will continue to improve and optimise services, particularly at the Data Centre level, in order to provide a seamless and efficient solution for their customers. Key focus areas include: security, intelligent storage, unified networking, policy-based power management, and trusted computing capabilities. Basically, anything that will make it easier to transition customers to the cloud environment, along with greater confidence in sustainable delivery and quality of service, will win the day.
- Social – Social media, networking and CRM all represent a move towards user-centric engagement models that allow a two-way conversation between the enterprise and its customers, suppliers, partners and employees. The user expectation of more meaningful and productive dialogue with the enterprise is only set to increase over the next 12 months.
- Mobility – This is both a technology- and user-centric force which readily demonstrates the combination of the other three forces along with location (in space and time). In the paradigm-shifting world of context aware computing, the user and their activities are central to the flow and direction of dialogue / interaction with the enterprise. Increasingly, users expect the enterprise to be able to leverage contextually relevant information when dealing with them, and this in turn drives enterprise adoption of the enabling technologies to provide this capability.
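The Big Data numbers above imply a steep compound growth rate. As a rough back-of-envelope sketch (the one-device-per-person baseline against a ~7 billion population is my own simplifying assumption, not Cisco's exact figure):

```python
# Back-of-envelope: if networked devices roughly match the world population
# (~7 billion) at end of 2011 and double to 2+ per person by 2015,
# what annual growth rate does that imply?
devices_2011 = 7.0e9               # assumed baseline: ~1 device per person
devices_2015 = 2 * devices_2011    # predicted doubling
years = 2015 - 2011

# Compound annual growth rate: (end / start)^(1/years) - 1
cagr = (devices_2015 / devices_2011) ** (1 / years) - 1
print(f"Implied annual growth in devices: {cagr:.1%}")  # roughly 19% per year
```

Sustained near-20% annual growth in connected endpoints, each generating data, is exactly the kind of trajectory that makes the Big Data trend hard to ignore.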
A good case in point will be the summer Olympic Games in London, which should provide a fertile proving ground for many of the combined challenges and opportunities presented by the four buzzwords / trends discussed above.
In conclusion, I expect no less than a step change in disruption levels across industries over the next 12 months, or so. The gloomy economic situation will only enhance the need for change, particularly in situations where: competitors are plunging ahead; customers are expecting even more for nothing; and employees are demanding similar levels of service and user experience from their enterprise, as might be expected for a consumer – which they likely are. Some very interesting times lie ahead.
Note: This post is brought to you in partnership with Intel(R) as part of the “Technology in tomorrow’s cloud & virtual desktop” series. For more information please click – HERE
The ISP Dilemma Continues
Some time ago I wrote a post about the challenges facing Internet Service Providers (ISPs) over whether they can afford to be the police of the Internet, with respect to helping find and stop persistent abuse of content, and other illegal online activities by their users. This is still a serious issue today, particularly in light of the cloud, hence the urge to revisit that post here.
The biggest challenge then was around the growing perception of ISPs as de-facto gatekeepers of the Internet, which effectively added another layer of complexity to their traditional / core business. As a result, not only do ISPs have to deal with existing and non-trivial issues (e.g. declining markets, convergent evolution via multi-play business models, and issues around increasing broadband / bandwidth consumption), they also have to contend with the fact that:
- Content owners still want ISPs to play a more central role in preventing, detecting, monitoring and punishing illegal file sharing (e.g. via schemes like the infamous three strikes proposal).
- Various initiatives by governments around the world, such as the UK’s Digital Economy Act, have been put in place to provide much needed governance, and teeth, for monitoring and combating illegal activities including copyright infringement.
- There are also still signs of a lack of trust by ISP customers over service quality / charges, and potential invasion of privacy
These all add up to a severe headache for ISPs, and may be made even worse when you throw cloud services into the mix. Some of the options, or combinations thereof, that ISPs have used or considered using to deal with these key challenges include:
- Targeted advertising schemes – preferably via opt-in models, as a way to help subsidise the cost of service. In some cases even extending to much cheaper or even “free” access – in exchange for your usage information, of course.
- Industry self regulation – Still not easy to do, but one that would benefit the entire industry, and help address the pressures from content owners
- Network Controls – Invest in better ways to track, monitor and control or “shape” network traffic, in order to deliver better quality of service, promote fair use, and support law enforcement
- Partner with content owners – To explore new and more flexible content business models. For example, a survey found that music fans might actually prefer ISPs as their music supplier. However, the subsequent advent of cloud based music and streaming services may have changed that landscape somewhat.
In any case, it is still advisable for ISPs to bear in mind the following three points in trying to deal with this dilemma:
- Do not alienate or irritate the customer – protecting the customer relationship and keeping their trust is still key to future success
- Resist excessive external pressures – Content owners need ISPs as much as ISPs need them, and perhaps even more so
- Take the initiative – ISPs should be more proactive in creating customer-pleasing, regulator-friendly propositions and business models (perhaps by working closely with consumers and content owners)
Overall, there is no easy way to slow down the natural evolution of the Internet and cloud services; therefore, ISPs need to do more to understand, evolve and embrace what is really a critical niche in the digital content ecosystem. The cloud is here for all, and it is here to stay.
IT Security: Still Hot & Cloudy!
This is a refresh of an older, but still relevant, post I did last year about security and the cloud, which remains mostly true even today. The original post came out of an event on IT security at the BCS, Chartered Institute for IT, which featured three speakers on IT Security and Cloud.
I said back then that if I was a betting man, I’d wager the IT security industry was on the brink of a major revolution on the back of the Cloud, and indeed that still appears to be the case today. In fact, the question asked then of how many people in the audience actively used the cloud would have many more hands raised in response if asked today, mainly because people are much more aware of the cloud than before. Which is not to say that the cloud has become completely front and centre; it still exists, rightfully, behind the scenes, powering various services that may still be taken for granted by the consumer. However, some more recent services are also leveraging consumers’ increased awareness of the cloud and positioning themselves directly as cloud services – think Apple’s iCloud or Amazon’s Cloud Drive, for instance.
But I digress – what’s this got to do with IT Security, you ask? The answer is very simple: if the cloud is really a behind-the-scenes enabler, then cloud security should also be behind the scenes, right? But I still have this uneasy feeling that we’ll yet see someone get sued over security breaches emanating from the Cloud. How long will it be before we get cloud compliance and cloud security risk assessment models, regulations, and perhaps even exotic insurance policies for Cloud based services? Furthermore, the Internet (and consequently the cloud) is essentially borderless technology, which means that various national and international data governance regimes may have a thing or two to say about where data is stored – assuming it can be found in one place! This could well be a nightmare in the making for eDisclosure and/or eDiscovery.
Finally, apparently some clever Silicon Valley types are actively seeking ways to commoditize the cloud, and cloud based services, such that they can be traded as financial instruments. Hmmm, now where did we see that one before (does Collateralized Debt Obligation ring a bell)? Suffice it to say there’s a lot of food for thought when it comes to Cloud Security, and far better qualified people than I have pondered, spoken and written about it (e.g. see my review of an excellent book about Cloud Security), so I shall just leave well enough alone.
To conclude, I dare say that cloud has come a long way since last year, especially in the minds of consumers, and it is looking likely to stay that way for a while yet, or at least until the next big hot topic strikes the zeitgeist. We can only wait and see.
Copyright and the Cloud
As promised in my last blog post, the focus this time is very much around the challenges of Intellectual Property (esp. copyright) in a cloud context. Content protection is one thing, but establishing exactly what one can and can’t do with content in the cloud is equally important, if not more so, in an environment where geographic location is almost irrelevant. The key question is whether, and how, copyright will survive and thrive in the context of cloud.
The answer currently tends towards ‘not so well’. At least, not without a major overhaul of copyright, and its various regional incarnations, to work seamlessly in a global context. Last week’s Copyright and Technology 2011 conference (see great recap here by Bill Rosenblatt) provided much food for thought, and some insight on the key challenges facing copyright in the highly mobile, cloud enabled, information intensive content usage scenarios of today and tomorrow. Below are 3 highlights from the event, in my opinion:
- The brilliant keynote address by Microsoft’s Tom Rubin spelt out some key policy considerations for achieving what he describes as “copyright at the speed of light”, which addressed several vital topics including: clarity on orphan works; need for copyright registry / licence databases; improved metadata; better policies to address the divergent focus of copyright (i.e. territorial outlook) versus cloud (i.e. global outlook); as well as the need for frictionless cross border licensing. He concluded with 3 areas of focus for policies to help prepare and optimise copyright for the cloud, including: 1) appropriate enforcement; 2) robust metadata; and 3) streamlined licensing. These he claims will go some way towards realising the potential for “fantastic user experience with creative works in the cloud”, and I wholeheartedly agree.
- I moderated a panel session which focused on the lessons from real world implementation of DRM, and which provided some good insight from three speakers who had already earned their stripes implementing DRM for clients. For example, my question about how to provide fine grained control over user access to specific content within a certain building/location elicited an answer, with examples, of how this was already being designed and implemented for clients in the airline and hospitality (e.g. hotel) industries. I imagine there are great opportunities here for events and venues (e.g. conferences, concerts, major sporting events, art galleries, educational and other public institutions). By the way, the simplest approach involves exclusive content access, via Wi-Fi and browser, which cuts out once a user moves outside the area of coverage. However, the level of sophistication can increase dramatically when this is also aligned with DRM secured content, and location based functionality (which is readily available on most smart mobile devices), plus a dash of Augmented Reality, for that added vavavoom. The possibilities are mind boggling.
- Finally, I found out some people were seriously creating real world applications for Digital Personal Property (DPP), which is probably best described as a way of making digital content behave more like physical property. DPP involves creating ‘unique’ digital copies of content (e.g. music, films or books) such that once a copy is lent, resold or otherwise given to another party, the original will no longer be accessible to the lender, seller or giver, respectively. Hmmm, whilst on the one hand this makes a certain kind of sense, particularly from the ‘property’ side of Intellectual Property (i.e. think digital property or currency in virtual worlds and online games, e.g. Second Life or Farmville); on the other hand, it appears such a mind numbingly daft, futile and King Canute-like venture to try to force digital content into an analogue world view, operating within a digital environment! It brings back to mind the spectacular failure of previous attempts to enforce highly intrusive DRM mechanisms over digital content. Having said that, I somehow get the impression that this will be a most interesting development to watch, mainly because of the potential for surprising outcomes from such apparently ‘foolish’ endeavours. A little lateral thinking never hurt anyone. For more information about DPP and the two interesting / controversial initiatives, just click on IEEE P1817 and/or Redigi (the latter is already embroiled in legal tussles with the RIAA, but then that is not surprising!). I’d be very interested to hear about any other DPP projects going on out there.
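The simplest venue-based scheme described in the panel discussion above can be sketched in a few lines. This is a purely hypothetical illustration (the subnet and function names are my own inventions, not details of any real deployment), assuming content is served only to clients connecting from the venue's own Wi-Fi network:

```python
import ipaddress

# Hypothetical sketch: content is served only to clients on the venue's
# own Wi-Fi subnet, so access "cuts out" the moment the user leaves the
# area of coverage. The subnet below is a made-up example.
VENUE_SUBNET = ipaddress.ip_network("10.20.0.0/16")

def may_serve_content(client_ip: str) -> bool:
    """Allow access only while the client is on the venue network."""
    return ipaddress.ip_address(client_ip) in VENUE_SUBNET

print(may_serve_content("10.20.5.7"))    # on the venue Wi-Fi: True
print(may_serve_content("203.0.113.9"))  # outside the venue: False
```

A real implementation would of course layer DRM-secured content and device location checks on top of this simple network test, as the panellists described.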
In conclusion, I think it is fair to say that copyright in a cloud context brings into very sharp relief some of the key challenges that need addressing for that next step in cultural and socio-economic evolution. These include: the need for some fairly significant adjustment of the Intellectual Property mechanism within the digital environment; a rethink of physical geographical or territorial boundaries in a digital world; and perhaps an exploration of other, better ways to assess the true value of digital content, in light of usage and context. Like I said earlier, lots of food for thought indeed.
Content Security and the Cloud
Following on from my previous post about storage in the cloud, the topic of content security (aka how do you secure what is already stored in the cloud?) seemed like a natural next stop, hence this post. What does it take for content to be deemed secure in the cloud environment, and can it really be so?
Many months ago, I reviewed a book (for the BCS, Chartered Institute for IT) which dealt with the topic of cloud security, and I recall that although the book’s titular topics of Cloud Security and Privacy were very apt, it did not take a lot of reading to get the gist that security touches every aspect of cloud, right from initial login to choice of service provider and beyond. You may be forgiven for thinking that once your content is deposited in a secure cloud location, e.g. in a highly redundant, uber-secure, private cloud provided by a certified defence contractor, then it must be secure, right? Wrong.
The content, and not just the location, is what needs securing. The age-old concept of perimeterised security, such as can be found within firewalls, does not apply well to distributed cloud services, hence the need for the actual stored material to have its own inherent security (be it encryption, obfuscation, DRM etc.). What really matters is how the material is protected from intentional or accidental leakage.
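The principle that the content itself carries its protection can be sketched as follows. This is an illustrative toy only: the hand-rolled keystream cipher below is there to show the idea that a leaked copy is useless without the key, and a real system would use a vetted scheme (e.g. AES-GCM from an established cryptography library) rather than anything home-grown:

```python
import hashlib
import secrets

# Toy illustration: the *content itself* is opaque at rest, so possession
# of the stored bytes alone (e.g. via a cloud breach) reveals nothing.
# For illustration only -- do NOT use hand-rolled ciphers in production.
def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream of the given length from the key."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def protect(content: bytes, key: bytes) -> bytes:
    """XOR the content with the keystream (symmetric: applying twice restores it)."""
    return bytes(a ^ b for a, b in zip(content, keystream(key, len(content))))

key = secrets.token_bytes(32)
plain = b"premium video segment"
stored = protect(plain, key)          # what the cloud provider actually holds
assert stored != plain                # opaque at rest, wherever it is stored
assert protect(stored, key) == plain  # only the key holder can recover it
```

The point of the sketch is that the protection travels with the stored material, independent of any network perimeter, which is exactly what distributed cloud storage demands.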
Several methods or techniques are in use today by cloud service providers to secure the content stored within their services, and just like most things in cloud, you may even get a choice of how locked down you want it to be. Again, I mean locked down as in the actual content, and not the cloud. One of the more promising systems, spearheaded by the video content industry (and the Digital Entertainment Content Ecosystem), is the cloud-based digital rights locker known as UltraViolet, which allows users to buy content once and play it back on any supported platform / device. More information about the alliance and its partners can be found here.
The key challenge is typically around content usage, and perhaps more importantly, the user's intent. Once otherwise secured content is released / accessed, it carries a risk of leakage that spans everything from intentional copying and distribution (e.g. via the so-called analog hole) to accidental misuse or malicious hacking. The impact of content leakage in the cloud can be devastating for content industry players that rely on revenue from their content investments.
The next post in this series will look very closely at the challenges facing copyright in the context of the cloud, and I hope to bring back some insight from the rather timely Copyright and Technology 2011 conference, which I am attending today.
Note: This post is brought to you in partnership with Intel(R) as part of the “Technology in tomorrow’s cloud & virtual desktop” series. For more information please click – HERE
Storage and the Cloud
For this second post in the cloud series, I’d like to take a quick look at the challenges and opportunities around digital content storage in the cloud.
According to Cisco’s Visual Networking Index, by 2015 “the equivalent of an archive of all movies ever made will cross Global IP networks every 4 minutes”; to put it another way, global IP traffic will quadruple from 2010 to 2015, a compound annual growth rate of 32%. Oh, and by the way, over 60% of this traffic will be video! That is an awful lot of content, and it implies an increased need for storage at one point or another in the content life cycle.
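As a quick sanity check on those numbers, a 32% compound annual growth rate over the five years from 2010 to 2015 does indeed work out to roughly a fourfold increase:

```python
# Compound annual growth: traffic_2015 = traffic_2010 * (1 + CAGR) ** years
cagr = 0.32
years = 5
growth = (1 + cagr) ** years
print(round(growth, 2))  # → 4.01, i.e. roughly quadruple
```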
It doesn’t take a genius to see the potential for content storage on the cloud, and indeed many examples already exist of cloud storage providers for both enterprise and consumer needs (e.g. think Amazon, Dropbox, or even Apple’s iCloud). So what’s the big deal? Well, according to a recent Storage Networking Industry Association (SNIA) Cloud Adoption Study, over half of enterprise respondents were planning to deploy cloud storage, and up to 60% planned to retain data in the cloud for anywhere between 5 and 20+ years. This means the content stored in the cloud is likely to increase exponentially over time, in light of the aforementioned growth in traffic.
Whilst these trends offer great opportunities, at least for cloud storage services and the content industry ecosystem, they also present some key challenges to be addressed along the way, e.g. data storage and security, regulatory compliance and retention issues, as well as IP rights management in a distributed, global digital landscape (the last will be the subject of a separate post in this series).
In my opinion, one immediate issue for cloud storage will be how to interoperate, and easily migrate, stored data / content between cloud services. There is a clear need for cloud storage standards, and several initiatives, e.g. SNIA’s Cloud Storage Initiative (which introduced the Cloud Data Management Interface) and the Open Grid Forum’s Open Cloud Computing Interface, are certainly steps in the right direction, because they help to specify the attributes, functions and requirements of data and content stored in the cloud. The key message for enterprises looking to step into the cloud storage arena: ensure that suppliers or vendors have adopted, or plan to adopt, a cloud storage standard, and check this early in the selection process.
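To make the standards point a little more concrete, CDMI defines a plain HTTP interface in which a stored object carries its payload and metadata together, which is precisely what makes migration between providers tractable. The sketch below assembles (but does not send) a CDMI-style PUT request for a data object; the endpoint, metadata and version string are illustrative assumptions on my part, not taken from any particular provider:

```python
import json

# Hypothetical endpoint on a CDMI-compliant storage provider.
url = "https://storage.example.com/cdmi/archive/report.txt"

# Headers following the CDMI convention; the version value is an assumption.
headers = {
    "X-CDMI-Specification-Version": "1.0.2",
    "Content-Type": "application/cdmi-object",
    "Accept": "application/cdmi-object",
}

# A CDMI data object bundles value and metadata in one JSON body,
# so another standards-compliant provider can ingest it as-is.
body = json.dumps({
    "mimetype": "text/plain",
    "metadata": {"retention_years": "7"},  # illustrative metadata
    "value": "Annual report contents...",
})
```

Because both the object format and the operations on it are standardised, switching providers becomes a matter of replaying such requests against a new endpoint rather than a bespoke migration project.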
Note: This post is brought to you in partnership with Intel(R) as part of the “Technology in tomorrow’s cloud & virtual desktop” series. For more information please click – HERE