• CloudBees acquires Electric Cloud to further CI/CD mission
    by James on April 18, 2019 at 1:41 pm

    CloudBees has announced the acquisition of San Jose-based software veteran Electric Cloud, with the aim of becoming the ‘first provider of end-to-end continuous integration, continuous delivery, continuous deployment and ARA (application-release automation)’. The acquisition marries two leaders in their respective fields. Electric Cloud was named a leader in reports from both Gartner and Forrester, covering application release orchestration and continuous delivery and release automation respectively. CloudBees’ most famous contribution to the world of software delivery is of course Jenkins, an open source automation server used for continuous integration and delivery. The original architects of Jenkins are housed at CloudBees, including CTO Kohsuke Kawaguchi. Last month, CloudBees announced the launch of the Continuous Delivery Foundation (CDF), leading the initiative alongside the Jenkins community, Google, and the Linux Foundation. At the time, Kawaguchi said: “The time has come for a robust, vendor-neutral organisation dedicated to advancing continuous delivery. The CDF represents an opportunity to raise the awareness of CD beyond the technology people.” From Electric Cloud’s side, the company hails CloudBees as the ‘dominant CI vendor, CD thought leader and innovator’. “CloudBees recognised the enormous value of adding release automation and orchestration to its portfolio,” the company wrote on its acquisition page. “With Electric Cloud, CloudBees integrates the market’s highest-powered release management, orchestration and automation tools into the CloudBees suite, giving organisations the ability to accelerate CD adoption.” “As of today, we provide customers with best-of-breed CI/CD software from a single vendor, establishing CloudBees as a continuous delivery powerhouse,” said Sacha Labourey, CloudBees CEO and co-founder, in a statement. “By combining the strength of CloudBees, Electric Cloud, Jenkins and Jenkins X, CloudBees offers the best CI/CD solution for any application, from classic to Kubernetes, on-premise to cloud, self-managed to self-service.” Financial terms of the acquisition were not disclosed. […]

  • A guide to securing application consistency in multi-cloud environments
    by davidmoss on April 18, 2019 at 10:19 am

    Cloud computing is by now a well-trodden path in any conversation about digitalisation. Multi-cloud is another term that has crossed many lips, and one we are still getting to know as efforts ramp up to meet the demands of the digital era. For organisations, the need to adapt and move with the times is apparent, but the pressure to transform digitally is even more pressing. The whole ecosystem of growth, progression and delivering on end-user expectations rests on the delivery of cutting-edge, end-to-end managed IT services. It is the kind of transformational change that is driving organisations to seek consistency across multiple environments.
    The availability of multiple platforms means organisations can be spoilt for choice, and once they have picked and deployed a wide variety of applications that are subject to shifts and changes, they can lack overall control, support and visibility. And with the story of multi-cloud barely on its first chapter, there is more still to come and more to be done to ensure support for present and future digital transformation needs. The multi-cloud approach is enabling new digital workflows, and companies can ensure better collaboration between what may once have been siloed components. The landscape is, however, ever-changing. Being customer-centric means being application-centric, and organisations need to ensure they can support customer-specific services as they evolve, change or become redundant. Businesses must support a huge number of vastly different server applications, with virtual services ranging from very low-throughput single-node IoT devices to highly critical real-time online production servers that need to deliver high-availability online examination software. This is where multi-cloud requires another layer to work to its best capabilities in such a carefully balanced environment, and where application delivery services and software-driven infrastructure solutions come into play.
    The problem with multi-cloud
    The requirement now is for platforms where every application and deployment is managed seamlessly so the business can thrive in the long term. The solution needs to address the holes that a rush to multi-cloud infrastructure can leave, such as the need for easier management capabilities and analysis technology that works across platforms to analyse and troubleshoot applications. Multi-cloud risks adding complexity, but enterprises know they can’t have siloed clouds. Having many different tools to deploy applications in different clouds can fracture development teams, and inconsistent services and processes across clouds defeat the very purpose of a multi-cloud initiative in the first place. The problems stem from the differences between all the components of multi-cloud, such as how on-prem data centres work, the requirements of various applications and the disparity between clouds. The compute, storage and networking resources themselves are not the issue; the sticking point is consistent provisioning and management of every working component. It is the mission of internal service providers to achieve consistency without slowing everything down. The business wants speed, and that means finding a way to deploy applications on different platforms quickly.
    In the race to get applications deployed, teams want guardrails to ensure they can move quickly without the risk of driving off the cliff – or, in this case, the cloud edge. They want a safeguard where they know exactly what to expect whatever environment they are operating in.
    Keeping up with the clouds
    The difficulty is that, in the rush to keep up with the latest IT strategies driving digital change across every vertical sector, organisations have struggled to deploy a well-integrated, complete multi-cloud solution. Instead they have resorted to bolting multiple clouds on to existing structures, leaving a complicated mismatch that can lead to vendor or platform lock-in. It is the kind of thing that requires a whole team of specialists to manage and configure application deployment and delivery in separate silos across multiple clouds, as well as on-prem. These problems are real and difficult, but they are not impossible to fix. The solution is abstraction. Too many services today are opinionated about the underlying infrastructure when they shouldn’t have to be. For example, a hardware appliance is confined to the data centre, and many cloud providers offer proprietary services unique to their cloud and their cloud alone. The next generation of application services is abstracted from the underlying infrastructure, software-defined, and opinionated only about the needs of the application, not the infrastructure that delivers it. These software-only services will play a critical role in the data centre and across multiple clouds, providing consistent experiences in every environment. Not only is this a simple solution, the tools to carry it out are readily available right now. Ultimately, this is the solution where multiple environments must be fully integrated, and it allows organisations to get the most out of multi-cloud, with each application sitting in the cloud that provides maximum benefit to the business (a minimal illustrative sketch of such an abstraction layer follows below). […]
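    To make the abstraction argument concrete, here is a minimal, hypothetical sketch – not taken from the article – of an application-centric delivery interface: the application describes only what it needs, and thin per-cloud adapters translate that description into provider-specific deployments. The names (AppSpec, CloudAdapter, AwsAdapter, AzureAdapter, deploy_everywhere) are invented for illustration.

```python
# Hypothetical sketch: an application-centric abstraction over multiple clouds.
# Class and function names are illustrative only, not a real product API.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class AppSpec:
    """What the application needs -- nothing about the infrastructure that delivers it."""
    name: str
    image: str            # container image to run
    replicas: int         # desired instances
    min_throughput_rps: int


class CloudAdapter(ABC):
    """Per-cloud adapter: translates an AppSpec into provider-specific resources."""

    @abstractmethod
    def deploy(self, spec: AppSpec) -> str:
        ...


class AwsAdapter(CloudAdapter):
    def deploy(self, spec: AppSpec) -> str:
        # A real implementation would call AWS APIs here (e.g. via ECS/EKS).
        return f"aws://{spec.name} x{spec.replicas}"


class AzureAdapter(CloudAdapter):
    def deploy(self, spec: AppSpec) -> str:
        # A real implementation would call Azure APIs here (e.g. via AKS).
        return f"azure://{spec.name} x{spec.replicas}"


def deploy_everywhere(spec: AppSpec, adapters: list[CloudAdapter]) -> list[str]:
    """Same application definition, consistent behaviour on every platform."""
    return [adapter.deploy(spec) for adapter in adapters]


if __name__ == "__main__":
    spec = AppSpec(name="exam-portal", image="registry.example/exam:1.0",
                   replicas=3, min_throughput_rps=500)
    print(deploy_everywhere(spec, [AwsAdapter(), AzureAdapter()]))
```

    The point mirrors the article’s: the application specification stays the same everywhere, and only the thin adapters know anything about each cloud.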

  • VMware’s blockchain now integrates with DAML smart contract language
    by James on April 17, 2019 at 10:54 am

    VMware’s move into the blockchain space makes it the latest cloud vendor to get involved with the technology – and the effort has now been enhanced with the announcement of an integration with Digital Asset. Digital Asset, which created DAML, an open source language for constructing smart contracts, is integrating the language with the VMware Blockchain platform. The move to open source DAML was relatively recent, and the company noted its importance when combining with an enterprise-flavoured blockchain offering. “DAML has been proven to be one of the few smart contract languages capable of modelling truly complex workflows at scale. VMware is delighted to be working together on customer deployments to layer VMware Blockchain alongside DAML,” said Michael DiPetrillo, senior director of blockchain at VMware. “Customers demand choice of language execution environments from their blockchain and DAML adds a truly robust and enterprise-focused language set to a blockchain platform with multi-language support.” The timeline of the biggest cloud players and their interest in blockchain technologies is an interesting one. Microsoft’s initiatives have been long-standing, as have IBM’s, while Amazon Web Services (AWS) went back on its word and launched a blockchain service last year. VMware launched its own project, Project Concord, at VMworld in Las Vegas last year and followed this up with VMware Blockchain in beta in November. Despite the interest around blockchain as a whole, energy consumption has been a target for VMware CEO Pat Gelsinger, who at a press conference in November described the technology’s computational complexity as ‘almost criminal’. VMware was named by Forbes earlier this week in its inaugural Blockchain 50. The report – which carries similarities to the publication’s annual Cloud 100 rankings – aimed to analyse the companies with the most exciting blockchain initiatives that are based in the US and have a minimum valuation or sales of $1 billion. […]

  • The state of cloud business intelligence 2019: Digging down on Dresner’s analysis
    by louiscolumbus on April 17, 2019 at 9:26 am

    An all-time high 48% of organisations say cloud BI is either “critical” or “very important” to their operations in 2019; marketing and sales place the greatest importance on cloud BI; small organisations of 100 employees or fewer are the most enthusiastic, perennial adopters and supporters of cloud BI; and the most preferred cloud BI providers are Amazon Web Services and Microsoft Azure. These and other insights are from Dresner Advisory Services’ 2019 Cloud Computing and Business Intelligence Market Study. The 8th annual report focuses on end-user deployment trends and attitudes toward cloud computing and business intelligence (BI), defined as the technologies, tools, and solutions that rely on one or more cloud deployment models. What makes the study noteworthy is the depth of focus on the perceived benefits and barriers for cloud BI, the importance of cloud BI, and current and planned usage. “We began tracking and analysing the cloud BI market dynamic in 2012 when adoption was nascent. Since that time, deployments of public cloud BI applications are increasing, with organisations citing substantial benefits versus traditional on-premises implementations,” said Howard Dresner, founder and chief research officer at Dresner Advisory Services. Please see page 10 of the study for specifics on the methodology. Key insights gained from the report include the following:
    An all-time high 48% of organisations say cloud BI is either “critical” or “very important” to their operations in 2019. Organisations have more confidence in cloud BI than ever before, according to the study’s results. 2019 is seeing a sharp upturn in cloud BI’s importance, driven by the trust and credibility organisations have for accessing, analysing and storing sensitive company data on cloud platforms running BI applications.
    Marketing and sales place the greatest importance on cloud BI in 2019. Business intelligence competency centres (BICC) and IT departments have an above-average interest in cloud BI as well, with their combined critical and very important scores exceeding 50%. Dresner’s research team found that operations had the greatest duality of scores, with critical and not important being reported at comparable levels for this functional area. Dresner’s analysis indicates operations departments often rely on cloud BI to benchmark and improve existing processes while re-engineering legacy process areas.
    Small organisations – of 100 employees or fewer – are the most enthusiastic, perennial adopters and supporters of cloud BI. As in previous years’ studies, small organisations are leading all others in adopting cloud BI systems and platforms. Perceived importance declines only slightly in mid-sized organisations (101-1,000 employees) and some large organisations (1,001-5,000 employees), where minimum scores of important offset declines in critical.
    The retail/wholesale industry considers cloud BI the most important, followed by the technology and advertising industries. Organisations competing in the retail/wholesale industry see the greatest value in adopting cloud BI to gain insights into improving their customer experiences and streamlining supply chains. The technology and advertising industries also see cloud BI as very important to their operations. Just over 30% of respondents in the education industry see cloud BI as very important.
    R&D departments are the most prolific users of cloud BI systems today, followed by marketing and sales. The study highlights that R&D leading all other departments in existing cloud BI use reflects the broader range of potential use cases being evaluated in 2019. Marketing and sales is the next most prolific department using cloud BI systems.
    Finance leads all others in its adoption of private cloud BI platforms, rivalling IT in its lack of adoption of public clouds. R&D departments are the next most likely to be relying on private clouds currently. Marketing and sales are the most likely to take a balanced approach, adopting private and public cloud BI equally.
    Advanced visualisation, support for ad-hoc queries, personalised dashboards, and data integration/data quality/ETL tools are the four most popular cloud BI requirements in 2019. Dresner’s research team found the lowest-ranked cloud BI feature priorities in 2019 are social media analysis, complex event processing, big data, text analytics, and natural language analytics. This year’s analysis of the most and least popular cloud BI requirements closely mirrors traditional BI feature requirements.
    Marketing and sales have the greatest interest in several of the most-required features, including personalised dashboards, data discovery, data catalog, collaborative support, and natural language analytics. Marketing and sales also have the highest level of interest in the ability to write to transactional applications. R&D leads interest in ad-hoc query, big data, text analytics, and social media analytics.
    The retail/wholesale industry leads interest in several features, including ad-hoc query, dashboards, data integration, data discovery, production reporting, search interface, data catalog, and the ability to write to transactional systems. Technology organisations give the highest score to advanced visualisation and end-user self-service. Healthcare respondents prioritise data mining, end-user data blending, and location analytics, the latter likely for asset-tracking purposes. In-memory support scores highest with financial services respondent organisations.
    Marketing and sales rely on a broader base of third-party data connectors to get greater value from their cloud BI systems than their peers. The greater the scale, scope and depth of third-party connectors and integrations, the more valuable marketing and sales data becomes. Relying on connectors for greater insight into sales productivity and performance, social media, online marketing, online data storage, and simple productivity improvements is common in marketing and sales. Finance’s requirement for Salesforce integration reflects the CRM application’s success in transcending customer relationships into advanced accounting and financial reporting.
    Subscription models are now the most preferred licensing strategy for cloud BI, having gained ground over the last several years due to lower risk, lower entry costs, and lower carrying costs. Dresner’s research team found that subscription licences and free trials (including try-and-buy, which may also lead to subscription) are the two most preferred licensing strategies among cloud BI customers in 2019. Dresner Advisory Services predicts new engagements will be earned using subscription models, which are now seen as at least important by approximately 90% of respondents.
    60% of organisations adopting cloud BI rank Amazon Web Services first, and 85% rank AWS first or second. 43% choose Microsoft Azure first and 69% pick Azure first or second. Google Cloud closely trails Azure as a first choice among users, but trails more widely after that. IBM Bluemix is the first choice of 12% of organisations responding in 2019. […]

  • The unforgiving cycle of cloud infrastructure costs – and the CAP theorem which drives it
    by amrishkapoor on April 16, 2019 at 10:40 am

    Long read
    Modern enterprises that need to ship software are constantly caught in a race for optimisation, whether in terms of speed (time to ship and deploy), ease of use or, inevitably, cost. What makes this a never-ending cycle is that these goals are often at odds with each other. The choices that organisations make usually reflect what they are optimising for. At a fundamental level, these are the factors that drive, for example, whether enterprises use on-premises infrastructure or public clouds, open source or closed source software, or even particular technologies – such as containers versus virtual machines. With this in mind, it is worth taking a deeper look at the factors that drive this cyclical nature of cloud infrastructure optimisation, and how enterprises move through the various stages of their optimisation journey.
    Stage #1: Legacy on-premises infrastructure
    Large enterprises often operate their own data centres. They have already optimised their way here by consolidating away rooms full of racks and virtualising a lot – if not all – of their workloads. That consolidation and virtualisation resulted in great unit economics. This is where we start our journey. Unfortunately, their development processes are now starting to get clunky: code is built using some combination of build automation tools and home-grown utilities, and deployment usually involves IT teams requisitioning virtual machines. In a world where public clouds let developers get all the way from code to deploy and operate within minutes, this legacy process is too cumbersome, even though the infrastructure is presented to developers as a ‘private cloud’ of sorts. The fact that the process takes so long has a direct impact on business outcomes because it greatly delays cycle times, response times, the ability to release code updates more frequently and – ultimately – time to market. As a result, enterprises look to optimise for the next challenge: time to value. Public clouds are evidently a great solution here, because there is no delay associated with getting the infrastructure up and running. No requisition process is needed, and the entire pipeline from code to deploy can be fully automated.
    Stage #2: Public cloud consumption
    At this point, an enterprise has started using public clouds – typically, and importantly, a single public cloud – to take advantage of the time-to-value benefit they offer. As this use expands over time, more and more dependencies are introduced on other services the public cloud offers. For example, once you start using EC2 instances on AWS, pretty quickly your cloud-native application also starts relying on EBS for block storage, RDS for database instances, Elastic IPs, Route 53, and many others. You also double down further by relying on tools like CloudWatch for monitoring and visibility. In the early stages of public cloud use, the ease of going with a single public cloud provider can trump any other consideration an enterprise may have, especially at reasonably manageable scale. But as costs continue to grow with increased usage at scale, cost control becomes almost as important.
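    To make that cost trade-off concrete, here is a back-of-the-envelope sketch – not from the article – of the tipping point described below, where the linear per-VM cost of the public cloud overtakes the cost of running an internal private cloud. All prices are illustrative assumptions; only the roughly-400-VMs-per-engineer ratio comes from the 451 Research figure the article cites.

```python
# Illustrative back-of-the-envelope comparison; all figures are assumptions,
# except the ~400 VMs-per-engineer ratio the article attributes to 451 Research.

PUBLIC_CLOUD_COST_PER_VM_MONTH = 70.0    # assumed blended cost of one cloud VM
PRIVATE_CLOUD_COST_PER_VM_MONTH = 35.0   # assumed amortised hardware/power/space per VM
ENGINEER_COST_PER_MONTH = 12_000.0       # assumed fully loaded engineer cost
VMS_PER_ENGINEER = 400                   # operational ratio cited from 451 Research


def monthly_cost(vms: int, public: bool) -> float:
    """Total monthly cost: public cloud is linear per VM; private cloud adds engineers."""
    if public:
        return vms * PUBLIC_CLOUD_COST_PER_VM_MONTH
    engineers = max(1, -(-vms // VMS_PER_ENGINEER))  # ceiling division, at least one engineer
    return vms * PRIVATE_CLOUD_COST_PER_VM_MONTH + engineers * ENGINEER_COST_PER_MONTH


if __name__ == "__main__":
    for vms in (50, 200, 400, 800, 1600):
        pub, priv = monthly_cost(vms, True), monthly_cost(vms, False)
        cheaper = "public" if pub < priv else "private"
        print(f"{vms:>5} VMs: public ${pub:>9,.0f}  private ${priv:>9,.0f}  -> {cheaper}")
```

    With these assumed numbers the crossover sits in the low hundreds of VMs; the exact point depends on an organisation’s real costs, which is the article’s argument rather than any specific figure.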
    You then start to look at other cloud cost management tools to keep these skyrocketing costs in check – ironically either from the cloud provider itself (AWS Budgets, Cost Explorer) or from independent vendors (Cloudability, RightScale and many others). This is a never-ending cycle until, at some point, the public cloud infrastructure flips from being a competitive enabler to a commodity cost centre. At a particular tipping point, the linear cost of paying per VM becomes more expensive than creating and managing an efficient data centre with all its attendant costs. A study by 451 Research pegged this tipping point at roughly 400 VMs managed per engineer, assuming an internal, private IaaS cloud. There is thus a tension between the ease of using as many services as you can from a one-stop shop and the cost of being locked into a single vendor. The cost associated with this is two-fold. First, you are at the mercy of a single vendor, and subject to any cost or pricing changes it makes. Being dependent on a single vendor means your leverage is reduced in price negotiations, not to mention being subjected to further cross-sells and up-sells to other related offerings that further perpetuate the lock-in. This is an even larger problem with the public cloud model because of the ease with which multiple services can proliferate. Second, switching costs: moving away from the vendor incurs a substantial switching cost that keeps consumers locked into the model. It also inhibits consumers’ ability to choose the right solution for their problem. In addition to vendor lock-in, another concern with the use of public clouds is the data security and privacy issues associated with off-premises computing, which may in themselves prove a bridge too far for some enterprises. One of the recent trends in the software industry in general, and cloud infrastructure solutions in particular, is the rise of open source technology solutions that help address this primary concern of enabling ease of use alongside cost efficiency and lock-in avoidance. Open source software gives users the flexibility to pay vendors for support – either initially, or for as long as it is cost-effective – and switch to other vendors or to internal teams when it is beneficial (or required, for various business reasons). Note that there are pitfalls here too: it is sometimes just as easy to get locked in to a single open source software vendor as it is with closed source software. A potential mitigation is to follow best practices for open source infrastructure consumption, and avoid vendor-specific dependencies – or forking, in the case of open source – as much as possible.
    Stage #3: Open source hell
    You have learned that managing your data centre with your own home-grown solutions kills your time-to-value, so you tried public clouds, which gave you exactly the time-to-value benefit you were looking for. Things went great for a while, but then the costs of scale hit you hard, made worse by vendor lock-in, and you decided to bring computing back in-house. Except this time you were armed with the best open source tools and stacks available, which promised to truly transform your data centre into a real private cloud (unlike in the past), while affording your developers the same time-to-value benefit they sought from public clouds. If you belong to the forward-looking enterprises that are ready to take advantage of open source solutions as a strategic choice, then this should be the cost panacea you are looking for. Right? Wrong.
    Unfortunately, most open source frameworks that would be sufficient to support your needs are extremely complex not only to set up, but to manage at reasonable scale. This results in another source of hidden operational costs (OPEX) – management overhead, employee cost, learning curve and ongoing administration – which all translate to a lot of time spent, not only on getting the infrastructure into a consumable state for the development teams, but also on keeping it in that state. This time lost to implementation delays, and the associated ongoing maintenance, is also costly; it means you cannot ship software at the rate you need to stay competitive in your industry. Large enterprises usually have their own data centres, and administration and operations teams, and will build out a private cloud using open source stacks that are appropriately customised for their use. Many factors go into setting this up effectively, including typical data centre metrics like energy efficiency, utilisation and redundancy, and the cost efficiency of going down this path depends directly on optimising these metrics. More importantly, however, this re-introduces our earliest cost factor: the bottom-line impact of slow time-to-value, and the many cycles and investment spent not just on getting your private cloud off the ground, but on making it consumable by development teams in an efficient manner. You have now come full circle, back to the original problem you were trying to optimise for. The reason we are back here is that the three sources of cost covered in this post – lock-in, time-to-value and infrastructure efficiency – seemingly form the cloud infrastructure equivalent of the famous CAP theorem in computer science: you can usually have one, or two, but not all three simultaneously. To complete the picture, let’s look at solutions that address some of these costs together.
    Approach #1: Enabling time-to-value and lock-in avoidance (in theory)
    This is where an almost seminal opportunity for cloud infrastructure standardisation comes in: open source container orchestration technologies, especially Kubernetes. Kubernetes offers not only an open source solution that circumvents the dreaded vendor lock-in, but also another layer of optimisation beyond virtualisation in terms of resource utilisation. The massive momentum behind the technology, along with the community behind it, has resulted in all major cloud vendors agreeing on it as a common abstraction for the first time. Ever. As a result, AWS, Azure and Google Cloud all offer managed Kubernetes solutions as an integral part of their existing managed infrastructure offerings. While Kubernetes can be used on-premises as well, it is notoriously difficult to deploy and even more complex to operate at scale there. This means that, just as with the IaaS solutions of the public clouds, to get the fastest time-to-value out of open source Kubernetes, many are choosing one of the Kubernetes-as-a-service (KaaS) offerings from the public clouds. This achieves time-to-value and, in theory, lock-in avoidance, since presumably you would be able to port your application to a different provider at any point. Only, the chances are you never will. In reality, you risk being dependent, once more, on the rest of the cloud services offered by the public cloud.
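    As a hedged illustration of how that dependency creeps into a nominally portable Kubernetes application – this is not an example from the article – consider a deployment description in which the workload itself is portable but the storage class and load balancer annotations are provider-specific; every field flagged below would need rework on a different cloud. The storage class name and annotation key are representative examples rather than a definitive list.

```python
# Illustrative only: a "portable" Kubernetes app described as Python dicts,
# with the cloud-specific pieces that quietly erode portability called out.
# Storage class names and annotation keys are representative examples.

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "orders"},
    "spec": {
        "replicas": 3,
        "template": {"spec": {"containers": [{"name": "orders",
                                              "image": "registry.example/orders:2.3"}]}},
    },
}  # the workload itself is portable across any conformant cluster

volume_claim = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "orders-data"},
    "spec": {
        "storageClassName": "gp2",   # cloud-specific: an EBS-backed class on AWS
        "resources": {"requests": {"storage": "100Gi"}},
    },
}

service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
        "name": "orders",
        "annotations": {
            # cloud-specific: load balancer behaviour configured via provider annotations
            "service.beta.kubernetes.io/aws-load-balancer-type": "nlb",
        },
    },
    "spec": {"type": "LoadBalancer", "ports": [{"port": 443}]},
}


def cloud_specific_fields(*manifests: dict) -> list[str]:
    """Flag the fields that would need rework when moving to another provider."""
    flags = []
    for m in manifests:
        sc = m.get("spec", {}).get("storageClassName")
        if sc:
            flags.append(f"{m['metadata']['name']}: storageClassName={sc}")
        for key in m.get("metadata", {}).get("annotations", {}):
            flags.append(f"{m['metadata']['name']}: annotation {key}")
    return flags


if __name__ == "__main__":
    print(cloud_specific_fields(deployment, volume_claim, service))
```

    The point mirrors the article’s: the compute abstraction is common, but storage, load balancing and the integrated services around the cluster are where provider dependency reappears.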
    The dependency is not just in the infrastructure choice; it is felt even more in the application itself and all the integrated services. It also goes without saying that if you go with a Kubernetes service offered by the public clouds, these solutions have the same problem at scale that IaaS solutions in the public cloud do – rising costs – along with the same privacy and data security concerns. In practice, the time-to-value here is essentially tied to Kubernetes familiarity, if you are going with a public cloud offering, or to advanced operational expertise, if you are attempting to run Kubernetes at scale on-prem. From the perspective of day one operations (bootstrapping), if your team is already familiar with, and committed to, Kubernetes as its application deployment platform, then it can get up and running quickly. There is a big caveat here: this assumes your application is ready to be containerised and can be deployed within an opinionated framework like Kubernetes. If that is not the case, there is another source of hidden costs that will add up in re-architecting or redesigning the application to be more container-friendly. As a side note, there are enabling technologies out there that aim to reduce this ramp-up time to productivity or redesign, such as serverless or FaaS technologies. The complexities of day two operations with Kubernetes for large-scale, mission-critical applications that span on-premises or hybrid environments are enormous, and a topic for another time. But suffice it to say that even if you are able to deploy your first cluster quickly with an open source tool – the likes of Rancher or Kops, among others – to achieve fast time-to-value for day one, you are still nowhere close to achieving time-to-value as far as day two operations are concerned. Operations around etcd, networking, logging, monitoring, access control, and all the many management burdens of Kubernetes for enterprise workloads have made it almost impossible to go on-prem without planning for an army of engineers to support your environments, and a long learning curve and skills gap to overcome.
    Approach #2: Enabling time-to-value and infrastructure efficiency
    This is where hyperconverged infrastructure solutions come in. These solutions promise better time-to-value because of their turnkey nature, but the consumer pays for this by, once again, being locked in to a single vendor and its entire ecosystem of products – which makes these solutions more expensive. For example, Nutanix offers not only its core turnkey hyperconverged offering, but also a number of ‘essentials’ and ‘enterprise’ services around it.
    Approach #3: Enabling infrastructure efficiency and lock-in avoidance (in theory)
    We can take an open source approach to hyperconverged infrastructure as well, via solutions like Red Hat HCI. These provide the efficiency promise of hyperconverged while also offering an open source alternative to single-vendor lock-in. Like any other complex open source infrastructure solution, though, they suffer from poor time-to-value for consumers. This, then, is the backdrop against which most ‘hybrid cloud’ efforts are framed: how to increase time-to-value and enable portability between environments, while improving unit efficiency and data centre costs.
    Most hybrid cloud implementations end up being independent silos of point solutions that, once more, can only optimise against one or two of the CAP theorem axes. These silos of infrastructure and operations have a further impact on the overhead, and hence the cost, of management. Breaking this cloud infrastructure-oriented CAP theorem would require a fundamentally different approach to delivering such systems. ‘Cloud-managed infrastructure’ helps deliver the time-to-value, user experience and operational model of the public cloud to hybrid data centres as well, and utilising open infrastructure to ensure portability and future-proof systems and applications can help remediate costs too. […]
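    To recap the article’s three-way trade-off in one place, here is a small, purely illustrative sketch that records which two of the three axes each approach discussed above addresses; the axis and approach names simply restate the article’s own terms.

```python
# Purely illustrative recap of the article's cloud-infrastructure "CAP"-style trade-off:
# each approach tends to address two of the three axes, never all three.

AXES = {"time_to_value", "lock_in_avoidance", "infrastructure_efficiency"}

APPROACHES = {
    "#1 Kubernetes / managed KaaS (in theory)":     {"time_to_value", "lock_in_avoidance"},
    "#2 Proprietary hyperconverged infrastructure": {"time_to_value", "infrastructure_efficiency"},
    "#3 Open source hyperconverged (in theory)":    {"infrastructure_efficiency", "lock_in_avoidance"},
}

for name, satisfied in APPROACHES.items():
    missing = AXES - satisfied
    print(f"{name}: sacrifices {', '.join(sorted(missing))}")
```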