Who in Open Source plays golf with the CIO?

At the CloudStack Collaboration Conference 2014 in Denver I was given the honour of presenting the opening keynote on the last day. Of course you find yourself looking at a room full of people suffering from hangovers, but that is a small price to pay.

Selecting a topic for a presentation is always a burden. For this talk I focused on the position of Open Source in the boardroom. Yes, over the last decade we have seen a rise of Open Source software used in corporations, enterprises and even the public sector. But at the same time Oracle, CA and BMC still post turnover numbers that make you dizzy. And when you take into account that software plays a bigger role than ever (software is eating the world, remember), this is still remarkable.

Especially when you know that today's CIO does not want vendor lock-in, and that in the large application landscape items like datacenter consolidation, application landscape normalisation and the move to the (hybrid) cloud require his attention, with a strong focus on cost and flexibility while keeping availability at 100%.

One might think that these trends are reason enough for the typical CIO to feel a stronger urge to move parts of his workload to Open Source. Unfortunately, the contrary is true. Last month I witnessed this a couple of times.

At the Pre-Commercial Procurement Market Consultation held by the EU for their Cloud For Europe initiative, all the companies that attended the meeting were commercial parties. Not one representative of Open Source software was part of the discussion. Of course my company was present and we asked some questions about the position of Open Source in this process.

Later that week Allen & Overy and Black Duck Software organised an early-morning seminar at the Dutch office of Allen & Overy. The theme was Open Source. The presenters gave a very good overview of the status of Open Source software and its legal implications, whether from a merger-and-acquisition point of view or from a risk management perspective. My main takeaway was that few companies have an Open Source policy in place. They do not know whether the software written within their companies, or the software they buy, contains Open Source. And the numbers do not lie: approximately 30% of all code written is Open Source code. (more info from Black Duck on OSS)

The fact that purchasing departments, CIOs and enterprise architects are not always in the know is of course stunning. But this is not their issue alone; it is a big issue for the Open Source software industry itself. Open Source software often lacks legal bodies that can act on its behalf. Take CloudStack: the software that powers clouds has a lot of traction in the enterprise. Loads of companies use this product to build public or hybrid clouds, and yet the community that supports and builds CloudStack consists mainly of developers. Of course CloudStack is backed by Citrix, and they continue to do a good job promoting that their CloudPlatform product is powered by Open Source. The big players here have a stronger presence in the boardroom, more access to analysts and of course bigger marketing budgets.

Some companies have developed a business model that makes optimal use of both an Open Source community of developers and a commercial offering. Chef and Puppet are good examples: for additional features, security or an on-premise installation you have to buy the commercial versions of their products. These companies use Open Source to gain traction and preserve a good culture in their organisation, and their commercial model ties into this nicely.

I am not saying that this can be changed easily, but for starters I will organise a golf event for CIOs and Open Source people to get acquainted. I have already found some cool people who are willing to participate. If you want to join, let me know and you will get an invite.

Slides of my presentation


DevopsDays Amsterdam 2014 - Call for Organizers

One year ago, the meetup group DevopsDays Amsterdam, started by Alessandro Vozza, was revived by organizing our first monthly meetups. After solid advice from both Kris and Patrick we set out to find people who are enthusiastic about Devops. On top of that, we hoped to gather enough people to organize a real DevopsDays.

Since then a lot has happened. Not only have we held a DevopsDays Amsterdam 2013 event (which was a big hit!), but we have also grown from 30 meetup members to 350. And on top of that, we host a meetup almost every month, with very diverse topics.

This year we will again organize a DevopsDays Amsterdam event, around the same dates and at roughly the same venue. But this is only the start of it. Organizing an event takes a tremendous amount of time and dedication. It is not difficult, it is just hard work. But it is a load of fun as well.

Therefore we want to reach out to all those beautiful, passionate individuals who are willing to join us for this event: raising money from sponsors, attracting attendees, setting up the venue, arranging food and drinks, preparing workshops, video and audio, and so on. If you are interested in setting up this great event in 2014, please let us know at organizers-amsterdam-2013[AT]devopsdays{DOT}org. The only thing we ask of you is to be active as an organizer.

After next month's meetup, held at Schuberg Philis on 8 January, we will define the organizing committee. It would be great if you were one of them!

Hope to see you at the meetup, and hope to see you at all the Google Hangouts that we will plan in the course of 2014!

On behalf of the organizers,

Arjan


CloudStack Collaboration Conference Europe

At Schuberg Philis we aim high. We aim at mission critical application infrastructures that businesses and societies rely on 24×7. Not only from an uptime and performance perspective, but also from a flexibility perspective. Customer needs are changing faster and faster (faster is the new faster) and everything will be software driven. This ever-changing landscape forces us to adapt and evolve constantly. Guaranteeing 100% uptime is still uncommon, but on its own it is not enough to satisfy our needs and those of our customers.

Two years ago, after a hackathon on optimising our toolset, we drank a few pints and decided that optimising our current way of working with the tools we used back then simply was not good enough. Open Source is a genuinely mature option for the enterprise nowadays, and the communities build brilliant tools faster and make them more feature-rich than we could ourselves. On top of that, we figured we were not innovative enough.

Back then we gave ourselves the following bold goal: build a new customer environment that is ready for the years to come, that is feature-rich, that is as robust as our current environments, and that does not use any technology we had ever used before.

That last part has given us an amazing journey. One leg of that journey was a PoC with different cloud platforms (not vCloud, which would have been the most obvious choice back then because we already use VMware). I have written about this PoC before; the outcome was CloudStack. Pretty soon we had our first ‘test’ cloud running on old hardware, and after that we went for some serious test drives. A big finding two years ago was that Nicira SDN was not part of CloudStack.

So joining the CloudStack community as a company and building a big part of Nicira (VMware NSX) support into CloudStack (together with Nicira and Citrix) was our (Hugo's, actually) first adventure as a member of this community. Since then a lot has happened between Schuberg Philis and CloudStack. Hugo became a committer, and Funs and Roeland have given numerous talks on different cloud-related topics. We got acquainted with a number of people who are all very dedicated to the success of a great product with a great vision.

After the CloudStack Collaboration Conferences of San Diego (2012) and Las Vegas (2013), Hugo's bluff was called to organise the European edition, especially after the turnout for the CloudStack SDN talk at ApacheCon in Germany turned out to be great.

A first conference call between the organisers of the Vegas conference and SBP was set up and the virtual handshake was done. CloudStack Collab would come to Europe, and the dates were agreed upon: 20-22 November in Amsterdam.

As 'very experienced' conference organisers, we probably did not know what we were saying yes to. But Patrick Debois and Kris Buytaert taught us a lot when we helped organize DevopsDays Amsterdam. And how hard can it be? Some sponsors, a venue, some speakers and some attendees. That's it. Of course practice proved we were too enthusiastic, too pragmatic and too quick to simplify things, but this is how we started.

Since then I believe we have created a great team, with a great mix of knowledge, personal networks and geography, but all with a great drive. And the best thing: we had very good support from Citrix, yet the entire conference was organised by the community. No commercial bureau did anything for us, and the entire conference was paid for by sponsors and attendees (with a low ticket fee).

Last Wednesday the conference started with workshops and hackathons. 180 developers and engineers gathered and worked like crazy to either fix bugs or gain knowledge: knowledge of CloudStack itself, but also of the very important tools you have to use once your IaaS layer is in place. Chef (Michael ‘goatherder’ Ducy), Splunk (Damien), Jenkins (CloudBees), Nexenta and Elasticsearch (Leslie) are all great examples of how well these products blend together. The evening was great as well: 100 people on a boat going for dinner and drinks, shooting some pool afterwards and having loads of discussions about cloud and CloudStack. Good fun.

After this first hands-on day, over 60 talks were given: not only core CloudStack talks, but also very nice Devops talks (John Willis, Mark Burgess, Paddy Power, Kris Buytaert, Pierre-Yves Ritschard, and Sebastien Goasguen with Nguyen and Damien). The city of Amsterdam chipped in as well by highlighting that it is exactly 25 years ago that the Netherlands and the US were connected and mail was sent via the internet. Again the evening was filled with dinner and drinks, this time at the venue. ShapeBlue gave the jazzy tunes a rock flavour by singing the CloudStack song (check it out on YouTube, it's great). CloudBeers afterwards, and Thursday proved to be a killer. On Friday we stayed in the good flow, and even the last sessions of the day were well attended.

The closing speech by Hugo was great. We do not have a wealthy foundation; we have a community of gifted and driven individuals and companies. Dear sponsors, thanks for your efforts, but please “insert coins to continue”. Next spring we will be back in the US, and before that CloudStack Day Japan takes place in March.

One thing is obvious: we have chosen CloudStack, and both the product and the community are amazing. Sometimes other products like OpenStack get more fame, which is good for them; they are doing great things in multiple areas. But CloudStack has big momentum. Analysts and enterprises are looking into it as a serious candidate to run even more production workloads. The community is growing, and commercial companies are even delivering 24×7 support nowadays.

We will be back as (co-)organisers of DevopsDays Amsterdam and of a lot of meetups on Devops and CloudStack. I am happy that we changed course two years ago; I would not have wanted to miss this one bit.


The importance of sponsors that stick their neck out

This year we are helping to organize some events. The best known are DevopsDays Amsterdam and the CloudStack Collaboration Conference Europe 2013. Patrick Debois already warned us when we organized DevopsDays in spring: rule #1 is to get sponsors for the venue before anything else. If the venue is paid for, you have an event. Otherwise it remains a struggle.

Back then, Schuberg Philis was willing to step up to the plate and carry this risk. For the CloudStack Collaboration Conference this is no longer doable: the amount of money is too big for one or two organisations. Therefore we reached out to sponsors, even before the event was set in stone.

It is amazing to see the willingness of a lot of companies to help out, not only by allowing people to spend time on organizing events, but also by donating hard cash. All those companies understand the importance of the event we are organizing: building an Open Source community where developers have easy access to core developers, exchanging knowledge on cloud computing in its broadest sense, and meeting people who have overcome issues you are about to run into yourself.

Especially for these reasons I want to thank our current sponsors. First of all Citrix: they open-sourced CloudStack and they are still supporting that move big time. Next to them: ShapeBlue, Nexenta, NetApp, CloudSoft, Ikoula, LeaseWeb, Elasticsearch, Apalia, Atom86 and Exoscale.

Thanks!

 


Game Day Exercise

Game Day Exercise – Schuberg Philis

Game Day (aka DR Test) Scenario

The emphasis of this year's Game Day Exercise (DR test) was on three major items.

  • First of all, we needed to know whether the new employees who joined Schuberg Philis recently are familiar with and capable of executing a DR test. Over the last few years we have executed multiple DR tests, we have documented large parts of the execution, and the experienced engineers and customer operations managers know by heart how to handle a DR or any other major outage. We have three relatively inexperienced (from an SBP perspective) engineers in the team, and one engineer joined the team over a year ago. Therefore it was good to know whether those engineers are capable of executing the DR without being kick-started by the experienced engineers. Is the balance between automatism and improvisation the right one?
  • Secondly, we put more emphasis on the organizational aspect of the DR test than in previous years. The test was planned but not communicated, so the engineers did not know that they would need to execute a full DR; hence we call it a Game Day exercise. The reason for this is that we needed to know how people react when a major event occurs. The test was kicked off at 18:00 on a Monday evening. By doing so we knew there was a big chance that engineers would be commuting from the office to home. In this scenario we could test perfectly how long it takes to set up a fully operational working environment, including communication.
  • Thirdly, we tested the link with the SBP Business Continuity Plan (BCP) for the first time since the plan was adjusted in September. Over the last few years we have learned that communication is one of the key success factors in a DR scenario. We have also learned that communication with too many stakeholders interferes with the execution of the test.

Relation to the tests of previous years

In 2009 Schuberg Philis implemented a redundant environment across two datacenters. From that moment on it was possible to execute DR tests in such a manner that the IT functionality could be made available in a different datacenter than Schuberg Philis's own. Part of this project was a full failover test prior to go-live. After go-live, no DR test was executed in 2009.

The first DR test based on this architecture was executed in 2010. The main goal of this test was to prove that the architecture was capable of a DR scenario and that the engineers were able to execute a failover within the time frames we had agreed on. The test itself was a graceful one: the entire functionality was shut down at one datacenter and activated at our second datacenter. At that time the majority of the applications were active-passive, which proved to be the best starting point. We used this DR test to script and document large parts of the DR.

The 2011 DR test had a totally different character. Knowing that the architecture was capable of a DR, and knowing that a real DR will not be graceful at all, we decided to take a more drastic approach. This time we needed to test how resilient the environment really was if we lost datacenters the hard way. In addition, we had implemented synchronously mirrored storage, so some of the applications ran active-active. This also gave us the opportunity to run primarily active in a different datacenter than Schuberg Philis's own, an architecture change that needed to be tested as well. During the test we shut down power in the racks of Datacenter 2 and Datacenter 3. This led to a massive chain of events that needed attention and recovery. Over 1,400 Nagios alerts make you fall back on your knowledge and experience: make the environment reliable again by checking and fixing network, storage and virtualization, and after that focus on the applications.

In November 2012 we took the approach described above. The Game Day scenario itself was a copy of an event that happened earlier in 2011, but in this test we exaggerated the event, making it a good excuse to execute a real failover. The scenario was the following:

17:35 | There is a fire in the Xxxxx building next door. The fire started around 17:25. People in the SBP building have alerted us about smoke on the streets, and also inside the SBP building.

17:55 | The fire department comes by to instruct people to leave the building, as the fire is not yet under control. We can see the flames rising sky-high through the roof. The smoke is getting more intense. Our Data Center Manager is alerted by security: the smoke detection is at a dangerous level. Only two levels higher and the fire suppression system will be triggered. This means that we need to shut down our datacenter as fast as possible.

17:58 | The internal emergency response team (in Dutch: Bedrijfshulpverlening) is evacuating the building. The Director of Operations is talking to the fire department because SBP is a 24/7 secured office. Security staff are instructed to leave the building as soon as it has been evacuated. Police and fire department take over physical security of the surroundings.

17:59 | The Director of Operations calls the Customer Operations Manager for internal IT to discuss what to do. They decide it is best to execute a DR in such a manner that SBP as a datacenter is not needed. The Customer Operations Manager calls the Lead Engineer, who initiates the DR.

Root cause for success and findings

The Game Day exercise itself was executed successfully. We had no major findings. This gives us great comfort that we will be up and running quickly in case of a real disaster or act of God. However, a number of minor findings need to be taken care of.

First of all, the items that were executed successfully:

  • The total time for the DR and the failback was 2 hours and 30 minutes. In this time frame we did a manual failover, checked the entire architecture (storage, network, virtualization layer, application layer) and on top of that tested all functionality. After a successful failover we agreed to fail back as soon as possible. Major functionality used by our engineers to service customers (connectivity, documentation, procedures, ticketing system, passwords) was only interrupted briefly, as anticipated.
  • As all engineers were commuting or had to leave the building, it was good to see that a core team of engineers arranged a working spot within 10 minutes of the start of the test. The DR itself therefore started promptly after the evacuation. In a test scenario we had more time to start, but in a real disaster this might not be the case, so it is good to see that we can start faster than anticipated.
  • The link between DR and BCP was made and proved to be working. The following BCP steps were executed:
    • A. Identify and communicate the incident
    • B. CMT (Crisis Management Team)
    • C. Start the McInfra DR procedure
    • D. Execute the evacuation
    • E. Align with the customer and the customer team on the DR
  • Of course the scope was limited to the Schuberg Philis environment only.

The minor and medium findings that need adjustment are:

  1. Minor – Setting up a conference call with your mobile phone while also needing that phone to call individual team members does not work seamlessly; you need a second phone next to the conference call. Setting up a conference bridge in the hotel where we found ourselves a working spot proved not to work either. It is preferable to have sufficient conference bridges with both an online and a phone connection, and to have online conferencing options that all staff know (WebEx, GoToMeeting and such).
  2. Medium – Last year we decided to split the communication between the Crisis Management Team and the Schuberg Philis DR test. This still does not work optimally. Calling all four engineers on duty and eight customer operations managers takes too much time: experience teaches us that each call takes up to three minutes, which means the Schuberg Philis Customer Operations Manager spends almost 45 minutes on the phone at the beginning of the DR. As the status changes quickly in the beginning, the Customer Operations Manager also needs to align communication with the team executing the DR test. A second Customer Operations Manager will be appointed to handle this communication.
  3. Minor – Not all drawings (rack diagrams) were available in PDF format. The Visio format is too large for working from a remote location. A fix is easy to implement.
  4. Minor – A failover of the Certificate Authority is not documented. The failover was executed successfully, but only because the engineer knew exactly what to do by heart. This will be documented in a SOP (Standard Operating Procedure).
  5. Minor – The use of the SQL query to fail over the ERP system was not described sufficiently. This will also be documented in a SOP.
  6. Minor – The load balancer configuration was not synced. We need to check the configs of the load balancers.
  7. Minor – The Customer Operations Manager did not follow the correct order of communication. On his way to the hotel he called the other Customer Operations Managers, while he should have called the Engineers on Duty first. However, as this is a rotating group, it is not clear who the Engineers on Duty are without looking in the pager duty tool (IRT). As a possible mitigation we could assign standard phone numbers that are switched automatically when a duty is handed over from one person to another.
  8. Minor – A monitoring check was disabled during maintenance and never re-enabled. Who monitors the monitor? (See the sketch after this list.)
  9. Minor – Not all Customer Operations Managers could be contacted directly (holidays and such). As a result we decided to call the lead engineer of that team instead. This is not described in the procedure.
  10. Minor – The storage username and password were not in the password safe. These need to be added.
  11. Minor – The SBP Citrix servers could not be drained without the Citrix tools. Those tools will be installed on the management servers.
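
One concrete way to catch finding 8 automatically is to let the monitoring system check itself for anything that was left switched off. Below is a minimal sketch, assuming a standard Nagios status.dat file (the path is a placeholder, not our actual setup), that reports every service whose active checks or notifications are disabled:

```python
# Minimal sketch: list Nagios services whose checks or notifications are
# disabled, so a forgotten maintenance toggle is spotted the next morning.
# The status.dat path below is a placeholder; adjust it to your installation.

STATUS_FILE = "/var/log/nagios/status.dat"

def service_blocks(path):
    """Yield each 'servicestatus { ... }' block as a dict of key=value pairs."""
    block, in_service = {}, False
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if line.startswith("servicestatus {"):
                block, in_service = {}, True
            elif line == "}" and in_service:
                yield block
                in_service = False
            elif in_service and "=" in line:
                key, _, value = line.partition("=")
                block[key] = value

def disabled_services(path=STATUS_FILE):
    """Yield (host, service) pairs with active checks or notifications off."""
    for svc in service_blocks(path):
        if svc.get("active_checks_enabled") == "0" or svc.get("notifications_enabled") == "0":
            yield svc.get("host_name", "?"), svc.get("service_description", "?")

if __name__ == "__main__":
    for host, service in disabled_services():
        print(f"DISABLED: {host} / {service}")
```

Run from cron, or as a Nagios check of its own, a report like this answers the "who monitors the monitor?" question with very little effort.
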
We hope that giving some insight into a typical Game Day exercise at SBP gives you the feeling that you are on the right track as well. In addition, we are trying to make the test scenarios more and more lifelike. If you have any comments, feel free to post them here or contact me directly. It would be fun to take this to a higher level.


Cloud Expo Silicon Valley – Delivering Mission Critical Workloads to the Cloud

Yesterday (7 November) I was given the opportunity to speak at the Cloud Expo by Shannon Williams (VP Market Development, Cloud Platforms). As you have probably read in my previous posts, our cloud is ready to rumble. Hence, it was the perfect time to share some of the learning money we paid over the last year. In this blog post I'll write a short summary. The reason I do so is that a lot of people asked me questions during and after the presentation, which gives me the feeling that our experience may be of value for others out there as well.

Real World Story

For starters, the assignment we as a team came up with was: design a cloud. A cloud that is capable of running traditional workloads: workloads we often see at our enterprise customers, workloads that rely on a solid redundant infrastructure. Next to that we want to embrace the new as well, so we need to design for failure and resilience. We need to design for the modern type of (web) applications, or the distributed workload as Citrix calls it.

To get the ultimate R&D effect, we told ourselves not to use any technique or piece of equipment we had ever used before. Only if we encountered major issues or a slowdown in delivery would we fall back to the stuff we have mastered for years. I won't go into detail here about why we need this cloud way of working and such; if you do want to know, just contact me and I am more than happy to tell you all about it.

This is the technology stack we have chosen for release 1.

The choices so far:

  1. Storage. Distributed workloads are not in need of synchronous storage. Having said that, it needs a very good level of reliability; after all, we are building for Schuberg Philis. The shortlist here was NetApp (one of our long-term partners), Gluster and ZFS on commodity hardware (Nexenta). We found that Nexenta delivers us the biggest bang for the buck, although their service and commercial infrastructure has not been around as long as that of our current partners. Nevertheless they are a real partner to us and more than willing to move forward aggressively on those issues. If you ask me, they are going to be a real threat to the 'classic' storage providers. For the traditional workload we would not take the risk of the fancy new stuff. We played for a while with the idea of building our own Nexenta metro cluster, but for reliability and speed reasons we have chosen to stick to a regular NetApp MetroCluster, which we have implemented and operated for a long time already. On the S3 part of the spectrum we currently do not offer or build anything. As the S3 API is supported as of Apache CloudStack 4, we have multiple options there in the future: could be Cloudian, could be something else.
  2. Switching. A shoot-out between Cisco and Arista. The same trade-off applies here: enterprise class for traditional workloads versus low-latency and cheaper switching for distributed workloads. As the risk here is much lower than at the storage level, and we have a strong wish to do a lot of Software Defined Networking (SDN), the choice was not that difficult: Arista all the way, also for traditional workloads. Having said that, we still keep an eye on Cisco of course; they came with a very competitive deal, and we all know and love their high-quality devices.
  3. Compute. Back to what we know here. The only difference is that we use the G8 series instead of the G7. That has led to iLO, driver and firmware issues as always, but those are items we can conquer.
  4. Hypervisor. The trade-offs here are mainly around price and features. If your cloud will be a big one, now or in the future, you might want to consider an open source hypervisor; KVM and Xen are the obvious candidates. If you want feature richness and a single-vendor strategy, ESX is your best bet. For us this is a really important item. We do not want to be locked in at one hypervisor. We do want to work with something cheaper than ESX, but we do not want to spend a gazillion hours on troubleshooting and learning a new virtualization layer. The safe bet for us was to go with XenServer for starters, the commercial version. Not that we do not like the open source one, but in these early days of our cloud, support might come in handy. XenServer Advanced it is. In our traditional environments, by the way, we run a lot of ESX, and on the wish list is definitely some KVM as well. Talking with Citrix about their view on this, they lifted a small piece of the curtain: the cloud layer will talk to the hypervisor in as integrated a way as possible, but the hypervisor will need to deliver the features! If you need to pick, go back to your virtualization requirements, check your wallet and make your choice.
  5. Software Defined Networking. A pretty new ballgame, but if you look at the possibilities, they are almost endless. One of our lead network engineers ran into Nicira and he was sold. We contacted the guys and we were all very impressed. What Martin Casado and his team have accomplished in such a short timeframe is really amazing. This SDN gives us the possibility to really build hybrid clouds, to set up tunnels within DTAP environments, and so on. After a short PoC we knew that this was amazing stuff.
  6. Cloud Orchestration. The long list we used was OpenNebula (http://opennebula.org/), Eucalyptus (http://www.eucalyptus.com/eucalyptus-cloud), OpenStack (http://www.openstack.org/), CloudStack (http://incubator.apache.org/cloudstack/) and vCloud (http://www.vmware.com/products/datacenter-virtualization/vcloud-suite/overview.html). Again, vCloud was not the first option for us. Do not get me wrong: vCloud is here to stay. We are an ESX shop and if needed we will absolutely build one of those clouds. But for now this was too close to home; we needed to learn the open source community and we needed a product that is potentially free to use. Having said that, we created a shoot-out between OpenStack and CloudStack. Requirements, next to newness and the licence model, were the openness of the system, the ability to contribute to the community ourselves, and the amount of time our engineers needed to install, modify, tweak and tune the system (a minimal sketch of driving the CloudStack API follows after this list). Based on those requirements CloudStack became the preferred tool. In the beginning we were still worried big time because the move to the Apache Software Foundation had not yet been announced. Once it was, the end result of the PoC was obvious: Spark404 could contribute our items directly to the core team. The community is exploding as we speak, and many features are being built at a very fast pace. Our engineers needed little time to learn CloudStack, way less than with OpenStack, and the engineering effort needed for OpenStack was bigger than we expected. Having said that, the richness of the OpenStack suite is very promising; if they get their release cycle straightened out, they can obviously become a massive player in this field.
  7. Configuration Management. CFEngine 3, Chef or Puppet? As addicts of CFEngine 2, we are in need of a proper configuration management tool. Upgrading to version 3 of CFEngine was within reach, and so was Chef. A number of new Mission Critical Engineers used Chef in their previous lives. We handed both options to the team, and before we knew it, the first recipes and cookbooks were being created. One tough internal discussion later (or maybe five), we voted with our feet. The team uses Chef and they are happy campers doing so. The Opscode community is great, and with trainers like Mandi Walls we trained approximately 35 people in only two sessions. Having said that, we have had great conversations with Mark Burgess on this topic. The theory of promises sounds like a novel, one that you need to read at least once in your life. Nevertheless, momentum was in favour of Chef, and that is what we use for our Mission Critical Cloud.
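
As mentioned under item 6, here is a minimal sketch of what driving CloudStack programmatically looks like. It assumes the standard Apache CloudStack request signing (parameters sorted, lower-cased and signed with HMAC-SHA1 using your secret key); the endpoint URL and the keys are placeholders, not our production values:

```python
# Minimal sketch of a signed Apache CloudStack API call (here: listZones).
# The endpoint and keys are placeholders; replace them with your own values.
import base64
import hashlib
import hmac
import urllib.parse
import urllib.request

ENDPOINT = "https://cloud.example.com/client/api"
API_KEY = "your-api-key"
SECRET_KEY = "your-secret-key"

def sign(params, secret):
    """CloudStack signs the sorted, lower-cased query string with HMAC-SHA1."""
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='')}"
        for k, v in sorted(params.items())
    )
    digest = hmac.new(secret.encode(), query.lower().encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

def api_call(command, **kwargs):
    """Build, sign and execute a CloudStack API request, returning raw JSON."""
    params = {"command": command, "apiKey": API_KEY, "response": "json", **kwargs}
    params["signature"] = sign(params, SECRET_KEY)
    url = ENDPOINT + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    print(api_call("listZones", available="true"))
```

Everything the CloudStack UI does goes through this same API, which makes the openness of the system very tangible.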

Some questions that were asked during and after the talk:

Q: You talked about two datacenters not far apart, so you can use stretched VLANs. What is the maximum distance you use? A: Of course the distance is measured in milliseconds (or less). In our situation it is always less than 50 km.
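
To make that concrete: a quick back-of-the-envelope calculation, assuming roughly 200,000 km/s propagation in fibre and ignoring equipment latency (neither figure comes from the talk itself), shows why 50 km stays comfortably below a millisecond:

```python
# Back-of-the-envelope propagation delay for a 50 km stretched link.
# Assumes ~200,000 km/s signal speed in fibre; equipment latency is ignored.
FIBRE_SPEED_KM_PER_S = 200_000
distance_km = 50

one_way_ms = distance_km / FIBRE_SPEED_KM_PER_S * 1000
round_trip_ms = 2 * one_way_ms

print(f"one-way: {one_way_ms:.2f} ms, round-trip: {round_trip_ms:.2f} ms")
# prints: one-way: 0.25 ms, round-trip: 0.50 ms
```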

Q: If you had to make the OpenStack / CloudStack decision again, would it have the same outcome? A: Most probably yes. We still do not have the manpower to glue the OpenStack components together, and the release cycle is still not that clear to us. Having said that, we picked CloudStack 10 months ago; it could be that my information on OpenStack is a bit outdated, because we are not really following their moves in detail, and they may have made progress in those areas.

Q: Did you know about or talk to companies that use Nexenta beforehand? A: Yes, we talked to more than one company: a university and a hosting provider in France. They had similar experiences with the product, although one of them faced performance issues. We figured we could corner that issue rapidly, so we went for Nexenta.

https://speakerdeck.com/schubergphilis/cloud-computing-expo-2012


R&D in the cloud makes you wonder

One year ago… Cloud is a hype. Cloud is just another word for the almighty internet. Don't believe the hype (http://www.lyricsdepot.com/public-enemy/dont-believe-the-hype.html). Definitions are not that important, et cetera.

We started to investigate: Eucalyptus, OpenStack, CloudStack and others. CloudStack (formerly cloud.com) became open source (Apache) and by doing so lost the nickname 'Loudstack'. For us as a company a whole new point of view opened up. What if shared resources aren't scary in some environments? What if flexible resources are the driver behind innovation for our customers? A MongoDB proof of concept? A Tridion PoC? Whatever PoC? Instead of buying hardware for such a PoC and losing money and time, you now have the servers the next day. You use them at your convenience and after that they go to the shredder.

And it is not only on an innovation level that this cloud is gold. What if we provide a cloud that delivers 100% uptime, totally different from Amazon, Heroku and others? What if we can provide a cloud that does not suffer from Safe Harbour(tm) discussions? A cloud that is 100% European, 100% Dutch and 100% available? A cloud that is as secure as the physical environments we host? Believe me, this can be true.

Since we released the beta cloud last month, we are so confident that we can pull this off that I even feel confident writing about it in public. Of course we will make mistakes. Of course we will pay our learning money. Of course deadlines will haunt us sooner or later. But isn't that the fun part of this game? Being ahead of the pack? Putting your cards on the table before the game is over?

In a few months' time we will find out whether we were too arrogant or whether we pulled this one off. If it is the former, I promise I'll write about our f…k-up of the century. If not, you will hear about it anyway. The outcome is important, but for now I am enjoying the moment that we said yes to the investment needed to make this dream come true. A lot of people are sticking their necks out, and that is more than I imagined one year ago.

By the way, we need help building this thing. You're always welcome at our hackathons (the first one, a 'hack along', is about to be announced) or at our office (old-fashioned bribery with food and drinks).


We have our own Cloud AS!

  • as-name: SBPCLOUD-AS
  • descr: Schuberg Philis B.V.
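
For the curious: an object like this can be retrieved straight from the RIPE database over the plain whois protocol (TCP port 43). The sketch below is generic, and the AS number is deliberately left as a placeholder since it is not listed above:

```python
# Minimal whois client (RFC 3912): send the query, read until the server closes.
# Replace the placeholder with the actual AS number to fetch the aut-num object.
import socket

def whois(query, server="whois.ripe.net", port=43):
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall((query + "\r\n").encode())
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

if __name__ == "__main__":
    print(whois("AS<number>"))  # placeholder: fill in the real AS number
```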

Can we fix it? Yes we can. Mission Critical goes cloud. We are a bunch of happy campers here at Schuberg Philis. Later this week, more on the general status of our Mission Critical Cloud.


Beyond the Dev and Ops Marriage

Last week was one of the best weeks in a while. Arranging an event is easy if you have three months until the deadline. In the last week, however, you get the feeling you are forgetting things. Luckily we didn't.

Devops day actually started the day before, when Mark Burgess joined us, and on Wednesday John and Kris joined as well. We spent a fair deal of the morning discussing all kinds of things. Devops / webops sounds nice, but to be fair the main thing is a shared language (Kris tm). And on top of that, doing great IT can only be done in a great environment and culture.

And that was the most stunning thing. Inuits, CFEngine, Etsy and Schuberg Philis all understand one thing very well: everything must fit like a glove if you want to build stuff that matters.

Your hiring and recruitment process must be right. We must care about the community by making and using the best that open source can offer. We must care about the next phase of industrialization, namely the informatization of society: how do we work with information, how do we consume data, and how do we learn to do so? And what on earth is happening to our education system? There are not many places where you learn to be a good sysadmin or programmer. There is massive room for opportunity here for the industry to take the lead.

But of course the main topic was why we all do things the way we do. Why do we use CFEngine and Chef? Why do we use Nagios all over the place? Why do we all graph everything that moves and matters? It was stunning to see that all the companies mentioned approach things in a similar fashion. Of course there are differences: Etsy is a large web operation, while Schuberg Philis has a lot of heterogeneous environments with a multi-party set-up (business owners, sysadmins and developers are often three different companies). Nevertheless we all believe we can build beautiful environments that we are proud of: environments that enable customers to do the things they need to do, with superb uptime, performance and scalability, and with a shitload of fun while doing it. In the end we all want to do stuff that matters.

By the way, I am not writing about the content of the event; that would have been a poor man's copy of the slides. Currently we are working with Kris, John and the video editing team to get everything online. When we are ready, I'll post the link. Trust me, those guys are good.


Here it is: The beta mission critical cloud!

Not even two months ago, during the SBP summit, we said we would like to have a “Mission Critical Cloud”. We said we would like to have bold goals. We said we wanted to make a difference. Eight weeks later we have a beta environment that has been soft-launched.

We worked hard to get here, and it is truly great to see. It started with a vision of cloud technology for enterprise customers. We aimed for speed and flexibility. We aimed to shorten the timeframe at the beginning of projects. We aimed for developers who can create dev environments when ‘they’ want them. We aimed for resource ballooning. Today the team pulled it off.

We talked to numerous vendors. What do we do with storage: classic EMC, NetApp, HDS (BlueArc), 3PAR (HP), Gluster, Nexenta? What do we do with firewalling, and how do we secure the place? What do we do with load balancing: virtual or physical, F5 or Cisco? And not only that: what do we do with our service offering? How do we stitch this into our current service offering? Can we work with a ‘standard’ but mature portal, or do we need an SBP-flavoured one?

We not only answered most of those questions, but also built a truly amazing cloud environment. Over the next few days we will polish the environment, and then we will use it to replace approximately 50 old servers.

The benefits from day one: less energy consumption, less heat generated, better scalability. We will use the beta cloud to learn how this thing behaves, which is key knowledge for the next step: the non-beta Mission Critical Cloud, with not only IaaS but PaaS as well.

Proud as a Peacock.
