Scrum Coaches are typically brought into an organization as consultants. A Scrum Coach focuses on the whole organization or a specific division of it, working with and impacting multiple teams and organizing them for effective Agile development across the organization.
We have all seen great products come out of Berkeley's AMPLab. Today I am dedicating this blog to one of my favorite products in the Big Data processing realm, and to how a partnership like no other is bringing us a Big Data service worthy of consideration. Of course, the product is Apache Spark and the partnership is between the big tech titan Microsoft and Databricks.
Spark was born in Berkeley's AMPLab, created by Matei Zaharia in 2009. Shortly thereafter, it became an open source project under a Berkeley Software Distribution license. In 2013, the software was donated to the Apache Software Foundation and the license was changed to Apache 2.0. In February 2014, Spark became a Top-Level Apache project, and by November 2014 Apache Spark was used by the engineering team at Databricks to set a world record in large-scale sorting.
Databricks is a company founded by the creators of Apache Spark and, as previously mentioned, they partnered with Microsoft and brought Azure Databricks Cloud Services to beta in March of 2018. Looking at what Microsoft and Databricks have accomplished so far, I can testify that it is a thing of beauty. Get ready for a very practical tool for Data Scientists, Data Engineers, and Analysts that makes handling the challenges of Big Data processing and its security easy. This platform is definitely here to stay, so go ahead and create a free trial account and start experimenting TODAY.
Please check out the video below for a brief introduction to Azure Databricks Cloud Services, and stay tuned for my next blog, where I break down why Apache Spark is the perfect Big Data processing platform.
This past week I had the fortune to be trained on Scrum@Scale, a framework for scaling Scrum, by the creator of Scrum himself, Jeff Sutherland. It's not a prescriptive framework like some of the others. It's built on true Scrum, and it reminded me that a lot of the problems companies face come from not even doing basic Scrum right. So when they do scale, they scale up a broken system. If you start with garbage, you get more garbage.
What's great about training with Jeff is that he has so much scientific data and so many case studies to back it all up. Several of these studies have shown that you can double velocity, that's right, double velocity, by using Scrum patterns for teams.
Here are some of the patterns for doubling velocity:
- Small teams
- Stable teams
- Dedicated teams
- T-shaped team members
- Daily Scrum
- Interrupt buffer
- A Ready backlog
- Fix bugs found within a day
- All testing completed inside the sprint
All of these have data points, studies, and research to back them up. I'll cover just a couple below.
Let's look at one of the patterns: dedicated teams. A study by Rally Software, before they were acquired, showed a huge increase in productivity when team members were dedicated. Teams whose members were 50% dedicated produced around half as much as teams whose members were 95% or more dedicated.
Brooks's law also shows up in the cost and time to deliver based on team size, and the difference is huge. Look at a team of 10 people vs. 6 vs. 4 and the cost they incur. The team of 6 was able to deliver in twice the time of a team of 10 or 17.
Swarming & Context Switching
Let's just look at one more: swarming versus splitting your focus across multiple projects.
Working on 5 projects at the same time can waste up to 75% of your time. Sure, that's a lot, and most people have what, an average of 3? That's still a 40% loss! That loss means you need more people to make up the difference. Focus instead, and you save money: fewer people doing the work in less time.
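To make the arithmetic above concrete, here is a minimal Python sketch using the often-cited context-switching loss figures the numbers in this post line up with (about 40% loss at 3 concurrent projects, about 75% at 5). The loss table and function names are illustrative assumptions, not data from any specific study cited here:

```python
# Approximate productivity loss by number of concurrent projects
# (illustrative figures matching the ~40% at 3 and ~75% at 5
# mentioned above; exact values vary by study).
SWITCHING_LOSS = {1: 0.00, 2: 0.20, 3: 0.40, 4: 0.60, 5: 0.75}

def effective_capacity(projects: int) -> float:
    """Fraction of a person's time that produces useful work."""
    return 1.0 - SWITCHING_LOSS[projects]

def time_per_project(projects: int) -> float:
    """Useful working time each individual project actually receives."""
    return effective_capacity(projects) / projects

for n in sorted(SWITCHING_LOSS):
    print(f"{n} project(s): {effective_capacity(n):.0%} productive, "
          f"{time_per_project(n):.0%} per project")
```

The striking part is the second column: at 5 projects, each one gets only about 5% of a person's useful time, which is why swarming on one thing at a time wins.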
There are so many more studies and patterns that increase teams' velocity and reduce company cost, proving that implementing Scrum correctly is the only way to get ahead of your competition. As Jeff likes to say, "Change or die." Change the way you do work, or your competition is going to take you out.
I often get asked, when I work with people or teach a class, how to show real progress on an Agile project: roadmaps, release burnups, velocity, how to answer questions on delivery. We build products for external clients using Scrum, and I will show how we visualize progress sprint by sprint for our customers: how velocity changes, how the roadmap and release plan change, and most importantly how customer feedback has affected the release date. This isn't hypothetical talk but real-world experiences and conversations.
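The simplest version of the forecasting behind a release burnup can be sketched in a few lines of Python. This is a minimal illustration assuming a plain average-velocity model; the function name and the numbers are made up for the example, not figures from a real client project:

```python
import math
from statistics import mean

def sprints_remaining(backlog_points: float,
                      recent_velocities: list[float]) -> int:
    """Estimate sprints left using the average of recent sprint velocities.

    Rounds up, since a partially used sprint still costs a whole sprint.
    """
    velocity = mean(recent_velocities)
    return math.ceil(backlog_points / velocity)

# Hypothetical example: 120 points left in the release backlog,
# and the last three sprints delivered 18, 22, and 20 points.
print(sprints_remaining(120, [18, 22, 20]))  # average velocity 20 -> 6 sprints
```

In practice you would re-run this every sprint, because the whole point of the conversation with the customer is watching the forecast move as velocity and scope change.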
First, though, I will start by showing the old way of doing things with Gantt charts, and why the traditional way doesn't work, comparing the Green/Yellow/Red status method and how it falls short. Then I'll roll that into the better, agile mindset of answering questions like "when will it be done?" and "what will we get?", and how we communicate good and bad news to our clients.
We often get asked what the differences are between a native mobile application and a hybrid one. I was starting to put down some notes on the topic when I came across an article that summed up my thoughts. So I wanted to go through it and mention some key points.
Scrum isn't easy, but it's effective. One of the things teams struggle with is automating their testing and learning techniques like Test-Driven Development or Behaviour-Driven Development, both of which can be applied to back-end and front-end code.
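As a small illustration of the test-first style, here is a hedged Python sketch using the standard `unittest` module. The `story_points_total` helper is hypothetical, invented for this example; in real TDD the two tests below would be written (and failing) before the function body existed:

```python
import unittest

def story_points_total(estimates):
    """Sum numeric estimates, ignoring unestimated ('?') backlog items."""
    return sum(e for e in estimates if isinstance(e, (int, float)))

class StoryPointsTest(unittest.TestCase):
    # Test-first: these cases drive the implementation above.
    def test_sums_numeric_estimates(self):
        self.assertEqual(story_points_total([3, 5, 8]), 16)

    def test_ignores_unestimated_items(self):
        self.assertEqual(story_points_total([3, "?", 5]), 8)

if __name__ == "__main__":
    unittest.main()
```

The same red-green-refactor loop applies whether the code under test is a back-end service or front-end logic; only the test runner changes.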
One team I work with also automates the UI testing. One tool they use, and include as part of their Definition of Done for each feature, is building a test automation using...