
Broadridge’s Annie Michelia on Why DevOps and the Cloud Go Hand-In-Hand

Brian Dawson: Hello. Welcome to DevOps Radio, live at DevOps World | Jenkins World 2019. I have the pleasure of having a chance to interview Annie Michelia. She is a database architect at Broadridge, and really not just a database architect. As we talked and as I understand your bio, you actually have a wide range of skills and it appears that you do a lot there at Broadridge. Can you start by telling us a bit about what you do and about how you arrived there?

Annie Michelia: Yes. By trade, I'm an Oracle DBA, to start with. So I've been working as a database administrator for most of my professional life. Right now at Broadridge, I work as an architect, in the sense that I work with on-premise database technologies as well as AWS cloud technologies. So I play a major role in cloud initiative projects for lift-and-shift migrations. So that's when DevOps kicked in.

Brian: Okay. While you're there, can you tell me a bit about DevOps at Broadridge? You mentioned DevOps kicked in. I'm curious to know where you guys are on your DevOps journey at Broadridge, and why you're pursuing it.

Annie: That's a good question. So we at Broadridge have been using DevOps practices for more than five years now, successfully, to do all the application deployments and infrastructure deployments seamlessly. But the database was always left sitting on the back bench, and not just because of its complexity. The thing is, we handle mission-critical business data, persistent data, which we cannot touch and play around with.

Brian: Your systems of record that need to be maintained can’t break, right.

Annie: True. We are a financial solutions company, so for financial services, criticality plays a very big role. That's why the database had been set aside. So when we started the cloud initiatives, the management decision was that everything has to flow through automation: no manual deploys, no manual installs. So we started this initiative just for the cloud piece, for the database side.

Brian: Okay.

Annie: So what we thought is whatever we did manually in the past for databases needs to be automated, not just infrastructure, but also the ongoing deployments.

Brian: The delivery and the deployment portion of it.

Annie: Exactly. So all the schema changes, code changes and data changes, everything has to be in a no-touch environment. So that’s when we started practicing DevOps for database.

Brian: So DevOps and cloud went hand-in-hand for Broadridge.

Annie: Exactly, yeah.

Brian: Okay. What drove the decision to go to the cloud? Do you know? Was it cost, speed?

Annie: That's one of the reasons, and also the technology. We are evolving; we don't want to be using the old technologies. The other reason could be cost, because of the licensing for the traditional enterprise DBMSes. So that's why the organization made the decision to do a lift and shift of most of the applications to the cloud. And when we moved to the cloud, the decision was made that the environment should be zero-touch.

Brian: Okay. I'm going to want to drill into, in a moment, lift and shift, which people have different opinions about and which means different things, and also some of the challenges of really integrating data and traditional relational databases into a CI pipeline. But before we do, I'd like to ask: have you seen benefits of DevOps so far? Is Broadridge benefiting? Are customers benefiting? If so, how?

Annie: When we thought of this automation for databases, it was really, really scary because we are dealing with critical data. So it actually took quite some time to understand it and to convince the customers about the automation piece. But the whole point is, if you look at the process itself, the manual process took a lot of time, and not just time. The manual process actually involved multi-team participation. So it wasn't just the time or the human errors; it took multiple technical people from various teams to do a simple deployment.

Most of the time, the production support DBAs, who are hired to do production support, end up doing deployments all the time. We are a quite large company, so we have a lot of applications to support. So rather than doing the real production support work, the DBAs ended up doing deployments all day and on weekends.

Brian: And that comes at a cost to both the DBAs and, potentially, to production support, which is now lacking and not as responsive, right?

Annie: True, yeah. Compared to automation, one really big drawback of the manual deployment setup is that there is no traceability for what we deployed and where we deployed it. Did we do it in a test environment? Was it successful? Are we ready to do it in production? Can we go forward? There is no traceability, and that creates big chaos.

So when you bring all this stuff into the DevOps automation, you have a version-controlled deployment. So there is full traceability, and you know that what you did in test can be promoted to production or not. That's the biggest advantage I see as a DBA, over and above other gains like time.

Brian: Right. You gain efficiencies.

Annie: Efficiency, reliability.

Brian: Visibility, traceability, which is huge. Now, I cannot claim by any means to be a database expert. I will share that I spent a bunch of years in the game industry as a software engineer, and I didn't quite get the importance of databases, the science behind them, the complexity. As I moved into the other phase of my career, more focused on enterprise development, and began to spend more time with traditional databases, I realized how complex it can be. Something I learned is that in that space there are a lot of manual controls in place.

Annie: True, yeah.

Brian: A lot of responsibility put in the experts’ hands, because traceability was not really built into the database updates and migration process.

Annie: That's a little bit different now. For a simple deployment, multiple teams used to be dependent on it, and everybody had to be on the spot for installation, validation, and then verification. In this automation process, it's a push of a button in Jenkins; that's how we put it together. Any nontechnical person can run this. There is no DBA involvement, no development involvement, no service delivery team involvement. Any nontechnical person can push a button in Jenkins, which does all the magic.

Brian: Okay.

Annie: It retrieves the artifact from source control, extracts it, makes a SQL connection, and executes. Before execution, if there is a requirement for a pre-deployment database backup, it takes the backup, then runs the deployment, creates the log file, and publishes the logs to all the interested parties.

Brian: Awesome.

Annie: Just the push of a button. So Jenkins does all the magic.
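As a rough illustration of the flow Annie describes, here is a minimal declarative Jenkinsfile sketch. It is not Broadridge's actual framework; the credential ID, artifact URL, TNS alias, script names, and email address are all hypothetical, and it assumes the Credentials Binding and Email Extension plugins plus sqlplus, curl, and unzip on the agent.

```groovy
// Hypothetical push-button database deployment job. Names are illustrative only.
pipeline {
    agent any
    parameters {
        string(name: 'ARTIFACT_URL', defaultValue: '',
               description: 'URL of the version-controlled deployment artifact')
        string(name: 'TARGET_DB', defaultValue: '',
               description: 'TNS alias or EZConnect string of the target database')
        booleanParam(name: 'TAKE_BACKUP', defaultValue: true,
                     description: 'Take a pre-deployment backup first')
    }
    stages {
        stage('Fetch artifact') {
            steps {
                // Pull the versioned package that was promoted from source control
                sh 'mkdir -p logs && curl -fsSL -o deploy.zip "$ARTIFACT_URL" && unzip -o deploy.zip -d deploy'
            }
        }
        stage('Pre-deployment backup') {
            when { expression { params.TAKE_BACKUP } }
            steps {
                withCredentials([usernamePassword(credentialsId: 'db-deploy-creds',
                                                  usernameVariable: 'DB_USER',
                                                  passwordVariable: 'DB_PASS')]) {
                    // Back up the affected objects before touching them
                    sh 'sqlplus -s "$DB_USER/$DB_PASS@$TARGET_DB" @deploy/backup.sql > logs/backup.log'
                }
            }
        }
        stage('Deploy') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'db-deploy-creds',
                                                  usernameVariable: 'DB_USER',
                                                  passwordVariable: 'DB_PASS')]) {
                    // Execute the schema, code and data changes
                    sh 'sqlplus -s "$DB_USER/$DB_PASS@$TARGET_DB" @deploy/deploy.sql > logs/deploy.log'
                }
            }
        }
    }
    post {
        always {
            // Keep the logs and notify the interested parties
            archiveArtifacts artifacts: 'logs/*.log', allowEmptyArchive: true
            emailext subject: "DB deployment ${currentBuild.currentResult}",
                     body: 'Deployment logs attached.',
                     to: 'dba-team@example.com',
                     attachmentsPattern: 'logs/*.log'
        }
    }
}
```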

Brian: I always love to hear Jenkins doing all of the magic.

Annie: Yeah. That’s the only tool we are using for this automation.

Brian: I'm going to want to dig into the hows of how you did this. Again, before we shift, I have a precursor question. Are you seeing customer or business benefit from this process? I think the internal benefit is immense. We talked about productivity, visibility, traceability. Is there error reduction? Is there a reduction in failure rates that improves the service for the customer?

Annie: Exactly. When we do the same thing manually, the errors may not be huge or quantifiable, but the impact of even one error is huge because we are in the financial industry.

Brian: Yes, right. It could be a millions- or billions-of-dollars error, from one error in data.

Annie: Exactly. So we are fined by the _____; we'll be fined or penalized. The penalty is really huge for a single mistake in a year. So we completely eradicated those incidents. That's the biggest achievement I can see.

Brian: So the DevOps automation provided more reliability, safety, and security for your customers.

Annie: Yes.

Brian: I bet if we were to measure blood pressure, it probably also lowered blood pressure for the DBAs and team members.

Annie: That is true, yeah.

Brian: So let's talk a bit about the technical implementation. Look, I'll admit that oftentimes when we talk about DevOps and how to implement DevOps, there are some rough edges and some difficult areas that the luminaries, evangelists, and speakers tend to sort of gloss over. Security is one of those that isn't fully integrated; oftentimes it's difficult. Management of environments has improved considerably. But data is one that I think people still struggle with.

Now, you guys chose to lift and shift. What I’ve seen a lot of greenfield teams do is sort of move directly to cloud technology, move directly to NoSQL databases, possibly do some level of ETL, but largely say, “We’re going to take our traditional data store and leave it behind because it’s so difficult.” You guys chose to lift and shift and bring along the data in the databases that you had invested in, correct?

Annie: Correct.

Brian: It sounds like it was challenging.

Annie: It's a challenge, but when you look at it from the organization's perspective, the total infrastructure maintenance cost is completely removed. So the technical people can spend more of their time on architecting things, rather than sitting and supporting things, the traditional work. So it's not that we are losing the job; the nature of the job has been elevated.

Brian: Right. You can focus on things that matter.

Annie: Yeah.

Brian: Can you share with me –? We talked a bit about some of the technical difficulties. Maybe you can take me on a journey. Tell me a bit about what the solution looks like at Broadridge for integrating your traditional relational databases into CI/CD first. What are the solutions? What challenges did you run into? How did you implement it? We'd love to understand some more of the details.

Annie: Okay. We just use two tools to do this. One is for source control, GitLab. So we just tell the developers to check in their code to GitLab. And when it's ready for deployment and has been converted into an artifact, then the process can kick in. So a nontechnical person will run this build from Jenkins. So this is the whole point.

The first and very big difficulty, I would say, is the understanding of how the whole of Jenkins is integrated with the other tools, like GitLab or Nexus or any other tool. So that was a challenging thing for me.

We really needed to understand how they can talk to each other, and with what credentials. So that's the starting point. Luckily, our DevOps team has a very good standardized process. They have created service accounts for each technology, for each build. Those cannot be shared; the passwords cannot be given to individuals.

So the service accounts can talk to the other tools, the other DevOps tools. It could be GitLab or Nexus or whatever. So that’s the first challenge from my side.

Brian: And do you use credential management in Jenkins for this?

Annie: Yeah. We have credentials created in Jenkins itself, which are encrypted. So Jenkins uses the credentials file to talk to the other tools.

Brian: Talk and execute the automation or orchestrate.

Annie: Exactly. The same applies to the database credentials as well. In our setup, as I said before, when you click the button, the system downloads the artifact and makes SQL connections. The question is how it's going to make the connection; it needs credentials.

So we instruct the developers to submit a configuration file to us, which just has the connection information, nothing more than that. What is your target server name, your target DB name? Which object do you want to back up, or do you want to skip the backup? It's like a ten-line config file. They just need to say yes or no, or fill in the right-hand side.

So we just give them the template, and they fill in the right-hand side of the config file. There they give us all the connection information, but the credentials are managed by the DBAs. We create the user name and password in the credentials file to make the connections.
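Here is a hedged sketch of how such a developer-supplied config and DBA-managed Jenkins credentials might come together in a job. The property names, credential ID, and file paths are illustrative, not Broadridge's real template, and it assumes the Pipeline Utility Steps plugin for readProperties.

```groovy
// Hypothetical sketch: the developer fills only the right-hand side of a small
// properties-style config shipped with the artifact, for example:
//   TARGET_SERVER = proddb01.example.com
//   TARGET_DB     = ORDERS
//   TAKE_BACKUP   = yes        (other yes/no entries omitted in this sketch)
// Connection info comes from that config; credentials stay in Jenkins.
pipeline {
    agent any
    stages {
        stage('Deploy from config') {
            steps {
                script {
                    def cfg = readProperties file: 'deploy/deploy.config'
                    withCredentials([usernamePassword(credentialsId: 'db-deploy-creds',
                                                      usernameVariable: 'DB_USER',
                                                      passwordVariable: 'DB_PASS')]) {
                        withEnv(["TNS=${cfg.TARGET_SERVER}/${cfg.TARGET_DB}"]) {
                            // Nobody handles the password directly: Jenkins binds it,
                            // the shell reads it from the environment.
                            sh 'sqlplus -s "$DB_USER/$DB_PASS@$TNS" @deploy/deploy.sql'
                        }
                    }
                }
            }
        }
    }
}
```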

Brian: Okay. And then they are submitting – to stay at the technical implementation level for a bit – the developers, one, I notice you're shifting left. They're empowered to deliver at the speed they need to deliver, versus filling out a ticket, having a phone call, crafting an email. And you, as the DBA or the database team, are empowered to focus on higher-order, higher-level issues. So there's a lot of benefit there.

Now, I’m curious. Was it a shift for the developers? Are the developers directly submitting SQL or DDL?

Annie: No. They're not allowed to. The Jenkins setup is such that we have a non-prod and a prod server setup. In non-prod, the developers can play around with it, like in a development environment. They can submit a sample artifact and just try to run the Jenkins job to understand how it works. That's all they can do. Then when the artifact is ready to be promoted to the higher environments, they can't touch anything. There is a different promotion process we have with the DevOps team.

Brian: Okay. I was curious about that. Right.

Annie: The DevOps team actually has a process. They call it the promote-artifact process. So even they can't do anything about it. Once the code is tested in development, the process kicks in and promotes the artifact to a different source repository. After that, they have no luxury to go and change the code. That's done. So what you test in development is what goes to production. So that's the word I use, traceability, right.

Brian: Right, and it’s similar to retrieving application artifacts and the value of binary repositories, which is build once and promote.
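A minimal sketch of that build-once, promote pattern, assuming a hypothetical Nexus raw repository layout and credential ID; Broadridge's actual promotion job is not described in the interview. The point is that the artifact tested in non-prod is copied unchanged into a locked-down release repository, never rebuilt or edited on the way.

```groovy
// Hypothetical promote-artifact job: repository URLs and credentials are illustrative.
pipeline {
    agent any
    parameters {
        string(name: 'ARTIFACT', defaultValue: 'db-changes-1.4.2.zip',
               description: 'Exact artifact name that passed non-prod testing')
    }
    stages {
        stage('Promote artifact') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'nexus-promoter',
                                                  usernameVariable: 'NEXUS_USER',
                                                  passwordVariable: 'NEXUS_PASS')]) {
                    // Download from the dev repository and re-upload the identical
                    // file to the release repository; no rebuild in between.
                    sh '''
                        curl -fsSL -u "$NEXUS_USER:$NEXUS_PASS" \
                          -o "$ARTIFACT" "https://nexus.example.com/repository/db-dev/$ARTIFACT"
                        curl -fsS -u "$NEXUS_USER:$NEXUS_PASS" \
                          --upload-file "$ARTIFACT" \
                          "https://nexus.example.com/repository/db-release/$ARTIFACT"
                    '''
                }
            }
        }
    }
}
```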

Annie: That's it, yeah. I wouldn't say it's continuous delivery yet, but we are targeting continuous delivery. What I understood is that when you push a button for development and it completes, it goes on to QA and then prod. That's continuous delivery, as I've understood it so far. That part is not actually in place, because it's handled and coordinated across teams, since there is a different change control process.

Brian: Right. So there’s manual intervention. You’re automating, but your automation has to work with existing change control processes.

Annie: Exactly, yeah.

Brian: And in reality, when we’re dealing with data in a financial institution, those change control or change management processes are likely going to stay.

Annie: Exactly.

Brian: It’s not practical or not –

Annie: Yeah. We have a governance team to approve the change for production. So that's where there is an intervention. But even though there is a manual … it's like a release management flow. So that flows automatically. Then when it's ready to go, the button can be pushed for prod. So CI is in place, CD is on the way.

Brian: Right. CI in place, CD with manual gates is in place.

Annie: Exactly. That’s the right word.

Brian: People get down on themselves. I speak to many organizations: "Well, we haven't implemented continuous delivery." I maintain that at the end of the day the objectives are higher quality, optimization of effort, and increased velocity. If you can do that while still maintaining your manual checks, well, you've achieved your goal. It might not be pure continuous delivery. Maybe you guys will get there one day, but –

Annie: Exactly. I don’t believe in this CI/CD.

Brian: There's a place for it at Broadridge today. But that's where the DevOps comes in. I'm curious. Alignment is always a struggle, right. So yes, you still have governance in place, but these changing systems, cloud, DevOps, and automation, must have changed the way you work and had some cultural implications.

What did Broadridge do to get teams aligned, to get everybody sort of shifting into this new way of working?

Annie: Right now at Broadridge, every week we have center of practice conferences, where we educate people to understand the existing model. Then we tell them how to onboard themselves into this process. So we educate them.

We have Confluence pages created for them, and we publish periodically. Then in every COP and COE meeting, which is conducted by the DevOps team, they invite us as guests, as speakers, to tell them how any team can get into this process. So every day we get customers. That's how I would put it.

So we have multiple application teams, and after each and every meeting we get five to ten emails saying, "Hey, I want to jump into this process." Even this morning, while I'm out of the office, I got an email saying, "Hey, I want to jump into the process." I said, "I'll give you a demo by next week." So that's how it goes.

So people are interested in coming. Even though they know the criticality, they love the process, because it eradicates the manual time and errors completely. That's a big thing they look at.

Brian: Would you say that marketing – so you’re getting people aligned by taking a service approach. I heard you use the words, “We’re getting customers every day.”

Annie: Yeah, that's what I feel. I belong to the shared services group on the technology solutions team. We provide solutions to the teams. That's our goal, so I see them as customers.

Brian: Okay. Like a business that’s customer-facing, it sounds like there’s education and awareness. There’s training and marketing.

Annie: Yep. That’s what we do every week.

Brian: Continual training and marketing has led to alignment and adoption. And how important was it that you show success? What I'm also gathering is that, as part of this, you're positioning what the benefits are of adopting your service. Right?

Annie: Mm-hmm.

Brian: I guess I’d ask how important to the growth has it been that you guys transparently share your successes and possibly even some of your struggles and failures?

Annie: Normally, after we present the first framework, we get a lot of suggestions from the application teams. So I took this very positively. They actually told me, “Hey, your interface could look like this, man. It could look prettier.” I agreed to it because I was only focusing on the logic of the implementation, not the interface look itself. But when it comes from the development teams, they are already well-versed in programming.

So they tell me, “You could change this. You could modify this. You could put less parameters.” So I adopted those things. So it’s not just a one-way knowledge-share. It’s like a two-way understanding, yeah.

Brian: Awesome. I'd say for our listeners as well as you, when I hear you talk about creating a baseline, communicating that, and getting feedback from the developers, it very much aligns with the software delivery management messaging you're going to hear at the CloudBees keynote today, and this concept of not just automating, but continuously improving, so you can get stuff out there, get feedback, and make it better.

Annie: Better, yeah.

Brian: So yeah, look at SDM. I hear a bit of your experience being similar. I.e., it’s not just business to customer. It’s also internal critical systems, where you can bring these to bear.

So can you tell me, in terms of security and compliance – and you've touched a bit upon this in terms of data – security and compliance, as you stated, is paramount, is critical in financial institutions. How do you guys leverage DevOps to help ensure audit and compliance, and help ensure that you're achieving your security and compliance goals at Broadridge?

Annie: Basically, first and foremost, I would say the source-controlled artifact. That is the first level of auditing and security. If you want to go back and check, you have a service ticket with a version-controlled artifact, so you know what was installed, on which date, on which server. That is so helpful for auditing purposes.

Brian: Okay.

Annie: So for the security purpose as well, since the credentials are managed locally in Jenkins as an encrypted file, it is very secure and it’s not –

Brian: Centralized secure –

Annie: A centralized place, so it's not being shared with multiple teams or multiple people. Normally, the DBAs hold the credentials, but now no one is holding any credentials. Once a credential is created, it's stored. That's it. It works very well.

Brian: Awesome. Thank you. Now I’m curious. I’m going to ask a couple of sort of fun questions, but questions we also like to help inform our audience about who our guests are. So one of those is who is your favorite DevOps superhero and why?

Annie: I would pick someone from my company. I’m sorry. I’m being selfish. Daniel Ritchie I would say.

Brian: Daniel Ritchie.

Annie: Yeah. He’s the man who introduced DevOps to me. Because when I had an interest, I approached him. Everyone in my company said Daniel is the go-to guy. So I approached him. He spent quite a few hours with me, explained about the DevOps CI/CD process. That actually impressed me to do this framework. So I would say he’s my hero.

Brian: So let’s dig into – Daniel or Danny?

Annie: Daniel Ritchie.

Brian: Daniel Ritchie. Thank you, Daniel, and hello, Daniel. So what’s Daniel’s role at Broadridge?

Annie: He's a DevOps architect at Broadridge, and he has been doing wonders for the past five or six years.

Brian: Awesome. I also notice this loop here: there's curiosity, there's someone who has experience and expertise, and they serve that curiosity through sharing and training. People marketed Daniel to you, so you were able to go find him. And now you're doing the same, spreading the word within your company.

Annie: Yeah, true.

Brian: So yeah, it's awesome, and I love how it underscores that, while there may be top-down components to DevOps improvements, at the end of the day DevOps is really about sharing, collaboration, and community. I think what you've shared with us has demonstrated that.

Annie: Thank you.

Brian: So to shift, I like to ask guests to share what we call a dev-oops moment. And that's not D-E-V-O-P-S, but rather D-E-V-O-O-P-S. That's a moment where, either dangerously, precariously, or humorously, something went wrong and you learned from it. Do you have a dev-oops moment that you can share with us?

Annie: Yeah. When I was initially developing the framework, I didn't know how to mask the password in the Python script, because I'm not a developer. So when I ran the build, I saw the password displayed in the console output. It was like, "Oops." I was like –

Brian: And now you’re like, “How do I get rid of it? Where are the logs?”

Annie: Then I went back to CloudBees to find out how to wrap the password. Then I found the syntax, came back and had –

[Crosstalk]

Brian: Shhh. Nobody tell anybody.

Annie: Yeah. So that was an embarrassing moment for me, but that was just with me, so I just canceled the build.

Brian: That's good to know. I'll tell you, I've had something similar happen. I don't think it was in a critical environment, but I didn't catch it right away. So that meant I was SSH'ing into servers, combing through logs, trying to figure out everywhere this unmasked password might have leaked.

So thank you for sharing that. That's why those dev-oops moments are important. And in that, you said you were able to go to CloudBees and they explained it.

Annie: Yeah.

Brian: Do you happen to recall how you masked it?

Annie: I saw some blogs, some blog posts, and those guys were really good. So I just posted a question and I got an answer instantly. Within a few minutes I got an answer, with the syntax, so I could wrap it like this. They gave some examples.

Brian: Then, when it's interpreted and executed, it'll obfuscate it.

Annie: I just used it for my thing and then I went back and learned.
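For listeners wondering what such a fix can look like: one common approach, offered here only as a hedged sketch and not necessarily the exact syntax Annie found, is to bind the credential with Jenkins' credentials-binding step and have the script read it from environment variables. Values bound this way are masked with asterisks if they ever appear in the console log. The credential ID and script name below are hypothetical.

```groovy
// Sketch of one way to keep a database password out of the console output.
pipeline {
    agent any
    stages {
        stage('Run deployment script') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'db-deploy-creds',
                                                  usernameVariable: 'DB_USER',
                                                  passwordVariable: 'DB_PASS')]) {
                    // Single-quoted shell string, so Groovy never interpolates the
                    // secret; the (hypothetical) Python script reads DB_USER/DB_PASS
                    // from its environment instead of printing them.
                    sh 'python deploy.py'
                }
            }
        }
    }
}
```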

Brian: Also a lesson about testing locally first, right, before it went into the pipeline.

Annie: Exactly.

Brian: Awesome. Do you have any final thoughts that you’d like to share with us, in addition to the experience and knowledge you’ve already provided, any final words for the listeners?

Annie: No. I’m so glad that I’m here to share my experience about the framework we have. I’m so happy to be here.

Brian: Awesome. I’m going to poke and prod you a little more. Now that you’re at DevOps World/Jenkins World, was this your first one or have you –?

Annie: Yeah, very first one.

Brian: Okay. What are you looking forward to getting out of it?

Annie: I would like to learn about the things I don't know yet, because my level of DevOps is very limited. So I would like to know how I can improve my framework with the available resources. I looked into the topics and they were so interesting.

Brian: Awesome, great. I'm pretty convinced that, in terms of better understanding CI/CD, DevOps, and what you can do in particular to apply it within an enterprise, you will learn more here. If I see you again later in the show, I'd love to hear how your experience has been and what you've learned.

Annie: Thank you so much.

Brian: All right. Thank you, Annie.
