The Secret To Deploying on Fridays - Interview With Vasilii Oleinic
August 9, 2025
You fear Friday Deployments. We live for them.
We are not the same.
I had the pleasure of sitting down to talk with Vasilii Oleinic about his career journey, his focus on software architecture, and how he approaches testing.
He had many interesting perspectives that I really enjoyed hearing about, including how to frame a proof of concept versus a software system that needs to be robust. Of course, Vasilii also explained his testing philosophies and how his career has helped shape them.
Stay tuned and learn how you can deploy on Fridays! Thanks for the talk, Vasilii!
Transcript
>> We're ready to deploy on a Friday evening without any care. We just press the deploy button and go home, without even checking the deploy pipeline. >> I'm glad you said it.
Hi, I'm Nick Cosentino, and I'm a principal software engineering manager at Microsoft. In this video, I got to interview Vasilii Oleinic about his journey in software engineering, software architecture, and testing as well. With all of these interviews, we get to hear cool stories from different individuals about how they got to where they are, and Vasilii is no exception. What I really liked about this conversation was hearing how he approaches software architecture, and proof of concepts versus scaling applications out and building them to be the real thing. Something else that probably caught your attention: Vasilii wants to work towards deploying on Friday evenings, and I am totally behind that. I think you're going to enjoy this conversation. Sit back, enjoy, and let me know in the comments if you like this kind of content. Thanks, and I'll see you next time.
Cool. Okay. Well, Vasilii, do you want to give us a little bit of background on how you got to where you are in software development, and what that journey looked like? >> Sure. It's going to sound weird, but I come from a musical background. Both my parents are musicians. They never wanted me to be a musician, so they tried to steer me towards automotive engineering, but I always liked programming, since I'd been gaming from early childhood. I started out making mods for different games, then meddling with the code bases of games just to give myself some unfair advantages I wanted in specific games. At some point in high school I started programming properly; it was way back, with Pascal, for school assignments and that sort of thing. Then I got into university, where I started with .NET. Well, we were taught C and C++ >> Okay. >> and then, at some point, Java. I hated Java from the get-go because I knew C# by then; at this point I understand that was unfounded, but I didn't like Java at all back then. So yeah, it was during my first year that I got into work, trying to combine uni and a job at the same time.
So here we are, I think six or seven years later, and I'm working as a senior engineer at a bigger company. I started off in a really, really small company, and thank god I started there. It was more of a startup environment, let's say: there weren't many engineers, and there weren't the best practices in the world, but you could do everything from the get-go, from business analysis to talking directly with clients and figuring out how to do things. Somewhere along the line is when I really fell in love with the idea of good architecture, because I worked on a project where, at the start, we had to wait 20 minutes for a build after making some changes in C#, because it was a chimera of sorts. It was a really, really big project; essentially it was an attempt at low-code tooling, so you loaded up the entire thing, the entire solution itself >> Right. >> and it combined BPMN with workflow engines and all of that. We had a lot of nightmares going through it, and that's when I decided: hey, I want to see what a really good solution looks like. That was the first time I saw MediatR, clean architecture, and the onion style, and I thought, okay, so you can do it like that, too, not just dump everything on one plane. So yeah, it was about a year and a half into my entire engineering journey when I started looking at architecture.
I dreamt of becoming an architect. But my vision back then of what an architect does, and my vision now of what an architect actually does, are two different things. At first you think only about your small solution: how you're going to write the code, what the folder structure will look like. Afterwards, you understand it's not only that. It's about testing and deployment, how it all works together, and the most important part: how it all brings value to the client. It's about how the decisions you take impact the overall workflow of your team and your teammates, how they affect productivity, how to manage the technical backlog and technical debt items, and how to get the client to understand that, hey, we need to slow the feature grind a little in order to clean up the messes we've made over the last couple of sprints, so that we can keep delivering at the same pace in the future. I've had cases where we were on the feature grind, and then you just look at the mess you've built over the last couple of months because you couldn't explain to the client that we needed to stop, regroup a little, and start over. But I think I've drifted off topic. That's basically how I got into all of this: a messy start that got me into architecture, and into testing, too, because where I come from, companies don't usually pay that much attention to testing. At least in the region where I was born (I'm working in another country now), most of the smaller companies don't really pay much attention to the testing process. Sometimes it doesn't even exist, and you have only manual testers on your team, or the developers do the manual testing themselves without any kind of unit tests. That's part of the reason I got really interested in testing, unit, integration, all the different kinds, because I hated doing the clicking myself. I loved the idea of having some automated process in the background doing it for me, because historically I'm not that attentive when it comes to doing the same manual thing over and over again. >> Typical, typical software developer. >> Yeah, it's typical software developer stuff: I personally didn't like it, and that's why I went into unit testing. So that's where my passion came from. At some point we overengineered things, but I found my groove, the approach that I like to bring to the table, and it brings a really good return on investment. >> So finding the sweet spot is really the amazing thing here, and that's hard to do, right? You said a lot of awesome stuff there. I wanted to mention that your initial journey into software development, even as a kid, is almost exactly the same as mine. I've said this on recorded sessions before: before I could actually program, I was playing video games and looking for how I could get a little bit of an advantage.
So I was messing around with config files, just thinking, okay, how can I start messing with things more, because I want to be able to do more. So, very similar; I thought that was kind of funny. I wanted to ask about something you mentioned: when you started working, especially at smaller companies. I've observed a very similar thing, where a lot of the time it feels like startups are just trying to survive, right? We've got to get features out, we've got to get value to the customer, and as a result it feels like we're cutting corners most of the time, at least once you zoom out and see how software is developed elsewhere. So I'm curious: have you noticed a big difference being at a larger company now, compared to a smaller one, in terms of those practices? >> Okay. So, in terms of those practices, larger companies are a little bit on the slower side. It takes a bit more time to deliver a specific feature in a larger company, at least in my experience, because there are a lot of different steps in the whole process that need to be completed. Looking back on it, it's a really good thing to have a definition of done, to know when a certain feature is finished and how to tackle those steps, and big companies excel at that. That's the sole reason I switched from a smaller company to a bigger one: I was curious to see how it's done in bigger companies, and how different people approach similar problems in terms of processes, how the code is written, how delivery is done, and so on. On that note, and this is my personal opinion: if you are a beginner, going into a smaller company might be way more efficient for you, >> Right. >> because you get to experience a lot of things really fast, and you have to adapt really quickly to changing environments, which is not that typical of a bigger company. In a bigger company, you're usually just doing your own job; there are other people taking care of other things, so everyone has their own role. In smaller or startup companies, you have to survive. You have to do QA, you have to do analysis, you have to understand what the client really wants, because what they say they want is usually different from what they really need in the long run. >> Right. That's a really good observation. I share that sentiment, too. When folks reach out to me and say, hey, I'm trying to get into the industry, I'm applying to jobs, what do you recommend? A lot of people will see big tech and say, that's obviously where I want to be. And look, I'm at Microsoft now, and yeah, I love it at Microsoft, but at the same time, all of my earlier years, my entire career before Microsoft, was startups.
And I feel like if I did not have those startup experiences, I would not be where I am today. And that's no fault of Microsoft or big tech, but to your point, you are moving at such a rapid pace in a startup company, and you're wearing all sorts of different hats; you're doing so many different things, right? At Microsoft, I may never have a customer interaction again in the space I'm in, >> Right. >> and that's just the reality. My customers now, being on a platform team, are other engineering teams; in terms of end users, that's not the case. Whereas at startups, at almost every single one of them, if I needed to, I could be exposed to an end user and get feedback or have a conversation. So it's just a very different experience in terms of how you focus on developing software. The part I think is interesting with respect to architecture, because this is something else you mentioned in your intro, is that architecture affects so many different things in software development, right? It's not just how the code is organized. That's a part of it, but you start to think: how is this going to affect deployment? How is this going to affect our ability to test? How is this going to affect us if we have multiple teams working on this? It really can have an impact on the whole development life cycle. So I think when you have exposure to customers, you start to understand: look, we have to deliver value to customers; that's your goal. And if you're balancing those kinds of thoughts against what your software architecture looks like, sometimes you have very interesting things to consider to allow you to keep delivering. It's not just "is it perfect?" So where I wanted to go with my next questions is: okay, you've got to build something new for a customer. How do you start thinking through the architectural patterns you want to lean into? Do you just go with your favorite? What type of thought process do you put in when you're building something from scratch? It's a very open-ended question, so I just want to hear your thoughts on how you might approach it. >> I have a small book here, if I can grab it.
>> But before we move on, this is just a reminder that I have courses available on Dometrain if you want to level up your C# programming. If you head over to Dometrain, you can see that I have a course bundle with my Getting Started and Deep Dive courses on C#. Between the two of these, that's 11 hours of programming in the C# language, taking you from absolutely no programming experience to being able to build basic applications. You'll learn everything about variables, loops, a bit of async programming, and object-oriented programming as well. Make sure to check it out.
>> So, to answer that question: on my end, it usually starts with getting to know the business area of the customer. At the other company, the smaller one, we had a really high density of projects, let's say. We had a lot of smaller-scale projects, and we had larger-scale ones; on the largest, I think, the whole team worked for about four years. It was an ongoing, live authentication system: basically, it used Azure Active Directory to manage access to all the applications for that specific customer. We were working with an IoT company over a longer period of time, so we knew the area of business the customer was coming from. That's probably the first thing you need to understand: where the customer is coming from.
Also, if you're building software for a competitive market, take a look at similar products already on the market doing the same thing. You're not going to see the entire architecture behind the scenes for those apps, but you will get the requirements and the functional part. There's a book I really liked at some point: Exploring Requirements. Okay, it's probably mirrored. >> No, it's actually coming through clear. It's perfect. >> Okay. Yeah. >> That always happens: whenever I hold up something with words on it, I wonder, oh no, is it mirrored? So, yeah: Exploring Requirements: Quality Before Design. >> Yep. You can see how many bookmarks I have in here; I really like this book. It's mostly about getting the requirements down right before you start designing anything for your client. The next thing is a small analysis of the client: is it a big corporation with plans to grow this specific product? You won't know that for sure from the get-go, so you'll have to tackle the whiteboard problem, which is the architect's worst nightmare, because when there are no obvious constraints on the board, you can overengineer the heck out of a solution. But when you start writing things down, you can get those constraints, and usually those constraints help you see a clear path for how you're going to build your solution at the initial stage. Because businesses change; just look at how many things have changed over the last couple of years.
Depending on the client, you might go one way, and depending on the team or teams, you might go another. For 90% of our clients, we were fine building a monolithic solution, because they didn't have the constraints that would actually require microservices. But most of them wanted to go microservices-first. >> That's a common thing we hear now: a lot of people jump straight to microservices. It is shifting a little, but yeah. >> Yeah, at least where I come from, it was: we have four developers, and we need 20 microservices. >> Okay. >> And what's the reasoning behind that? So usually we went with a monolith, and then we learned how to build modular monoliths, or whatever the popular term is right now; "modular monolith" is coming up everywhere. Essentially, it's decoupled software with different logical areas of your application, where each logical area represents a specific area of business, or an area of volatility, within your overall system. >> Right. >> And then you scale, and rinse and repeat from there.
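To make that concrete, here is a minimal sketch, with hypothetical module and member names, of what one such logical boundary can look like in C#: the module's public surface is a small contract, everything behind it stays internal, and the host application only composes modules. Because consumers never see the internals, a module like this can later be lifted out into its own service, which is the scale-out path described above.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;

namespace Modules.Orders
{
    // The only type other modules are allowed to depend on.
    public interface IOrdersModule
    {
        Task<Guid> PlaceOrderAsync(string customerId, decimal total);
    }

    // Implementation details stay invisible outside the module.
    internal sealed class OrdersModule : IOrdersModule
    {
        public Task<Guid> PlaceOrderAsync(string customerId, decimal total)
        {
            // Domain logic for the "orders" area of the business lives here.
            return Task.FromResult(Guid.NewGuid());
        }
    }

    public static class OrdersModuleRegistration
    {
        // Each module registers itself; the host just calls AddOrdersModule().
        public static IServiceCollection AddOrdersModule(this IServiceCollection services)
            => services.AddSingleton<IOrdersModule, OrdersModule>();
    }
}
```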
Essentially, you start small. You get a PoC somewhere; proofs of concept are a really good starting point, because you can get customer feedback, and then you can scrap that PoC and build really good software in its place. You deliver some PoC feature, see how the customer reacts, and see how much value it brings, because the client might think, hey, our users need this feature, >> Right. >> but then you deliver the feature and the users don't use it at all, because apparently they didn't need it in the first place. There are really good books about that: The Lean Startup, and Zero to One, which I read way back when. I got those ideas from there and tried to apply them to software. So you build a PoC, you see how it behaves, and then you design the right thing. >> Right, because you actually got feedback. You ended up hearing, or observing through metrics, that this is what people are using, this is what they want. Now we actually know, versus someone saying, "I think this is what we want." You said it multiple times, and it's very true: customers will tell you, "we want this," and you can't just do anything and everything a customer says, because you have to interpret what challenges they're actually facing, what real problems we're trying to solve. When they say "we want this," a lot of the time it makes sense, but they're almost proposing a solution, and it's not the customer's job to propose the solution. The customer's job is to express their challenges to you, and you, as the engineering team, come up with the solutions to those problems. So, I had a question, though, about proofs of concept.
If you're going to do a proof of concept, especially as someone very interested in architecture: we talked about overengineering, and a proof of concept is on the exact opposite end of that spectrum, right? So how do you balance, or how do you shift your mindset, to say: we know we want to build something awesome with good architecture, but right now is not that moment; we need a proof of concept. How do you shift into "it's PoC time, here's what we've got to focus on, don't overengineer"? >> Okay. The main thing there, for me, is well-defined interfaces, or contracts, between your parts. If you have a well-defined interface, a well-defined contract for your functionality, you can switch the implementation details behind it later. It's similar to how test-driven development approaches things, where you first write the minimum amount of code, but you still focus a lot on the contract, because the contract is the thing that isn't going to change, and there will be hell to pay if you don't get it right from the get-go. >> Yeah. >> At some point we had to maintain about three versions of the same interface, three versions of the same API, because some of our customers didn't want to update to the new one, and after a while we said to hell with that. >> It's a big cost. It's a huge cost on a development team to say, we've got to keep this thing around when it doesn't have the parameters we need and doesn't have the functionality we want to be able to provide. Yeah, I've been there. >> Exactly. We tried that once, ran with it for a couple of months, and learned the hard way that it wasn't going to be maintainable in the long run. We sent an email to our customers: hey, we'll support you for another couple of months, so be good and switch to the new version of the interface, because we're not planning on maintaining the old ones any longer. So yeah: contracts first. That's the part that is non-negotiable for me, because if you get the contracts right, you can later swap the PoC out for a really good implementation if you need to. That's why I like the modular monolith and microservice approaches: you have physical boundaries and you have logical boundaries, and logical boundaries are much easier to work with. You define a logical boundary, and then you can swap the implementation details at a later stage if you need to.
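A minimal sketch of that swap, with hypothetical names rather than anything from Vasilii's actual projects: the contract is the stable part, the PoC implementation is deliberately crude, and promoting the feature later is a one-line change at the composition root.

```csharp
using System;
using System.Text;
using Microsoft.Extensions.DependencyInjection;

// The contract: the part worth getting right from the get-go.
public interface IInvoiceGenerator
{
    byte[] Generate(string customerId, decimal amount);
}

// PoC: just enough to put the feature in front of a customer.
public sealed class PlainTextInvoiceGenerator : IInvoiceGenerator
{
    public byte[] Generate(string customerId, decimal amount)
        => Encoding.UTF8.GetBytes($"Invoice for {customerId}: {amount:C}");
}

// The "real" implementation, built only once the feature proves its value.
public sealed class PdfInvoiceGenerator : IInvoiceGenerator
{
    public byte[] Generate(string customerId, decimal amount)
        => throw new NotImplementedException("Real PDF rendering goes here.");
}

public static class InvoicingRegistration
{
    public static IServiceCollection AddInvoicing(this IServiceCollection services)
        // Swapping in PdfInvoiceGenerator later is the only change callers see.
        => services.AddSingleton<IInvoiceGenerator, PlainTextInvoiceGenerator>();
}
```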
If that PoC didn't take off, okay, we might switch it to some better-looking code, but we won't invest heavily in that specific area or functionality. >> Sure, that makes sense. And do you find, when you're working with other people on your teams, that it's difficult for engineers to get into that mindset? You have your boundaries set up, and you're trying to explain to individuals, and maybe I don't want to over-assume how you approach things, but is it really a matter of "hey folks, we don't need this to be the most perfect architecture behind the boundary, we need to get to results"? How do you approach it, and how do people respond? Because I've experienced some things with PoCs before, and I'm curious what your experience has been. >> Okay, I'll cut to the chase. I was in that camp: I didn't want to deliver a working codebase that was messy. I was adamant about doing everything the right way and having the entire setup. But usually there's an experience somewhere in your career where you get burned really hard by an overengineering mistake, and you learn the hard way that, hey, we might have had better luck going with the simple approach and not wasting so much of the customer's money, god forbid, on a specific feature. Junior developers, or rather developers who haven't had that kind of experience yet, tend to write that kind of code and to be purists about good-quality software.
For me, one turning point was when I joined a team and went through the solution with the architect there. We had message handlers, and one class name ended in _V2, and I'm thinking: okay, we have _V2; why is there no common message handler interface, so that when version three comes along we just create a V3 class, implement that interface, and register it in the DI container? He looked at me and said: why can't you just create a new class and switch it in the DI registration? And that's basically it; you're done. It was technically not the ideal solution, but it worked, and it was really easy to follow along with in the long run.
When you have to maintain a solution, that's probably when it hits you: there is "good code", clean code done to all the standards of DDD, test-driven development, and clean architecture with all the boundaries; and then there is the middle ground: hey, this is workable, this is easy to maintain, this is easy to develop further, and we don't lose much velocity in our sprints when developing new features. Of course, there are some things that need cleanup, but we can address them over time. That's usually the sweet spot, and you arrive at it after some amount of experience. >> That makes sense, and it resonates with my experience. Before Microsoft, I did a lot of prototyping at the digital forensics company where I worked. I generally had small teams, and we would set out to go try things. I think the engineers on my team were in the mindset of: we understand what's involved in building higher-quality software, but we know that right now we need to get to answers. And we took that further. Right before I moved to Microsoft, the team was starting to look at it as: if we don't even need to write code to get the answers about what customers are actually going to need, then let's save even more time and not write code at all. But we noticed something pretty typical.
Sometimes people would look at a team like that and go: it's not fair, you guys get to work on the latest technology, all the cool stuff; why don't we get the cool stuff? And that's not really what it's about. The big challenge is that a lot of people aren't ready to throw away all of their work, right? We're going to try something, and we have to expect that it might not pan out. We have to be ready to say, "Sorry, it didn't work," toss it, and move on to the next idea. And to your point about having experience with this type of work (and I think you caught yourself there: it's not even a junior-versus-senior thing), for this work specifically, you need the mindset that the goal is getting to answers faster. If that's the goal, and you go through it trying to write what perfect code might look like, it only takes a few times of "man, I spent so much time getting this perfect and we had to toss it out" before you ask: why would I do that again? It sucks. >> Exactly. What we also did there, and this is pretty good on your point about getting feedback from customers, was that we had an amazing UX/UI team that built prototypes. They were basically creating the whole platform in Figma, I believe, and they were able to click through the images; they added some behavior to the UI and used it to validate. They were working a couple of months ahead of us >> Nice. >> on validating the requirements. They would go to the customers: hey, here's the UI. They also had an amazing guy there who would usually check with the developer team: is this feasible? We can draw it really easily, but is it feasible for you to build? So they were making those designs and validating them, and when the development phase came in, we usually had a more or less well-validated rough idea of what we needed to build.
So we didn't have to throw much away, because that's the hardest part: throwing away your child after you've built it up for a couple of weeks, throwing all that code away. You have to go through that a couple of times to get used to it; I don't know if there's any way to psychologically prepare for that kind of thing. >> Yeah, it feels like a very big waste. And you made a good point, too. Depending on the team, there were certainly cases where we had our product owners working with UX designers to get mock-ups in front of customers, which is good. On the prototyping team I had, that was almost collapsed into the team itself: our product owner might have mock-ups and already be talking with customers, getting feedback, because >> Exactly. >> to your point, that can really help you get ahead of the curve and reduce the amount of waste. But at the end of the day, the goal of a proof of concept, regardless of whether it's code, a PowerPoint, or a clickable Figma prototype, is really to validate whether an idea is going to solve problems. So it's very cool to hear it's a similar set of challenges. It's a mindset shift: people want to do good work as software engineers, and sometimes the PoC realm makes you feel like, "I'm not doing the good software engineering things; this doesn't feel good." So yeah, it's definitely a unique experience. >> Yep.
There's also a point in knowing when to shift from a PoC to the real implementation, to get a better return on investment. Currently, on my project, we have some PoCs, and we've missed a couple of opportunities to move from a PoC to a really good implementation. We paid the price a couple of weeks later: hey, we really needed that properly built by now, not just available via the PoC. So there's a fine margin there as well. >> What does that look like in terms of observing when that change needs to happen? You're doing a PoC and you're saying, "Hey, it's working," but what are some signs that maybe this needs to become a more real thing?
Maybe we need to do a better job on the design as we go forward, or scrap parts of it and rebuild them now that we know more? Can you walk through some considerations there? >> Usually, at least in my experience, it's the point when you realize that continuing to do a repetitive thing will cost you more than redesigning it and doing the implementation right. You should have good metrics, and again, it comes back to the requirements and the analysis, the market analysis and all of that. Right now we're working on a live project, and we had some functionality that our customers started using on certain pages. Then we realized we needed to add that functionality to other parts of the entire system. Say it costs five story points to implement it in another part of the system: you take a look at how many places you need to implement it in, you start adding that up, and you understand that this is no longer manageable. We need a better solution; maybe we spend 10 or 15 story points' worth to get the proper solution down, and then the return on investment arrives within weeks of getting it up and running.
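To make that arithmetic concrete with made-up numbers: at 5 story points per place, six more places cost about 30 points of repetition, against a one-time 10 to 15 points to build the shared solution properly. The crossover comes quickly, and that, roughly, is the signal to stop treating the feature as a proof of concept.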
There's a fine line there, and it depends on the system and on the business side. From what I've seen, and from how I was myself, developers tend to be very narrow-minded about the functionality itself, so you need to take a step back and try to see the bigger picture. That was the hardest part for me: stepping back and not losing the forest for the trees. As for signs: it's when you're spending more time than you expected nurturing something that was supposed to be just a PoC, a one-time thing. >> Right. But to your point, there's a bit of gut feel, and you're also trying to use data: if we know this is where we want to head, and you mentioned story points (for other folks, you might use a different system of estimating effort), the point is really that you're starting to realize: if we have to keep repeating this, and this is the amount of effort it takes, what does it look like instead if we, I'm going to use air quotes here, "do it right," do it better, do it differently? Can we shrink that down? And the point might come up, I'm just making this up: we know there are five spots we want to do this in, but that's five spots today; maybe there are more in the future, and we don't want to keep paying this cost. So I think that's a really good way to look at it. And it sounds kind of funny, but if you looked at that and said, "We've got to do it in five spots, but redesigning it is actually going to be even more work," you might just say: screw it, it's good enough for now, and we'll keep it. Then you don't transition away from what was a proof of concept with slightly rougher architecture; you just say "it's working" and keep it. >> Oh yeah. And on the story-point thing: you named the term I was looking for, "effort." It's the amount of effort you'll need. As you said, it might be five places right now, but maybe we're not going to use it in the future anyway, so there's no prospect of extending that functionality. That's where your gut feeling, and how your team has been doing things on the project, kicks in. That's where you make the decision.
And I'll try not to give blanket advice at this point, because it's very situational and based on the project itself. Any advice I give from my personal experience might be interpreted the wrong way and taken to an extreme. >> Yeah, it's a very difficult thing when you're sharing information and experiences. You want to be in a position where you can offer advice, but it comes with a bit of risk, and people need to understand that. It's probably one of the most important things in software engineering, and we joke about it, but programmers and software developers always say "it depends" because it really does depend. The context you're working in determines so much. And I'm guilty of this: I use C# for everything, and in my own projects I basically default to a plug-in style architecture for everything. But I do these things because I don't have other constraints; generally, if I'm just building stuff for myself, I'm building it how I like. That's the context. If I'm building something for a customer, or on teams of people, or at work in an existing codebase, I don't just go, "Hey everyone, guess we're switching to a plug-in architecture, sorry." There's so much context that, when we're talking about our perspectives and opinions, hopefully it's assumed, but unfortunately it's not always; how much context is provided is very important.
So I totally get it on the advice-offering part. >> On that note, if we return a bit to the topic of testing: I had an experience some time ago where we were deciding on a front-end framework for end-to-end testing. We were using Cypress at that point, and we decided to switch to Playwright, which is built by Microsoft. I was familiar with Playwright and with writing code in C#, so the first thing I thought was: hey, we can use Playwright and write the end-to-end tests in C#. And then it occurred to me: wait a bit, do the QA folks know C#? So we defaulted to Playwright, but we went with the JavaScript flavor for the implementation, because they didn't need to learn C# to get things done. They could use the language they were familiar with, JavaScript. The constraint was: they were already going to spend some time learning the new library, and learning an additional new language on top of that would cost too much, while you get the same return on investment with Playwright in JavaScript. So we went ahead with JavaScript, and that's a constraint that influenced our decision in the long run.
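For readers who haven't seen Playwright, here is roughly what such an end-to-end test looks like. The team wrote theirs in JavaScript; purely to keep this article's examples in one language, this sketch uses the Playwright for .NET binding instead, and the URL, selectors, and credentials are all made up.

```csharp
using System.Threading.Tasks;
using Microsoft.Playwright;

class LoginSmokeTest
{
    public static async Task Main()
    {
        using var playwright = await Playwright.CreateAsync();
        await using var browser = await playwright.Chromium.LaunchAsync(new() { Headless = true });
        var page = await browser.NewPageAsync();

        // Drive the app the same way a manual tester would click through it.
        await page.GotoAsync("https://app.example.com/login");
        await page.FillAsync("#email", "qa@example.com");
        await page.FillAsync("#password", "not-a-real-password");
        await page.ClickAsync("button[type=submit]");

        // Fails loudly if the login journey ever breaks.
        await Assertions.Expect(page.Locator("h1")).ToHaveTextAsync("Dashboard");
    }
}
```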
>> Yeah, it's a great example, right? There might be things that aren't exactly what you love based on your personal opinion, or that you haven't used consistently. You might have seen an architectural pattern or a tech stack, used it ten times in the past, and think, "I know how it would apply in this situation," but the team you're working with might not know it at all, or the domain you're going into may not be a good fit for it. There are so many considerations, right? So, on testing, and this is going to be a very open-ended question, I'm not looking for a specific answer, just a curiosity: when it comes to thinking about testing alongside software architectural patterns, how much does the goal of having good tests, "good" however you'd like to define that, influence the architectural decisions? Do you think, "hey, we're going to use this type of architecture because it gives us a good ability to test," or is the testing part a "we'll just figure out how to make it work after"? >> I'll say it right out: I've been blessed with a team and QA folks who are really capable, and when I joined the team, I learned a lot from them. They were really amazing at different things, and I learned how they approach testing. I would say the architectural decisions influence the approach you take for testing. You can still get really good test coverage, and I don't mean coverage by lines of code; I mean coverage by functionality.
You can get a really good testing suite with nearly any kind of architecture, let's say, depending on your specific skills. The main criterion I use to define when we have a good testing suite is: we're ready to deploy on a Friday evening without any care. We just press the deploy button and go home, without even checking the deploy pipeline. >> I'm glad you said it. I'm glad you said it, because that is 100% my philosophy. Everyone says, "I don't deploy on Fridays because I don't want to have to come in on a Saturday," and I go: yeah, and I don't want to waste my Tuesday fixing stuff from Monday either. I don't want to waste any day fixing stuff from the deployment before. So why is the weekend any different? If you feel the weekend is a special time, and that's why you can't deploy on a Friday, I'm like: what's broken in your process that doesn't make it automatic? And I know it's easier said than done, but I truly feel you should be looking at that as a priority. >> Yeah, exactly. Because the lines of code do not matter. I've seen hundreds of useless tests in my experience that were basically testing mocks. You have a class, let's say, since we're talking about C# here, with a couple of dependencies, all of those dependencies are mocked out, and you're basically testing how you call those mocked-out dependencies. There might be some value in those tests, but they don't really convince me to deploy, because there are a hell of a lot of things that can still go wrong.
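A minimal sketch of the kind of test being described, using xUnit and Moq with hypothetical names. Every dependency is mocked, so the assertion only proves the class calls its mocks the way the test wired them up; it says very little about whether deploying is safe.

```csharp
using Moq;
using Xunit;

public record Order(int Id, decimal Total);

public interface IOrderRepository { void Save(Order order); }

public class OrderService
{
    private readonly IOrderRepository _repo;
    public OrderService(IOrderRepository repo) => _repo = repo;
    public void Place(Order order) => _repo.Save(order);
}

public class OrderServiceTests
{
    [Fact]
    public void Place_CallsRepository()
    {
        var repo = new Mock<IOrderRepository>();
        var service = new OrderService(repo.Object);

        service.Place(new Order(1, 10m));

        // This "passes", but it only restates the implementation:
        // we mocked Save, we called Save, we verified Save.
        repo.Verify(r => r.Save(It.IsAny<Order>()), Times.Once);
    }
}
```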
So right now we're trying to cover as much ground as possible across different scenarios, and we're not using just one type of testing to get there. There's no universal silver bullet, no universal solution to everything. It's part gut feeling and part engineering approach, a mix of the two: you know that this part of the system is really, really critical, so you cover it with different kinds of tests, not only unit tests but integration tests, and maybe some end-to-end tests for user journeys, to see how the whole thing works. This other part is fine to only unit test, because there's domain logic in there that doesn't really depend on any other part of the system.
We focus a lot on integration testing, because integration is usually where things go wrong. And end-to-end testing, with all my love for it, is really useful when push comes to shove, but in a sensible amount; you should not go overboard with end-to-end tests. Let me take a step back here and talk about different testing approaches for different companies. In my previous company, where we mostly had manual testers doing the clicking, a really good return on investment came from end-to-end tests written in Playwright, because the tests themselves were pretty easy to set up. Well, pretty easy because the solutions were simple; the initial setup was done by a developer, obviously. But then the testers could step in and write some basic code that would fill in forms, cover a lot of ground, and get a lot of value out of those tests really easily. It didn't cover the entire system, but it made the testers' lives way easier. Then come unit tests, which cover the innermost part of your system, the domain part, which is usually where I find them useful, or some utility parts that don't have many dependencies on external factors. Integration testing is usually the part that I love the most.
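As one common shape of such a test in the .NET world (an illustration, not the team's actual setup): ASP.NET Core can host the whole application in memory with WebApplicationFactory, so the test exercises routing, middleware, DI wiring, and serialization together rather than a single class in isolation.

```csharp
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

// Assumes the web app exposes its entry point, e.g. "public partial class Program {}".
public class HealthEndpointTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly WebApplicationFactory<Program> _factory;

    public HealthEndpointTests(WebApplicationFactory<Program> factory) => _factory = factory;

    [Fact]
    public async Task Health_endpoint_returns_200()
    {
        // Spins up the real app in memory: routing, DI, middleware and all.
        var client = _factory.CreateClient();

        var response = await client.GetAsync("/health");

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}
```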
And depending on what you call a "unit," we may even be talking about the same type of test. I've had discussions in the past where the terms were mixed up and we were talking about the same thing in different words. So that's usually the first thing you need to do: get your terms straight. >> Yep. I've definitely observed that, especially with tests. I generally call an integration test and a functional test the same thing, just because that's how I'm used to referring to them, and someone might say, "they're different, here's why," and I'm like: okay, call it what you want, here's the thing I mean. But there's a lot of overlap in some experiences I've had. At the digital forensics company where I worked, there were two sides to it. One is that we had basically a search engine, and that was one huge end-to-end test from a data perspective: we would run a search programmatically, not caring about the user interface, just asking, do we get the data that we expect? An enormous end-to-end test. But we also had a lot of manual testers, and when we went to release software, and I don't know if they've completely moved away from this since then, there used to be something we called a build verification test plan.
Testers would go through those cases and basically click through the application to make sure they got the expected behaviors. And we started to say: look, those build verification tests take so much time, and a human has to do it. Why don't we programmatically get something to click through and validate, because it's so repetitive? The point is not that the test isn't valuable; it's that you, as a human being, have so much more value to offer when you're not just clicking. I always want to be clear about that: I'm not saying people doing manual testing aren't valuable, or that those types of tests and scenarios aren't valuable, just that humans add way more value when they're not repeating things. So we moved towards that, and what we found, and you were kind of saying this, is that the setup for those types of tests, done by a developer, was actually pretty lightweight, and then the QA testers could write the more specific parts to fill things out. But I think those types of tests work well when your setup is genuinely not heavyweight, and when your system isn't so brittle that changing one thing means all 300 test cases have to be touched. We've encountered that kind of stuff too. So I love that you said it's not just one type of test: you have different types of tests that you exercise in different scenarios, and they all have trade-offs.
There's so much good in what you were saying there. I wanted to ask you a question, because I have strong opinions about mocks and such, but only because, and this is going to sound very odd, I've had a lot of success with them on one particular team, on one project, I should say product; it was more than just a small thing. We literally only unit tested, with mocks, across the entire codebase, and it was done to an extreme. It wasn't a joke; it was more like: can we do this, and is there a benefit? Almost a test of the limits of the approach. And we unit tested everything. What was really cool on that team, and this is why it sounds funny, is that we acknowledged that mocking everything and only unit testing is just not a good strategy on its own. But when we mixed it with telemetry, we could watch telemetry reports of an exception or bad behavior, go to our code as the engineers, and say: oh, we missed a test case. Literally: there's the bug, there's the missing test case. Add the test, fix the bug, and we could push the update before our tech support could even let us know. So we had a very interesting feedback loop. Anyway, I'm not saying that's the best way to test; I just thought it was an interesting example of taking it to an extreme.
So, thinking about mocks: you mentioned that when you have few dependencies, it's basically a better situation for unit testing, and I would agree; you can isolate, because you don't have external dependencies, and just focus on the code. What are your thoughts on mocking out external dependencies, like third-party services you have no control over? And I don't mean a database, because a lot of the time now, with our testing tools, we can stand up a real test database locally on our machines or in CI, run against it, and tear it down.
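As an aside on that database point: the speakers don't name a specific tool, but this is the kind of thing libraries like Testcontainers for .NET do. A hedged sketch: the test spins up a disposable PostgreSQL instance in Docker, hands its connection string to the code under test, and throws the container away afterwards.

```csharp
using System.Threading.Tasks;
using Testcontainers.PostgreSql;
using Xunit;

public class RepositoryTests : IAsyncLifetime
{
    // A throwaway PostgreSQL instance, one per test class.
    private readonly PostgreSqlContainer _db = new PostgreSqlBuilder()
        .WithImage("postgres:16-alpine")
        .Build();

    public Task InitializeAsync() => _db.StartAsync();
    public Task DisposeAsync() => _db.DisposeAsync().AsTask();

    [Fact]
    public void Can_connect_to_real_database()
    {
        // Hand this to the data layer under test instead of mocking it out.
        var connectionString = _db.GetConnectionString();

        Assert.False(string.IsNullOrEmpty(connectionString));
    }
}
```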
I mean a service your company doesn't own, some third-party API that you have to access but don't control. Do you use mocks in that situation, or do you have other strategies for testing? >> There are two parts I want to cover here. First of all, in the past we took exactly the opposite approach: we went with a strategy of only integration tests, plus a few unit tests where possible. So we got the other end of the spectrum. >> Right. >> We were testing the entire system from the inside and from the outside: the points inside the system, and then the points where the system communicates with other components, other modules, other systems. As for your question about external dependencies: I'll try to stay as vague as possible so I don't break the NDA, but we had a case with Azure Active Directory. We worked on a system that created user accounts in Azure AD B2C, the business-to-consumer part, and we were also importing some users from Azure Active Directory. So at some point, our integration test suite had to integrate with the real Azure Active Directory, and because creating lots of test users there would cost the customer real money, we settled on creating a couple of shared test accounts that we would reuse for testing purposes. Then we had an issue where two different pipelines would start at the same time and reuse those same users, and we would get flaky tests because we were interacting with the same instance: one pipeline run would create the user, the second would assume the user wasn't created yet and try to re-create it in B2C, and the tests would flake. After the fact, the solution we arrived at, and how I would still approach that situation now, was to push the integration with the real instance as far right as possible. We ran those tests as the last step, so even if a couple of pipelines started at the same time, usually only one of them would reach that stage at a given moment. So we pushed the integration with that third party as late as possible in the whole testing process, and used mocks until then.
Now, say we're integrating with a third-party API. I've been looking at two things: contract testing, and tools like WireMock, where you essentially mock out the third-party API on your end. And in microservices, because right now I'm working in a microservices environment with a lot of different teams, sometimes teams break their contracts; some APIs break their contracts. So even if you're testing against well-defined mocks, you're not catching the moments when those contracts get broken. It's totally fine to test against mocks if you have other practices in place to cover the gaps, the unforeseen events like contract changes and behavior changes.
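WireMock has a .NET flavor, WireMock.Net, and stubbing a third-party endpoint looks roughly like this (hypothetical path and payload; a sketch, not the project's actual tests). The stub runs as a real in-process HTTP server, so the code under test speaks real HTTP without ever touching the vendor.

```csharp
using System;
using System.Net.Http;
using WireMock.RequestBuilders;
using WireMock.ResponseBuilders;
using WireMock.Server;

// Start an in-process HTTP stub on a random free port.
using var server = WireMockServer.Start();

server
    .Given(Request.Create().WithPath("/v1/users/42").UsingGet())
    .RespondWith(Response.Create()
        .WithStatusCode(200)
        .WithHeader("Content-Type", "application/json")
        .WithBodyAsJson(new { id = 42, name = "Test User" }));

// Point the client under test at the stub instead of the real vendor API.
var http = new HttpClient { BaseAddress = new Uri(server.Url!) };
var body = await http.GetStringAsync("/v1/users/42");
Console.WriteLine(body); // {"id":42,"name":"Test User"}
```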
If you don't have such practices and you need to test against a real instance, I would push that test as far to the right as possible, and bring the stubs to the left, because stubs get you up and running faster; then, at the last stage, you test against the real instance. There's a risk there as well: if the third-party API changes its contract and you only catch it at the end, that's going to hurt. >> Yeah. >> But that's where other types of testing come into play, like the contract testing I mentioned before. >> Yeah, that's a really interesting and, I'll say, good perspective. I say "good" only because I think it aligns with my perspective, so it's subjective; I'm just trying to acknowledge the bias.
I think it's a good perspective, and it's very much in alignment with what I've experienced. Even the idea you mentioned with WireMock and the like is interesting, because you can basically have legitimate API responses replayed for you, so you know you're working with something that was real. In some situations that can really help when people say, "You don't know the behavior of that thing; you're just making it up," and it's like: no, this is a replayed response; this was real behavior. That can give you really good confidence, but it can be false confidence with respect to the real running system. You shouldn't have to test that their API does what they say; you want to test your interaction with it. But it's possible that it breaks, it's possible that it changes, and if you're never in a position to get a signal for that, you may run into problems. So the thing I wanted to bring up again goes back to the product I worked on that was heavily unit tested with mocks. For context, this is for mobile phones: we did digital forensics, extracting information from phones. And the reality is that even if you leave a phone sitting there, it can do updates, right? There are all sorts of things that can happen; whether it's the OS or the applications on the phone, they can update.
So there would be situations where our tests would just stop working, and we're like, what the heck? Actually, let me back up, because I said that wrong. We ended up having to build a system where we were working with real phones so that we could detect those things. Before we had that, with only mocks, we would just assume we were good, because we were effectively replaying known state, and then we would have customers going, "This isn't working." Sure enough, we'd try a real phone and say, "This used to work; why doesn't it?" So we went from having only unit tests with mocks to adding just one other thing: a hybrid system that could basically check whether contracts were broken. If the contracts weren't broken, we knew we were so heavily unit tested that everything else was fine. And this other thing was our signal: it would run every night and tell us, something broke on Samsung phones, buckle up, because tomorrow is going to be a big investigation into what to fix.
I just thought that was a really cool way you navigated explaining it: we just use different tests, and even over the same type of boundary you might do different things. So that's super cool. Um, yeah, do you have any thoughts on that? >> Yeah, a cornerstone there is probably how easy it is for you to set those things up and how fast those tests can run. And, let's say, I never said anything about dropping those tests, so that's a really good catch. You should obviously test against the real instance at some point, when that point comes. Is it sooner for you? Is it later? It's really constraint-dependent on your end, so that's for you to decide what approach you're going to take. But you should never drop those kinds of tests entirely, because in all my years of experience, which are not that many by this point, the issues pop up not at the unit level but at the integration level. And it might not even have been because of us. Let's say service A was working fine and service B was working fine, but the network sits between those services. So: a faulty network. It might not even be the teams working on the services themselves that are in the wrong. It might be some integration part: Kafka failing, messages being dropped somewhere, corruption, network issues, all that stuff.
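One common way systems handle the transient flavor of those failures is to make the retry policy explicit so it can be reasoned about and tested. This is a hand-rolled sketch for illustration only; in production a library like Polly is the more typical choice, and all names here are made up.

```csharp
// Minimal sketch: explicit retry with exponential backoff around a flaky call.
// Hand-rolled for illustration; a resilience library is the usual production pick.
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class Flaky
{
    public static async Task<T> RetryAsync<T>(
        Func<Task<T>> action, int maxAttempts = 3, int baseDelayMs = 200)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                return await action();
            }
            catch (HttpRequestException) when (attempt < maxAttempts)
            {
                // Transient network failure: back off and try again.
                // On the final attempt the exception propagates to the caller.
                await Task.Delay(baseDelayMs * (1 << (attempt - 1)));
            }
        }
    }
}

// Usage: var body = await Flaky.RetryAsync(() => client.GetStringAsync(url));
```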
>> Yeah, how does your system handle that stuff? What does it do? If you're not exercising those paths, then it's basically uncertainty, right? So either you do things to give you confidence around those uncertain areas, or you don't, and then, I mean, if you're not running into issues that regularly, maybe you say that's why we don't invest in it. But in my experience it's similar to yours: it only takes a few times to go, man, this sucks, those types of issues are a pain in the butt, we might as well put something in there to give us a signal. >> Yeah, that's probably the reason why so many different testing approaches exist in the first place: smarter people than me probably ran into those problems earlier than I did. So it's an engineering job to figure out what kind of problem you have, how others have solved it, and to compare the constraints they had to your own.
And it's not usually tied to one technology. You can take approaches from other languages, other technologies, other areas of work, adapt them to your specific needs, and see what really works for you, because at the end of the day, that's what gives you confidence to deploy on Friday. >> That's exactly it. That's the way I always frame it: if people are debating about the best way to test, I always come back to this: tests are there to give you confidence in the changes that you are delivering. Nothing more, nothing less. And I think if you always come back to that, it makes thinking about what you should be testing and how you should be testing a lot more straightforward. It gives you flexibility and allows you to do the right thing. Um, Vasilii, as we wrap up, you mentioned that you have a course, and I'd like to hear more about it.
If you want to share with the audience what the course is and where they can find it, I'll make sure to get links and such from you after, but I'd love to hear more about it. >> Sure, sure. So, I've been working a lot with modular monoliths, and at some point I had way too many questions addressed to me, so I wanted to gather them all into a single source. It's focused a lot more on the theoretical and design side. It's about modular monoliths: how we were building them, what the constraints were, what the ideas were, and what decisions we took to get to a specific situation, because I cover a lot of different approaches for how you can build a modular monolith and then scale it out into a microservices system. We've done that in the past, so it's the approaches that have worked for us and the experiences I've had, all concentrated into a single source. That's basically what the course is all about: how you can build pragmatic software that you're sure you'll be able to run and that brings value to your client. >> Awesome. And it's cool that it's based on experiences. Like you mentioned, it's not only the theoretical part of here's how this stuff would work, but it's backed by we did things a certain way and here's the outcome. So I think that's a very valuable thing to add.
>> That's the thing: there are many courses out there that explain technical stuff, like "do it like this," but the why part is not usually that well covered. So I tried to focus more on the why, and also give some technical insights into how to do that stuff. >> Right, the why comes back to what we were saying about context, right? Someone might hear, oh, modular monoliths, everyone's talking about that, I just have to go do it. I think that's what happened with microservices: everyone's saying it, I have to go do it. But when you hear about the why, that's what lets you say, hey, if I understand why we do things, then I can see whether my context is applicable. So that's super awesome. So Vasilii, I'll get the link for that from you. What about if people want to reach out to you? Where can they find you? >> They can find me on LinkedIn. I have my page there, and you can write directly to me. I also have a YouTube channel where you can watch other stuff I post, and over there you can leave comments and I'll usually try to reply ASAP. But yeah, LinkedIn is usually the place where people reach out to me. >> Awesome. Okay, we'll get your channel linked, we'll get your LinkedIn linked, and we'll get your course linked as well. I'll make sure I get all of that from you. Vasilii, thank you for this conversation. It was refreshing for me to hear some things validated, and also some different perspectives on how you navigate software architecture, how testing ties into it, and truly the value that both of those offer in developing software. So, thank you for making the time. >> Thank you very much for having me. It's been an amazing experience. >> Cool, I'm glad to hear it. Okay, we will chat soon. Thanks, folks, for tuning in. Let me know if you enjoy this kind of content. Do the like, subscribe, comment, all that fun stuff, and we'll see you next time.
Frequently Asked Questions
What is the main takeaway from Vasilii Oleinic's approach to software deployment?
The main takeaway is that we should strive to deploy on Fridays without any worries. If we have confidence in our testing and deployment processes, we should be able to hit the deploy button and go home without second-guessing ourselves.
How does Vasilii Oleinic suggest handling proof of concepts (POCs) in software development?
When handling POCs, I focus on defining clear interfaces or contracts between components. This allows us to switch implementation details later without overengineering from the start. It's about balancing simplicity with the need for a solid foundation.
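As a sketch of that interface-first idea (the names here are illustrative, not from the interview): the POC gets a throwaway implementation behind a contract, and a real one can be swapped in later without touching callers.

```csharp
// Sketch: a POC hides implementation details behind a contract so it can be
// replaced later without rewriting callers. All names are illustrative only.
using System.Collections.Generic;

public interface IReportStore
{
    void Save(string id, string content);
    string? Load(string id);
}

// Good enough for the proof of concept: everything lives in memory.
public class InMemoryReportStore : IReportStore
{
    private readonly Dictionary<string, string> _items = new();
    public void Save(string id, string content) => _items[id] = content;
    public string? Load(string id) => _items.TryGetValue(id, out var c) ? c : null;
}

// Later, a durable implementation (database, blob storage, ...) can take its
// place behind the same interface, and the rest of the system doesn't change.
```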
What role does architecture play in software testing according to Vasilii Oleinic?
Architecture significantly influences our testing strategies. A well-thought-out architecture can facilitate better testing practices, allowing us to cover critical areas effectively and ensure that we can deploy confidently.
