ServoyCloud Product Update
Today’s webinar is a product update for ServoyCloud. This is episode 89 in our tech series. I see a lot of attendees today from all over the planet, and I recognize some names; we get some regular attendees in this tech series. However, I know that there are people joining who are maybe not so familiar with Servoy, so we’ll try to keep the content relevant for everyone. This is a product update on the ServoyCloud product. We’ve been introducing this product over the past couple of years. It’s really a pivotal direction for Servoy, and we thought it warrants a proper update to give everyone a feeling for how it can help and where we’re taking it. So on the agenda today, we want to revisit the use case: why ServoyCloud, why are we doing this? For those of you that follow this series, we covered this at the beginning of the year when we did a bit of a roadmap, so we’re not going to go too deep into that, but we do want to talk at a high level about what it is and why we’re doing this. Those of you familiar with this tech series know that we like to start with a demo, keep it fun and keep it interactive. Demoing ServoyCloud is a bit more challenging than some of the other demos we do, but I’m always willing to try, so we will do some demos. Then there are quite a lot of concepts to cover, because this is really almost a separate product, so I want to make sure that from a technical point of view you all understand the bits and bobs of what you’re seeing in the demos. It’s also nice to go over real-world customers and their business cases, and we have a few of those. Of course, we’re joined by Ian from Portfolio Plus, who’s going to speak a little later about their particular experience. And finally, at the end, we’ll talk about how you can get started.
So hopefully this webinar covers it end to end: what is ServoyCloud, all the way through to how to get started. Well, I think it’s important to revisit why we’re doing this and what exactly ServoyCloud is. We have our CEO on this webinar. So Ron, are you there, and do you want to talk a bit about why we’re doing this? Yeah, thanks, Sean. Without trying to go back to the webinar we did at the beginning of the year, where we talked quite extensively about this shift and why we made it — most of the people on this webinar, which is a tech webinar, just want to see the cool stuff and want to know how it works. But I still want to mention why we do this. We have been seeing a lot of our customers and prospects struggling with really going to the cloud. It’s not that difficult to spin up a machine somewhere and just put a piece of software on it. But to do that continuously, to have it tested continuously, to have its quality ensured, to have it secure — we saw a lot of prospects and customers struggle with that, and it took them quite some time and bandwidth to get it to the level that is needed, especially if you want to run a real SaaS on a public cloud. And then we have business applications, which need to be up 24/7, need to perform, and, like I said before, need to be secure. Some of our customers were ahead of us, because maybe 15 or 20 years ago everybody was saying everybody needs to go to the cloud, and some customers have been in the cloud for quite some time and were, or are, ahead of what we’re delivering here. But it typically takes them a lot of time — one, two, maybe three FTE — just on operating the cloud, without doing anything on developing software.
And it has always been Servoy’s goal to unburden ISVs’ software teams to build and deploy their software. So we want to keep on unburdening them — and “unburdening” is a difficult word for a Dutchman. So it’s very logical that we started investing in this. I think four or five years ago we really made the decision to go forward with it, and if I look at where we are now, I’m really proud that we’ve come so far. We’ve got some really interesting and really advanced stuff which we can offer our customers, so they can speed up and just focus on what’s really important: building these great applications for their customers. Excellent. Do you want to do the demos too, Ron? All right, I know — next time. Thanks, Ron. Yeah, well, this is the tech series, and of course it’s important to show cool stuff, so let’s get to it, and then we’ll do more of the review. I’m going to switch to another tab. Do you see another tab in my browser, or is it still showing the slides? OK, good. Yeah, that works. I want to show how it all works, and to do that I have to tell a bit of a story. So I’m looking at a sample application. This is the sample tutorial application that anyone can use from the Servoy Package Manager, and you can see in the web address that it is running locally from my development environment. I also have the same application running on my development server in Servoy Cloud. Now, these look nearly the same; however, you’ll notice that in the one I have in development, I added a products table this morning, just to have a pending change. So: products table in the local development environment, no products table in the cloud environment. I want to start with this because I’m going to show you the pending commit. I’m going to commit the change and push it to my source control.
And then we’re going to look at some other things, because what’s going to happen is that it will trigger some automated building and some deployment, and then we’ll come back and check on it. In order to do that, I’m going to share my Git client. Can you see my Git client? Yeah? OK. So I have these two pending changes here in my Git client. I uncommented some code so that this will show up in the menu, and then there’s also the form itself: I made some changes to the form, and the form is ready to show up under the menu item. So I’m going to select both of these and write a nice comment — I’ll say “update for Servoy Cloud demo” — and commit that. So that’s committed to my local repository, and now I’m going to push it to the remote repository. All right, that’s been pushed to the remote repository. I’m going to switch back to my browser and take a look at another tab here. I have logged into the Servoy Cloud Control Center. This is the namespace for this demo application; it’s called Workshop, because I was also using it for a training workshop before. What we’re looking at are the jobs that are set up for build automation. You can see down the left-hand side here the navigation for the Servoy Cloud Control Center. This is essentially where a DevOps person or a product manager or even a developer can come in and manage and orchestrate their whole pipeline. You can see that there was a broadcast message that the commit has triggered a build, and we get some status here that it’s retrieving source code. So what’s going to happen is: it checks out from source — now it’s on to the next step — it builds the application, and it deploys it back to that development server. Now it’s generating the export. This goes pretty quickly because I don’t have too much other stuff connected to it.
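For anyone following along without a Git client open, the commit-and-push step just described boils down to a couple of plain Git commands. This is a minimal, self-contained sketch in a throwaway repository — the paths, file name, and identity below are stand-ins, not the actual demo project:

```shell
# Simulate the demo's commit-and-push in a temporary repo.
set -e
work=$(mktemp -d)
git init -q --bare "$work/remote.git"            # stand-in for the hosted remote
git clone -q "$work/remote.git" "$work/checkout"
cd "$work/checkout"
git config user.email "dev@example.com"
git config user.name "Demo Dev"
echo "products form" > products.frm              # stand-in for the new products form
git add products.frm
git commit -q -m "update for Servoy Cloud demo"
git push -q origin HEAD                          # in Servoy Cloud, this push fires the build trigger
last=$(git log --format=%s -1)
echo "$last"
```

The push to the remote is the only part the pipeline sees; everything before it is local.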
While that’s generating the build, we can drill in a bit here and take a look at what’s going on in this particular job. This is my build-and-deploy job for my development environment, and I’m going to click the gear here to configure it. It has a simplified configuration screen at first. Essentially, I can pick what version of Servoy I want to build with. This is nice: if you’re generating a manual export, you’re stuck with whatever development environment you’re in, but here we can pick any Servoy version — “latest” will always resolve to the latest public version. There are some switches here for test automation; we’ll get into that in a moment, but these are the types of additional tests we can run: code coverage and analysis, security scanning, unit testing, and of course end-to-end UI testing. Another really important build property is the branch that it’s going to build from. Mousing over any of these parameters shows some info about them. So this one is going to check out from the develop branch. This is really important when you get into a real pipeline scenario and you want to do something like a Git flow, where you actually want different jobs building from different branches — I’ll talk a bit more about that in a little bit. There are, of course, many advanced options. There’s a bit of a lag when I’m screen sharing. So: we are going to deploy a Docker image, and I have some dedicated environments provisioned for me in Servoy Cloud — a development and a UAT environment. Then there are some configurations about the type of trigger. This one is triggered from source, but you could trigger on a poll, on a cron schedule, or just run it manually. From there, there’s a bunch of advanced parameters having mostly to do with the deployment and export options. So all of the configurations that would normally be in the properties file are exposed here and configurable; I won’t get into those.
But at a high level, it’s really: what version of Servoy, any additional test automation you want, what repository and branch, and then a bit about timing and so on. You can see that it has finished with the WAR and it’s just generating a report of the build in the background. I’m going to close this and let it keep building, because next it has to deploy as well. I’m going to take a look at a different job here. This is the build-and-deploy job for the UAT target — a different job in my pipeline — and in this case I’ve configured it to do some additional things. For one, it builds off a different branch. This is nice for a Git flow: for example, as a developer I might commit everything to a feature branch or to a develop branch, and then at some point, when my work is done, I merge that to a test branch. The merge commit to that test branch can trigger another job — and that could be this one here. On the right-hand side, these are all prior builds, and we can go into the details of some of them. I’m going to start with this one here, which is marked red because it failed — and it failed intentionally, so let’s take a look at that. The reason it failed is that end-to-end testing failed; that was one of the options enabled on this job. If I scroll down a bit, you see all of the artifacts that are bundled together in this build. A build is a bundle of all the artifacts from one run of a job. It contains the commits that triggered it — and that could be more than one commit, because one push can contain many commits. We take a snapshot of the configuration for that particular run, because that can be important to look back at. Then there are all of the artifacts that get generated, assuming there’s first a successful build.
There are Docker images and WAR files, if those options are enabled, and then there are logs about the actual exporting and testing. That can be useful to troubleshoot a build that failed because of some errors. Down here, we see the report from the build log and the end-to-end test reports. We can click that — we also could have clicked this little red icon up here. Let’s have a look at the end-to-end testing. You can see that this is a very simplified example, because there are only two scenarios tested, and one of them passed and one of them failed. If I scroll down here and expand this, I can see: here’s the one that passed. This is the test language and all the steps it goes through, with the running time as it goes. Essentially, this test logs in, enters some text in the search bar, runs a search, and then validates that one of the records in the result grid has the text that was searched for. That one went through and passed. This one failed, and when a test fails, you get an error here and you also get a screenshot — this is actually running in a headless web driver — so I can see what was going on when the test failed. Again, these are UI tests meant to test user flows: they verify that a user can do a certain thing, and they’re really nice for catching regressions. A typical application could have many, many of these types of tests. The other kind of test is the unit test, which is more about testing units of code, ideally business-logic, API-type stuff — verifying data integrity and other transactional things. That can also be automated, with the same kinds of reports generated. Let’s go back to our jobs list here. I think the development job finished — it looks like it did, so let’s check on that. We can see that this job finished because it’s green.
It’s on the latest version, Servoy 2021.12.1. I’m going to look at the environments for my Servoy Cloud account here. I have two environments provisioned: one called development and one called user acceptance. That particular job, which generated the latest build just now, deployed to the development environment. If I were to click this rocket ship, it would open it up; however, I already had it open, so it just refreshed and I have to log in again. And hopefully I will see my products table that I pushed from development. Oh yeah, there it is. OK. So in this case I didn’t run any tests, but I did generate the build automatically from a source commit. This is really nice for developers: they’re just working, committing to a certain branch, and you can log in and quickly verify that everything looks good — in this case on the dev server. The other example I gave was the UAT server; that’s the one that had the tests running on it. I want to take a look at another build here, which had a couple of other tests automated. In this case, we did code analysis and code coverage, and you can see that the reports were generated down here. My Zoom toolbar got in the way — I’m waiting for it to minimize. There we go. And you can click on them. So let’s take a look at the code coverage results. This is another report that gets generated. The purpose of a code coverage report is that it monitors the application when the test environment is spun up, analyzes all of the source code and how much of it actually runs while the tests are running, and then reports on how well the source code is covered by testing. If you have a poor score, you know that you probably need more testing. Of course, I only have those two tests — which, surprisingly, do cause a lot of code to run, just logging in and navigating.
Although the tests are very simple, the power of end-to-end testing is that it causes the application to run like it would in real life and execute all the code, and then you can monitor it here for coverage. These are reports that you can drill into. You can see that the only area where I scored better than terrible was here, on one of the JavaScript files for the navigation framework. Basically it’s saying that three out of six statements were executed — half of the code in that particular JavaScript file was run. Obviously I want better test coverage and should add more of those end-to-end tests; a high coverage score shows you that you’re going to catch regressions more easily. The other type of test is code analysis. This is more about the structure of the code itself — like a lint tool that can run on JavaScript. It essentially analyzes the complexity and how maintainable the code is, and it looks at every single file. Some people like to use lint tools, so it’s nice to have that connected to test automation, because maybe you don’t want to run it on every build, but once a week you schedule some deeper testing and just keep an eye on the scores and make sure they don’t change. I would say, in all honesty, there are some things about lint tools that don’t really line up with some of the structures in Servoy. For example, you’ll get warnings on variables that don’t appear to be used but are actually used in another scope; if they’re public, that’s not a real problem. But overall it does give you scores on complexity and maintainability based on lint-type tooling, and you can drill into any one of these files and it will give you hints and ideas about what could be going wrong — a lot of it is “this is never used”-type stuff. So that is most of the test automation.
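For clarity on how that coverage number is produced: the score in the report is just the ratio of executed statements to total statements. A trivial sketch of how the “three out of six” result above becomes a percentage:

```shell
# Statement coverage = executed statements / total statements, as a percentage.
executed=3
total=6
pct=$(( executed * 100 / total ))
echo "statement coverage: ${pct}%"
```

The report simply does this per file and then aggregates.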
The other thing that I didn’t go over was security scanning. We can enable OWASP scanning when test automation runs, to scan for things like cross-site scripting attacks and the other types of vulnerabilities in the OWASP standard. When we talk to Ian in a little bit about what Portfolio Plus did, you’ll hear that this was a big part of their application — and we have them to thank, I think, for this being in the product. So don’t let me forget to come back to that. OK. There are a couple of other things that I want to show you in this demo. I hope that shows how development and test automation work as part of continuous delivery, but that’s not the whole picture. What I’m going to do is switch over to another pipeline — a real pipeline of a real customer — and take a look at some of the things they’re doing. This is New Base, a Dutch ISV; we’ll go a bit more into who they are and what they do towards the end of this presentation. One of the features that Servoy Cloud offers is integrated agile management, so in their dashboard they can see how they’re doing sprint-wise: KPIs about velocity and sprint burndown, also connected to the tasks that are in the sprint. The tasks can be linked to integrated source control, and we plan to take that a step further. If you look at a build, you can see all of the commits — we showed that — but then also which tasks are solved in that build. So you really start to get deep insight and more total control over the whole software factory. Another customer that I want to highlight, in a different pipeline — this is now at the dashboard level, at the top — is Kenco Engineering, a US-based manufacturer. I’ll talk a bit more about what they do, but just looking at their dashboard here: they do production hosting in the Servoy Cloud, and you can see some of the monitoring that’s going on.
This is showing connected users over time for the different environments, and you can see that the production environment, in green, shows what’s really happening over time. We can change that and look at just the past two days, for example, and you get a sense of what’s happening on each environment. So we’re actually monitoring each container — all the containers that are connected to those environments. I want to show you the environments view for Kenco, because they have the full pipeline and a real flow: they go from development to user acceptance to pre-production to production, and they do that with a Git flow. If you were to look at the production job, you’d see that it builds from the master branch, and I think user acceptance builds from a UAT branch. So every time they do a merge commit to a branch closer to production, it triggers a new build. I don’t know if they do these production builds manually or from a merge commit — I think they do it from a merge commit to manage the production rollouts. And I think they push to production every morning; they did it at 7am this morning, and you can see it going back: 7am the previous day, 6am the day before that. So you get a history of their rollouts to the production environment. One other thing I want to mention that I’m not showing here: we saw the dashboard monitoring users, but we also monitor the containers at a lower level — CPU, memory, all the logs, database logs, Servoy logs, that sort of thing. We aggregate those and can report aggregate views of them. We’re integrating that into the dashboard; right now it’s in a separate tool, but it’ll be integrated shortly. And you can chop up the logs the same way you can chop up the history of connected users.
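A branch-per-environment flow like Kenco’s can be sketched with plain Git. This self-contained example uses the branch names from the transcript (develop, uat, master); the file contents are stand-ins. The two merge commits at the end are the events that would trigger the UAT and production jobs respectively:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Demo Dev"
git checkout -qb develop
echo "v1" > app.js
git add app.js
git commit -qm "initial"
git branch uat                                  # UAT branch starts from the same point
git branch master                               # production branch, likewise
echo "v2" >> app.js
git commit -qam "feature work on develop"
git checkout -q uat
git merge -q --no-ff develop -m "merge develop into uat"   # would trigger the UAT build
git checkout -q master
git merge -q --no-ff uat -m "merge uat into master"        # would trigger the production build
prod_tip=$(git log --format=%s -1)
echo "$prod_tip"
```

Each environment’s job simply watches its own branch, so code only reaches production by flowing through the merges in order.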
The other thing that we’re looking to do going forward is more application analytics, and also being able to do the same sort of container monitoring on containers which are maybe not even running in the Servoy Cloud. So there’s a lot of exciting stuff going on around monitoring and analytics. Anyway, that’s the cool-stuff demo part of this presentation. I hope it gets you thinking and maybe gets some questions going. What I’d like to do now is recap what we saw, because, like I said, it’s hard to demo a pipeline and production hosting in a linear way, so hopefully I can recap in an organized way what we were looking at — some of the capabilities and concepts. First of all, I want to clarify that Servoy Cloud has two facets. One is the high-availability production environments. These are hosted in our cloud, and the focus is on applications which are always online, always ready, monitored, and analyzed, with notifications if we see that things might be going wrong. So it’s more than just bare-metal hosting: it’s focused on Servoy applications, at the application level, not just at the infrastructure level. If something goes wrong in your application and you’re hosting on AWS, it’s not their responsibility; here the focus is really on Servoy-specific applications, and I think that’s the big difference. Another thing about the hosting environment is all of the pipeline flow to production that I showed. It’s quite easy to take that pipeline one step further and deploy into the high-availability environment, because it’s just one more environment that you provision in your Servoy Cloud account. The other facet of Servoy Cloud is the pipeline itself, and this encompasses everything from source code up to the point where you’re deploying.
So we automate the build-test-deploy cycle with all of the tests that I showed you, and we provision these dedicated environments, which can be targets for deployments. We can combine that with agile project management and integrated source management. And these pipelines are flexible: when you get into real-world cases, people come to us with very specific needs and questions, and we’ve been able to bend the pipeline to real-world scenarios pretty successfully. We can get into that in a little bit. We saw that the automation starts from source control. We offer hosted, on-demand Git repositories, and we manage the webhook callbacks for commit triggers to kick off building and testing. It’s really nice for a Git-flow setup: if you connect jobs to branches and flow through the pipeline, you have something which goes through succeeding levels of rigor up to the point where it’s ready to put into production. Also, something people ask for: they want to keep their source code in-house. So we support on-premise source control as well — or if you host it with another provider and you don’t want to move it, that’s not a problem. We support SVN and other types of mirrors too. We don’t host those, but we can integrate them with the pipeline. We saw in the build configuration that building happens on different kinds of triggers. The most common one is what I showed: as soon as I pushed to my remote repository, it kicked off a job. Another approach is polling — poll every hour, every 15 minutes, or whatever it might be, to look for changes. This is more useful in situations where we don’t have webhook configurability, for instance when dealing with a third-party source control system. We can also do things on a timer, which is ideal in some cases.
For example, we have some customers that run a cursory level of testing on, say, a UAT job, but then they set up another job that once a week goes really deep and runs a lot more tests — because the tests can take a while to run if you have really good coverage, and they don’t need that so frequently. So they can say: once a week, in the middle of the night, we’re going to run a deep suite of tests and look at those reports. And of course you can run on demand: you can push the play button and kick off a build as well. I pointed this out right at the beginning — I don’t know if it registered, but to me this is a big feature — you can easily change the Servoy version that you’re building with, the tests that you run, all of that. You saw, when I had that combo box dropped down, a list of all the versions: any version of Servoy going back to, I think, 8.3, plus a few tags like latest, nightly, and release candidates. What’s really nice is that you can pick the nightly snapshot, do a build, and run some tests on our own stack. I think this is really important when you have a business-critical application: you’re not just testing your code, you’re testing the complete stack. So we allow Servoy Cloud customers to run their tests on our code, and I think that’s a win-win for everyone, because if something we did broke one of your tests or failed a deployment, then we know sooner. We have customers that are testing, for example, 2021.03, the Q1 build that’s coming up. And this is really nice: if you file a bug fix or a feature request and you see in our support queue that it’s marked as resolved, that means it actually went into revision control, and that night it’s going to be built.
So you can go ahead and say, OK, that does fix our issue — or you come back and say, no, it didn’t. It really smooths things over. You saw that when I clicked into a build, it had a bunch of different artifacts in it, so I just want to list those here. The two main ones are Docker images and WAR files, because if you want to go and deploy these somewhere else, you need those. The other things you saw were the test logs: the code coverage, code analysis, and end-to-end test reports go in there. Something you might have seen, that I didn’t really drill into, was a warning icon with a number next to it. These are the build markers. You see these in the IDE as you’re working — they show how many errors or warnings you have, and you can’t generate a build if you have markers which are errors. All of them show up here, so even in hindsight, if you go back and look at a build historically, you can see that there were some warnings on that build, and you can drill in and see what they were. You really have a nice historical archive of everything. We saw the commit hashes and comments; I think in the future we can take that a step further and link it to agile project management to get the tasks in there, and even link the commit hashes back to the source control tool, if you want to dig deeper. All of these artifacts go to a dedicated S3 bucket, so you can download them directly from the Cloud Control Center, but if you need to automate it in a certain way, you can go to that S3 bucket — except for the Docker images, which go to our Docker repository; that can also be automated. We saw that there are dedicated environments; in the example I gave, there was a dev environment and a UAT environment. On every job, one of the options is the deployment target.
So you can deploy to an environment in Servoy Cloud, or you can choose no environment. Choosing no environment is common where people just want to get the build artifacts and take them on their merry way. The other use case for no environment is the one I gave about running a weekly test: you don’t really want to deploy anything there, you just want the test reports. So you may or may not deploy to an environment. About the deployment: I just sort of scrolled through those advanced parameters, but there’s really a lot there, like deployment configuration templates — things like plugging in license codes, or the admin username and password for the app server. These are things that you would normally put in the export if you’re doing it manually through the wizard; here they can all be configured. What I didn’t show, which is nice, is that there’s a file you can put under revision control that actually holds the properties, with variable substitution in it, so you can substitute environment variables as well. This is ideal because you might want to take one build and put it on another server, and some of those environment variables are going to change, right — the database connection settings, et cetera. All of that is parameterized and can be factored out of the pipeline. Something else which is specific to Servoy applications is the static resources: plugins, beans, database drivers, web components, any static imagery you want to include. That can all be bundled under revision control; we have some mechanisms there. So there’s really a lot going on under the hood, and with a little bit of training on how to use the pipeline, it really smooths over deployment.
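The variable-substitution idea just described can be sketched in a few lines of shell: a properties template lives under revision control with placeholders, and environment-specific values are filled in at deploy time. The file name, keys, and sed-based substitution here are illustrative assumptions, not Servoy Cloud’s actual mechanism:

```shell
set -e
work=$(mktemp -d)
cd "$work"
# Template as it would live under revision control, with placeholders.
cat > servoy.properties.tpl <<'EOF'
db.url=${DB_URL}
admin.user=${ADMIN_USER}
EOF
# Per-environment values, e.g. injected by the pipeline for this target.
DB_URL="jdbc:postgresql://db.internal/app"
ADMIN_USER="admin"
# Substitute the placeholders to produce the deployable properties file.
sed -e "s|\${DB_URL}|$DB_URL|" \
    -e "s|\${ADMIN_USER}|$ADMIN_USER|" \
    servoy.properties.tpl > servoy.properties
db_line=$(grep '^db.url=' servoy.properties)
echo "$db_line"
```

The point is that one build artifact stays untouched while the environment-specific values travel separately, which is what makes the same build deployable to dev, UAT, and production.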
If you’ve ever done a deployment and then realized you forgot to include a plugin jar file or something, you know that can really be a headache — here it can all be automated and checked. Something else we do is monitor in real time as the build is going on; you saw in the demo that once I made that source code commit, I was getting notifications on the screen and there was a progress bar. Something I didn’t show that you can configure are notification hooks. I get an email when one of those jobs fails, with a deep link in it, and I also get a chat notification in my chat tool, which is quite nice. Because I’m dialed into a lot of pipeline projects, I get a lot of notifications about build statuses. We looked at test automation — really a critical aspect of Servoy Cloud. I don’t want to get into the guts of it, but something I didn’t show was the actual test code. That’s just text files that go under source control. We have a really simplified language for writing those tests; you got to see it a bit when I showed the test report. It’s kind of a natural language, so it doesn’t take a full-fledged developer to write tests. The environment for testing is spun up and down when the tests run, so it’s not necessarily the same environment that you deployed to. This is ideal because, if you think about data, changing data could also cause tests to fail. So we provide a mechanism to seed the test environment with test data as well: you can have different test suites with different data sets that go with them. That could be pretty tricky to manage on your own, so we provide a way to automate that too. You can really say: I’m going to run this test, and I’m not going to get all these false negatives because someone edited a record or something like that. That’s a small detail, but an important one. Unit testing is an old-fashioned kind of test, but it definitely still has its place.
And I didn't show it in this case, but it runs just like the end-to-end tests, and it's good for covering business logic and data integrity and that sort of thing. We saw the code coverage and analysis; I think that's self-explanatory. Security scanning: we added this a while back to also run some scans for vulnerabilities. And this produces a report as well that you can look at to make sure that your build is going to pass an OWASP scan. So if you need to get certified, you can know this in advance, and you can know if some commit down the line broke it. And this can happen: especially if you make a custom web component, it could introduce a vulnerability, and if you need to be certified, you want to know that the moment you made that commit, not when you're trying to deploy it months later. We saw a bit of the agile project management integration. This is an add-on, but when it's plugged in, you get some extra dashboarding, those KPIs in the dashboard, and again, you can get task-linked source commits too. So there's some extra benefit there. Something that I didn't demo, but it was there in the list of jobs, is a promote job. This is really important. Suppose that you've gone through the pipeline and passed all the tests and you're ready to go into pre-production or into production. At that point, you do not want to rebuild from source. You want to take the tested and verified build artifact as-is and move it onto other environments. And that's what the promote job does: you essentially take the build artifact and promote it to that environment, and it ensures that you get the same build of software. And we also guarantee between pre-production and production environments that those environments are identical. In the sense that sometimes you can say, well, I got this bug and I don't get it on my dev server, but I get it on my production server, and everything's the same.
And then you look around and find that there was some configuration on the server that was different that caused the bug. In this case, you get the same software, and we guarantee between staging and production environments that those are the same as well. So that really helps to ensure quality. We offer any number of environments. We have the pipeline ones, which are for building and testing, and then we have the high-availability production environments. And what's nice is you can add environments for things like training or a sales demo or that sort of thing, and push builds to those environments as well. Yeah, some of the nitty-gritty details: I talked a lot about this on the other slide. We also support JasperReports integration; we have a lot of customers that use JasperReports. Again, those are static files, so we can take files under revision control and make sure that they're in the deployment. Okay, so I hope that covers, and I see some questions coming in, which is great, the capabilities of Servoy Cloud and what it can do. But I think for you to really understand, we should talk a bit about Servoy Cloud in the real world, with real customer projects. I want to look at some of the ones where we showed their pipeline. We were looking at Kenko Engineering, a manufacturer. They make parts for asphalt plants and construction machinery. They had a Servoy application running in production for managing their shop floor. I think they were running on Amazon, and they were running into issues with memory and stability, and some data integrity issues. We didn't even realize that they were in production with this application, and they came to us for help. We moved it over to Servoy Cloud, and you got to look at their pipeline.
So we put it back into sort of a Git flow with various stages to ensure that what's going into production is what was intended. And yeah, they've made a lot of progress on stability. So now they can really focus just on plant operations and not on software, essentially. The other pipeline I showed, with the agile integration, was Newbase. They sell mid-market ERP solutions, and essentially their philosophy is that they want to focus on their customers. They're really customer focused, really UX focused. I should say that they've been with Servoy for a really long time. They were on the older desktop client, and they've moved through the various offerings we've had. So for them, this is a natural evolution: they didn't want to learn web technologies when the NG client came out, and now they don't want to learn DevOps to be able to go for SaaS and cloud. They want to just outsource that and stay focused on functionality and value. And I like the quote from their founder: Control-Alt-Delete the rest. But I think the best use case to go over is Portfolio Plus, because we have Ian Galeshan with us, and I've worked with Ian. He's actually been on the tech series before, so he's a moderate celebrity here in this space. Portfolio Plus was a real early adopter of Servoy Cloud, even before we called it Servoy Cloud. They came to us, excuse me, with a consumer banking application that had to be built in essentially record time, and it was their first foray into end-user, consumer-facing software. So quality and UX were at the forefront, and they really chose Servoy because of the test automation and the pipeline. But I would like to ask Ian to join us and talk a bit more about that. Ian, are you there? Yes, I'm here. How's it going? It's going well. Thanks for sitting through pretty much the whole of the webinar before joining us.
Did you learn anything new? There are some things that I had forgotten about, number one being the linting. I personally, and maybe other parts of my company, have gone in and made use of that, but I need to do a deeper dive into your linting as well. But yeah, when we made our Servoy app, we needed to have a lot of regression testing, because we have active development of the Servoy application and our backend API simultaneously. Using Servoy Cloud, it really integrated into our pipeline. So when the builds are triggered and built in Servoy Cloud, they're actually talking to the backend API that is built as part of our pipeline. So that integration was key. And we developed a great many tests and scripts for regression. Which are now broken and you need to refactor them, right? Yeah. I'm going to be working with Servoy to come up with a methodology that is more efficient for us. Yeah, for background, Portfolio Plus went through sort of a refactor on their code base, and that also means they have to refactor some of their tests as well. I think what you said, Ian, about connecting to the API server plays into something we said about parameterizing the build artifacts from the job configuration. I think you guys have different API endpoints for different environments, like the sandbox environment and then a production environment. If I remember correctly, that's all factored out, and you use environment variables for that, so that when you take a build image and put it through QA and then want to put it in production, you don't have to do any extra configuration. Yeah, that's been a real time saver. Yeah.
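The pattern Ian describes, factoring the API endpoint out into environment variables so the same build artifact works in sandbox, QA, and production, looks roughly like this in any language. A hypothetical Python sketch with made-up variable names:

```python
import os

def api_endpoint() -> str:
    """Resolve the backend API endpoint from the environment.

    The same build artifact reads a different value in each
    environment (sandbox, QA, production) without being rebuilt.
    """
    return os.environ.get("BACKEND_API_URL", "http://localhost:8080/api")

# QA might set BACKEND_API_URL=https://qa.example.com/api;
# production, https://api.example.com/api. The code never changes.
print(api_endpoint())
```

The pipeline sets the variable per environment, so promotion is purely a matter of moving the artifact.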
And also, you know, we're not using Git currently, we're using SVN, and that was another thing that we prioritized: we're able to pull from our own SVN repository. Right, right. And you guys, I was talking about the OWASP scanning, and I wanted to remember to bring it up here, because I think we have Portfolio Plus to thank for that. Can you talk about the certification you guys have to go through, and some of the vulnerabilities that they check for and that were even discovered? Yeah, at a high level, because it was another team that was working with you on that, but certainly we're building online banking applications for various customers. So obviously the security requirements are high. And we needed a way to ensure that when we committed code, we caught it right away: hey, there's a cross-site scripting vulnerability here. Rather than all the way down the line, when we're going through the certification process and actually paying a third party, and they come out and find it. We wanted to fix it before it got to that point, because it's quite expensive to do multiple iterations of certification. Right, right. And so what happened there: I remember Portfolio Plus came to us and gave us their reports from the testing, and some of the things were things that we had to fix in our web components layer, and some were things that you guys had to fix. But I remember that when we started making changes to our own code base is when our infrastructure team added the OWASP scan, to make sure those changes stayed that way. Because it could be possible that a web component that we shipped could break OWASP standards in the future. So now it can be scanned during the build, and it's really the Servoy code and the Portfolio Plus code together that get scanned. So we have you guys to thank for that.
And I think a lot of other customers could benefit from that. I had one other question, Ian. What are you guys working on now? I think you're doing mobile apps. Can you talk a bit about how that's going, and how does that play into Servoy Cloud? Absolutely. So when we're talking online banking for end customers, our Servoy app obviously then needs to become a mobile app. On that journey, because it was already mobile-ready in a browser, the cloud build from Servoy actually gave us a great way forward: we would build our mobile apps against the Servoy Cloud hosted solution. And that was a quick win, for one thing. You've already got a proper SSL certificate set up, not self-signed like our internal environments, which meant that every time we did a build, we could point the mobile app at it, and that made testing much easier. And the mobile app was a progressive web app, as well as being promoted to the iOS and Android stores. Right. Yeah. I think that's a good point you brought up that I wasn't even really considering. A lot of the infrastructure work that goes on behind the scenes in a pipeline is an ongoing maintenance thing. So if you're building your own pipeline, you also have to continuously maintain it. That means doing things like generating and renewing SSL certificates. I hadn't even thought of that, but yeah, I can see why that's helpful to get a mobile app certified like you did. The other thing that I wanted to point out: Ian's talking about taking a web application and delivering it as a progressive web app, or even as a native Android or iOS app. One of the things that we're looking into for extending Servoy Cloud is adding a build service for taking an application, generating the binaries needed, and handling all of that complexity as well.
So some of the things that we've worked on together, we'd like to actually put in the cloud, to more easily create those as another job, maybe. So that's, I think, something that we can work on together in the future. Ian, I want to thank you. You're always welcome at the tech series, and I hope to have you back soon for more cool stuff. Great, thank you for having me. I also want to invite Ron to talk about how to get started, because if you like what you saw so far, maybe you want to know: okay, what does it cost? How do I sign up? What can I do? Ron, are you still with us? Yeah, I'm here. Yeah, that's good content. I hope people really liked what they saw. I think it's pretty extensive. So I guess people want to know what all this stuff costs. Like Sean said, we split it up into basically two big parts. One is what we call pipelines: everything which leads up to some kind of artifact. We have three flavors of that. To get started, there's what we call Base. You get the cloud control center, which is the UI that Sean showed, and then you have the analytics and source control included in there. That's basically everything you need to get builds to a dev environment and to get a WAR or a Docker image out of it. If you go up a notch, you get more focused on quality. You do things like unit testing and code analysis. You probably want to build more often; you'll see we do build throttling in Base versus the higher levels. It's rarely an issue, but we try to keep customers from building continuously, every minute, if you're on Base. If you go up a notch even further, you start doing things like end-to-end tests, and you have Docker repositories. So on the left is everything you need to get started, and on the right end is everything you need to have really quality-assessed builds. So we have prices across that range; these are per month. And I think it makes sense that we try to keep these prices as low as possible.
And for you guys to get started easily. Even more interesting, I think, is when you go into production. So that's on the next slide, Sean. And we'll probably put these prices up shortly on our website too, guys. If you look at production, there are a couple of things which need to be noted. We always start production at the lowest level, which is what we call our uptime guarantee of 96%. Now, this seems low, but it's a full-stack uptime. Everybody knows that if you go to Amazon or Azure or your local cloud provider, getting high availability on infrastructure or the operating system is easy. But we guarantee uptime on the whole stack, which means including the database, including Tomcat, including everything. So this is not bare-metal hosting or Amazon. We don't like to call this hosting; we like to call this production. And to go up even another notch, I think, for us, being able to guarantee functionality and performance in production environments requires an end-to-end test covering what you want guaranteed; then we can do that. And I think that's really where most of our customers want to go, because that means they don't have to worry at all about going into production. As long as you provide functional end-to-end tests, and you can show that the end-to-end tests ran in your pipeline, we will guarantee that that functionality will run in production, and it will perform. So I think that's really where you want to be, to be completely unburdened. We have two pricing models. One is for ISVs, which is based on usage; everything we do, we try to base on usage, linked in the end to what the infrastructure costs us. It's difficult to publish a standard price for now, and maybe we'll come up with an open pricing model which we can put on the website, but it's typically tailored. And we can tailor it with or without your current license.
So typical new customers go to our cloud and put our license model in it, but we also have customers going to cloud production who brought their own license model, which they already had. The enterprise model is based on concurrent sessions. That's on the next slide, and again, we'll probably put that on our website shortly. I made two samples, because the model is not that simple; there are a couple of things which we use to calculate these prices. One is, of course, users, that's concurrent sessions. One is uptime guarantees, and the other big one is the support window: in what window we provide these guarantees. The simplest model: you can start with 10 users, which costs you $200 a month, and then you have the 96% uptime with office-hours support and guarantees. It goes up to 50 or 100, and you can go to unlimited users on this. And as Sean explained in the slides before, this will scale. It will automatically, on demand, scale out application servers to 10,000s, 100,000s of users. It will do the same for the database, of course. So Tomcat will scale out, the database will get bigger and bigger, and we can still keep guaranteeing all of this. These prices exclude licenses; we have our enterprise license model for that. And we have add-ons. In the pricing there is, of course, some file and database storage included. Some people just need more than our averages, and you can buy more database storage or file storage. You can buy more environments, pre-production environments. So there's a bunch of add-ons you can buy, but I guess the sales guys from Servoy will tell you all those prices. So yeah, it's very easy to start, I would say. I think this sample makes it clear. Okay, well, thanks for that, Ron. Since you're here, though, you have to stick around and, oh yeah, we have one more. Can you tell us about the special that we have?
Yeah, I think the special is, again, for the two parts we have. We're so excited, and I really want people to get a taste of this. So if you sign up before the 15th of this month, you get a really big discount for the first three months. If you go for Base, you only pay $100 a month; for the QA level, $250; for QA Plus, $500. Those are big discounts, and this is for the first three months. And also if you go into production, and it will take you some time to get your stuff on there, so you'll have some time, you get a 30% discount on the production price for the first six months. And I think this really makes it worth people's while to get a taste of it and eventually really go to production and get the advantages of not having to worry about DevOps anymore. Yeah, and one thing that I know from experience is you can start tomorrow. We can get all of these environments spun up on demand, automated; it's that easy. So. Yeah. Okay, well, now you have to help me answer questions, Ron, because you stuck around to the end. Oh, darn. So I'll leave these slides up with some helpful links. Of course, if you're interested in this, you can contact sales@servoy.com. And I'm going to ask Evo to help us. Evo, did we get some questions? I saw some coming in throughout the demo. Yep. Very good news: you're 11 minutes late and you've got 11 questions to answer. Oh, geez. So negative one minute per question. Yes. Go. Go. First question: when you were talking about notifications you're getting while using the pipeline, can that be integrated with Slack? Yeah, I believe it can be, because we were using Slack as a company. We switched to Zoom chat, and we switched all our notifications from Slack over to Zoom. So we used to actually have it in Slack. I think anything with a webhook type of architecture can do that, but we'd have to look into how we expose it for customers. OK. There was a question: what repositories can you use? I saw a quick answer already; I think it said Git.
Yeah. So we offer on-demand Git repositories in Servoy Cloud. But as you heard from Ian, they were using SVN; their legacy code base was in SVN, and they wanted to stay standardized on that and have their code in one place. So we can also work with third-party, remote, or on-premise repositories as well. The two repository types that we support are Git and SVN. We offer Git repositories integrated in our cloud, but we can connect to either of those remotely. And we even have some customers that have some less common repository types; in that case, you can set up a mirror as well. So pretty much any source control type that's out there, I think we can work with. And of course, it's easiest to host it with us if you don't have a preference. It doesn't cost anything extra, and we manage it for you. Cool. I got a question here, I assume from a software vendor. It's saying: we do about half of our deployments on-premise, so I assume the rest is in the cloud. Would I have to host in Servoy Cloud? Right. Yeah. Definitely sounds like a software vendor. The good news is that you can keep that half-and-half if you want to. To take advantage of the pipeline, you don't have to host in Servoy Cloud. If you were interested in taking the part that you are hosting and putting it into Servoy Cloud, but you have stubborn customers that refuse to go up, or for some other reason don't want to go up, you can do the hybrid pipeline that we discussed: take the artifacts and put them on production environments which are not in Servoy Cloud. And in fact, we are looking to extend the orchestration component to on-premise environments as well, where some of the monitoring that you saw could even extend to remote environments. So the answer is yes. Cool. A question about databases. The question asks: do you support SQL Server? But I assume people want to know about others too. Right.
Right. Yeah. So in Servoy Cloud, we can do PostgreSQL, SQL Server, and MySQL. And then for some of the other databases, like Oracle or Progress, that sort of thing, you have to bring your own license. But out of the box, I think we do SQL Server, PostgreSQL, and MySQL. OK. There was a question about SVN, but I think you just answered that. Yep. What kind of servers are you running? I want Ron to answer this. Dell EMCs. No. No. No, it's not a secret that we run on Amazon. But this is not really about servers; this is a service. And it's based on Docker. So yeah, somewhere in the end there are servers, of course. I don't think we even know what really is there. But basically, this runs on Docker, so you see services which are spread across machines at some point. You do see in Docker what kind of memory limits there are, but that's all managed by Servoy, by the cloud, basically. All right. Question: Sean, will this run in Australia? And actually, we were talking with a software vendor in South America as well the other day. So what kind of coverage do we have? Yeah, we can go anywhere we can put an AWS cluster, right? Yeah, what's our geographic range? Well, today we have, I think, three regions. We can put it anywhere, depending on what the demand from customers in that region is. If we need to put it in a region where there's only one customer, we'll probably charge an amount for that. We can put it in any region where there's Amazon. It's no secret that we run today on Amazon. We could switch, but that's where we are. And Amazon has a server park in Australia, right? Oh, yeah. Yeah. All right. Then some compliance questions: ISO, GDPR, and HIPAA. The person who asked just left the webinar because they had to be in an agile meeting. HIPAA is one that came up yesterday as well. Can you talk about those three, Sean? Yeah.
Yeah, HIPAA also depends in part on how you build your application, but we can be HIPAA compliant on the back end. Ron, you're in the right part of the world to answer GDPR. Yeah. Totally. Well, I think the GDPR question I saw coming by is about how compliant we are. The short answer is that, together with our customer's application, we have to be compliant. Technically, there's nothing holding us back from being compliant, right? So yeah, that's the answer there: yes, we are compliant. The question here was very specific: whether we can mask data, so I think that means anonymize. You talked, Sean, about the data seeding. We have a mechanism, basically like a script, which you can run over your production data to both purge it and anonymize it, to make test data out of it. We have that. But it's still work for a developer, because of course no technology understands what's personal data and what's not. So you have to really point at that and set up the mechanism for how to replace it. But we have a way to do that. So again, yeah, you can be GDPR compliant with Servoy Cloud. Cool. Anything specific about ISO? We are not certified today. We've chosen not to do that yet, because it's a lot of procedures. I think we would probably comply, but there's a lot of paperwork to be done, and we will pick that up at some point. But of course, Amazon, which we run on, is certified. Yep. How do you guarantee performance and measure performance? I think, Sean, you showed a lot of that already in the dashboard, right? Well, it wasn't really performance in the dashboards. We can guarantee performance, as Ron said, on anything which has a test to cover it, so that if it performs well under test, then when it moves to production, it will also perform well. I don't know, Ron, do you want to say anything else about that?
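The anonymization step Ron describes (a developer-written script that purges and masks personal data before it is used as test seed data) might look something like this rough Python sketch. The column names and masking rule are assumptions for illustration; only a developer knows which fields actually hold personal data:

```python
import hashlib

# Columns the developer has flagged as personal data (hypothetical names).
MASKED_COLUMNS = {"customer_name", "email", "phone"}

def mask_value(column: str, value: str) -> str:
    """Replace a personal value with a stable, anonymous placeholder.

    Hashing keeps the replacement consistent across rows (useful for
    joins) while making the original value unrecoverable.
    """
    digest = hashlib.sha256(value.encode("utf-8")).hexdigest()[:8]
    return f"{column}_{digest}"

def anonymize_row(row: dict) -> dict:
    """Return a copy of the row with all flagged columns masked."""
    return {
        col: mask_value(col, val) if col in MASKED_COLUMNS else val
        for col, val in row.items()
    }

row = {"id": "17", "customer_name": "Jane Doe", "email": "jane@example.com"}
print(anonymize_row(row))
```

A script along these lines runs over a production extract, and the masked result becomes the seed data set for the test environment.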
Yeah, so we run the same end-to-end tests both in your test environment and in pre-production. Pre-production is a copy of production, infrastructure-wise. That's why you need it when you want such guarantees. So for instance, if the production environment consists of, let's say, six servers, or some minimum number of nodes, then on the same infrastructure we will spin up, run the end-to-end tests, and measure the performance there under a certain load. And that's how we guarantee that the deviation from that performance stays within acceptable bounds. All right. Then I think I've got a feature request here: are there any plans to integrate the cloud dashboard into Developer? I want to answer that, because I wanted to have this before we even had the cloud dashboard. Ron, maybe you remember, a couple of years ago when we were working on this, the initial thought was to do this and kind of have it in Developer. So I think eventually we could get there. I'd like to know more about the why behind the feature request. But yeah, I think there's opportunity for tighter integration with the development environment, making it sort of Servoy Cloud aware, because today the IDE is really just an IDE for dealing with source code. As it becomes more Servoy Cloud aware, we could do some interesting things. All right. What does the test code look like? Yeah, it's a fair question, a good question, because we showed the end-to-end tests that had run and passed or failed, and there was a report, but not the code. The test code is a simple text file. It's like a natural-language test language: like, given that I navigate to this form and I push this button, that type of language. Is there a webinar about that? Yeah, I think it's kind of old; we did a webinar maybe two years or a year and a half ago. I could show one right now, but I think we're getting on in time.
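A hypothetical sketch of what such a natural-language test might look like, shown here in a Gherkin-like style. The exact keywords and step names are illustrative assumptions, not the real Servoy Cloud test syntax:

```gherkin
Scenario: Create a new customer record
  Given I navigate to the "Customers" form
  When I push the "New" button
  And I type "Acme Inc." into the "Company Name" field
  And I push the "Save" button
  Then I should see "Acme Inc." in the customer list
```

Because the steps read as plain sentences, a tester or analyst can write and maintain them without being a full-fledged developer, and the file goes under revision control like any other source file.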
But yeah, sign up for the promotion and then we'll get you started with some sample tests. It's a simple text file, and it goes under revision control. Cool. There's a last question, an open question: what is planned for the future? OK. Yeah, I think we talked a bit about that when we were talking with Ian about their work with mobile apps. We're seeing a lot of customers building NG mobile applications. These are web applications which are wrapped to be distributed as native iOS and Android apps. And the process for doing that is outside of Servoy; it's kind of tooling and process oriented. And we really think we can help, because again, that's one of those generic problems that looks the same no matter who you are. And it's a lot like a job, with a lot of configuration and automation that goes into it. So we have a prototype, and we're looking at putting that in Servoy Cloud. There's a native desktop version of the same, for deploying apps that can talk directly to file systems and hardware and things like that on desktops. We're also looking at putting that builder in the cloud as another job. And I think we can do a lot more with container monitoring. There are some things that we do already that I didn't show, like looking at logs from the database and from Servoy and from Tomcat, and looking at container stats like RAM and CPU, to aggregate and show that. But we can even go, I think, to the next level, which is application analytics: looking into how your application is being used by all of your users, no matter where they are, on premise, in our cloud, or in your own cloud, and aggregating and showing that. I think that now that we have a foundation, there's a lot of fruit there to pick. So we're excited about what's to come. Cool. I think that's a wrap, right? Yeah, I think it's dinner time in Holland, so we should let Ron go. Sounds like it. Well, thank you. Thank you, Ian, for joining us.
Thanks, everybody, for the great questions. Thanks, Sean, for presenting this, and Ron for helping out. Thank you. Have a good morning, a good afternoon, or a good evening. Or a good dinner for you, Ron. Thank you, everyone, and thank you, Ian, especially. Bye, everyone. Bye. Bye. Thank you, Sean. Take care. Bye.