The three core issues were...
Unexpected delays explained within Azure. After the theme song I'm going to talk all about it. Hey, what's up? Marcel Martens here of the Cloud Secrets Podcast. Welcome back, glad you're here, hope you're doing fine. I just brought my kids to school and daycare, so I'm off to the office now, but I want to share a story with you about a problem that was particularly difficult to track down and that took us several weeks. If you follow me, you know I did some WVD episodes about the issues we were having, our first impressions, et cetera. In the end there were three core issues.
The first was unexpected delays that we couldn't explain; we saw routing issues within Microsoft's network. We're in the same region as the Azure data center, so we should have low latency, somewhere in the 10 to 50 millisecond range I'd expect. But when you follow the trace, you see a major bump between two hops: a 100-plus millisecond delay, and that's what was causing the lag.
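If you want to sanity-check latency like this yourself, here's a minimal sketch using only the Python standard library. The hostname and port are placeholders, not anything from this episode; it simply times a TCP connect to an endpoint you host in Azure. Against a host in your own region you'd expect results roughly in that 10 to 50 millisecond range, so a consistent 100-plus milliseconds is a strong hint that traffic is taking a detour.

```python
# Minimal latency sketch (standard library only).
# HOST and PORT are placeholders: point them at an endpoint you run in Azure.
import socket
import time

HOST = "your-vm.example.com"  # placeholder for your Azure-hosted endpoint
PORT = 443                    # any TCP port that is open on that host

samples = []
for _ in range(5):
    start = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=5):
        pass  # connection established; close it right away
    samples.append((time.perf_counter() - start) * 1000)
    time.sleep(0.2)

print(f"min/avg/max RTT: {min(samples):.1f} / "
      f"{sum(samples)/len(samples):.1f} / {max(samples):.1f} ms")
```

This only measures the TCP handshake, not a full traceroute, but it's enough to see whether you're in the expected range or a hundred milliseconds off.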
The user notices the delay because the responses on screen aren't as fast as they expect. So we ended up opening a ticket with Microsoft, and of course, as you might expect, there was "nothing wrong" with their network. Oh boy. It took some persuasion and persistence to get higher up the chain, but finally we spoke with an engineer at a level that made sense, someone who really sank his teeth into this issue.
He started digging and digging and digging, and it finally came down to the IP space that we own. Several years ago we bought our own IP space at RIPE, an IP block of a couple of thousand addresses that we use for public services for our customers and for services of our own.
For this particular customer, on their domain controllers we were forwarding DNS to the Google DNS servers: 8.8.8.8, and 8.8.4.4 as the secondary. Google of course forwards those queries on upstream, and they ended up forwarding it to the... no, let me explain it correctly.
When Google asks another DNS server for an address, it uses its own source address, and based on that source address Microsoft routes you to a specific zone or region. Google was presenting an American IP address as the source, so we ended up being routed across the globe before we reached our virtual desktop.
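You can see this kind of geo-steering for yourself by asking different resolvers for the same name. Here's a minimal sketch, assuming the dnspython package is installed; the hostname and the "local" resolver address are placeholders for the Azure endpoint you connect to and your own on-prem recursive resolver. If the two resolvers hand back addresses in different regions, the service is steering you based on where it thinks the resolver is, not where you are.

```python
# Sketch: compare A records returned via Google's public resolver versus a
# local resolver. Requires the dnspython package (pip install dnspython).
import dns.resolver

HOSTNAME = "your-service.example.com"  # placeholder for the Azure-hosted endpoint
RESOLVERS = {
    "Google (8.8.8.8)": "8.8.8.8",
    "Local resolver": "192.168.1.1",   # placeholder: your own DNS server
}

for label, server in RESOLVERS.items():
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    try:
        answer = resolver.resolve(HOSTNAME, "A")
        ips = ", ".join(rr.address for rr in answer)
        print(f"{label:>18}: {ips}")
    except Exception as exc:  # e.g. timeout or NXDOMAIN
        print(f"{label:>18}: lookup failed ({exc})")
```

In our case the fix on our side was simply to stop forwarding to Google from those domain controllers, but the comparison above is how you'd spot the symptom.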
That added an extra delay of over 100 milliseconds, which was pretty much a showstopper, because the customer was experiencing these lags and interruptions whenever they scrolled through a web browser or typed orders into their order system.
And since we were doing a proof of concept for this particular customer, we needed it to run as smoothly as possible, with at least the same response time and performance as the solution they're currently using. Besides that issue, we also had delays within the applications that run over the ExpressRoute.
Today is testing day, and we're going to check whether those are solved as well, and whether they are screen-delay related or latency related due to the Azure ExpressRoute.
And the third issue was a different application, based on the Omnis (TigerLogic) engine, that was starting four or five times slower than in the current desktop situation within Citrix.
My opinion is that it has to do with the OS difference: the old solution they're using is based on Windows Server 2008 R2 and the new solution is based on Windows 10, so there's a big difference in operating system.
We finally managed to contact the developers, and we're going to try to upgrade the software to the later version, which officially supports Windows 10. But the application on its own is pretty slow. I thought it was a three-tier application: a database server, an application server, and the client running on the desktop. But they switched to a different approach, where most of the software and calculations are loaded during startup of the application.
So that takes longer, which doesn't sound very clear or convincing to me. I gave them that feedback, and they're also testing to get these two applications running the same as in the current situation. When they do, we have a successful proof of concept and we can move forward and migrate all applications to WVD.
So, a rather technical one this time. The big takeaway was that Microsoft was routing traffic through the wrong region, which ended up causing higher delays and higher latency. Microsoft is now changing this: our IP block, our IP space, is being assigned to the region we're in, so we always get the shortest path to the services and servers within Azure.
I hope you liked this one. If you did, please rate and subscribe. Not because the follower count has any meaning or purpose on its own, but because I like to help more IT professionals like yourself, and business owners. If you rate, subscribe, and share this, I get a broader reach and I can help more people. So thanks in advance, and I'll see you guys next time.
Thank you very much.
Bye bye,
Marcel Martens.
Every business needs email, data protection, and security. This is how I like to make an impact on the world and make it a safer place online.