Web Performance Optimization: Serverside Software

18.12.2010 | 6 minutes of reading time

This part of my series covers the most traditional part of WPO: the (custom-written) software that runs on the server. Optimization here includes all of our software design decisions and coding. Examples may be biased towards Java, as this is my main language.

Build Scalable Designs

Before joining codecentric, the projects I worked on usually ended up in terrible trouble and finished late (they were big enough that this could not have been only my fault). As a result, all system and scalability tests were canceled. That actually did not matter much, as they were scheduled for the end of the projects, when design bugs are impossible to fix. At codecentric we work on improving our agile process every day to make our projects successful. We need to think about a scalable design from day one. The question "What do we need to change to add two servers?" should ideally be answered with "nothing". So how do we get there? Some of the factors for a scalable design have already been mentioned in my post about infrastructure-oriented WPO.
Agile methodologies really help here, but even without them you should experiment with scalability early on. You should also do at least basic load testing to understand the load patterns in your application. I would really like to see this integrated into definitions of done and executed all the time, but even doing it at all early on will bring big improvements.
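A minimal sketch of what "basic load testing" can look like in plain Java: a fixed number of threads hammering a single URL and reporting the average response time. The URL, thread count and request count are made-up placeholders; a real test would use a proper tool such as JMeter and realistic load patterns, but even a little program like this shows whether response times degrade under concurrency.

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    // Minimal load test sketch: fires concurrent GET requests against one URL
    // and reports the average response time. URL and numbers are placeholders.
    public class SimpleLoadTest {

        private static final String TARGET = "http://localhost:8080/myapp/home"; // assumption
        private static final int THREADS = 20;
        private static final int REQUESTS_PER_THREAD = 50;

        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(THREADS);
            final AtomicLong totalMillis = new AtomicLong();
            final AtomicLong failures = new AtomicLong();

            for (int i = 0; i < THREADS; i++) {
                pool.submit(new Runnable() {
                    public void run() {
                        for (int j = 0; j < REQUESTS_PER_THREAD; j++) {
                            long start = System.currentTimeMillis();
                            try {
                                HttpURLConnection con =
                                        (HttpURLConnection) new URL(TARGET).openConnection();
                                con.getResponseCode(); // forces the request to be executed
                                con.disconnect();
                            } catch (Exception e) {
                                failures.incrementAndGet();
                            }
                            totalMillis.addAndGet(System.currentTimeMillis() - start);
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.MINUTES);

            long requests = (long) THREADS * REQUESTS_PER_THREAD;
            System.out.println("avg response time: " + (totalMillis.get() / requests)
                    + " ms, failures: " + failures.get());
        }
    }
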
But just testing scalability does not magically make it happen; it has to be designed. Gernot Starke talked at our last Meet The Experts about the dilemma architects face when trying to learn about good design. His argument was that even though big companies like Facebook, Twitter, eBay, Amazon and Google talk about their software designs quite often, those designs are rarely applicable to the software most of us design and write every day.
I think he is right, and I think he is wrong. Yes, we might not have hundreds of thousands of queries per second, but the design that allows for this might scale and work better than what I would have come up with, even for my mid-sized customer's application. So it is of course good to learn those patterns.

Upgrade third-party software early

Our own coding and design are not the only things that make up system performance. There are plenty of other software products involved; I assume we use at least a dozen external software products when building our applications. That's not a bad thing. We do not need to write that code, which saves us a lot of time. But perhaps even more importantly: we do not need to be experts. We do not need to be experts on rule systems, managing network connections, caching, encoding, compression and a lot more. We can (almost) concentrate on building our solution. So if we trust them to build good components for us to build upon, why don't we upgrade more often? In recent years, more and more software makers have put "performance" in their release notes. Almost every release of every software product brings performance improvements, but we usually do not take them.
For me there are two reasons for that:

  1. Fear of backwards-incompatible changes
  2. Chaotic third-party management combined with an inflexible process

Both are valid reasons, but the first one only gets worse over time. After a certain point the changes required to upgrade have accumulated into a big pile that nobody wants to touch. So I recommend upgrading often to benefit from all the performance improvements your external experts make. In my experience there is an interesting correlation between fear and (performance) gain: upgrading application servers, databases, UI frameworks or rule engines usually brings much better performance than bumping the version of Apache commons-lang, but those upgrades are feared more. I think the reason is that those parts are huge and complex, which is exactly why they offer so much optimization potential. But if you fear problems with them, how could you decide to use them at all?
The second one, however, is difficult to solve. Many people think throwing Maven at the problem will solve it, but the real cause is often a messy design or simply ignoring the issue, and no technology can rescue that. On the technical side, things like Maven for build management and dependency declaration, or OSGi for managing dependencies at runtime, are really helpful, but they can never make up for design issues. Still, I do believe this can be managed in one way or another.

Choose the fastest communication protocol

Communication protocols are very important in a distributed system, but we often fail to make good decisions there. With all the SOA hype, we all build software that uses SOAP web services, which is, performance-wise, just about the worst protocol of all. On top of that, services are often too fine-grained or designed incorrectly, so that a lot of data has to be transferred or many remote invocations have to be made. But assuming a good design, the protocol can make a difference. There are publicly available benchmarks, like the JBoss Remoting benchmark, or a Java benchmark by Daniel Gredler, as well as many others you can google for. Ideally, though, you run your own benchmark for your use case.
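Such a benchmark does not need to be fancy. A minimal sketch of the idea: time a warmed-up loop of identical calls against whatever client your protocol stack gives you. The RemoteClient interface here is a hypothetical placeholder, not a real library API.

    import java.util.concurrent.TimeUnit;

    public class ProtocolBenchmark {

        // Hypothetical abstraction over whichever protocol you want to compare
        // (SOAP stub, Hessian proxy, plain HTTP client, ...).
        interface RemoteClient {
            String call(String payload) throws Exception;
        }

        // Times a warmed-up loop of calls and returns the average latency.
        static long averageLatencyMicros(RemoteClient client, int warmup, int iterations)
                throws Exception {
            for (int i = 0; i < warmup; i++) {
                client.call("warmup");        // let JIT, connection pools and caches settle
            }
            long start = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                client.call("payload-" + i);  // identical work for every protocol under test
            }
            long elapsed = System.nanoTime() - start;
            return TimeUnit.NANOSECONDS.toMicros(elapsed / iterations);
        }
    }

Feeding the same payloads through, say, a SOAP stub and a plain JSON-over-HTTP client wrapped in this interface gives rough but directly comparable numbers for your own use case.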
With respect to AJAX there are really only three formats, all of which work over the HTTP connection (a small payload comparison follows the list):

  1. XML – but I think hardly anybody uses it; too much data overhead.
  2. JSON – the format of choice for most developers. Often plain key-value pairs. Fast, as it can be translated directly into JavaScript objects.
  3. JavaScript – instead of plain JS objects, some people transport code that is executed on the client. This allows fancy tricks, but is usually an indicator of a too generic interface.
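
To make the overhead argument tangible, here is the same made-up user record once as XML and once as JSON; the XML variant repeats every field name in a closing tag and quickly dominates the payload for small records:

    <user>
      <id>4711</id>
      <name>John Doe</name>
      <premium>true</premium>
    </user>

    {"id": 4711, "name": "John Doe", "premium": true}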

An interesting addition to this will be WebSockets, which are coming with the next browser releases and are already somewhat supported through solutions like Kaazing.

Gain insight into your application performance

The server-side application is a big black box for WPO, but unfortunately a main contributor to the lack of great performance. You cannot compensate for a slow server-side response with WPO tricks; you need to investigate why the response is slow. For that I recommend an application performance monitoring (APM) solution. Unlike traditional systems monitoring, it opens up the black box and can look inside. APM solutions usually support just a single programming language, and we think that for Java AppDynamics is the best solution on the market, which is why it is in our portfolio. When you have a monitoring solution running in production, you will quickly get pointers to the code and design areas that cause your application to slow down. When you are working to fix those issues, you usually use a profiler on a developer machine to capture every tiny detail of the transaction you are trying to improve. For Java I can recommend YourKit and JProfiler.
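Even before introducing a full APM product, a few lines of your own code give a first, coarse level of insight. A minimal sketch, assuming a standard Servlet API web application (the 500 ms threshold is an arbitrary placeholder): a filter that logs every request that takes suspiciously long.

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;

    // Minimal sketch: log every request slower than a threshold, so slow URLs
    // become visible in the application log. No substitute for real APM,
    // which also shows *why* a request was slow.
    public class SlowRequestLogFilter implements Filter {

        private static final long THRESHOLD_MILLIS = 500; // arbitrary placeholder

        public void init(FilterConfig filterConfig) throws ServletException { }

        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            long start = System.currentTimeMillis();
            try {
                chain.doFilter(request, response);
            } finally {
                long duration = System.currentTimeMillis() - start;
                if (duration > THRESHOLD_MILLIS && request instanceof HttpServletRequest) {
                    String uri = ((HttpServletRequest) request).getRequestURI();
                    System.out.println("SLOW REQUEST: " + uri + " took " + duration + " ms");
                }
            }
        }

        public void destroy() { }
    }

Registered in web.xml for /*, this already tells you which URLs to look at first; an APM solution or a profiler then tells you where inside those requests the time goes.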

The server-side software is usually not looked at in detail by many WPO folks, because this area is not new – but it is still an important factor. At codecentric we have a lot of experience in solving these performance issues, both on the design level and deep inside framework code. In the last part of this series, I will talk about the most hyped area of WPO: the clients, meaning browsers, and the tuning potential they offer.

My WPO series:

  1. Introduction into Web Performance Optimization
  2. Web Performance Optimization: The Infrastructure
  3. Web Performance Optimization: Serverside Software
  4. Web Performance Optimization: Client Side