
2014 with Millennium IT

Founded in 1996 by Tony Weerasinghe, MillenniumIT has established itself as one of the leading software companies in Sri Lanka. Starting off as a systems integrator, the company has since specialised in delivering high-performance capital market software solutions.

2014 was another great year for MillenniumIT, with several key projects going live, new deployments initiated and several esteemed industry awards won. It is also great to see the company giving back to society through various CSR projects, especially the housing project in the north. As we step into 2015, let’s take a look at MillenniumIT’s many achievements of 2014 through the infographic below.


MillenniumIT Rated as the Most Respected Technology Firm in Sri Lanka

LMD’s Most Respected ranking of Sri Lanka’s top businesses for 2014 has just been released. MillenniumIT is rated the most respected company in the Technology sector in Sri Lanka this year.

MillenniumIT is a technology-innovation-driven company that powers more than 30 global and national financial institutions. Its systems are used by exchange businesses around the world, including the London Stock Exchange, Borsa Italiana, Oslo Børs, Turquoise, ICAP, the London Metal Exchange, the Johannesburg Stock Exchange and a series of emerging market exchanges. MillenniumIT provides the trading technology for the London Stock Exchange Group, powering one third of the entire European equities market with one of the fastest trading systems in the world. Founded in 1996, MillenniumIT was acquired by the London Stock Exchange Group, an international diversified exchange business, in October 2009.

This is the 10th issue of the special edition that measures peer perceptions of corporate admiration. LMD has given the ranking system a face-lift this year by adopting the Olympic ranking system: companies are now awarded gold, silver and bronze medals!

MillenniumIT has been awarded 3 GOLD, 1 SILVER and 3 BRONZE medals and is ranked 34th on the list across all industries. Virtusa is ranked 39th with 2 GOLD, 2 SILVER and 5 BRONZE. DMS Software Technologies has secured 61st place with 1 GOLD and 1 BRONZE medal. WSO2 and MicroImage are ranked 73rd and 92nd respectively. Last year, Virtusa Corporation was the only Sri Lankan software business to make the list, appearing in the second fifty of companies.

The Most Respected ranking is conceptualized by LMD and conducted by Nielsen. Rankings are based on a survey of respondents’ opinions, which goes into the details of why the companies were perceived as being the ‘most respected.’

We, the TechWire editorial team, congratulate MillenniumIT and the other companies on their achievement. We expect information technology companies to show their true colours in the years to come, contributing far more to Sri Lanka’s economy on their way to becoming “most respected”.





Free Antivirus Guards: Which one is for you?

If you are wondering about getting a good virus guard, here are some details about the most famous virus guards freely available on the internet. If I didn’t mention a virus guard, that means either it is not that good or I haven’t had a chance to use and experience it.

#1 Bitdefender


My personal favorite! Even though it is not very famous, you simply can’t proceed without talking about the best you can find, can you? It doesn’t have a cool appearance or anything like that; however, it simply does the job, which is detecting threats. It protects your computer from all kinds of viruses and automatically keeps the virus database up to date. It mostly doesn’t interrupt you when detecting viruses, apart from a small notification balloon. It simply diverts the virus to a quarantine folder, from where you can easily recover the file if the detection was a false positive or if the file is important to you. The only con is that there is no internet security in the free version, but if you use Google Chrome as your default browser, there is a small add-on called Bitdefender TrafficLight which gives you average protection while you browse. This product is super light, fast and ad-free.

#2 Avast


This one was my #1 before I got to know Bitdefender. Unlike Bitdefender, however, its free version covers internet security to some extent. It has a nice, gadget-like, Windows 8-style appearance, and it enjoys a high download rate and high popularity in this industry. Many options are available to change and customize its behavior. It has a feature called “Sandbox”, where you can run an application in a secure environment so that it won’t harm your computer if it carries a threat. According to VB100’s latest comparative statistics (Oct 2013), Avast’s proactive and reactive detection levels are lower than Bitdefender’s.

#3 Avira


This one is quite famous and competes with AVG in the free virus guard market. Avira can be pretty annoying if you are a person who should not be interrupted while working: it constantly pops up notifications, update details and all sorts of other things, even using the system speakers. Detection-wise, it’s nearly in Avast’s range.

#4 AVG


The name says it all. Anti-Virus Guard, or AVG, is the most used virus guard of all time. It gives you numerous features even in the free edition: automatic scanning of removable devices, scheduled scans and much more. AVG can also secure your browsing to a certain extent. But these extra features can consume a lot of RAM and may slow down your PC or laptop. Sometimes you may have difficulties with the uninstallation process too.


I have personally used many virus guards and have experienced almost everything on the free market, along with some of the paid products. From my experience, I advise you to consider the following points when choosing a free virus guard.

If you are looking for a hassle-free, smooth-running, “let it do the job” type of product, go for Bitdefender.

If you want some advanced options and would rather take charge of your own protection, you can use either Avast or Avira.

I personally don’t advise you to use AVG unless you know how to manage the advanced options available in it. This product can be quite useful if you know how to use it well.

In the end, the final decision is yours. If this article somehow helped you get an idea of the famous free virus guards, that is enough for me.

Wishing everyone a threat-free PC! 😉

2013 with WSO2 – Infographic

WSO2, founded by Dr Sanjiva Weerawarana in 2005, has progressed by leaps and bounds in the past 8 years. Providing clients with an open source middleware platform, WSO2 has attracted some big clients such as eBay, and has won several awards as well.

2013 has been quite a year for WSO2, from providing services to Boeing and Deutsche Telekom to extending its footprint into the mobile domain. As we embark on the new year, let’s take a look at the impressive achievements of WSO2 in 2013 through the infographic below.



Being one of the top players in the Sri Lankan ICT industry, 99X Technology obtained two awards at the recently concluded National Best Quality ICT Awards (NBQSA).

The two awards are as follows:

WAG (Web Accessibility Guide) – Bronze Award, R&D Category

ALMUR – Merit Award, Tools and Infrastructure Applications Category

NBQSA Awards

WAG, it seems, is a toolset aimed at making surfing the internet that much easier for the visually handicapped. It provides assistance to users with total blindness, low vision and colour blindness, and additionally enables visually impaired people to apply accessibility corrections. It is wonderful to see projects such as WAG being implemented, where technology is used for the betterment of society. Kudos to the project team!

ALMUR is a product that facilitates binary decision making using fuzzy logic on cases expressed in approximate human terms. A typical question asked of ALMUR would be: ‘If the student’s Math grade is high, English grade is low and Geography grade is somewhat high, should we pass the student?’

ALMUR libraries are currently being used by ‘Norge-serien’ and ‘Boating Norway’ iOS apps for fuzzy logic based decision making.

“Research and development is fundamental to our strategy, keeping in line with our aim to constantly innovate. Hence, the award signifies our commitment towards reaching this goal”

– 99X Technology Co-Founder and CEO Mano Sekaram on his company’s achievements at the NBQSA Awards.

99x project team

Have you ever read the iTunes License Agreement (EULA)?

If you’ve ever downloaded an app via iTunes, Apple has probably insisted that you first agree to its End User License Agreement, or EULA. Want more evidence that you didn’t read it? You probably didn’t notice the last line of paragraph (g). It reads: “You also agree that you will not use these products for any purposes prohibited by United States law, including, without limitation, the development, design, manufacture or production of nuclear, missile, or chemical or biological weapons.” So please, do not create weapons of mass destruction in iTunes. You wouldn’t want to lose access to your playlist on your quest for world domination.

Don’t give in to FOSS phobia

In the world of business, where there is risk, there is always an opportunity. Opportunists take calculated risks to gain higher profit where others are too faint-hearted even to try. This article is about such an opportunity.

FOSS Phobia

If you work for a small to medium scale (maybe even large) enterprise software vendor, you have probably experienced the disease known as FOSS phobia. This condition causes irrational behavior on the part of management when the company’s software requires third party components to perform effectively and those components are available both as commercial closed source offerings and as free and open source software (FOSS) projects.

FOSS-phobic management fails even to consider this choice rationally and decides in favor of commercial closed source offerings. This is because the primary pathogen in this case, the marketing guy from the closed source component provider, is offering a support contract that a free and open source project cannot match. He also plants the idea of needing a support contract deep in the manager’s mind, so that the manager will decide against free and open source software at any given opportunity, effectively becoming a victim of FOSS phobia. Common sense criteria such as licensing, performance, software quality and stability won’t even come into consideration after that.

The idea of this article is not to promote the use of free and open source software at every such opportunity, but rather to emphasize the importance of conducting a proper risk vs. gain analysis of either option on a case by case basis.



First let’s consider the marketing guy’s primary selling point for the closed source software component: the expensive but comprehensive support contract. Why would you require support in the first place? Firstly, to gain initial know-how on configuration, setup and tuning of the component. Secondly, support will be required to diagnose and fix a production fault or malfunction in the component. You don’t have to think too hard to see the huge incentive the third party component provider has to make his software seem as complex and hard to learn as possible. This is particularly true of markets where there are only a few dominant vendors (less competition) for the same component. Good examples of such vendors can be found in the enterprise database and application server markets.

Naturally, this abuse of power is one of the reasons the FOSS movement came into being in the first place. FOSS projects do not have an agenda to sell you software, but they do care about gaining popularity among users so that the project can thrive. Therefore you can bet that the more popular and successful a FOSS project is, the easier it will be to learn, configure and adapt to your own requirements, and the fewer issues you’ll face in production. You’ll also observe that the more popular FOSS components have a great, supportive community of followers, as well as companies willing to provide support if needed. But I stress again that this alone is not enough to make a general case for or against using FOSS components.

So how would you go about evaluating a commercial software component vs. a FOSS component to be used with your company’s own software? As I mentioned earlier, this has to be done on a case by case basis. The following are some general rules of thumb for evaluating FOSS components to be used with your software.

  1. Make sure the license is not infectious – Some FOSS licenses oblige you to open source your own code once you integrate the FOSS component into your code. More popular non-infectious FOSS licenses are Apache license, Eclipse public license and MIT license.
  2. Check if the software component satisfies your requirements (functionality, performance etc.).
  3. Make sure there’s enough documentation online even for the core parts of the code that you might never even touch in the beginning.
  4. Get an understanding of how active their forums are, in terms of the number of questions asked/answered as well as how active the core developers are in participating in the conversations. The higher the activity, the better your chances of finding answers to common problems as well as problems specific to you.
  5. Check if there’s commercial support available if you were to require some assistance in the future.
  6. If multiple FOSS components are available, compare/contrast between them using above criteria to select the better one.
  7. Finally compare/contrast with the available commercial closed source alternatives, if you can afford it. Author’s personal experience is that FOSS components that filter through above rules of thumb beat commercial alternatives by a mile most of the time, even without considering the cost savings in using FOSS components.

Imagine you chose the FOSS component even though there is a risk that you yourself might have to provide support for it in the future, without help from the original authors. You might even be using that component in one of the most mission-critical parts of your software. But this can result in some huge opportunities for your company to profit from in the future.



The first opportunity presents itself when your company’s software developers become familiar with that external FOSS component. This can result in better tuning of both your software and the external component to suit your needs, making your software perform better in the future. It can also enable you to provide better support for your clients. Moreover, you’ll not be constrained by an external party from making changes to the component as business requirements change. A good example of this can be found in how Google’s Android operating system is used by different software and hardware vendors: some, like Amazon, have even managed to fork and customize it according to their needs in the Kindle Fire product, revealing a great opportunity for profit.

Another way you stand to gain from your risky investment is when your company pursues vertical integration or diversification. With your developers becoming experts on the FOSS components you use to support your software, instead of treating them as black boxes, you’ll be in a nice position to sell support for those components as well, profiting in turn from other people’s FOSS phobia. You may even be able to build a complete product on top of them and extract a profit from that too. If you look closely you can see this happening all over the enterprise software field right now. Companies already doing this include Red Hat with JBoss and OpenStack, MuleSoft with the Tcat application server, Pivotal with Hadoop, Liferay with their Portal solution, and many more.


I hope this article has convinced you of the potential gains of adapting and integrating FOSS components into your own software. Though you take a bit of risk in using some of them without any support contracts, it’d still be better in the long run than spending money on support contracts that may end up being a burden to you as well as your clients.

More importantly, if you do find a FOSS project useful in some way, please make sure you contribute back to it in whichever way you can.

    (Image sources: 1, 2, 3, 4)

A Logic that Urinates (Part II)

Note from the editor: This is the last of the two-part article series on how real world problems can be solved using fuzzy logic. Part I of this article can be found here.

Fuzzy logic brings in a solution

The brilliant intellectual Lotfi Zadeh introduced fuzzy logic in 1973, and it constitutes a beautiful solution to problems like the one articulated above. At its core, fuzzy logic builds a sophisticated mathematical framework that can translate semantics expressed in vague human terms into crisp numeric form. This enables us humans to express our knowledge of a particular domain in a language familiar to us, yet still have that knowledge solve concrete numeric problems. For example, our knowledge can be expressed as rules like the following, which determine whether a candidate should be chosen for a job depending on his experience, education and salary expectation levels.

  • If Experience is High, Education level is Medium and Salary expectation is Low, then hire the guy.
  • If Experience is Medium, Education level is High and Salary expectation is High then do not hire the guy.
  • If Experience is High, Education level is Somewhat High and Salary Expectation is Very High then do not hire the guy.

Rules like the above can be formed using our knowledge, experience, gut feeling, etc. about the domain. The collection of rules is typically termed a fuzzy rulebase. We feed the rulebase into a Fuzzy Logic System (FLS), which aggregates and stores this knowledge. Most notably, the FLS stores the knowledge in a numerical form that can be processed by a computer. After that, the FLS is capable of answering questions like the following.

If the Experience level is 4 (out of 5), Education level is 3 and Salary Expectation is 4, should we hire the guy?

Needless to say, the richer the rulebase, the more accurate the outcome.

Lotfi Zadeh

A Little further insight into Fuzzy Logic

How does Zadeh’s new logic perform its wonders under the hood? If fuzzy logic is a complex and amazing structure, the magic brick it is built from is the concept termed possibility. Zadeh’s genius was to identify that, in human discourse, likeliness does not refer to likelihood, but to membership. Let me exercise my teaching guts to describe this in a more digestible form. When we humans are confronted with a question like “How hot is 25°C?”, we do not think “What is the likelihood that 25°C can be considered hot?” (Do we?). We rather think “To what degree does 25°C belong to the notion ‘hot’?”. To put the same thing in different terms, it’s about “the degree of belongingness to something”, not “the probability of being something”. You might now be thinking of giving up the idea of becoming Zadeh’s follower, but I suggest you hold on and give it a second thought.

I believe Zadeh touches the real nerve of the problem when he makes this distinction between likelihood and membership. Once you understand it by heart, it’s an easy ride through the rest of the theory. As the next step we can introduce a term for concepts like ‘hot’, ‘beautiful’ or ‘high’: fuzzy logic calls them fuzzy sets. A fuzzy set is an association between a number and a possibility value (possibility is a number between 0 and 1, just like probability, yet radically deviating from it conceptually).


The following figures provide examples. The first one defines the fuzzy set “LOW” when input values are given from 1 to 5. For instance, if an examiner evaluates a student’s proficiency in a subject with a number between 1 and 5, to what extent does each mark mean ‘LOW’? We know that 1 is absolutely low, therefore we consider the possibility of mark 1 being in the fuzzy set ‘LOW’ to be 1.0 (maximum possibility). We can also agree that mark 5 cannot be considered ‘LOW’ at all, so its possibility of being ‘LOW’ is zero. Marks between these two have varying degrees of membership in the fuzzy set ‘LOW’. For example, if the examiner gives mark 4, we consider the student’s proficiency to be ‘LOW’ only to a degree of 0.25.

The fuzzy set ‘HIGH’ (last plot) is defined in a similar way. What about the middle one though? It’s not a fuzzy set that stands for a brand new concept, but one that stands for a modification of a previously defined concept: the modifier ‘VERY’ applied to the concept ‘LOW’. Noting that ‘VERY’ is an intensifying modifier (one that further strengthens the concept), we square each possibility in ‘LOW’ to get the possibilities in ‘VERY LOW’. Gut feeling says that the membership of mark 4 in the fuzzy set ‘VERY LOW’ should be less than its membership in ‘LOW’, and the numbers reflect that notion.

  • Possibility [4 is VERY LOW] = 0.25 * 0.25 = 0.0625
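The membership values quoted above can be reproduced with a tiny sketch. It assumes a simple linear membership function for ‘LOW’ over marks 1 to 5 (which matches the stated values of 1.0 at mark 1, 0.25 at mark 4 and 0.0 at mark 5); the function names are mine, not from any particular fuzzy logic library.

```python
def low(mark):
    # Linear membership of a mark (1..5) in the fuzzy set LOW:
    # low(1) = 1.0 (absolutely low), low(4) = 0.25, low(5) = 0.0
    return (5 - mark) / 4

def very(membership):
    # Intensifying modifier: squaring pushes membership values down
    return membership ** 2

print(low(4))        # 0.25   -> Possibility [4 is LOW]
print(very(low(4)))  # 0.0625 -> Possibility [4 is VERY LOW]
```

Squaring is the conventional choice for ‘VERY’ because it leaves full membership (1.0) and zero membership untouched while shrinking everything in between.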




It’s not difficult to grasp the idea of fuzzy variables. They are fundamentally measurements or entities that can take fuzzy sets as their values; example fuzzy variables are temperature, a student’s proficiency, a candidate’s experience and so on. We can then combine a fuzzy variable with a fuzzy set to construct a meaningful statement like “temperature is LOW”. These can be termed atomic statements. Formally, an atomic statement is of the form:

  • <fuzzy variable> is <fuzzy set>

Now we take the next step by combining several atomic statements into a compound statement. A compound statement would look something like “Student’s math proficiency is LOW, English proficiency is HIGH and music proficiency is MEDIUM”. These types of statements are useful when making judgments based on a combination of factors. For instance, a judge panel might want to make a final decision on whether to pass a student in an exam by looking at all his subject-level proficiencies. Suppose the panel decides: “If there is a student whose math proficiency is MEDIUM, English proficiency is MORE OR LESS LOW and music proficiency is HIGH, we will pass him”. This is termed a fuzzy rule. More precisely, a fuzzy rule is a compound statement followed by an action. Another rule could be: “If the student’s math proficiency is VERY LOW, English proficiency is HIGH and music proficiency is LOW, we will fail him”.

If the judge panel can compile a bunch of rules of this form, that collection can be considered the policy for evaluating students. In fuzzy logic vocabulary we call it a fuzzy rulebase. It is important to note that a fuzzy rulebase need not be exhaustive (meaning that it does not have to cover all possible combinations of scenarios); it is sufficient to come up with a rulebase that covers the problem domain to a satisfactory level. Once the rulebase is fed to a fuzzy logic system, it is capable of answering questions such as “If the student’s math grade is 3 (on a scale of 1 to 5), English grade is 2 and music grade is 3, should we pass him?”. This is all one needs to understand to use a fuzzy logic library. The inner workings of the theory, i.e. how the answer is really derived from the rulebase, are beyond the scope of a blog post. Also, I think 90% of readers will be happy to learn that the math bullshit ends here.
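To make the rule machinery a little more concrete, here is a minimal sketch of how one such rule could be evaluated numerically. It assumes linear membership functions over marks 1 to 5 and the common Mamdani-style convention of taking min() for ‘and’; a real FLS may use different internals, so treat this purely as an illustration, not as how any particular library works.

```python
# Hypothetical linear fuzzy sets over marks 1..5
def low(x):    return max(0.0, min(1.0, (5 - x) / 4))
def high(x):   return max(0.0, min(1.0, (x - 1) / 4))
def medium(x): return max(0.0, 1 - abs(x - 3) / 2)

# Modifiers: VERY intensifies (square), MORE OR LESS dilutes (square root)
def very(m):         return m ** 2
def more_or_less(m): return m ** 0.5

# Rule: "If math is MEDIUM, English is MORE OR LESS LOW and music is HIGH, pass him."
# Firing strength for the query math=3, English=2, music=3, using min() for 'and':
strength = min(medium(3), more_or_less(low(2)), high(3))
print(strength)  # 0.5 -> the rule fires with moderate strength
```

A full FLS would evaluate every rule in the rulebase this way, aggregate the firing strengths, and turn the aggregate back into a crisp pass/fail decision.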

Applying fuzzy logic to our location detection problem

Let me repeat our problem: we receive location coordinates on an iPad with varying frequencies and tolerance levels. By looking at the tolerance values and the location coordinate frequency, we need to determine whether the device is in high accuracy or low accuracy location detection mode at a given time. We decided to determine the location detection mode every 30 seconds. At each 30-second boundary we use the location coordinate values received within the last 30 seconds to determine the mode; all location-related processing for the next 30 seconds is performed with respect to this newly determined mode. For instance, if we decide that the device is operating in high accuracy mode, we assume it continues in the same mode until we perform the evaluation again after another 30 seconds. For this, we used the following two parameters as inputs.

  • Tolerance values for best two location coordinates (coordinates with lowest tolerance values) within past 30 seconds
  • Number of location coordinate values received within the past 30 seconds (the highest possible value is 30, as we configure the device to receive location coordinates every second; however, when in low accuracy mode, the number of coordinates received within 30 seconds is far fewer than 30).
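Extracting those inputs from a window of readings is straightforward. Here is a sketch under the assumption that each reading arrives as a (timestamp, tolerance-in-meters) pair; the function name and the padding convention for sparse windows are mine.

```python
def mode_inputs(readings, now, window=30.0):
    # Collect tolerances of readings that fall within the past `window` seconds
    recent = sorted(tol for t, tol in readings if now - window <= t <= now)
    count = len(recent)
    # Pad with a huge tolerance if fewer than two readings arrived,
    # so the "best two tolerances" inputs always exist
    best = (recent + [float("inf"), float("inf")])[:2]
    return best[0], best[1], count

# Three readings in the last 30 s, with tolerances 120 m, 10 m and 25 m
readings = [(5.0, 120.0), (12.0, 10.0), (28.0, 25.0)]
print(mode_inputs(readings, now=30.0))  # (10.0, 25.0, 3)
```

The three returned values are exactly the inputs fed to the fuzzy sets described next.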

We defined each of the 3 inputs to take values within a universe with 5 fuzzy sets: VERY LOW (VL), LOW (L), MEDIUM (M), HIGH (H) & VERY HIGH (VH). Then we worked out a bunch of fuzzy rules using these inputs and fuzzy sets. Rules are derived using gut feeling decisions on the domain. Following figure shows a part of the rulebase we constructed.



At run time we determine numerical values of our 3 input parameters. An example input set can be:

  • tolerance1 = 10 meters, tolerance2 = 25 meters, reading count = 20

Using the rulebase, the fuzzy logic system is capable of deriving an answer to the question “Which location detection mode is the device currently operating in?”. Our experimental results were exciting: with the aid of fuzzy logic we arrived at a sensible solution that provides accurate results to a problem that is almost unsolvable using conventional crisp logic. Our app is now on the App Store as the most popular navigation app in the Norwegian market.

Image sources: 1, 2, 3

A Logic that Urinates (Part I)

It is said that in North Korea children are taught that their loving ex-leader Kim Jong Il did not even have to urinate, because he was so pure: no excrement was generated in his body due to his super-human purity. As a software engineer, I think most of our code behaves like Kim Jong Il too. Our code is ultra pure, in that it surpasses every contingency of general human discourse arising from imprecision and vagueness. For instance, the statement start_time = 3 in a computer program means it literally, while in human conversation “The event starts at 3” does not imply rigid punctuality. In a statement like the latter, we humans implicitly agree that the start time can fall within an acceptable time window centered around 3; in any society, a deviation of a few seconds will be accepted. The program, however, would not accept 3.0001 or 2.998.

This behavior of programs generally works. If you are developing accounting software, you probably don’t want to jeopardize your program logic with the vagueness of human thinking. In fact, a lot of practices in software delivery (such as manual testing, unit testing and code quality analysis) exist to cut the crap introduced by humanness and make the software as pure as Kim Jong Il. However, there are cases where a programmer is left with no option but to stretch the boundaries of his thinking a little beyond this psychotic notion of purity and get along with the real world, encountering its inevitable vagueness. Following is an example from a recent programming experience of mine.

Recently we implemented an iPad navigation app for a leading Norwegian GIS (Geographic Information Systems) company. The app is intended to help people going on boat rides by providing specialized maps. In addition to this main purpose it is bundled with a lot of other useful features such as path tracking, geo-fencing, geo-coding, social media sharing, etc.

Not surprisingly, the app needs to determine the user’s current location to enable most of its features. For this, we employed the location services API that ships with iOS. It uses various sources such as GPS, cell tower data and crowd-sourced location data when determining the geo-coordinates of the device’s location. Each of these sources has different implications for accuracy and battery consumption. For instance, GPS is by far the most accurate method, but drains the battery faster. On the other hand, Wi-Fi-only iPads, which amount to a significant fraction of the iPads currently in use, do not have GPS receivers; the only location information accessible to them comes from crowd-sourced location data from Wi-Fi hotspots that agree to share their location. Inevitably these location coordinates are less reliable. One nice thing about the iOS location API is that, along with every location coordinate, it also provides information on how accurate (or inaccurate) the reading can be. This is called the tolerance value. For example, when we see a location coordinate (in latitude-longitude form) of 35.45°, 4.87° with tolerance 20 meters, we know that the user’s actual location can be anywhere inside a circle of radius 20 meters centered at the point (35.45°, 4.87°). In our experiments we found that when GPS is used to determine the location, the tolerance level is as low as 5 meters. With Wi-Fi iPads, however, the best we observed was 65 meters. To make things more complicated, even iPads with GPS receivers can at times drop in accuracy (with tolerance levels as high as several hundred meters). This particularly happens when the device is in the vicinity of objects like huge concrete structures or large trees that effectively block GPS signals.

Need to determine location accuracy mode

Experimentation clearly suggested that there are two disparate modes an iPad can be operating in at a given moment with respect to location detection: high accuracy mode and low accuracy mode. These two modes are characterized by the following behaviors.

High Accuracy Mode (HAM)

  • Tolerance value is low for most location readings
  • Location readings are received at regular intervals (can be as frequent as once a second)

Low Accuracy Mode (LAM)

  • Tolerance value is high for most readings
  • Location readings are received less frequently (usually only a few times a minute)

When in high accuracy mode we can treat received location coordinates as the actual location. In addition, we can happily ignore intermittent low accuracy readings (readings with high tolerance values; these can occur occasionally even in high accuracy mode). In contrast, when in low accuracy mode the programmer has to make every attempt to use all acceptable readings (readings without crazily high tolerance, such as more than 1 km), since only a few location readings are typically received per minute. Corrections may also need to be applied (depending on the purpose), since the accuracy level is low. Because the developer has to apply two kinds of logic depending on the accuracy mode, it’s necessary to determine the mode the device is operating in at a given moment. One should also note that the mode can change with time; for instance, a person moving with a GPS iPad may find the device operating in high accuracy mode most of the time, but switching to low accuracy mode when close to a big building.

Difficulty in drawing the line between two modes

The first (and probably the toughest) challenge is to correctly figure out whether the device is operating in HAM or LAM. It doesn’t take much thinking to see that one can use both the tolerance values and the location reading frequency to determine the mode: if most tolerance values are low and the device is receiving location readings at regular intervals, it should be in high accuracy mode. However, formulating this logic is not as simple as it sounds, because it requires explicitly specifying numeric boundaries between the two modes. For example, let’s say we decide to classify the operating mode as HAM when the best tolerance is less than 20 meters and 15 or more readings are received within a period of 30 seconds. It’s not difficult to illustrate the problem with this approach. Consider the following 3 cases.

Case 1: Best tolerance is 18 meters and 15 readings are received within 30 seconds.
Case 2: Best tolerance is 21 meters and 15 readings are received within 30 seconds.
Case 3: Best tolerance is 18 meters and 13 readings are received within 30 seconds.

Intuition suggests that the device was most probably operating in the same mode in all 3 cases. However, our previous logic, with its stubborn numeric boundaries, results in case 1 being identified as high accuracy mode, while the other two are recognized as low accuracy. Can you see the problem here? The problem is not about using numeric boundaries (we have to do that as long as we program for a Von Neumann computer). Rather, the problem lies in the selection of the numeric boundaries. What justifies the selection of 20 meters as the tolerance boundary? Similarly, how confident are we that the frequency boundary should be 15? A sufficient probe into the problem reveals that it is almost impossible to develop a sound heuristic that determines these boundary values “accurately”.
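The brittleness of such a crisp classifier is easy to demonstrate in code, using the boundary values from the example above:

```python
def is_ham(best_tolerance_m, readings_per_30s):
    """Crisp (non-fuzzy) mode classifier with hard numeric boundaries."""
    return best_tolerance_m < 20 and readings_per_30s >= 15

# Three nearly identical situations get different classifications:
cases = [(18, 15), (21, 15), (18, 13)]
print([is_ham(t, n) for t, n in cases])  # → [True, False, False]
```

Case 1 lands just inside both boundaries, while cases 2 and 3 each miss one boundary by a hair, even though all three describe essentially the same physical situation.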

Where exactly is the problem?

The problem really lies in the discrepancy between the skills of humans and computers. Humans are better at dealing with concepts than with numbers, while computers are better at handling numbers. This is evident in the fact that we could distinguish between the two modes easily when we were talking in terms of concepts (to reiterate our previous statement: ‘If most tolerance values are low and the device is receiving location readings at regular intervals, it should be in high accuracy mode’). The moment we try to express this logic in terms of numbers, we run into chaos. This is a clear case where ‘pure logic’ leaves us in a desperate abyss.

Await Part 2…
Await Part 2 of this article where we explore how fuzzy logic brings in the solution to this problem.


Introducing the WSO2 App Factory

By definition WSO2 App Factory is a multi-tenant, elastic and self-service Enterprise DevOps platform that enables multiple project teams to collaboratively create, run and manage enterprise applications.

Oh! Kind of confusing? Yes; as with most other definitions, only a few will grasp what App Factory means at first glance. In simpler words, WSO2 App Factory is a Platform as a Service (PaaS) that manages enterprise application development from the cradle of the application to the grave.

(Still confusing…? The figure below illustrates the move from traditional on-premise software to cloud-based services. You can see Platform as a Service in the third column.)

PAAS illustration

Unless it is a university assignment or test, every real-world application has to undergo several phases before it is ready to go live. Applications have to be designed, developed and sent to QA for testing. QA then has to test them rigorously before approving them for production. Next comes the bug fixing and stabilization phase. When the software is ready, it gets deployed. Finally, when the application has completed its job, it needs to be retired.

Organizations have to use a number of tools in each of the above phases. For instance, developers may be using SVN for creating code repositories, Maven or Ant for building the projects, JIRA for ticket tracking and various other tools for finding bugs in the application. These tools are independent of each other, which means organizations have to put considerable effort into deploying them. If you are a developer, QA manager, system administrator, DevOps engineer or any other stakeholder involved in application development, there is no doubt that you have endured this pain, and you might be wondering, “Is there one single tool which does the work of all of the above?”. WSO2 App Factory does exactly that. By using App Factory you gain all the support for your application development, all under one roof.

The individual building blocks of the App Factory are illustrated in the diagram below.


Diagram 1 depicts the components of the App Factory. The management portal, which is the main interaction point to the system, is at the center. Source code management, issue trackers and other features are accessible via the portal. When a developer creates an app via the management portal, he is provided with a space in the repository, a space in the build environment, a project in the issue tracker and so on. You clone the repository you are provided into your development machine, then develop the application with your favorite IDE and commit. (WSO2 is planning to roll out a browser-based IDE in the future to make the complete lifecycle run on the cloud.) The application you are developing is continuously built in the cloud using your build tool. If automatic build is enabled, the build process is triggered automatically when you commit. If auto deploy is enabled, the app is deployed to the development cloud automatically after the build. After development is completed, the apps are promoted to the test cloud. This promotion retires the apps from the development cloud and deploys them in the test cloud. The QA department tests them, promotes them to the production or staging cloud if the tests pass, or demotes them back to the development cloud if they fail. The ultimate step is to send the apps to the app store, enabling users to discover them. The most interesting thing is that all of the above tasks can be executed using a single tool via a single management portal.
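The promote/demote flow described above can be sketched as a simple state machine. This is purely an illustration of the described workflow, not App Factory’s actual API; the stage names and functions are assumptions:

```python
# Allowed lifecycle transitions, modeling the walkthrough above.
PROMOTE = {
    "Development": "Testing",  # promotion retires the app from the development cloud
    "Testing": "Production",   # promoted when QA tests pass (or to a staging cloud)
}
DEMOTE = {
    "Testing": "Development",  # demoted back to the development cloud when tests fail
}

def promote(stage):
    """Move an app forward one lifecycle stage, if the transition is allowed."""
    if stage not in PROMOTE:
        raise ValueError(f"cannot promote from {stage}")
    return PROMOTE[stage]

def demote(stage):
    """Move an app back one lifecycle stage, if the transition is allowed."""
    if stage not in DEMOTE:
        raise ValueError(f"cannot demote from {stage}")
    return DEMOTE[stage]
```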


Features of App Factory

    1. Self-provisioning of the workspace and resources such as code repository, issue tracking, build configuration, bug finding tools, etc.
    2. Support for a variety of application types:

○     Web applications
○     PHP
○     JAX-RS
○     JAX-WS
○     Jaggery
○     WSO2 ESB
○     WSO2 BPEL
○     WSO2 Data Services

    3. Gathers the developers, QAs and DevOps of the organization into the application workspace
    4. Automates continuous builds, continuous tests and development activities
    5. One-click solutions for branching and versioning
    6. Deploys applications into the WSO2 rich middleware stack
    7. No need to change your way of doing things:

○     App Factory can be configured to integrate with your existing software development life cycle.
○     Integrate with your existing users via LDAP or Microsoft Active Directory.

WSO2 AppFactory applications integrated

Yes, WSO2 App Factory is customizable. For instance, organizations are not required to use the tools that App Factory supports out of the box; they can plug in a tool of their preference. It is just a matter of integrating another tool. Different organizations have different workflows, and App Factory can be configured to suit each of them.

In summary, WSO2 App Factory is a cloud-enabled DevOps PaaS for the enterprise that manages the entire life cycle of an application. It streamlines application development, giving enterprises a competitive advantage in the cloud.

Enough talking; help yourself by visiting the live App Factory preview. It is free and open source.

This article is just a bird’s-eye view of the WSO2 App Factory. Visit its home page to broaden your knowledge. A good short video about the product is shown below: