TechWire

Category - SW

99X TECHNOLOGY SWEEPS UP TWO AWARDS AT NBQSA 2013

One of the top players in the Sri Lankan ICT industry, 99X Technology picked up two awards at the recently concluded National Best Quality ICT Awards (NBQSA).

The two awards are as follows:

WAG: Web Accessibility Guide – Bronze Award, R&D Category

ALMUR: Merit Award, Tools and Infrastructure Applications

NBQSA Awards

WAG, it seems, is a toolset aimed at making web browsing much easier for the visually handicapped. It assists users with total blindness, low vision and colour blindness, and additionally enables visually impaired users to apply accessibility corrections. It is wonderful to see projects such as WAG being implemented, where technology is used for the betterment of society. Kudos to the project team!

ALMUR is a product that facilitates binary decision making using fuzzy logic on cases expressed in approximate human terms. A typical question asked of ALMUR would be: ‘If the student’s Math grade is high, English grade is low and Geography grade is somewhat high, should we pass the student?’

ALMUR libraries are currently being used by ‘Norge-serien’ and ‘Boating Norway’ iOS apps for fuzzy logic based decision making.

“Research and development is fundamental to our strategy, keeping in line with our aim to constantly innovate. Hence, the award signifies our commitment towards reaching this goal”

– 99X Technology Co-Founder and CEO Mano Sekaram on his company’s achievements at the NBQSA Awards.

99x project team

Have you ever read the iTunes License Agreement (EULA)?

If you’ve ever downloaded an app via iTunes, Apple has probably insisted that you first agree to its End User License Agreement, or EULA. More evidence that you didn’t read the iTunes EULA? You probably didn’t notice the last line of paragraph (g). It reads: “You also agree that you will not use these products for any purposes prohibited by United States law, including, without limitation, the development, design, manufacture or production of nuclear, missile, or chemical or biological weapons.” So please, do not create weapons of mass destruction in iTunes. You wouldn’t want to lose access to your playlist on your quest for world domination.

Don’t give in to FOSS phobia

In the world of business, where there is risk, there is always an opportunity. Opportunists take calculated risks to gain higher profit where others are too faint-hearted even to try. This article is about such an opportunity.

FOSS Phobia

If you work for a small to medium scale (maybe even large) enterprise software vendor, you have probably experienced the disease known as FOSS phobia. This condition causes irrational behavior on the part of management when the company’s software requires third party components to perform effectively and those components are available both as commercial closed source offerings and as free and open source software (FOSS) projects.

FOSS-phobic management fails to even consider this choice rationally and decides in favor of commercial closed source offerings. This is because the primary pathogen in this case, the marketing guy from the closed source component provider, is offering a support contract that a free and open source software project cannot match. He also plants the idea of needing a support contract deep into the manager’s mind, so that the manager will decide against free and open source software at any given opportunity, effectively becoming a victim of FOSS phobia. Common sense criteria such as licensing, performance, quality of software and stability won’t even come into consideration after that.

The idea of this article is not to promote the use of free and open source software at every such opportunity, but rather to emphasize the importance of conducting a proper risk vs. gain analysis of either option on a case by case basis.


Rationale

First, let’s consider the primary selling point from that marketing guy for their closed source software component: the expensive but comprehensive support contract. Why would you require support in the first place? Firstly, to gain initial know-how on configuration, setup and tuning of the component. Secondly, support will be required to diagnose and fix a production fault or malfunction in the component. You don’t have to think too hard to see the huge incentive the third party component provider has to make his software seem as complex and hard to learn as possible. This is particularly true of markets where there are only a few dominant vendors (less competition) for the same component. Good examples of such vendors can be found in the enterprise database and application server markets.

Naturally, this abuse of power is one of the reasons why the FOSS movement came into being in the first place. FOSS projects do not have an agenda to sell you software, but they do care about gaining popularity among users so that the project can thrive. Therefore you can bet that the more popular and successful a FOSS project is, the easier it will be to learn, configure and adapt to your own requirements, and the fewer issues you’ll face in production. You’ll also observe that the more popular FOSS components have a great supportive community of followers, as well as companies willing to provide support for them if needed. But I stress again that this alone is not enough to make a general case for or against using FOSS components.

So how would you go about evaluating a commercial software component vs. a FOSS component to be used with your company’s own software? As I mentioned earlier this would have to be done on a case by case basis. Following are some general rules of thumb for evaluation of FOSS components to be used with your software.

  1. Make sure the license is not infectious – Some FOSS licenses (often called copyleft licenses) oblige you to open source your own code once you integrate the FOSS component into it. Popular non-infectious FOSS licenses include the Apache License, the Eclipse Public License and the MIT License.
  2. Check if the software component satisfies your requirements (functionality, performance etc.).
  3. Make sure there’s enough documentation online, even for the core parts of the code that you might never touch in the beginning.
  4. Get an understanding of how active their forums are, in terms of the number of questions asked/answered as well as how active the core developers are in the conversations. The higher the activity, the better your chances of finding answers to common problems as well as problems specific to you.
  5. Check if there’s commercial support available if you were to require some assistance in the future.
  6. If multiple FOSS components are available, compare/contrast between them using above criteria to select the better one.
  7. Finally, compare/contrast with the available commercial closed source alternatives, if you can afford to. The author’s personal experience is that FOSS components that pass the above rules of thumb beat commercial alternatives by a mile most of the time, even without considering the cost savings.

Imagine you made the choice to go with the FOSS component even though there is a risk that you yourself might have to provide support for it in the future, without help from the original authors. You might even be using that component in one of the most mission-critical parts of your software. But this might result in some huge opportunities for your company to profit from in the future.

Opportunity

The first opportunity presents itself when your company’s software developers become familiar with that external FOSS component. This can result in better tuning of both your software and the external component to suit your needs, making your software perform better. It can also enable you to provide better support to your clients, and you won’t be constrained by an external party from making changes to the component as business requirements change. A good example of this can be found in how Google’s Android operating system is used by different software and hardware vendors. Some, like Amazon, have even managed to fork and customize it according to their needs in the Kindle Fire product, revealing a great opportunity for profit.

Another way you stand to gain from your risky investment is when your company requires some vertical integration or diversification. With your developers becoming experts on the FOSS components you use, instead of treating them as black boxes, you’ll be in a good position to sell support for those components as well, profiting in turn from other people’s FOSS phobia. You may even be able to build a complete product on top of them, extracting a profit from that too. If you look closely you can see this happening all over the enterprise software field right now: RedHat with JBoss and OpenStack, MuleSoft with the Tcat application server, Pivotal with Hadoop, Liferay with their Portal solution, and many more.

Conclusion

I hope this article has convinced you of the potential gain of adapting and integrating FOSS components into your own software. Though you take a bit of risk in using some components without support contracts, it can still be better in the long run than spending money on support contracts that may end up being a burden to you as well as your clients.

More importantly, if you do find a FOSS project useful in some way, please make sure you contribute back to it in whichever way you can.

(Image sources: 1, 2, 3, 4)

A Logic that Urinates (Part II)

Note from the editor: This is the second of a two-part article series on how real world problems can be solved using fuzzy logic. Part I of this article can be found here.

Fuzzy logic brings in a solution

The brilliant intellectual Lotfi Zadeh introduced fuzzy logic in 1973, and it constitutes a beautiful solution to problems like the one articulated in Part I. Fuzzy logic, at its core, builds a sophisticated mathematical framework that can translate semantics expressed in vague human terms into crisp numeric form. This enables us humans to express our knowledge of a particular domain in a language familiar to us, yet still have that knowledge solve concrete numeric problems. For example, our knowledge can be expressed as the following rules, used to determine whether a candidate should be chosen for a job depending on his experience, education and salary expectation levels.

  • If Experience is High, Education level is Medium and Salary expectation is Low, then hire the guy.
  • If Experience is Medium, Education level is High and Salary expectation is High then do not hire the guy.
  • If Experience is High, Education level is Somewhat High and Salary Expectation is Very High then do not hire the guy.

Rules like the above can be formed using our knowledge, experience, gut feeling, etc. about the domain. The collection of rules is typically termed a fuzzy rulebase. We feed the rulebase into a Fuzzy Logic System (FLS), which aggregates and stores this knowledge. Most notably, the FLS stores the knowledge in a numerical form that can be processed by a computer. After that, the FLS is capable of answering questions like the following.

If the Experience level is 4 (out of 5), Education level is 3 and Salary Expectation is 4, should we hire the guy?

Needless to say, the richer the rulebase, the more accurate the outcome.

Lotfi Zadeh

A Little further insight into Fuzzy Logic

How does Zadeh’s new logic perform its wonders under the hood? If fuzzy logic is a complex and amazing structure, the magic brick it is built from is the concept termed possibility. Zadeh’s genius was to identify that, in human discourse, likeliness does not refer to likelihood, but to membership. Let me exercise my teaching guts to describe this in a more digestible form. When we humans are confronted with a question like “How hot is 25 °C?” we do not think “What is the likelihood that 25 °C can be considered hot?” (Do we?). We rather think “Up to what degree does 25 °C belong to the notion ‘hot’?”. To put the same thing in different terms, it’s about “the degree of belongingness to something”, not “the probability of being something”. You might now be thinking of giving up the idea of becoming Zadeh’s follower, but I suggest you hold on and give it a second thought.

I believe that Zadeh touches the real nerve of the problem when he makes this distinction between likelihood and membership. After understanding it by heart, it’s an easy ride into the rest of the theory. As the next step we can introduce a term for concepts like ‘hot’, ‘beautiful’ or ‘high’. Fuzzy logic calls them fuzzy sets. A fuzzy set is an association between a number and a possibility value (a possibility is a number between 0 and 1, just like a probability, but radically deviating from it conceptually).


The following figures provide examples. The first one defines the fuzzy set ‘LOW’ when input values are given from 1 to 5. For instance, if an examiner evaluates a student’s proficiency in a subject with a number between 1 and 5, how much does each mark mean ‘LOW’? We know that 1 is absolutely low. Therefore we consider the possibility of mark 1 being in the fuzzy set ‘LOW’ to be 1.0 (maximum possibility). We can also agree that mark 5 cannot be considered ‘LOW’ at all, so its possibility of being ‘LOW’ is zero. Marks between these two have varying degrees of membership in the fuzzy set ‘LOW’. For example, if the examiner gives mark 4, we consider the student’s proficiency to be ‘LOW’ only to a degree of 0.25.

The fuzzy set ‘HIGH’ (last plot) is defined in a similar way. What about the middle one, though? It’s not a fuzzy set that stands for a brand new concept, but one that stands for a modification of a previously defined concept: the modifier ‘VERY’ applied to the concept ‘LOW’. Noting that ‘VERY’ is an intensifying modifier (one that further strengthens the concept), we square each possibility in ‘LOW’ to get the possibilities in ‘VERY LOW’. Gut feeling says that the membership of mark 4 in the fuzzy set ‘VERY LOW’ should be less than its membership in ‘LOW’, and the numbers reflect that notion.

  • Possibility [4 is VERY LOW] = 0.25 * 0.25 = 0.0625

(Figures: membership graphs for the fuzzy sets LOW, VERY LOW and HIGH)
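The curve described above (possibility 1.0 at mark 1, falling linearly to 0.0 at mark 5, so that mark 4 scores 0.25) and the squaring behavior of the ‘VERY’ modifier can be sketched in a few lines of C++. This is only an illustration with names of my own choosing, not code from any particular fuzzy logic library:

```cpp
#include <algorithm>
#include <cassert>

// Possibility that a mark (1..5) belongs to the fuzzy set LOW:
// a linear descent from 1.0 at mark 1 down to 0.0 at mark 5.
double possibilityLow(double mark) {
    return std::clamp((5.0 - mark) / 4.0, 0.0, 1.0);
}

// The intensifying modifier VERY squares the possibility value.
double possibilityVeryLow(double mark) {
    double p = possibilityLow(mark);
    return p * p;
}
```

Calling possibilityLow(4.0) gives 0.25 and possibilityVeryLow(4.0) gives 0.0625, matching the numbers in the figures above.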

It’s not difficult to grasp the idea of fuzzy variables. They are fundamentally measurements or entities that can take fuzzy sets as their values. Example fuzzy variables can be temperature, student’s proficiency, candidate’s experience and so on. After that we can combine a fuzzy variable with a fuzzy set to construct a meaningful statement like “temperature is LOW”. These can be termed atomic statements. To express it formally, an atomic statement is of the form:

  • <fuzzy variable> is <fuzzy set>

Now we walk the next step by combining several atomic statements into a compound statement. A compound statement would look something like “Student’s math proficiency is LOW, English proficiency is HIGH and music proficiency is MEDIUM”. These types of statements are useful when making judgments based on a combination of factors. For instance, a judge panel might want to make a final decision on whether a student passes the exam by looking at all his subject level proficiencies. Suppose that the judge panel decides this: “If there is a student whose math proficiency is MEDIUM, English proficiency is MORE OR LESS LOW and music proficiency is HIGH, we will pass him”. This is termed a fuzzy rule. More precisely, a fuzzy rule is a compound statement followed by an action. Another rule can be: “If the student’s math proficiency is VERY LOW, English proficiency is HIGH and music proficiency is LOW, we will fail him”.

If the judge panel can compile a bunch of rules of this form, it can be considered the policy for evaluating students. In fuzzy logic vocabulary we call it a fuzzy rulebase. It is important to note that a fuzzy rulebase need not be exhaustive (meaning that it does not have to cover all possible combinations of scenarios); it is sufficient to come up with a rulebase that covers the problem domain to a satisfactory level. Once the rulebase is fed to a fuzzy logic system, it is capable of answering questions such as “If the student’s math grade is 3 (on a scale of 1 to 5), English grade is 2 and music grade is 3, should we pass him?”. This is all one needs to understand in order to use a fuzzy logic library. The inner workings of the theory, and how it really derives the answer from the rulebase, are beyond the scope of a blog post. Also, I think 90% of readers will be happy to learn that the math ends here.
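To give a feel for how a rule fires, one common interpretation (the Mamdani-style minimum, which this article does not commit to, so treat it as an assumption of mine) combines a rule’s atomic statements with fuzzy AND by taking the minimum of their memberships:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Firing strength of one fuzzy rule under a min-based fuzzy AND.
// Each entry is the membership degree of one atomic statement,
// e.g. {math is MEDIUM -> 0.5, English is MORE OR LESS LOW -> 0.75,
//       music is HIGH -> 0.6}.
double ruleStrength(const std::vector<double>& memberships) {
    return *std::min_element(memberships.begin(), memberships.end());
}
```

With those hypothetical memberships the rule fires with strength 0.5; a fuzzy logic system then aggregates the strengths of all matching rules to arrive at the final pass/fail answer.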

Application of fuzzy logic to our location detection problem

Let me repeat our problem: we receive location coordinates on the iPad with varying frequencies and tolerance levels. By looking at the tolerance values and the location coordinate frequency, we need to determine whether the device is in high accuracy or low accuracy location detection mode at a given time. We decided to determine the location detection mode every 30 seconds. At each 30 second boundary, we used the location coordinate values received within the last 30 seconds to determine the mode. All location related processing for the next 30 seconds is performed with respect to this newly determined mode. For instance, if we decide that the device is operating in high accuracy location detection mode, we assume that the device operates in the same mode until we perform the evaluation again after another 30 seconds. For this, we used the following two parameters as inputs to the problem.

  • Tolerance values of the best two location coordinates (the coordinates with the lowest tolerance values) within the past 30 seconds
  • Number of location coordinate values received within the past 30 seconds (the highest possible value is 30, as we configure the device to receive location coordinates every second; however, when in low accuracy mode, the number of coordinates received within 30 seconds is far less than 30).
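Deriving these inputs from a 30 second window of readings is straightforward; the sketch below (my own illustration, not the app’s actual code) sorts the window’s tolerance values and picks the two best, together with the reading count:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// The three inputs extracted from one 30 second window of readings.
struct WindowInputs {
    double tolerance1;  // best (lowest) tolerance, in meters
    double tolerance2;  // second best tolerance, in meters
    int readingCount;   // readings received in the window (at most 30)
};

// Summarize the tolerance values (in meters) of all readings
// received within the last 30 seconds.
WindowInputs summarize(std::vector<double> tolerances) {
    std::sort(tolerances.begin(), tolerances.end());
    return { tolerances.at(0), tolerances.at(1),
             static_cast<int>(tolerances.size()) };
}
```

These three numbers are then fuzzified and fed to the rulebase described below.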

We defined each of the 3 inputs to take values within a universe of 5 fuzzy sets: VERY LOW (VL), LOW (L), MEDIUM (M), HIGH (H) and VERY HIGH (VH). Then we worked out a set of fuzzy rules using these inputs and fuzzy sets. The rules were derived from gut feeling decisions about the domain. The following figure shows a part of the rulebase we constructed.

(Figure: a part of the fuzzy rulebase)


At run time we determine numerical values of our 3 input parameters. An example input set can be:

  • tolerance1 = 10 meters, tolerance2 = 25 meters, reading count = 20

Using the rulebase, the fuzzy logic system is capable of deriving an answer to the question “Which location detection mode is the device currently operating in?”. Our experimental results were exciting. With the aid of fuzzy logic we arrived at a sensible solution that provides accurate results to a problem that is almost unsolvable using conventional crisp logic. Our app is now in the App Store as the most popular navigation app in the Norwegian market.

Image sources: 1, 2, 3

A Logic that Urinates (Part I)

It is said that in North Korea, children are taught that their loving ex-leader Kim Jong Il did not even have to urinate, because he was so pure: no excrement was generated in his body due to his super-human purity. As a software engineer, I think that most of our code also behaves like Kim Jong Il. Our code is ultra pure, surpassing every contingency in general human discourse that occurs due to imprecision and vagueness. For instance, the statement start_time = 3 in a computer program means exactly that, while in human conversation “The event starts at 3” does not imply rigid exactness. In a statement like the latter, we humans implicitly agree that the start time can fall within an acceptable time window centered around 3. In any society, a deviation of a few seconds will be accepted. However, the program would not accept 3.0001 or 2.998.

This behavior generally works well for programs. If you are developing accounting software, you probably don’t want to jeopardize your program logic with the vagueness of human thinking. In fact, a lot of practices in software delivery (such as manual testing, unit testing and code quality analysis) exist to cut out the imprecision introduced by humanness and make the software as pure as Kim Jong Il. However, there are cases where a programmer is left with no option but to stretch the boundaries of his thinking a little further than this psychotic notion of purity, and to get along with the real world and its inevitable vagueness. Following is an example from a recent programming experience of mine.

Recently we implemented an iPad navigation app for a leading Norwegian GIS (Geographic Information Systems) company. The app is intended to help people going on boat trips by providing specialized maps. In addition to this main purpose, it is bundled with a lot of other useful features such as path tracking, geo-fencing, geo-coding and social media sharing.

Not surprisingly, the app needs to determine the user’s current location to enable most of its features. For this, we employed the location services API that ships with iOS. It uses various sources such as GPS, cell tower data and crowd-sourced location data when determining the geo-coordinates of the device location. Each of these location sources has different implications for accuracy and battery consumption. For instance, GPS is by far the most accurate method, but drains the battery faster. On the other hand, wifi-only iPads, which amount to a significant fraction of the iPads currently in use, do not have GPS receivers. The only location information accessible to them comes from crowd-sourced location data from wifi hotspots that agree to share their location. Inevitably, these location coordinates are less reliable. One nice thing about the iOS location API is that, along with every location coordinate, it also provides information on how accurate (or how inaccurate) the reading can be. This is called the tolerance value. For example, when we see a location coordinate (in latitude-longitude form) of 35.45°, 4.87° with tolerance 20 meters, we know that the user’s actual location can be anywhere inside a circle of radius 20 meters centered at the point (35.45°, 4.87°). Through our experiments we figured out that when GPS is used to determine the location, the tolerance level is as low as 5 meters. However, with wifi iPads, the best we observed was 65 meters. To make things more complicated, even iPads with GPS receivers can at times drop in accuracy (with tolerance levels as high as several hundred meters). This particularly happens when the device is in the vicinity of objects like huge concrete structures or large trees that effectively block GPS signals.

Need to determine location accuracy mode

Experimentation clearly suggested that there are two disparate modes an iPad can be operating in at a given moment with respect to location detection: high accuracy mode and low accuracy mode. These two modes are characterized by the following behaviors.

High Accuracy Mode (HAM):
  • Tolerance value is low for most location readings
  • Location readings are received at regular intervals (can be as frequent as once a second)

Low Accuracy Mode (LAM):
  • Tolerance value is high for most readings
  • Location readings are received less frequently (usually only a few times a minute)

When in high accuracy mode, we can treat the received location coordinates as the actual location. In addition, we can happily ignore intermittent low accuracy readings (readings with high tolerance values, which can occur occasionally even in high accuracy mode). In contrast, when in low accuracy mode, the programmer has to make every attempt to use all acceptable readings (readings without absurdly high tolerance values, such as more than 1 km), since only a few location readings are typically received in a minute. Corrections may also need to be applied (depending on the purpose), since the accuracy level is low. Because the developer has to apply two kinds of logic depending on the accuracy mode, it is necessary to determine the mode the device is operating in at a given moment. One should also note that the mode can change with time; for instance, when a person is moving with a GPS iPad, the device can operate in high accuracy mode mostly, but can switch to low accuracy mode when it is close to a big building.

Difficulty in drawing the line between two modes

The first (and probably the toughest) challenge is to correctly figure out whether the device is operating in HAM or LAM. It doesn’t take much thinking to identify that one can use both the tolerance values and the location reading frequency to determine the mode. If most tolerance values are low and the device is receiving location readings at regular intervals, it should be in high accuracy mode. However, formulating this logic is not as simple as it sounds, because it requires explicit numeric boundaries between the two modes. For example, let’s say that we decide to conclude the operating mode is HAM when the best tolerance is less than 20 meters and 15 readings or more are received within a period of 30 seconds. It’s not difficult to illustrate the problem with this approach. Consider the following 3 cases.

Case 1: Best tolerance is 18 meters and 15 readings are received within 30 seconds.
Case 2: Best tolerance is 21 meters and 15 readings are received within 30 seconds.
Case 3: Best tolerance is 18 meters and 13 readings are received within 30 seconds.

Intuition suggests that the device was most probably operating in the same mode in all 3 cases. However, our previous logic with its stubborn numeric boundaries results in case 1 being identified as high accuracy mode, while the other two are recognized as low accuracy. Can you see the problem here? The problem is not the use of numeric boundaries (we have to do that as long as we program for a von Neumann computer); the problem lies in the selection of the numeric boundary. What justifies selecting 20 meters as the tolerance boundary? Similarly, how confident are we that the frequency boundary should be 15? A sufficient probe into the problem reveals that it is almost impossible to develop a sound heuristic that determines these boundary values “accurately”.
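Writing the crisp rule down makes the brittleness obvious. The sketch below simply encodes the 20 meter / 15 reading boundaries from the text, and the three near-identical cases straddle it:

```cpp
#include <cassert>

// Crisp mode decision with hard boundaries: the device is deemed
// to be in HAM iff the best tolerance is under 20 meters AND at
// least 15 readings arrived in the last 30 seconds.
bool isHighAccuracyMode(double bestToleranceMeters, int readingCount) {
    return bestToleranceMeters < 20.0 && readingCount >= 15;
}
```

Case 1 (18 m, 15 readings) is classified as HAM, while case 2 (21 m, 15) and case 3 (18 m, 13) flip to LAM despite being practically indistinguishable from case 1.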

Where exactly is the problem?

The problem really lies in the discrepancy between the skills of humans and computers. Humans are better at dealing with concepts than with numbers, while computers are better at handling numbers. This is evident in that we could distinguish between the two modes easily when talking in terms of concepts (to reiterate our previous statement: ‘If most tolerance values are low and the device is receiving location readings at regular intervals, it should be in high accuracy mode’). The moment we try to put this logic in terms of numbers, we run into chaos. This is a clear case where ‘pure logic’ leaves us in a desperate abyss.

Await Part 2…
Await Part 2 of this article where we explore how fuzzy logic brings in the solution to this problem.

Image sources: 1, 2, 3, 4

Introducing the WSO2 App Factory

By definition WSO2 App Factory is a multi-tenant, elastic and self-service Enterprise DevOps platform that enables multiple project teams to collaboratively create, run and manage enterprise applications.

Oh, kind of confusing? Yes; as with most definitions, only a few will grasp what App Factory means at first glance. In simpler words, WSO2 App Factory is a Platform as a Service (PaaS) which manages enterprise application development from the cradle of the application to the grave.

(Still confusing…? Figure below illustrates the move from the traditional on-premise software to cloud based services. You can see the Platform as a service in the third column.)

PAAS illustration

Unless it is a university assignment or test, every real world application has to undergo several phases before it is ready to go live. Applications have to be designed, developed and sent to QA for testing. QA then has to test them rigorously before approving them for production. Then comes the bug fixing and stabilization phase. When the software is ready, it gets deployed. Finally, when the application has completed its job, it needs to be retired.

Organizations have to use a number of tools in each of the above phases. For instance, developers may be using SVN for code repositories, Maven or Ant for building the projects, JIRA for ticket tracking and various other tools for finding bugs in the application. The above tools are independent of each other, which means organizations have to put considerable effort into deploying them. If you are a developer, QA manager, system administrator, DevOps engineer or any other stakeholder involved in application development, there is no doubt that you have endured the pain of the above, and you might be wondering: “Is there one single tool which does the work of all of the above tools?”. WSO2 App Factory does exactly that. By using App Factory, you gain all the support for your application development under one roof.

The individual building blocks of App Factory are illustrated in the diagram below.

(Diagram 1: WSO2 App Factory topology)

Diagram 1 depicts the components of App Factory. The management portal, which is the main interaction point with the system, is at the center. Source code management, issue trackers and other features are accessible via the portal. When a developer creates an app via the management portal, he is provided with space in the repository, space in the build environment, a project in the issue tracker and so on. You clone the repository you are provided onto your development machine, then develop the application with your favorite IDE and commit. WSO2 is planning to roll out a browser based IDE in the future to make the complete lifecycle run on the cloud. The application you are developing is continuously built in the cloud using your build tool. If automatic build is enabled, the build process is triggered automatically when you commit; if auto deploy is enabled, the app is deployed to the development cloud automatically after the build. After development is completed, the apps are promoted to the test cloud. This promotion retires the apps from the development cloud and deploys them in the test cloud. The QA department tests them, promotes them to the production or staging cloud if the tests pass, or demotes them back to the development cloud if they fail. The ultimate step is to send the apps to the app store, enabling users to discover them. The most interesting thing is that all the above tasks can be executed via a single management portal.

(Diagram: App Factory application lifecycle)

Features of App factory

  1. Self-provisioning of the workspace and resources such as code repository, issue tracking, build configuration, bug finding tools, etc.
  2. Support for a variety of application types:
     ○ Web applications
     ○ PHP
     ○ JAX-RS
     ○ JAX-WS
     ○ Jaggery
     ○ WSO2 ESB
     ○ WSO2 BPEL
     ○ WSO2 Data Services
  3. Gathers the organization’s developers, QAs and DevOps into the application workspace
  4. Automates continuous builds, continuous tests and development activities
  5. One-click solutions for branching and versioning
  6. Deploys applications into the WSO2 rich middleware stack
  7. No need to change your way of doing things:
     ○ App Factory can be configured to integrate with your existing software development life cycle.
     ○ Integrate with your existing users via LDAP or Microsoft Active Directory.

WSO2 AppFactory applications integrated

Yes, WSO2 App Factory is customizable. For instance, organizations are not required to use the tools App Factory supports out of the box; they can plug in a tool of their preference, since it is simply a matter of integrating another tool. Different organizations have different workflows, and App Factory can be configured to suit each of them.

In summary, WSO2 App Factory is a cloud-enabled DevOps PaaS for the enterprise that manages the entire life cycle of an application. It accelerates application development, giving enterprises a competitive advantage in the cloud.

Enough talking; help yourself by visiting the live App Factory preview. It is free and open source.

This article is just a bird's-eye view of WSO2 App Factory. Visit its home page to learn more. A good short video about the product is shown below:

http://www.youtube.com/watch?v=ljtR37__jFY

The perfect swap in C++

So you are coding this cool app and you need to swap two variables. How does a good programmer do that in C++? The STL (Standard Template Library) provides the std::swap function, which does exactly what we want.

[code language=”cpp”]int a = 10;
int b = 12;
std::swap(a, b);[/code]

That’s easy. But hey, why don’t we go ahead and see what std::swap actually does behind the scenes? A grep through the Apache STL implementation gives us:

[code language=”cpp”]template <class _TypeT>
inline void swap (_TypeT& __a, _TypeT& __b)
{
    _TypeT __tmp = __a;
    __a = __b;
    __b = __tmp;
}[/code]

Woah, all the underscores! But don’t panic just yet. All it does is the grade-school swapping:

[code language=”cpp”]T tmp = a;
a = b;
b = tmp;[/code]

Hmm. That is perhaps the most straightforward swap implementation. But how does it perform? Look again.

[code language=”cpp”]T tmp = a; // a copy of ‘a’ is created
a = b;  // a copy of ‘b’ is created
b = tmp;  // a copy of ‘tmp’ is created[/code]

That’s a lot of copies for a simple function! What if we could just ‘swap’ the two values without copying?

We google around a bit and find out that the above std::swap implementation is actually the old way of doing things. The C++11 implementations do this differently. So we check the C++11 include files.

[code language=”cpp”]template<typename _Tp>
inline void
swap(_Tp& __a, _Tp& __b)
{
    _Tp __tmp = _GLIBCXX_MOVE(__a);
    __a = _GLIBCXX_MOVE(__b);
    __b = _GLIBCXX_MOVE(__tmp);
}[/code]

That’s more confusing than the previous one. Again, don’t panic; we can simplify things. _GLIBCXX_MOVE is defined to be std::move. Let’s just call it ‘move’. So the above function is roughly equivalent to:

[code language=”cpp”]T tmp = move(a);
a = move(b);
b = move(tmp);[/code]

Now we are scratching our chins. At first glance, the implementation looks much like the grade-school swap. And then there’s this move-thingy. Looking back, we remember that the elements were ‘copied’ in the grade-school algorithm. Here, instead, it looks like the variables are ‘moved’.

And we are right! In the first line, tmp takes over the value of a, while the variable a is (temporarily) invalidated. No copying is done: the resources previously owned by a now belong to tmp. In the next line, a takes over the value of b, while b is invalidated. Finally, b takes over the value of tmp, and tmp itself is invalidated, which we don’t need again anyway. And the result? The two values are swapped without any copy operations! (For built-in types such as int, a move is effectively still a copy; the savings show up for resource-owning types such as strings and vectors, where only internal pointers change hands.)

How does this moving really work? C++11 introduces so-called “rvalue references”. The ‘move’ function casts its argument to an rvalue reference without triggering a copy construction.

[code language=”cpp”]T &a = x; // normal (lvalue) reference
T &&b = y; // rvalue reference [/code]

A full description will not fit in this post, but you can go through this nice little introduction on rvalue references. You might also want to refresh your memory about lvalues and rvalues.

And let’s call it a day and meet again with another little C++ adventure.

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2027.html

How can you become a successful software engineer capable of conquering the world?

Despite the sluggishness of the global economy over the past two or three years, the United States Bureau of Labor Statistics projects that employment in the software engineering profession will grow by 30% during the current decade (2011-2020) [1]. This is more than double the expected growth rate for all occupations in that country (14%). Furthermore, a survey published by the Wall Street Journal ranked software engineering as the best of 200 professions in the United States [2]. It has also been reported that the market for computer software systems and services will continue to grow even under the current slowdown in the global economy. These facts confirm that demand for software engineering as a profession has grown steadily over roughly the past two decades and will continue to grow. The aim of this article is to provide some guidance on studying the discipline properly and on the skills needed to succeed as a professional software engineer.

Software engineering is one of the many professions within the information technology (IT) field. Among the engineering professions spread across sub-fields such as hardware, networking, databases and systems engineering, software engineering is one specialized profession. Each of these sub-fields has already accumulated an enormous body of knowledge. Moreover, because the engineering and technology disciplines within IT evolve at tremendous speed, new knowledge and methods arrive, and existing methods fall out of use, very quickly. It is therefore clear that no single person can master all of these fields. There is a certain body of knowledge common to all of them, but going deep into any one subject requires specialized knowledge and skills limited to that subject. It should also be noted that becoming an expert professional in one field does not require proficiency in every field. If you hope to become an IT professional, you should be aware of the various sub-fields described here, and at some point in your professional education (for example, by the time an IT undergraduate begins the final year of study) you should choose one of these specialized fields for yourself; that, it can be said, is the first piece of career guidance to be given.

Of the various fields mentioned above, let us now turn our attention to the specialized field of software engineering. First, let us consider what knowledge and understanding of hardware a software engineer needs.

Software Engineering

As shown in the figure, a computer has two main parts: hardware and software. If a piece of software is a building, then the combination of components such as the CPU chip, memory and hard disk can be regarded as the platform on which that building stands. To become a successful software engineer, you do not need the knowledge required to design and construct hardware; that knowledge is needed to become a hardware engineer. (It should be specifically noted that assembling computer components or installing a new memory module is not hardware engineering; those are simple maintenance tasks.) What a good software engineer needs is a clear understanding of the services and infrastructure that hardware provides for the correct and fast operation of software. Examples include the memory hierarchy formed by the speed and size of the memory system, and the read/write speeds at which running software accesses data at the different levels of that hierarchy. It is also important to be aware of the technologies used to store data in different memory chips and the cost of storage at each level. Awareness of cost makes clear the risk of relying solely on a powerful CPU chip or a large amount of RAM for the fast and smooth operation of the software you build. Someone who produces poor software and then tells clients to buy more expensive hardware to improve its performance cannot be considered a good software engineer. Examples similar to this one about computer memory apply to other hardware as well. The bottom of the diagram shows the hardware subsystems most important to the operation of software. There is no room in this article to describe each of them in detail, but you should now have some sense of the hardware knowledge a software engineer needs.

Software and Hardware

The software that runs on the foundation provided by the hardware can itself be separated into several layers. These are classified into two main groups: application software and system software. A software engineer may work on layers in both groups, but jobs in this field usually confine themselves to one of the two. These various layers can be regarded as sub-specializations of the software discipline.

What the user of a computer ultimately needs is application software that performs various tasks. It is not visible to the user that the application runs on an operating system and further specialized software frameworks, which in turn run on hardware built from silicon circuits and magnetic components through which electric currents flow. There is little difference between an ordinary computer user and a driver who gets into a modern car, presses a button to start it, engages the automatic gear and drives off with no understanding of the thousand and one things happening inside the vehicle. Nevertheless, for a software developer to become an expert in the profession, a clear and deep understanding of these various intermediate layers is essential. This does not mean you must build all software yourself from the ground up on every occasion. There is nothing wrong with building application software using infrastructure software and frameworks that others have built before; indeed, doing so is essential for productive software development. What we must not do is develop software with no understanding of those lower layers, merely skimming the surface. The mark of a good software engineer is using such tools with the ability to look under the hood: when the software you built misbehaves, being able to peer down to the lower levels, find out why, and fix it. Otherwise, merely producing an application with the help of a framework written on the services of a virtual machine running on some operating system should not lead you to believe you have done software engineering. In truth, that is a mere programming job.

SW_languages

In addition to accurate knowledge of the various layers shown in the diagram, sound theoretical and practical knowledge of the subjects and tools relevant to software development is essential. Accurate knowledge of subjects such as Computer Architecture, Operating Systems, Data Structures and Algorithms, Object Oriented Analysis and Design, Compiler Theory, Programming Theory and Database Design is a permanent and prominent part of a good software engineer's knowledge base. Practical knowledge of tools such as compilers, debuggers and test tools (which are themselves further specialized software), and an understanding of the software development life cycle, are also very important.

Considering these facts, it is regrettable that we see advertisements today misleading students with claims that they can become software engineers earning hundreds of thousands a month by following various short computer courses. Students and parents should take care not to be deceived by such merchants, and in our view the authorities should subject such courses to some form of standardization and regulation.

It must also be said that idly wasting valuable time loitering on social networks such as Facebook and Twitter, or playing computer games, is sending the current generation of students toward ruin. Knowing how to send email, browse the Internet or fumble through a software package such as MS Office is not a qualification for becoming a high-level IT professional. That knowledge might get someone an entry-level clerical job such as Data Entry Operator. In any field there are jobs at higher and lower pay scales depending on one's education and training, and the same holds in IT. These jobs span a wide range, from highly paid professions such as CPU Chip Designer and Software Architect down to entry-level positions such as Computer Assembler and Programmer.

SW_Programmer

In summary, the main ingredients of a successful software engineer can be stated as: a clear understanding of the services that hardware provides to software, an understanding of the various software layers and the interactions between them, and sound theoretical and practical knowledge of the subjects and tools that form the core of the profession. If you hope to become a skilled professional in the software field, you should pay close attention to these matters. It has already been proven that our country can produce world-class, successful software engineers. Our final wish is that this article provides some small helping hand toward furthering that work.

 

Ishan De Silva

B.Sc. Engineering (Hons)

The author holds a first-class honours degree in Computer Engineering from the University of Moratuwa. He works at a leading software company in Sri Lanka and has about ten years of experience at various levels of the software development process.

 

References:

[1] http://www.bls.gov/ooh/computer-and-information-technology/software-developers.htm#tab-6

[2] http://online.wsj.com/article/SB10001424052702303772904577336230132805276.html

Title image credits: timeunion.com

Auto-matic typing in C++

C++ is a statically typed language by design. In other words, you have to specify which data type your dear variable is supposed to be. If you declare myCoolVar to be an integer, you can’t let him play with his string friends. Type checking is done at the compile time. So myCoolVar will soon be caught red-handed if it goes matchmaking with a string or a vector.

This is in contrast to the radical kids on the block, like Python and Ruby, who don’t care whom their variables play with. They call themselves dynamically typed: a washed-up floating point might find itself holding a character a few moments later.

Cool as it may sound, dynamic typing is not for everyone. C++ is mostly used where performance and reliability are key factors. Thus the C++ standard has always insisted on strong, static typing. But even if you haven’t been programming since the dinosaurs roamed the earth, you may know that it’s not always easy to remember and explicitly declare the correct variable type. The new C++ standard, or C++11  as it’s known, aims to change this.

C++11 isn’t exactly new to the city. Most compilers around, including gcc and Microsoft C++ compilers, now include support for C++11. Among the dozens of improvements it has brought, let’s have a look at the auto keyword.

Suppose you want the length of a C++ string called myStr. Easy-peasy: myStr.length() is the way to go. But wait. The return type of the length() function is string::size_type. Besides being a mouthful to type, all these return types aren’t easy to remember either. That’s where the auto keyword comes in handy. Instead of,

[code language=”cpp”]string::size_type len = myStr.length();[/code]

you can simply write

[code language=”cpp”]auto len = myStr.length();[/code]

The compiler detects the return type of myStr.length() and makes len a variable of that type. So very convenient.

Not convinced? Suppose you have a map of objects that you need to iterate. How do you initialize the iterator?

[code language=”cpp”]std::map<char,int>::iterator it = myMap.begin();[/code]

Ugh. That isn’t the nicest code snippet you’ve written. But hey, we have our friend auto:

[code language=”cpp”]auto it = myMap.begin();[/code]

Skadoosh! Auto has automagically made it an std::map<char,int>::iterator, because that’s the return type of myMap.begin()! Now that does look nice.

What if I want to declare a variable with a default value?

Suppose you want to declare a variable that holds the length of a string. But you first need to set it to zero.

[code language=”cpp”]auto len = 0;[/code]

Ha! That is wrong! This would make len an int, not string::size_type. But C++11 has an answer for that as well. You can use the decltype keyword like this:

[code language=”cpp”]decltype(myStr.size()) len = 0;[/code]

This declares len as a variable of type returned by myStr.size(), but still initializes it to zero.

Is this dynamic typing?

No, not really. Even though it appears as if auto changes a variable’s type dynamically, it all happens at compile time. It’s just that you don’t have to spell out the type of the variable; the compiler does that for you. Which is mighty sweet of C++11.

Is it safe to use auto?

Isn’t letting your variables roam free considered bad parenting? No. Not only is it safe to use auto in your C++ code, it’s even recommended that you use it wherever possible. Bjarne Stroustrup, the creator of the C++ language himself, advocates its use. Just make sure that your compiler supports C++11.

Image Credits : http://geekandpoke.typepad.com/geekandpoke/2010/01/best-of-both-worlds.html

Java developer timesheet, efficiency and stress factors 2012 – by ZeroTurnAround (Part II)

The ZeroTurnAround team caught the attention of the Java developer community with the Java EE Productivity Report back in 2011. It focused mainly on the tools, technologies and standards in use and on the general turnaround time of Java-based development activities. They have come back this year with their latest survey on Java developer productivity, which uncovers very interesting trends about the practical aspects of the Java development lifecycle.

In their report, they discuss tools and technology usage as well as findings on how developers use their time, patterns in efficiency, and the factors that govern developer stress in general. In Part I of this article, we discussed the tools and technology usage findings. Part II discusses the developer timesheet, efficiency and stress based on the survey results. There are also a few interesting Q&A sessions they had with some well-known geeks in the industry.

Developer timesheet
In their survey on how Java developers spend their day at work, they found three very interesting points:
1. Developers only spend about 3 hours per day writing code

2. Developers spend more time on non-development activities than we expect them to. For each hour of coding, another half hour is spent on activities such as meetings, reporting and writing emails.

3. Developers spend more time firefighting than building solutions

Following is the work breakdown they found, with the activities under each group. Note that “Writing code” is not the same as “Coding”, which is generally used to mean writing code, problem solving, QA and strategy altogether.

Developer Time sheet

Here is an extract from the interview with Lincoln Baxter III about the findings. Lincoln is a Senior Software Engineer at Red Hat (JBoss Forge), founder of OCPSoft, and an open source author, advocate and speaker.


 

[quote_simple]

 

ZT: What do you think about finding number 1: developers spend less time writing code than you think? It’s just 3 hours per day it seems.

LBIII: I’m not surprised one bit. The biggest drain on productivity is the constant interruptions to our concentration; that can be co-workers, running a build, running tests to check your work, meetings; it can take up to 25 minutes to regain your focus once you’ve been distracted from your original task.

Think of a brain as if it were a computer. There is waste every time a computer shifts from one activity to another (called a context switch), a very expensive operation. But computers are better at this than we are because once the switch is complete they can immediately resume where they left off; we are not so efficient.

When considering context switching, builds and everything else considered “Overhead” are the biggest distractions from writing code. Even though this piece of the pie is only responsible for 4.63 hours of a developer’s week, in reality, the true impact of this problem is much greater. Once you add in all the other distractions of the workplace, I’m impressed anyone gets work done at all. Every re-deployment is going to cost you an extra 25 minutes of wasted focus, in addition to the deployment cost itself.

[/quote_simple]


 

 

Developer Efficiency

This is what developers think makes them inefficient at their workplaces. It’s not the manager’s view, nor the independent consultant’s view, so there should be something in it for all of us.

Developer Efficiency

The majority of developers think too much multitasking creates inefficiency. This is highlighted in the interview with Lincoln Baxter: according to him, each context switch costs 25 minutes of recovery time before the next task becomes productive. It is no wonder, then, what too much multitasking does to developers: they simply keep switching between tasks without completing any of them. Yet in today’s organizational environments it is mandatory that we all multitask in order to achieve more. This simply suggests that we should find the right balance when it comes to software development.

Boring tasks are also identified as a source of inefficiency. There is debate over whether boring tasks should be completed as soon as possible, before they make your day dull.

Bad management of one’s own time and work, having buggy software to start with, and lack of motivation have once again been established as common causes of inefficiency.

The ZeroTurnAround team also interviewed Matt Raible, who has a few interesting ideas on why the statistics make sense, drawn from his real-life experience. Matt is a web architecture consultant, frequent speaker and father, with a passion for skiing, mountain biking and good beer.

 


[quote_simple]

ZT: What can you say about “Bad Management”?
MR: Yeah, what works great for me is to get used to non-standard work hours, and avoiding inefficient wastes of time.

Work long hours on Monday and Tuesday. This especially applies if you’re a contractor. If you can only bill 40 hours per week, working 12-14 hours on Monday can get you an early departure on Friday. Furthermore, by staying late early in the week, you’ll get your productivity ball rolling early. I’ve often heard the most productive work day in a week is Wednesday.

Avoid meetings at all costs. Find a way to walk out of meetings that are unproductive, don’t concern you, or spiral into two co-workers bitching at each other. While meetings in general are a waste of time, some are worse than others. Establish your policy of walking out early on, and folks will respect that you have stuff to do. Of course, if you aren’t a noticeably productive individual, walking out of a meeting can be perceived as simply “not being a team player”, which isn’t a good idea.

ZT: Lack of motivation was cited as another factor preventing developers from being more efficient. Any thoughts on that?
MR: Look, you have to work on something you’re passionate about. If you don’t like what you’re doing for a living, quit. Find a new job ASAP. It’s not about the money, it’s all about happiness. Of course, the best balance is both. It’s unlikely you’ll ever realize this until you have a job that sucks, but pays well. I think one of the most important catalysts for productivity is to be happy at your job. If you’re not happy at work, it’s unlikely you’re going to be inspired to be a more efficient person. Furthermore, if you like what you do, it’s not really “work” is it?

[/quote_simple]


 

 

Developer Stress
Among the answers to the question “what keeps you up at night?”, many said “nothing, I sleep like a baby”. But the following five stressors top the list of problems that usually keep developers up at night.

It looks like developers are most concerned about the accuracy, completeness and quality of their code, and about whether they remain competitive in the world’s fastest-moving industry. The most popular stressor, “Making deadlines”, leads, though not by a large margin, with one out of every four developers stressed about meeting them. External causes such as poor software estimation and interruptions, as well as poor management of one’s own work, can mean working extra hours to catch deadlines.

Developer Stress

In the interview with Martijn Verburg, the ZeroTurnAround team is exposed to a set of useful points every developer should know and follow to make their lives stress-free; at least free of work stress!

Martijn is also known as “The diabolical Developer”, Java community leader, speaker and CTO at TeamSparq.


[quote_simple]

ZT: I was surprised to see that developers are primarily concerned about Making Deadlines. Isn’t that something that The Suits should be worrying about more?

MV: Managing deadlines is something that a lot of developers feel that they cannot learn or is out of their control (e.g. their manager tells them what the deadline is). However, managing deadlines is a skill that can definitely be learned! For example, developers can learn to:

  • Scope work into manageable (one-day) chunks
  • Define what “DONE” means (95% is not DONE)
  • Factor in contingencies
  • Communicate risks and issues to stakeholders
  • Learn to prototype ideas to keep the overall project flowing

There are a number of tools to assist you in managing the scope and communication around deadlines, but always remember, “Whatever they tell you, it’s a people problem” so developers should look at their communication and expectation setting first.

ZT: Next, why do you think Performance Issues would rank so highly on the list of developer stress?

MV: Performance and performance tuning is terrifying for most developers because they have no idea where to start. Modern software applications are so complex that it can feel like finding a needle in a haystack. However, performance and performance tuning is actually a “SCIENCE”, not an art and definitely not guesswork. Kirk Pepperdine (one of Java’s foremost experts in this field) always hammers home the point “Measure, don’t guess”.

By following the sort of scientific methodology that Kirk and others like him teach, you can systematically track down and fix performance issues as well as learning to bake performance in right from the beginning. There is a host of tooling that can assist in this area as well, but it’s the methodology that’s truly important. I can highly recommend Kirk’s course (www.kodewerk.com – he runs courses worldwide, not just in Crete, so drop him a line).

[/quote_simple]


 

Java Tools and Technology usage in 2012 – by ZeroTurnAround (Part I)

The ZeroTurnAround team caught the attention of the Java developer community with the Java EE Productivity Report back in 2011. It focused mainly on the tools, technologies and standards in use and on the general turnaround time of Java-based development activities. They have come back this year with their latest survey on Java developer productivity, which uncovers very interesting trends about the practical Java development lifecycle.

tools&technology_leader_board4

 

In their report, they discuss tools and technology usage as well as findings on developer time usage, patterns in efficiency and factors which govern developer stress in general. In part I of this article, we discuss tools and technology usage findings.

Java Versions

Java_Versions

Six years after its release on December 11, 2006, Java 6 still leads the board. Java 7 (codename ‘Dolphin’) already has 23% usage after only about six months (at the time of the survey). There must be something in the new version of Java for the community to adopt it at such an amazing rate.

 

Java IDEs

Java_IDE

Eclipse has steadily maintained more than two thirds of IDE usage, while IntelliJ IDEA and NetBeans have grown their share of the community by 6% and 5% respectively compared with the previous year. IDEA’s free Community Edition and the rapid, extremely active development of NetBeans are the major factors behind this shift.

Java Build Tools

Build_tools

The build tools Maven and Ant have secured their positions this year as well. ZeroTurnAround predicts a continuing trend of moving from Ant to Maven.

 

 

Java Application Servers

application_servers

Apache Tomcat remains the most widely used open source application server. JBoss and Jetty are securing their places in the community. Jetty looks very attractive to the developer community with its light weight and cool enterprise-level features: it supports Servlet 3.0, and its easy-to-follow documentation is a major plus point that has attracted developers. The giants’ choices, WebLogic and WebSphere, are there too, with more and more features added to the list as well as improved setup. For example, WebLogic 12c promised 200 new features and can be distributed as a zip archive. That means no time-consuming installers; just unzip and run!

 

Java Web Frameworks

web_frameworks

While Spring MVC, JSF, Struts and GWT hold consistent market share against last year’s survey results, GWT (Google Web Toolkit) has gained some share compared with the others. ZeroTurnAround’s view is that the market share is static because large projects already implemented and live on older frameworks are not being moved to newer, better frameworks: refactoring the code to suit a newer framework is commercially unattractive given the time and risk involved. Hence, they think only new projects will drive the market, and it will take time to see clear patterns.

 

Java Application Frameworks

application_frameworks

Spring and Hibernate are the market leaders, with more than half of users using one of them. The usage percentages of the two are very similar, as they are used together almost all the time. In third place stands AspectJ, with its vision of being a seamless aspect-oriented extension to Java.

 

 

Java Standards

java_standards

Java Enterprise Edition’s lightweight contenders JPA, EJB 3.0 and CDI have enjoyed wider acceptance over the last year as well. Supported well by Hibernate and EclipseLink, JPA remains at the top, scoring 44%.

 

 

 

 

Code Quality Tools

code_quality_tools

PMD and FindBugs are static analysis tools, while Checkstyle checks your code’s styling and readability against a standard. Sonar provides a suite of code quality tools. FindBugs operates on Java byte code rather than the original source code and analyzes it for problems that can cause trouble if not fixed before release. It is a project from the University of Maryland and has plugins available for Eclipse, NetBeans and IntelliJ IDEA.

 

Version Control Systems

An almost natural descendant of CVS, Subversion is at the top, with two out of every three developers using it. Version control systems today are inspired by the distributed paradigm: Git and Mercurial are distributed version control systems. This paradigm shift will take hold in the Java community as well, and in the years to come we expect to see a rise in the use of Git and Mercurial and a dip in the others.