TechWire

Author - Nafli

An underwater data center by Microsoft

Data centers are an essential component of Internet usage. Without even realizing it, nearly everyone in the world makes use of data centers. Whether you are streaming a video or checking social media updates on your phone, the data processing happens miles away in a remote data center.

In early June this year, Microsoft created a stir when it announced that it had sunk a data center off the coast of Scotland. The headline brought up several questions. What is the benefit of placing a data center underwater? Isn’t it risky placing electronics in a corrosive environment like the sea? How will maintenance be done? And why off Scotland? Before answering these questions, we should look at the facts from scratch.

The birth of Project Natick

It all started in 2013 when Microsoft employees submitted a whitepaper describing an underwater data center that could be powered by renewable ocean energy. One of the major cost drivers of any data center is electricity, which is required to both power and cool equipment. The paper’s concept of using renewable ocean resources to power and cool equipment got the attention of Microsoft management, and in late 2014 Microsoft kicked off Project Natick.

Phase 1 – Leona Philpot

As the first phase of the project, Microsoft developed a 10’ x 7’ cylindrical prototype vessel – taking inspiration from submarines – which could house one server rack. The prototype was whimsically named ‘Leona Philpot’ after a character from Microsoft’s ‘Halo’ video game series. Because no human intervention would be required inside the vessel, the team could maximize the use of space and forgo the need for oxygen.

In 2015, Leona Philpot was deployed underwater off the coast of California for 105 days, with sensors monitoring the system and tracking factors such as motion, pressure, heat, and humidity. The test results exceeded Microsoft’s expectations, and the prototype was extracted and taken back to HQ for analysis and refitting.

[Image: Project Natick]

Phase 2 – Orkney, Scotland

For the current phase of the project, Microsoft brought in marine organizations to lead the design, fabrication, and deployment of the Phase 2 data center. The new module is larger, with a length of 12.2 m and a diameter of 3.18 m, and a payload of 12 racks containing 864 standard Microsoft data center servers with FPGA acceleration and 27.6 petabytes of disk – enough storage for about 5 million movies. The internal operating environment is kept at 1 atmosphere of pressure and filled with dry nitrogen.
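
As a rough sanity check on the storage figure, a quick calculation shows the implied average movie size (the decimal petabyte conversion and the per-movie interpretation are our own assumptions, not Microsoft's numbers):

```python
# Rough sanity check of the "5 million movies" figure (illustrative only).
PETABYTE_IN_GB = 1_000_000                 # 1 PB = 1,000,000 GB (decimal units)

disk_capacity_gb = 27.6 * PETABYTE_IN_GB   # 27.6 PB of disk in the Phase 2 vessel
movie_count = 5_000_000                    # "about 5 million movies"

print(f"Implied average movie size: {disk_capacity_gb / movie_count:.1f} GB")
# -> roughly 5.5 GB per movie, consistent with a typical HD film
```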

The European Marine Energy Centre in Orkney, Scotland was selected because of the abundant wind and tidal power available through the local grid. On 1st June, the data center was deployed from a docking structure.

While the objective of the first phase was to see whether an underwater data center was feasible at all, the second phase looks at whether the concept is logistically, environmentally, and economically practical.

What does this mean for the future of data centers?

Latency looks to be one of the key improvements Microsoft is targeting. Signals travel at roughly 200 km per millisecond across the Internet, so if you are 200 km from a data center, one round trip takes about 2 milliseconds, but if you are 4,000 km away, each round trip takes about 40 milliseconds. The impact of latency is felt most in real-time applications such as gaming and video conferencing. Right now, most of the world’s data centers are located in giant complexes, usually far from urban areas, and this distance to the population increases latency. However, about half of the world’s population lives within 200 kilometers of the coast, so placing data centers in the sea near coastal cities would bring them much closer to users.
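
The round-trip figures above follow directly from the quoted propagation speed, as this small sketch shows (propagation delay only; real-world latency adds routing and queuing overhead):

```python
# Back-of-the-envelope round-trip latency from the ~200 km/ms figure above.
PROPAGATION_KM_PER_MS = 200

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay for a user this far from the data center."""
    return 2 * distance_km / PROPAGATION_KM_PER_MS

for d in (200, 1_000, 4_000):
    print(f"{d:>5} km away -> ~{round_trip_ms(d):.0f} ms round trip")
# 200 km -> ~2 ms, 1000 km -> ~10 ms, 4000 km -> ~40 ms
```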

Traditional data centers also take years to construct and deploy. Microsoft is looking at utilizing the modular design of these underwater data centers to cut deployment time to 90 days. The size of the module deployed in Orkney is no accident: it can be transported on a standard 40-foot container lorry, meaning that data center capacity can be added within 90 days using ordinary logistics.

Modern data centers also use a lot of water for cooling – typically 4.8 liters per kWh of electricity consumed – while the Natick modules simply pass seawater through their heat exchangers. Powering the modules from tidal energy in the future could make such a data center zero-emission and truly sustainable.
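
To put the 4.8 L/kWh figure in perspective, here is an illustrative estimate for a hypothetical 1 MW facility (the load and year-round utilisation are assumptions chosen only to show the scale, not Natick specifications):

```python
# Illustrative annual cooling-water estimate at the quoted 4.8 L/kWh.
WATER_LITRES_PER_KWH = 4.8     # typical water usage figure cited above
it_load_kw = 1_000             # hypothetical 1 MW facility (assumption)
hours_per_year = 24 * 365

annual_kwh = it_load_kw * hours_per_year
annual_water_litres = annual_kwh * WATER_LITRES_PER_KWH
print(f"~{annual_water_litres / 1e6:.0f} million litres of fresh water per year")
# -> ~42 million litres that a sealed, seawater-cooled module would not consume
```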

[Image: Natick deployment]

Environmental concerns

The Project Natick modules are environmentally sustainable, with zero emissions. If there is a concern, it is the heat emitted by the modules. Phase 1 showed that the heat emitted was minimal and detectable only very close to the vessel surface. During Phase 2, sensors will continue to monitor the heat and noise emitted by the module to assess any environmental impact.

After each 5-year deployment cycle, the data center vessel would be retrieved, reloaded with new computers, and redeployed. The current Project Natick module is expected to last 20 years, after which the vessel is designed to be retrieved and recycled.

Is it worth the effort?

Data centers have made great strides towards reducing energy usage through methods such as liquid cooling, airflow management, and server virtualization. Even moving data centers to countries with lower ambient temperatures has been shown to improve energy efficiency. Phase 1 of Project Natick operated with a highly efficient PUE of 1.07 (power usage effectiveness is total facility power divided by server power; 1.0 is perfect). For comparison, Google’s data centers report a PUE of around 1.12.
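
Since PUE is simply total facility power divided by IT power, the comparison is easy to reproduce; the power figures below are made-up inputs chosen only to illustrate the arithmetic:

```python
# PUE = total facility power / IT (server) power; 1.0 would be perfect.
def pue(total_facility_kw: float, it_kw: float) -> float:
    return total_facility_kw / it_kw

# Hypothetical loads purely to show the calculation:
print(pue(total_facility_kw=107.0, it_kw=100.0))  # 1.07 - Natick Phase 1's figure
print(pue(total_facility_kw=112.0, it_kw=100.0))  # 1.12 - the figure cited for Google
```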

Therefore, the greatest advantages could come from lower water usage for cooling, faster deployment, and improved latency for end users by targeting densely populated coastal cities.

Further plans exist to power the undersea modules through tidal generation, making them run entirely on renewable energy and fully self-sustaining.

Project Natick is currently a speculative effort, but if the next few years of testing show that the concept is technologically and commercially viable, it could very well change the economics of the data center business.

Google Project Fi

Google has been keeping itself busy – and keeping traditional telcos on their toes – with quite a few connectivity projects, ranging from Google Fiber to Project Loon, in its drive to connect everyone. One of its latest ventures is Project Fi, Google’s foray into mobile networks. The Fi Network essentially acts as a separate operator that your phone can connect to and use for calls, SMS, and Internet browsing.

So how does one sign up for the Fi Network? For the moment, the service is only available in the US, in areas where the Fi Network has coverage, and sign-up is limited by an invitation-only policy at this early stage.

Access Network

Connectivity to the Fi Network is provided through Wi-Fi and the LTE networks of mobile operators Sprint and T-Mobile. Google has an agreement with these operators to use their existing network infrastructure, acting as a Mobile Virtual Network Operator (MVNO). The Fi Network switches to the better of the two operator networks, ensuring that you get the best possible connectivity (if 4G is not available, you will be switched to 3G or 2G).
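
As a conceptual illustration only (Google has not published Fi’s actual selection logic, and the class, thresholds, and network names below are purely illustrative), the switching behaviour described above might look something like this:

```python
# Conceptual sketch of multi-network selection - not Google's actual algorithm.
# Published behaviour: prefer a trusted Wi-Fi hotspot; otherwise use the better
# partner cellular network, falling back from 4G to 3G to 2G as needed.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Network:
    name: str
    kind: str          # "wifi" or "cellular"
    generation: int    # 4, 3 or 2 for cellular; ignored for Wi-Fi
    signal_dbm: float  # higher (less negative) means a stronger signal

def pick_network(available: List[Network]) -> Optional[Network]:
    wifi = [n for n in available if n.kind == "wifi"]
    if wifi:                                    # trusted Wi-Fi wins if present
        return max(wifi, key=lambda n: n.signal_dbm)
    cellular = [n for n in available if n.kind == "cellular"]
    if not cellular:
        return None
    # Prefer the newest generation, then the strongest signal within it.
    return max(cellular, key=lambda n: (n.generation, n.signal_dbm))

best = pick_network([
    Network("T-Mobile LTE", "cellular", 4, -95),
    Network("Sprint LTE", "cellular", 4, -88),
    Network("Sprint 3G", "cellular", 3, -70),
])
print(best.name)  # -> "Sprint LTE"
```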

When on the Fi Network, you will automatically be connected to free, open Wi-Fi hotspots that Google has verified as fast and reliable. This is convenient, as whenever you’re on Wi-Fi, you’re not charged for data usage. As per the Project Fi team, there are more than a million of these hotspots around the US.

If you are concerned about security on open Wi-Fi networks, Google secures your data through encryption using a Virtual Private Network (VPN).

Just another MVNO?

Google never follows the norm, and neither does its Fi Network plan – Fi Basics for $20/month includes unlimited domestic talk and text, unlimited international texts, Wi-Fi tethering to use your phone as a hotspot, and access to cellular coverage in 120+ countries. Data (charged only if you use it over a cellular network) has to be purchased as an extra package, with 1 GB of data costing $10. [1]

This is a departure from the contract-based and more expensive plans offered by other operators in the US (it makes you glad you don’t have to pay those data rates in SL).

Google has also come up with an innovative package which credits cash towards your next bill for any unused data from your monthly package.

[Image: Project Fi]

Devices

The Nexus 6 is the only smartphone currently supported by Project Fi’s early access network. The Nexus 6 works with a separate SIM that enables access to multiple networks, and has a cellular radio unit designed to work with different 4G network bands/types in the US and abroad, as required for the Fi Network.

The specially designed SIM can store up to 10 network profiles, which enables seamless toggling between different networks.

Another innovative step is that the phone number works with more than just your phone. Google stores your phone number in its data centers, which allows you to connect any device that supports Google Hangouts (Android, iOS, Windows, Mac or Chromebook) to your number.

Making Calls

When on the Fi Network, your calls can be made over Wi-Fi with no separate app required, and your call will automatically switch between cellular and Wi-Fi networks based on signal quality. If you are using any other device, you can make calls or send messages using your number in Google Hangouts. It should be noted that international calls over Wi-Fi incur an extra charge.

Conclusion

Google’s ultimate goal is to push people to spend more of their online lives using Google products, because that brings more traffic to its search engine, YouTube, and other services such as Gmail and Maps.

It will be interesting to see how the Fi Network performs once it is opened up to more subscribers. Key to making Project Fi tempting to users will be the offer of high-speed data that is not charged for while on Wi-Fi, and it remains to be seen whether a large number of users can be sustained by the available open Wi-Fi networks. Google will also have to look at ways to optimally serve heavy data and streaming subscribers as well as casual subscribers across its multi-network platform.

Mobile operators may benefit in the short term by selling more network capacity to Google. However, if all goes well with Project Fi, Google could end up pushing more people onto its services, generating more ad revenue and shaking up mobile network pricing models.

 

 


References

  1. https://fi.google.com/about/faq/#plan-and-pricing-1
  2. http://timesofindia.indiatimes.com/tech/tech-news/Inside-Googles-Project-Fi/articleshow/47161570.cms

Inflight Internet and Inflight WiFi

Access to the Internet has become so commonplace for many of us that even a few minutes without being able to chat with friends, upload a selfie, or catch up on a friend’s Facebook activity feels like a catastrophe. Be it basement carparks, elevators, or even deep in the Yala forest, people need to be connected.

One of the few places that has remained relatively free from the demand for constant Internet access has been the seat of an airplane. Many of us are used to switching off our mobile devices in flight and amusing ourselves with the inflight entertainment, or trying to take a nap in the constricted space assigned (unless you are in business class).

Making inflight phone calls has been possible since 1998, through onboard phones that use satellite communication. Wireless communication from passengers’ own devices, however, was long deemed off limits due to fears of interference with on-board systems. After the aviation authorities gave the all-clear in 2008, passengers on Emirates (and now many other airlines) have been able to use their mobile phones in flight. A voice call, though, requires comparatively little bandwidth and probably made more economic sense for airlines.

While most of the world’s major airlines now provide in-air Wi-Fi, analysis by flight service ranking company Routehappy in its report on the “Global state of in-flight Wi-Fi” [2] shows that, globally, 24% of all air miles have some form of Wi-Fi connectivity, while in the US the figure is 68%. Only 15% of air miles flown by non-US airlines had Wi-Fi connectivity.

So how do you provide broadband connectivity in air?

Firstly, you have to install the Wi-Fi system in the plane, which consists of an external antenna and in-cabin systems including the internal wireless access points. Inflight communication solution providers like OnAir – used by SriLankan Airlines – provide both line-fit and retrofit solutions for most commercial aircraft types. Check this video of an installation taking place on a United Airlines plane [3] for an idea of the work required.

The inside of a plane is a difficult environment for radio signals: the tunnel shape of the cabin keeps path losses low but causes local signals to combine and fade at certain points, while the passengers themselves present additional obstacles. Careful modeling and planning of access point placement and power settings therefore needs to be done before deployment in each model of plane.

Connecting a plane to the World Wide Web (Backhaul)

  • Air to Ground (ATG) – Deployed in the US and Canada, this uses cellular technology, beaming 3G signals (EVDO, over a narrow 3 MHz slice of spectrum in the 850 MHz band) from ground-based towers up into the sky, and delivers peak speeds of 3.1 Mbps. A newer version of the technology (ATG4) increases the potential connection speed to 9.8 Mbps by using EVDO Rev. B and directional antennas, which more efficiently capture the beam being sent up from the tower at ground level. [7]
  • Satellite – For satellite connectivity, an antenna is mounted on top of the plane inside a “radome” (a domed enclosure). The link supports data rates of 10-30 Mbps to the aircraft. Most satellite operators currently use the Ku band (12-18 GHz) for mobile connectivity but are looking to move to the Ka band (26-40 GHz) in the near future, with the advent of technologies that mitigate the rain-fading issues in that band. [7]
  • Ground to Orbit (GTO) – A hybrid technology proposed by aero-communications service provider Gogo for planes flying over North America. GTO uses a satellite antenna on top of the plane to receive the signal and the ATG antenna under the plane to return the signal to earth, promising peak download speeds of 70 Mbps [1]. Inmarsat is also planning a hybrid satellite and ATG network for Europe, in partnership with Alcatel-Lucent [5].

 

Your Wi-Fi experience in the air, therefore, can vary significantly by airline, region, and even aircraft type. Inflight Wi-Fi has come a long way from its initial 332 kbps incarnation, when satellite bandwidth was at a premium. This year, satellite operator Inmarsat is set to launch more of its Global Xpress satellites [4], which will form the first high-speed broadband network to span the world. The system is set to offer downlink speeds of around 50 Mbps, with up to 5 Mbps on the uplink, using the Ka band and steerable spot beams to deliver high-speed broadband connectivity and to provide capacity where and when it is needed. With increasing demand, we can expect other satellite operators to plan similar services, along with GTO solutions, in the near future.

 

References

  1. http://commercial.gogoair.com/connectivity/technologies
  2. https://www.routehappy.com/insights/wi-fi
  3. https://www.youtube.com/watch?v=OsuvlmDWYuA
  4. http://www.inmarsat.com/service/global-xpress/
  5. http://www.inmarsat.com/press-release/alcatel-lucent-joins-inmarsat-technology-partner-development-first-eu-aviation-network/
  6. http://airfax.com/blog/index.php/2013/07/09/flight-focus-ifec-with-a-wi-fi-focus/
  7. http://flightclub.jalopnik.com/how-in-flight-wifi-works-and-why-it-should-get-better-1593043880

3D Printing is changing the world!

The first working 3D printer – or, as the process was termed back then, stereolithography – was created way back in 1984 by Chuck Hull of 3D Systems Corp. Since the beginning of the 21st century, sales of these machines have grown rapidly and are set to grow further now that prices have dropped substantially. According to Wohlers Associates, a consultancy, the market for 3D printers and services was worth US$2.2 billion worldwide in 2012, up 29% from 2011. As producers become more familiar with the technology, they are moving from prototypes to finished products. Last year, Wohlers reckons, more than 25% of the 3D-printing market involved making production-ready items. [1]

3D printers make things from a particular material by building them up one layer at a time, rather than the traditional method of removing material by cutting, drilling or machining – which is why the process is also called ‘additive manufacturing.’ Depending on the requirements, many different techniques are used in 3D printing, and since changes need to be made only at the software level, multiple items can be manufactured without costly retooling of machines. This has made 3D printing a popular way to make one-off items, especially prototype parts and craft items.

Fused Deposition Modelling

One of the most popular techniques is Fused Deposition Modeling (FDM), where an electrically heated nozzle extrudes a wire or filament of thermoplastic, which sets as it cools. Multiple heads can extrude different colours. FDM is the mechanism used in many of the small 3D printers favoured by hobbyists, and these have become more affordable. More capable 3D printers cost much more, and big industrial systems, such as laser-sintering machines capable of printing aerospace parts in titanium, cost as much as a million dollars.

Bringing 3D printing into the home

The main obstacle keeping the process from going mainstream is that in order to print something, you first need a digital model from a CAD programme, turned into a series of cross-sections that tell the printer what to make. Models for many common objects are freely available online; however, if you want to print a model of a rare or custom piece, you have to create the digital model yourself.

That’s where the MakerBot Digitizer comes in – a desktop device that scans any object up to about 8 inches in diameter. Just place an item on its rotating platform, and the Digitizer uses two lasers and a webcam to create a 3D digital file of it within minutes. Once the scan is complete, the object can be manufactured right away by feeding the resulting file to a 3D printer, which is much easier and faster than designing a digital model from scratch in software.

[Image: MakerBot Digitizer]

For example, if you lose a piece from your favourite chess set, you can simply scan one of the remaining pieces and print a replica that will be identical in size and shape though maybe not in colour or weight. More significantly, astronauts aboard the International Space Station could scan and print replacement parts for broken or lost components instead of having to wait weeks for them to be delivered. [2]

Competing with Mass Manufacturing

While 3D printing has been getting a lot more attention recently, most people still see it as a fad or novelty. Despite the great strides in 3D manufacturing, it is not about to be used for mass manufacturing any time soon. Even though the technology is improving, the finish and durability of some printed items still fall short of the standards required. Additive manufacturing is also much slower, and 3D printers can’t churn out thousands of identical parts at low cost.

3D printers do, however, have their advantages, which is why they are being used by some of the world’s biggest manufacturers, such as Airbus, Boeing, GE, Ford and Siemens. Production of spare parts for older components that are out of production, development of prototypes, customised hearing aids, plastic dental braces and even prosthetic jaw bones are just some of the current commercial uses for 3D-printed components.

[Image: Car printing]

In the US, up to 900 parts of the new F-35 fighter jets manufactured by Lockheed Martin have been identified as candidates for 3D printing. The Chinese aeronautics industry has also started using 3D-printed titanium components and has some of the largest additive printing machines in the world, recently producing a whopping 5-meter-long titanium part. [3]

Future of 3D Printing

The significance of 3D printing in mass production will rise with the development of systems capable of printing electrical circuits directly onto or into components. The process uses specially treated silver that can be printed with an inkjet head, and it also reduces the waste produced by the chemical etching used for conventional circuitry. A trial system for printing mobile-phone circuits directly into the handset case was recently installed on a production line in China. If successful, it could result in slimmer phones, more room for additional electronics, or even larger batteries within a mobile phone shell.

[Image: Future of 3D printing]

Calling 3D printing a novelty is definitely not justified, and while the average domestic user may not need it on a regular basis just yet, you can still use the nearest 3D printer to make, perhaps, a replacement for a minor vehicle part, a broken cupboard handle, a protective case for a phone or other valuables (and hopefully not to print a gun). [4]

3D printing could have a significant impact in markets such as Sri Lanka, where access to customised manufacturing is limited. Much further in the future, there are plans to use a combination of biomaterials and 3D printing to produce food – even tissues and organs. You are going to hear a lot more about 3D printing, and quite possibly use a 3D-printed product, in the near future.

 

  1. http://www.economist.com/news/technology-quarterly/21584447-digital-manufacturing-there-lot-hype-around-3d-printing-it-fast
  2. http://edition.cnn.com/interactive/2013/11/tech/cnn10-inventions/?hpt=te_t1
  3. http://www.thomasnet.com/journals/machining/additive-manufacturing-taking-flight-in-aerospace/
  4. http://www.forbes.com/sites/andygreenberg/2013/11/14/3d-printed-gun-stands-up-to-federal-agents-testfiring-except-when-it-explodes-video/

Image Credits : livescience.com, wired.com, foxnews.com, makerbot.com

Babel – Thoughts on automated language translations

[Image: Babel]

Babel was a city (now thought to be Babylon) where, legend has it, the people attempted to build a tower that would reach into heaven. As this was an enormous task, it required much time and cooperation among the people, who incidentally all spoke the same language.

Hearing of this endeavour, God is said to have come down to see the city and declared, “Behold, the people is one, and they have all one language; and this they begin to do: and now nothing will be restrained from them, which they have imagined to do. Let us go down, and there confound their language, that they may not understand one another’s speech.”

And so God confounded the attempts of the builders by confusing their language into many mutually incomprehensible languages. Soon discord arose, the tower was left unfinished and the people of Babel scattered across the world.

Whatever your theological belief may be, this story is an interesting allegory. While used as a reason for the existence of the numerous languages in the world, it also illustrates how differences in language often lead to loss of cohesion.

[Image: Babel Tower]

The Babel fish

In his entertaining novel The Hitchhiker’s Guide to the Galaxy, science-fiction writer Douglas Adams came up with an unusual solution to the problem of understanding multiple languages across the universe – the Babel fish.

Described as “small, yellow, leech-like and probably the oddest creature in the universe”, the Babel fish “feeds on the energy of brain waves around it, and excretes into the mind of its carrier a telepathic matrix formed by combining the conscious thought frequencies with nerve signals picked up from the speech centres of the brain which has supplied them.

“The practical upshot of all this is that if you stick a Babel fish in your ear, you can instantly understand anything said to you in any form of language. The speech patterns you actually hear decode the brainwave matrix which has been fed into your mind by your Babel fish.”

First steps towards a Babel fish

While text- and voice-based translation applications have been around for a while, NTT Docomo made a giant leap late last year with the launch of an Android-based voice translator for phone calls, the Hanashite Hon’yaku app. The app provides voice translation of the other speaker’s side of the conversation into the required language, along with a text readout.

The free service is already being used by Docomo customers, with translations possible on any smartphone, because the app utilises Docomo’s cloud servers for processing. However, the user must be a subscriber to one of Docomo’s packages to be able to use the service, so sadly it is not available on other operator networks.

Docomo will soon face competition from France’s Alcatel-Lucent, which is developing a rival call translation product named WeTalk. The service is designed to work over any landline and is said to handle Japanese and about a dozen other languages, including English, French and Arabic, with translation completed in less than a second. However, the firm has opted to wait until the speaker has stopped talking before starting the translation, after trials suggested that users preferred this.

These applications are far from perfect, with errors occurring due to an inability to recognize the various accents and nuances of a language. The best voice translators typically have an error rate of 20-25%, which is simply not good enough, especially in business environments.

[Image: Babel Translator]

Smarter Translators

Microsoft Research and the University of Toronto made a breakthrough in improving translation by using a technique called deep neural networks, which are patterned after the behaviour of the human brain. The researchers were able to train more discriminative and accurate speech recognizers than previous methods allowed. [1]

Back in October 2012, Microsoft researchers demonstrated software that translates spoken English into spoken Chinese almost instantly, while preserving the tone of a speaker’s voice – an innovation that makes conversation more effective and personal.

The demonstration was made by Rick Rashid, Microsoft’s chief research officer, at an event in Tianjin, China. “I’m speaking in English and you’ll hear my words in Chinese in my own voice,” Rashid told the audience. The system works by recognizing a person’s words, quickly converting the text into properly ordered Chinese sentences, and then handing those over to speech synthesis software that has been trained to replicate the speaker’s voice. [2]

As Rashid explains in the Microsoft blog, “it required a text to speech system that Microsoft researchers built using a few hours speech of a native Chinese speaker and properties of my own voice taken from about one hour of pre-recorded (English) data, in this case recordings of previous speeches I’d made.”
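
The system Rashid describes is essentially a three-stage pipeline: speech recognition, text translation with reordering, and voice-preserving speech synthesis. The sketch below is purely illustrative; the function names and their dummy return values are hypothetical placeholders, not Microsoft APIs:

```python
# Illustrative three-stage speech-to-speech translation pipeline.
# All functions are hypothetical placeholders, not real Microsoft APIs.

def recognize_speech(audio_en: bytes) -> str:
    # Placeholder: a real system would run a DNN-based speech recognizer here.
    return "I'm speaking in English"

def translate_text(text_en: str, target_lang: str = "zh") -> str:
    # Placeholder: a real system would reorder words into proper target-language sentences.
    return f"[{target_lang} translation of: {text_en}]"

def synthesize_speech(text: str, voice_profile: str) -> bytes:
    # Placeholder: a real system would use TTS trained on the speaker's own voice data.
    return f"<audio in {voice_profile}'s voice: {text}>".encode()

def speech_to_speech(audio_en: bytes, voice_profile: str) -> bytes:
    text_en = recognize_speech(audio_en)              # 1. speech recognition
    text_zh = translate_text(text_en, "zh")           # 2. translation + reordering
    return synthesize_speech(text_zh, voice_profile)  # 3. voice-preserving synthesis

print(speech_to_speech(b"...", voice_profile="Rick Rashid").decode())
```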

As IBM’s Jeopardy champ “Watson” has shown, with enough information computers using neural networks can identify puns and wordplay in languages and learn to respond to questions involving them.

The Future

With further improvements in translation technology, accurate real-time translation could move from science fiction to science fact in the very near future. Wearable technology such as Google Glass may soon be able to incorporate real-time translation using cloud-based services.

In a country where linguistic differences have affected, and still affect, a significant portion of the population, such translation applications could be very useful. There should be significant effort and backing put into developing translation services for the local market. One example is the website translation service developed by Dialog, which, however, is only available for English-to-Sinhala translation. It is a small step, but it should motivate local developers to get involved in adding Sinhala and Tamil to existing translation technologies – voice and text – in order to create applications that help break language barriers.

While not as fantastical as a tower that reaches heaven, we may soon be able to embark on the next great project, one that will hopefully help us understand one another a little better in the future.

References

  1. http://blogs.technet.com/b/next/archive/2012/11/08/microsoft-research-shows-a-promising-new-breakthrough-in-speech-translation-technology.aspx#.UUrU1merSfs
  2. http://www.technologyreview.com/news/507181/microsoft-brings-star-treks-voice-translator-to-life/
  3. http://parivarthaka.dialog.lk/

Cut the Wire

Mobile broadband to fuel ICT growth

According to ITU data, internet penetration in Sri Lanka had reached 15% of the total population as of 2011, meaning there were roughly 3.2 million users in the country [1]. This represents annual growth of approximately 3% from 2008 onwards, which is steady rather than remarkable.

While internet penetration in Sri Lanka is better than in most South Asian countries (and almost on par with Pakistan), what is surprising is that only 1.7% of the population has a fixed broadband connection [2]. The same pattern can be seen among the other South Asian nations as well.

What is holding back internet penetration in Sri Lanka?

Sri Lanka already has the advantage of having a high literacy rate among its population. In addition, according to World Bank data, up to 76% of the country has access to electricity. This should be an indicator that the infrastructure is in place for a boom in ICT penetration in the country.

However, the cost of traditional equipment such as laptops and desktops means that such devices remain a luxury for many Sri Lankans outside the Western Province. Additionally, the monthly rentals and the perceived low value of owning a computer do not provide adequate incentive for many people to part with their hard-earned money.

A monopoly in wired broadband

The lack of competition in fixed data services has resulted in a situation where there is little incentive for the government-owned ISP to push aggressively to promote its services and add new subscribers to its network. Its goal of increasing its subscriber base to 600,000 by 2014 [3] would bring fixed broadband penetration to only about 2.8%.
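
The penetration figures above follow from simple division; the population figure in the sketch below is an assumption of roughly 21 million, so the exact percentages may differ slightly from the ITU numbers:

```python
# Reproducing the penetration figures cited above (population is assumed).
population = 21_300_000             # rough Sri Lankan population (assumption)

internet_users = 3_200_000          # ~15% penetration quoted from ITU data
fixed_broadband_target = 600_000    # the 600,000-subscriber goal for 2014

print(f"Internet penetration: {internet_users / population:.1%}")               # ~15.0%
print(f"Fixed broadband at target: {fixed_broadband_target / population:.1%}")  # ~2.8%
```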

Mobile technology as the catalyst

Mobile penetration in Sri Lanka was a staggering 87% in 2011, and based on the numbers it would seem that a majority of Sri Lankans use their mobile connections to access the internet. The highly competitive mobile telephony market has seen the rollout of 3G networks theoretically capable of up to 42 Mbps throughput. This has driven many frustrated domestic internet users to dump their slow wired connections and opt for mobile broadband dongles instead.

The existing mobile network infrastructure also has the advantage of already covering most of populated Sri Lanka, which makes it an ideal base for future broadband enhancements.

Wireless future

With 4G LTE (fixed wireless) already available to subscribers, the gap between the services offered over fixed and mobile connections is closing fast. Operators are already scrambling to launch mobile versions of 4G LTE as they continue to bring the latest mobile broadband technologies to the country in a bid to gain a competitive advantage in data services.

Why do we need growth in ICT?

The reasons the country needs ICT skills are well known and have been elaborated by a number of policy makers. However, it is not enough to simply make equipment available. Users must have a meaningful purpose for connecting to the internet beyond checking up on Facebook. The availability of online government services, price lists for essential goods, the ability to consult doctors remotely, and other such information on demand would significantly improve the standard of living in rural communities. Distance learning courses from universities could provide notable skills improvement for students and should also be used to promote the benefits of the internet.

A government-sponsored ICT campaign is already underway, and we can only hope that it raises significant awareness of these benefits among more of the student population in Sri Lanka.

Subsidising equipment and making services accessible

Smartphones are probably the most feasible equipment that can be used to bridge the gap in broadband availability. Affordably priced smartphones – with useful applications developed for the local market – would help promote the use of online services and speed up ICT growth in the country.

Tablet computers – with new models released almost every month by a multitude of vendors – are far less costly than traditional PCs and offer almost as much functionality.

One method the government should seriously consider, in order to promote internet and ICT usage among Sri Lankans, is to offer both smartphones and tablets at subsidised prices through the existing mobile operators.

While there is a focus on improving the English proficiency of the population, there should also be a focus on developing local applications in all three languages. This initiative is already underway, and further progress will go a long way towards overcoming the language barriers faced by many Sri Lankans.

Cutting the wire

It is time to focus on driving ICT growth through mobile broadband technology. Taking advantage of a mature mobile industry – whose operators also have access to multinational expertise – the government needs to work in partnership with operators and provide assistance to realise that objective.