CW Developer Network


October 4, 2010  10:43 AM

Pulling together: interoperability and the user at the heart of business computing

Adrian Bridgwater
Applications, Enterprise, Interoperability, Users

This is a guest blog written by Phil Lewis, a business consulting director with Infor, the enterprise software company headquartered in Alpharetta, Georgia.

Without promoting his own company’s brand in any blatant way, Lewis gives us a relatively impartial view of how interoperability and connecting applications to execute cohesive business processes should always stem from user perception of business needs and real user control.

hand.png

It is incredible to think that despite over two decades of business computing and the efforts of the multi-billion dollar software industry, companies in 2010 still struggle with the same issues they started tackling when they adopted their first computer:

• Complex integration projects with expensive and unfamiliar technology
• Rigid business processes which are difficult to re-configure
• Proprietary technology creating barriers to interoperability
• Poor visibility of business problems, forcing a reactive approach instead of proactive improvement
• Islands of isolated information and data causing ineffective decision making and complex reporting

Alongside these “traditional” issues we can now add new challenges such as the ongoing debate between on-premise and cloud applications, where businesses not only face the need to make a choice, but also a new integration issue once they have made their selection.

The key to solving this history of challenges is ensuring that technology is implemented in a way that does not prohibit or restrict future options.

For example, connecting applications to execute cohesive business processes, with a lightweight, standards-based, technology architecture, not only pulls together related business activities such as production and stock, but keeps the door to the future open with a set of connectors capable of integrating with third party applications and cloud services.

To do this without drowning in new technology investment, businesses need to align business processes to put the user in control. That is not to say that the technology should be entirely passive – it should help guide users to execute critical tasks that support overarching company goals.

This eye on the future and focus on the end user mean business network reporting will become critical, whereby a repository subscribes to messages published by all connected applications. This provides a single data source representing data from all business systems, giving a view of the entire business as opposed to an individual application.
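As a purely illustrative sketch (the message shapes and names below are invented for this post, not drawn from any particular product), the pattern described above boils down to a reporting repository subscribing to whatever the connected applications publish:

  // Minimal publish/subscribe sketch: connected applications publish business
  // events onto a shared bus and a reporting repository subscribes to all of them.
  var bus = {
    subscribers: [],
    subscribe: function (fn) { this.subscribers.push(fn); },
    publish: function (message) {
      this.subscribers.forEach(function (fn) { fn(message); });
    }
  };

  // The repository keeps every message it sees, giving a single data source
  // spanning all connected systems rather than one application at a time.
  var reportingRepository = [];
  bus.subscribe(function (message) { reportingRepository.push(message); });

  // Individual applications simply publish what they do...
  bus.publish({ source: 'production', event: 'orderCompleted', orderId: 1234 });
  bus.publish({ source: 'stock', event: 'levelChanged', sku: 'AB-17', qty: 42 });

  // ...and a business-wide view can be queried from one place.
  var productionEvents = reportingRepository.filter(function (m) {
    return m.source === 'production';
  });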

Of course, finding applications that are delivered with this level of interoperability built-in is a whole new challenge…

October 1, 2010  1:35 PM

Explaining “Intelligent Workload Management” in simple terms

Adrian Bridgwater
Compliance, Data, Management, middleware, Virtualisation

This is a guest post by Mark Oldroyd, a senior technology identity and security specialist with Novell.

In this piece Oldroyd discusses the risks and challenges of computing across multiple environments. Despite the hurdles and obstacles ahead, he says, companies can achieve their aims if they strike a balance between flexibility and control.

work.gif

If a business adopts a highly flexible approach by increasing its use of virtualisation and allowing employees to work remotely, the corresponding risk to confidential data rises. But try to control application usage and the movement of data too closely, and IT managers risk damaging the competitive advantage of their organisation and its ability to respond quickly to market changes.

According to IDC, a new approach to managing IT environments is required – that approach is Intelligent Workload Management. A workload is an integrated stack of applications, middleware and operating system. These workloads can be made “intelligent” by incorporating security, identity-awareness and demonstrable compliance (a sketch of what such a workload might carry with it follows the list below). Once made intelligent, workloads are able to:

  • Understand security protocols and processing requirements
  • Recognise when they are at capacity
  • Maintain security by ensuring access controls move with the workload between environments
  • Work with existing and new management frameworks
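What that intelligence might look like is easiest to picture as metadata that travels with the workload. The descriptor below is purely illustrative (the field names are invented for this sketch, not Novell's), but it captures the idea of a stack that carries its own security, identity and compliance requirements between environments:

  // Illustrative workload descriptor (field names invented for this sketch):
  // the application stack carries its own security, identity and compliance
  // metadata, so those constraints move with it between physical, virtual
  // and cloud environments.
  var workload = {
    stack: {
      application: 'order-processing',
      middleware: 'java-app-server',
      operatingSystem: 'linux'
    },
    security: {
      encryptionRequired: true,
      allowedEnvironments: ['physical', 'private-cloud']
    },
    identity: {
      // access controls are expressed with the workload, not the host it lands on
      allowedRoles: ['finance-ops', 'audit-readonly']
    },
    compliance: {
      standard: 'PCI-DSS',
      auditTrail: true
    },
    capacity: {
      maxConcurrentUsers: 500 // lets the workload recognise when it is at capacity
    }
  };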

Intelligent workloads offer organisations multiple benefits. They ensure security and compliance; reduce IT labour costs in configuration; improve capital utilisation; reduce provisioning cycle times and reduce end user IT support costs.

Adopting an Intelligent Workload Management approach to IT isn't just a nice thing to do – it's an essential thing to do in today's complex IT landscape. Organisations today must deliver seamless computing across any environment, whether that be physical, virtual or cloud, if they are to successfully achieve a balance between security and control.



September 29, 2010  7:48 AM

AVG launches Internet Security Suite 2011, but what about the Bayesian probability factor?

Adrian Bridgwater
Virus, analysis, Developers, Kernels, labs, Security

So last night I was able to meet and share canapes with AVG CEO J.R. Smith and the company’s CTO Karel Obluk to celebrate the launch of the new AVG 2011 Internet security suite. What started out looking like a back slapping meet-and-greet did in fact turn into a deep dive technology update and a mathematics lesson in Bayesian probability.

Amidst the crab balls on spoons, miniature vegetarian tacos served by effeminate waiters and the glitzy gloss of a product launch you can sometimes, if you are lucky, get down to the nitty gritty of why a company has gone to market with a new product version. This was my mission…

AVG says its 2011 iteration features enhancements based on feedback from the company's global community of more than 110 million users, and now includes enhanced web and social network protection.

But what kind of feedback is this? What kind of enhancements have been made — and is AVG’s so-called People-Powered Protection technology and approach anything more than marketing puff?

I had a similar problem when I attended the launch of Adobe Photoshop Elements last month. I asked the speaker how the company had re-engineered and re-architected the product to extract it and simplify it from the total Creative Suite 5 offering. I got a, “we’ll get back to you on that” – and I’m still waiting.

This was not so much the case last night. AVG does seem to take the back end seriously enough to bring its CTO along to product launches, and this guy is a programmer of the old school. Karel Obluk took me through software kernels, re-architecting modules to handle zero-day attacks, how automatic updates are engineered at the back end, how the analysis labs work inside an anti-virus company and where I should look to get the best beer in his home city of Prague.

The concept is simple, or at least it ought to be. If you want developers and programmers to adopt your product and actually use it, then they are going to need to know more than whether or not it comes in a shiny new box.

AVG Jap.jpg

AVG's People-Powered Protection appears to be a system where users decide to opt in or opt out of sharing data related to the websites (both safe and potentially malicious) that they visit. By aggregating this data and feeding it into analysis engines fuelled by (among other things) Bayesian probability logic, where reasoning can be theorised on the basis of 'uncertain' statements, the company says it builds new Internet security power.
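To make the Bayesian element concrete with a deliberately simplified, invented example: if an engine knows roughly how often a given signal (say, a suspicious script pattern reported by users) shows up on malicious sites versus clean ones, Bayes' theorem turns that into an updated probability that a newly reported site is malicious.

  // Bayes' theorem: P(malicious | signal) =
  //   P(signal | malicious) * P(malicious) / P(signal)
  // All of the figures below are invented purely for illustration.
  function posteriorMalicious(priorMalicious, signalGivenMalicious, signalGivenClean) {
    var priorClean = 1 - priorMalicious;
    var pSignal = signalGivenMalicious * priorMalicious + signalGivenClean * priorClean;
    return (signalGivenMalicious * priorMalicious) / pSignal;
  }

  // Suppose 1% of reported sites are malicious, 80% of malicious sites show the
  // signal and 5% of clean sites show it by coincidence:
  var p = posteriorMalicious(0.01, 0.8, 0.05); // roughly 0.14
  // One signal lifts a 1% suspicion to about 14% - which is why such engines
  // combine many signals before flagging a site.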

Without going into any further analysis of Bayesian probability and the state-of-knowledge concepts of objectivist views versus subjectivist views, there is still a lesson here I believe.

Of course, AVG does have its free-to-download LinkScanner technology, which analyses website content before a user's browser is directed onward – and it also has its AVG Free model, which, arguably, also brings in a good quotient of faithful converts to the AVG way.

But if you want to sell your product to families and non-techies, then fine, keep your messages pretty much as they are. If you want to seed power users from the top down who will understand why your product is better (if it indeed is), then drop in some Bayesian probability theory and a CTO briefing. Otherwise, those canapés had better be really good.


September 27, 2010  9:23 AM

The “Offline” Web Application

Adrian Bridgwater
APIs, Developers, HTML, OFFLINE, Support

This is the third blog post by Mat Diss, founder of bemoko.com, a UK mobile Internet software company that is focused on developing new ways for web designers to construct better websites that can be delivered across all platforms.

In this blog, Mat looks at the offline support HTML5 delivers for web applications…

A key benefit of native applications (whether desktop or mobile) is the ability to interact with the application whilst you are offline. HTML5 offline support allows web applications to achieve this. Google has demonstrated its backing for the HTML5 offline support by announcing that it is no longer supporting Google Gears – an early solution for offline web access – and is backing the HTML5 offline APIs.

So along with strong support from the major browsers, this is an indication that this API will mature and become an essential foundation for the web. Offline storage will be an essential ingredient of any web application that requests information from a user and delivers essential information that a user would want to access anywhere, anytime.

For example, when a user fills in a form or edits some data, it is an important aspect of the user experience that the information entered is not lost, causing frustration. With HTML5, changes can be stored locally in the browser and synced with the main server when a connection is re-established.

Information applications – such as travel guides – also provide much more value if you can access information quickly and reliably, even if you are on the tube, on a plane, or in a foreign country on an expensive data plan.

The example below implements a simple local ‘to do’ list manager:

function storeData() {
  var newDate = new Date();
  var itemId = newDate.getTime(); // creates a unique id from the milliseconds since January 1, 1970
  var todo = document.getElementById('todo').value;
  if (todo != "") {
    try {
      localStorage.setItem(itemId, todo);
    } catch (e) {
      // localStorage throws a DOMException when the storage quota is exceeded
      if (e.code == 22 || e.name == 'QuotaExceededError') {
        alert('Quota exceeded!');
      }
    }
  }
  document.getElementById('todo').value = "";
  document.getElementById('todo').focus();
  getData();
}

function getData() {
  var todoLog = "";
  var i = 0;
  var logLength = localStorage.length - 1;
  // now we are going to loop through each item in the database
  for (i = 0; i <= logLength; i++) {
    // let's set up some variables for the key and values
    var itemKey = localStorage.key(i);
    var todo = localStorage.getItem(itemKey);

    // now that we have the item, let's add it as a list item
    // (the exact markup is assumed here: each entry gets a link that removes it)
    todoLog += '<li>' + todo +
      ' <a href="#" onclick="removeItem(\'' + itemKey + '\');return false;">remove</a></li>';
  }

  // if there were no items in the database
  if (todoLog == "") {
    todoLog = '<li>Log Currently Empty</li>';
  }

  document.getElementById("todoList").innerHTML = todoLog;
}

function removeItem(itemKey) {
localStorage.removeItem(itemKey);
getData();
}

function clearData() {
localStorage.clear();
getData();
}

window.onload = function() {
getData();
}

Which can be used from a page along the following lines (the exact markup is assumed for illustration; the todo and todoList ids match those used in the script above):

  <h2>I've got to ...</h2>
  <label for="todo">Enter something you have to get done</label>
  <input type="text" id="todo" />
  <button onclick="storeData()">Add</button>
  <button onclick="clearData()">Clear list</button>

  <h2>My to do list...</h2>
  <ul id="todoList">
    <li>Nothing to do</li>
  </ul>

Note that once this application has been downloaded by the user it can be used offline (i.e. without a network connection). In a full application you would probably want to persist these changes to a back-end server when the user comes back online, which can be done by hooking into the onstorage event handler (see http://blog.bemoko.com/2009/09/16/cool-iphone-handles-the-html5-onstorage-event-handler/).
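A minimal sketch of that idea (the /sync endpoint and payload shape below are invented for illustration; the linked post covers the storage event itself) would listen for the browser coming back online and push the locally stored items up to the server:

  // Purely illustrative: when the browser regains connectivity, send everything
  // held in localStorage to a (hypothetical) /sync endpoint on the server.
  window.addEventListener('online', function () {
    var items = {};
    for (var i = 0; i < localStorage.length; i++) {
      var key = localStorage.key(i);
      items[key] = localStorage.getItem(key);
    }

    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/sync', true);
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.send(JSON.stringify(items));
  });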



September 22, 2010  5:17 PM

Michael Dell talks ‘Zettabytes’ at Oracle OpenWorld

Adrian Bridgwater
Data, data-centre, Databases, Dell, Networking, Services, Zettabytes

This enormous IT show started off today with a presentation from Michael Dell. The man himself kicked off with an overview of the partnership that has existed between his company and Oracle since 1995. This week has seen the launch of a new Oracle consulting and implementation practice at Dell, so it’s fairly safe to say that the two companies are snuggling up closer than ever.

Dell went on to detail a comparatively new measure of data in terms of our general awareness: the Zettabyte. A Zettabyte is 1.1 trillion Gigabytes, and studies suggest that by 2020 we'll all be creating about 20 Zettabytes per year, with about one third of that data passing through a cloud.

Dell went on to talk about servers, storage, networks and services. He leaned towards talking about how Dell works at the application level, detailing several large implementations of Dell hardware and services and bringing out customers including Zynga Games (the people behind Farmville) and FedEx.

Dell is pointing to ‘next-generation’ data centres that will power the new ways we handle data over the next couple of decades – clearly he wants to make sure that his company’s servers are well positioned to drive these data banks – and it would be hard to argue that his relationship with Oracle will harm that objective.

The newly enhanced Dell Services Oracle Practice is designed to drive cost and complexity out of IT infrastructures. The company says that, “Dell Services also helps companies simplify data center operations with assessment, design and implementation services that focus on Oracle Database, including Real Application Clusters, in a standards-based x86 environment. These services are designed to enhance data availability while lowering the total cost of ownership.”

SUncloud.jpg

Given Oracle's own prowess in database hardware and the newly launched Oracle Exalogic Elastic Cloud, it's kind of hard to know where Oracle stands on hardware from a corporate policy perspective. On the one hand, Oracle says that it buys a lot of Dell kit. On the other hand, the Exalogic box is described as the world's first and only integrated middleware machine – or to use Oracle's own words, "A combined hardware and software offering designed to revolutionise data center consolidation."

Either way, if Dell's Zettabyte predictions are true then we're going to see a lot of software application developers and database administrators being kept very busy in the months and years ahead.


September 21, 2010  12:14 AM

Larry Ellison: how should we define the cloud?

Adrian Bridgwater
Cloud Computing, middleware

If you've been following the news from Oracle OpenWorld and JavaOne 2010 so far you'll have seen a whole heap of products unveiled. The company has carpet-bombed the newswires with some pretty meaty announcements. There is therefore, I hope, room for some slightly more tangential comments as to the look and feel of the show.

Larry Ellison himself used the first keynote of this event (unusually hosted on a Sunday night) to detail his all-singing, all-dancing integrated hardware and software box – the Exalogic Elastic Cloud. The concept behind this product is that it contains all the hardware, middleware, processing power and software needed to create a cloud infrastructure.

Under the new tagline ‘Hardware and Software Engineered to work Together’, Oracle is rolling out what it claims to be a total cloud solution – but is it simply, as some commentators have suggested, nothing more than an ‘intelligent server’?

Is this announcement no more than "cloudwashing" – i.e. putting the term "cloud" in front of some otherwise standard piece of kit – or is it true innovation?

Could this really be the cloud in a box – delivered?

Larry 1.jpg

Looking at the hardware on offer here, we can see how Larry defines the cloud.

"The whole idea of cloud computing is to have a pool of resources that is shared by lots of different applications within your company – at end of month accounts, the resources are directed towards the finance department, when it's time for a sales drive, a different department gets what they need and releases the power that they don't need," said Ellison.

If the Exalogic Elastic Compute Cloud is indeed a cloud in a box, then this is what it takes to build one:

  • 30 servers in a box – 360 cores
  • An InfiniBand network – so the servers can all talk to each other (but with a lot of bespoke additional work from Oracle)
  • A storage device
  • A VM with two guest OS options – one is Solaris for the high end, and one is Linux
  • A strong middleware component

… and the secret sauce? Well, that's the "coherence software" that conjoins the memory in all of the servers so that they appear as one complete unit.

OK, so every tech vendor worth its salt has its own spin on what cloud computing really is – and Larry clearly had a riot of a good time running down the efforts of Salesforce.com and Red Hat as he set out Oracle's definition of the cloud. If he ABSOLUTELY HAD to point to another company's offering, he said, it would probably be Amazon's EC2 (Elastic Compute Cloud).

For Larry, the cloud must be a "platform" upon which you build other applications, one that includes both hardware and software and is all standards-based. The on-demand element is especially important here, so much so that Amazon put the word ELASTIC into the name itself (which Larry actually quite likes) – after all, it was Amazon that really popularised the term.

Is your head still in the clouds? Spare a thought for the attendees who had to sit through a 10-minute, gung-ho, boys-at-sea video of Larry winning the America's Cup in his massive sailing catamaran.

Tonight, all the UK press are going out for a few beers with him while he asks us to tell him what we like most about open source computing and the community contribution model of software application development before we head on out for burgers.

OK – so that last bit didn’t happen, but all the above did!


September 20, 2010  6:26 PM

Oracle OpenWorld: paving the way for Solaris 11

Adrian Bridgwater
Developers, Oracle, Solaris

Oracle has used its Oracle OpenWorld and JavaOne exhibition in San Francisco this week to outline the next major release of the Oracle Solaris operating system. The company says that it is paving the way for the next version of this enterprise level OS by releasing Solaris 11 Express, to provide customers with access to the latest Solaris 11 technology.

The company says that the first Oracle Solaris 11 Express release, expected by the end of calendar year 2010, will provide customers with timely access to the latest Oracle Solaris 11 Express features with an optional Oracle support agreement.

This release is intended to be the path forward for developers, end-users and partners using previous generations of Solaris.

Keynote.jpg

The full version of Oracle Solaris 11 is expected to increase application throughput, exhibit improved platform performance and maximize overall platform reliability and security.

Here at the show itself, rumours suggest that this is some of the first evidence of the wider Oracle technology stack being brought to bear upon Solaris – and that this is positive joint engineering and integration.

When it does arrive in its full-blown version, Oracle Solaris 11 is scheduled to contain more than 2,700 projects with more than 400 new so-called "inventions".

"Oracle Solaris 11 is now increasing system availability, delivering the scale and optimisations to run Oracle and enterprise applications faster with greater security and delivering unmatched efficiency through the completely virtualised operating system," said John Fowler, executive vice president, Systems, Oracle.

Oracle says that Solaris 11 will virtually eliminate patching and update errors with new dependency-aware packaging tools that are aligned across the entire Oracle hardware and software stack to reduce maintenance overhead.


September 20, 2010  5:31 PM

The advantages of the Geolocation API

Adrian Bridgwater
Developers, geolocation

This is the second post by Mat Diss, founder of bemoko.com, a UK mobile Internet software company that is focused on new ways for web designers to construct better websites that can be delivered across all platforms.

In this blog, Mat looks at the advantages of the Geolocation API…

The Geolocation API already brings with it the possibility for the user to share their location. By design, this is a user choice on a site-by-site basis – users choose to share their location when they feel they'll benefit and when they trust the service provider. This opens websites up to the opportunity of location-based services, such as finding people or places nearby; combined with mapping services, this can all help to create a more pleasurable experience with a service provider – saving time and shoe wear.

You can locate a user by including the JavaScript below in your page:

 

  function getLocation() {
    if (navigator.geolocation) {
      document.getElementById("findMe").innerHTML = "We're locating you ... ";
      navigator.geolocation.getCurrentPosition(success, fail, {enableHighAccuracy: true, maximumAge: 600});
    } else {
      document.getElementById("findMe").innerHTML = "Phone not supported";
    }
  }

  function fail(error) {
    document.getElementById("findMe").innerHTML = "Error: " + error.message;
  }

  function success(position) {
    document.getElementById("findMe").innerHTML = "Located you, at lat: " + position.coords.latitude + ", lng: " + position.coords.longitude;
    var redirectUrl = '/map/latitude/' + position.coords.latitude + '/longitude/' + position.coords.longitude;
    window.location = redirectUrl;
  }

and then including a find me button along the following lines (the findMe element id matches the script above; the exact markup here is assumed for illustration):

  <button onclick="getLocation()">Find me</button>
  <div id="findMe"></div>

This example redirects the user to a new URL with the latitude and longitude populated, but you could equally do this with an AJAX callback.
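For completeness, an AJAX flavour of the success handler might look something like the sketch below (the /nearby endpoint is invented for the example; the findMe element and position object are as in the script above):

  function success(position) {
    // Fetch nearby results over AJAX and show them in the page
    // instead of redirecting the browser.
    var xhr = new XMLHttpRequest();
    var url = '/nearby?lat=' + position.coords.latitude +
              '&lng=' + position.coords.longitude;
    xhr.open('GET', url, true);
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4 && xhr.status === 200) {
        document.getElementById('findMe').innerHTML = xhr.responseText;
      }
    };
    xhr.send();
  }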


Visit the demo site at: http://bemoko.com/html5demo/i

View Video file: bemoko html5.gelocation.wmv


September 15, 2010  7:15 PM

HTML 5.0 tutorial: the advantages of native media support for delivering video

Cliff Saran
bemoko, IE9, iPhone

This is the first post in a tutorial by web application specialist Mat Diss, covering the new features in HTML 5.0.

HTML5 brings native media support to the browser. There has been much fragmentation in the formats of video and audio required for web delivery. Historically this also came with a lack of control over how the media is displayed (e.g. embedding in a page) and the requirement for extra plugins. This makes it more costly for service providers to deliver media and makes media experiences less than seamless for the end user. By standardising media support within the web environment, this fragmentation can be brought under control, making video delivery easily accessible to all and the experience more pleasurable for the user.
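As a quick illustration of what native media support means in markup terms (the file names here are placeholders), a single video element with source fallbacks replaces the old plugin embed codes:

  <!-- Illustrative only: the browser picks the first source it can decode
       and supplies its own built-in controls, with no plugin required. -->
  <video width="480" height="270" controls>
    <source src="clip.webm" type="video/webm">
    <source src="clip.mp4" type="video/mp4">
    Your browser does not support the HTML5 video element.
  </video>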

 


There is still an ongoing battle between the Ogg, H.264 and WebM video codecs. H.264 brings widely accepted, improved video delivery, but in a proprietary format. Google created WebM to provide an open standard to compete with H.264; MPEG LA has responded by announcing that it will indefinitely extend the royalty-free use of H.264 for free web content. There are strong and passionate arguments on either side. Standardisation brings with it significant performance benefits, with device manufacturers moving standard video codec processing into hardware instead of software, which can lead to over twice as much battery life.

Mat Diss is founder of bemoko, a UK mobile Internet software company pioneering new ways for web designers to construct websites that can be delivered across multiple platforms.



September 15, 2010  9:30 AM

IBM predictive analytics software helps save Grévy’s zebra

Adrian Bridgwater
Data, IBM

IBM isn't just a big old behemoth of an IT company concerned with databases, Rational software application development processes, massively powerful Z-series servers, proprietary technology leadership, autonomic computing mechanisms and open source innovation, y'know?

It cares about the zebra too, I'll have you know.

Equus_grevyi_1.jpg

Big Blue execs were in London this week to talk about how the company's predictive analytics software is playing a key role in helping conserve wildlife in northern Kenya. Working with conservation charity Marwell Wildlife, IBM hopes to help secure a future for Grévy's zebra, an endangered species with fewer than 2,500 individuals left in the wild.

Winchester-based Marwell Wildlife is conducting a survey on Grévy's zebra in which Kenyan nomadic herdsmen are interviewed about the animal: what they think the key threats facing it are and where they think the animals are located. The herdsmen have very good wildlife knowledge, and interviewing them is a very efficient means of collecting information over this vast and inaccessible area. The charity then uses IBM predictive analytics software to help identify patterns and analyse data, which will help inform decisions on the conservation measures to be taken.

“The IBM predictive analytics software is critical in analysing the information we collect from the field. The data from the surveys is vast and complex and requires powerful software to analyse it. The software is ideal for identifying trends and patterns from this data,” said Dr. Guy Parker, head of biodiversity management at Marwell Wildlife.

"In the case of the recent interview survey, the software enabled us to determine people's attitudes towards the Grévy's zebra. Furthermore, we were able to determine what influence factors such as education level, age, location, and wildlife benefits had upon people's attitudes. This is the kind of complex multi-variate analysis that the IBM predictive analytics software is designed to tackle," added Parker.

This analytics solution is powered by IBM SPSS predictive analytics software.

IBM Predictive Analytics at Marwell Wildlife

