Morgan Fritz: Industrial Design Grad, Fashion Designer, DSX Veteran

You in the Private Cloud: A bi-weekly series of conversations with IBM talent around the world


Our Chief Designer, David Townsend, takes new designers aside when they join our team and tells them, “This is the most complicated thing you’ll ever design. Don’t be afraid to ask dumb questions. And know that everything else you design in your career after this will be easy compared to what you’re about to do.” This week, I talked to Morgan Fritz, a new designer in our San Francisco Studio, to find out what it’s like to turn some of the most abstract and complex software in the world into a product that should be easy, and a pleasure, to use.

Why did you choose to study Design? I studied Fine Arts in high school, but it was too much time alone in a room. I wanted something more social, and something for a bigger cause. I went to Carnegie Mellon to study industrial design and then user experience (UX) – I loved the systems thinking. You’re thinking, not just making pretty pictures. IBM is perfect for that.

Yes, ‘Think.’ Exactly! The only downside is that it can be hard to explain to other people what we do, because it’s technically challenging.


You’ve been here at IBM for 7 months, and your first project was the Data Science Experience (DSX), which went from a seed project to full-blown GA in a very short period of time, and every customer I’ve shown DSX to loves it. It’s a tremendous achievement. What was it like to design it?

Crazy. We came in, and for all of us it was either our first job out of Design School or we were new to IBM, and it was like, ‘Here’s what we’re doing, here’s what a data scientist is, go!’ The pace was what we wanted: fast, like the momentum at college. And to experience a full product from research to launch for my first project on my first job, to be able to see something go all the way through and to be able to point to it and say, “I did that,” that was a crazy opportunity.

Design plays such a critical role in our products. Earlier today I talked to our machine learning team about how we can make sure design is part of everything we do, to help ensure ease of use. From your experience, do you think design is incorporated in a way that will result in a product that’s easy to use? Oh absolutely, but a lot of that comes with growing and learning. The DSX design process started to work after we learned to work with our dev team and with offering management. After GA, we did a retrospective – how could the process have been better? It happened so fast – files were flying around – so we learned so much from the retrospective.

What do you do outside of work? Travel, most recently around California; I just got back from Yosemite. Next is Pittsburgh. I’m launching a fashion line and I have to show a dress at the Carnegie Mellon Fashion Show. This weekend I have to buy the fabric and make it.

What is it about travel you like? The novelty of new landscapes, or more the process of getting there? Insights. I spent 6 months in New Zealand and I carried a sketchbook everywhere. It was nice to be able to spend that much time there because I made friends and then stayed with them in different places. And even in a work context, we design for a certain market, but we have international studios; we’re designing for a larger base than just people from Silicon Valley, CA, so it’s nice to actually understand different places and people.


What gives you the utmost satisfaction, the product being released, or looking at the design and ensuring that it’s simple enough to use? If you look at satisfaction from an engineering perspective, you’ve coded it, you’ve designed the architecture, and when you release it, that’s the point of satisfaction. There’s no one right answer in design – but there’s this magical moment when you’re like, I think we got it right, I think we nailed it. It’s that point before launch, when we hand off to devs, when we’re not just saying, ‘here’s our design,’ but talking through technical aspects with them: you feel like everything falls into sync.

You’re a group of new talent at IBM. You’re in San Francisco, and there’s a lot of opportunity. What is the perception of working here as opposed to, say, at Uber or Lyft? On one side, it’s definitely challenging. Design is still emerging at IBM; we’re still finding a good rhythm for working with developers. On the other side, we’re given so much influence. I think that’s so substantial compared to other tech companies – the fact that our small team built a core part of this huge product we released. It’s crazy to feel like you have that much influence in such a large company. Looking back on DSX we can say, ‘We did this, everything we designed had an intention, and that intention got followed through to the end.’ That’s such a great feeling.

_________________________________

Name: Morgan Fritz

Years at IBM: <1

Currently working on:  UX for DSX and IBM Data Platform

Hometown: Santa Rosa, CA

Top 5 Design Destinations: Queenstown, New Zealand; Santa Fe, NM; Istanbul, Turkey; Santorini, Greece; Santa Rosa, CA

Dinesh Nirmal,  

Vice President, Analytics Development


Follow me on Twitter @DineshNirmalIBM

Phu Truong: Humble Leader, Loves Logic, Hates Calculations


You in the Private Cloud: A bi-weekly series of conversations with IBM talent around the world

If you’ve seen the movie “Hidden Figures” — and if you haven’t I highly recommend you do, and not just because IBM is a central character — you’ve seen how the race to get a man into space was profoundly affected at the 11th hour by one courageous woman, and the help her boss, her friends, her teachers, and her family gave her to get to that one minute in time when she made a difference.

People first. To build rockets to the stars and machines that think, people need to dream things up, and work with sustained, supported effort to make them real.

We have many talented people in IBM Private Cloud. This year, I’ll continue to meet and talk to as many of you as I can and I’ll post our conversations here every two weeks. My hope is we’ll get to know each other, and feel even more connected and supported in our work.

This week, I was able to lure Phu Truong away from coding on the IBM data platform to meet at IBM Silicon Valley Laboratories, San Jose, CA.

You were an intern with us until just a few months ago. Why did you choose to come to IBM full time, out of all the choices in Silicon Valley? The appeal of IBM is the opportunity to work on new technologies, specifically, new technologies on the back end. A few weeks into my internship the senior engineer I worked with set me to work on learning Node.js® and React. I want to be a full stack engineer so now I’m working on UI, but to be really great there you need a feel for art, and I don’t have that. The back end is pure logic. I loved it, so much so that I started staying very late at night to work.


Some people love their jobs because of people, or culture, but clearly, you love the technical work. How did you decide on computer programming as a profession? I come from Vietnam, and I had no programming background there, to be honest. I studied mathematics at university and planned to go into it professionally, but I’m very bad at calculations. I make mistakes all the time! What I love about mathematics is logic — the feeling I get when I solve a problem using logical thinking is intensely satisfying to me. I feel very good about myself.  So when I came to the U.S., I had a fresh start. I asked my friends to help me find a field that uses logical thinking to solve problems, and they recommended computer science. One week into my first CS class, Data Structures and Algorithms, I knew I’d found my profession.

So now you’re at IBM, you worked on the Data Science Experience (DSX) and now you’re working on the IBM data platform. Are you thinking of following the full path from engineer to Senior Technical Staff Member (STSM) to Distinguished Engineer (DE) to IBM Fellow? I don’t know, that may be too much!


I hear great things about you so maybe not! You’re already mentoring others in your team on Node.js, after being here only a few months. I consider it more like sharing knowledge. When a colleague comes to me with a question, I might know something they don’t and they might know something I don’t. I might say something wrong when we’re working together and that’s an opportunity for them to correct me and for me to learn. Growing up, I helped my younger brother with his schoolwork, so I guess it’s natural for me to help. But it benefits everyone.

What do you like to do outside of work? I like to play Ping Pong with my friends from San Jose State, or go with them to the beach. And I love to travel—I want to go to Cancun, because of all the natural landscapes the beach is my favorite and I’ve heard it’s spectacular there. After that, Paris and London. I love eating out, so much so that I tell my friends I want to marry a chef!

You have an adventurous spirit! IBM is an international company so, I don’t know about Cancun, but travel to Europe is likely. What’s it like living in the heart of Silicon Valley after growing up in Vietnam? I grew up in Saigon, in a very tall, very thin town house: Saigon is famous for thin houses. Here, being surrounded by rolling green hills and close to the beach is wonderful. I think my family worried about me when I moved here, not that it was dangerous, but that I might just chase money and give up on my education: I worked as a waiter, a data entry clerk and a school bus driver, any job I could get, I took. But I never gave up on my education. I think now they don’t worry about me anymore. I think they might be proud of me.


You’ve achieved a great deal here in a very short period of time, making a significant contribution to two products that customers like. It’s tremendous, and I’m happy you’re here. I am as well. I think the biggest difference between Vietnam and here is in education and learning. In Vietnam, education was driven by memorizing things and was not interesting to me. And, we are taught to do exactly what teachers tell us to do; they don’t give students a chance to explore their interests. So being able to explore, first at San Jose State and now at IBM where it’s part of my job to learn new skills—well, I like it very much.

Name: Phu Truong
Hometown: Saigon, Vietnam
Currently working on: IBM Data Platform
Favorite Programming Language: Node.js
Top 3 travel destinations: Cancun, Paris, London
Best Vietnamese Food in Silicon Valley: Pho Y 1 on the Capitol Expressway, San Jose

Dinesh Nirmal,  

Vice President, Analytics Development
Follow me on Twitter @DineshNirmalIBM

Node.js is a trademark of Joyent, Inc. and is used with its permission. We are not endorsed by or affiliated with Joyent.

Talking with IBM Talent around the World


Introducing Kewei Wei, from Beijing, China

My founding principle for our organization is, “people first.” I believe that to build great products, you start with talented people and invest in their work and their individual well-being. The great products they make will in turn bring customers.

One of the best things we get from work is the satisfaction of being part of a group—bringing a diverse set of skills together to accomplish something that no individual can do alone. We’re wired to feel good when we act together towards a common goal—Sebastian Junger (author of “The Perfect Storm”) writes in “Tribe” that interdependence and community are required for human happiness. If you played in your school orchestra or soccer team, you know the feeling; I get it with my running buddies when we silently sprint the last half mile, pacing each other to the end of the trail.

In the IBM Private Cloud team we have almost 3,000 people, and a list of clients who represent a significant piece of the financial, government, and commercial world. And it just so happens that we sit at a “tipping point,” with customers wanting the advantages of all that cloud offers, plus premier machine learning technology, in a private cloud environment.

As a team of thousands working in labs from Austin to Zurich, we have the chance to grow to know each other beyond the sometimes less-than-energizing interface of email.

I decided to conduct a series of interviews, You in the Private Cloud: look for them here every two weeks. It’s a great way to bring new information forward and increase our collective knowledge.

People who are talented and engaged tend to have interesting lives outside of work—hobbies that test limits and relationships that matter, so I’ll be asking about that as well.

To kick off the series I talked to Kewei Wei last week in Beijing. Kewei is the lead developer for Machine Learning on z Systems in our IBM China Development Lab.


Kewei and I had just come from a meeting with a customer – one of the largest financial institutions in the world. They had just told us they want to be part of the closed beta of our machine learning offering on  z Systems – a testament to the tremendous work Kewei and the team have done. We sat down in a quiet part of the office and I asked him what he thought:

Kewei: This is exciting news! We weren’t sure how quickly this customer would move into new technologies.  But I believe they were, to a certain extent, feigning reluctance; they’ve been trying all along to find a balance between moving ahead with machine learning and other new technologies, and keeping the stability and security that they absolutely need to maintain. They cannot take risks with financial information, given their global position and their scale. Now we have an opportunity: if we can show them during the closed beta that we have the ability to be both of these things to them—a protector of security and stability, as we always have been, and now also the conduit to state of the art machine learning technology, they’ll take the risk and move forward with us.

You worked on z Systems for 10 years. Then, 3 months ago, out of the blue you’re told you’re leading Machine Learning on z Systems, and you have 90 days to deliver. How was that workday for you?
To be completely honest, my first thought was, “This is impossible”. In ten years in the z world, we’d never done anything like this; our release schedule was more like 3 years. But we did it, and I believe it’s because we were all passionate about machine learning. We know it’s the technology that’s going to make the greatest change for the world, and that’s something we all wanted to be part of. So we put our heart into it. We learned the open source libraries, the skills we needed to deliver the new micro-services structure, and, critically, we learned how to prioritize. With so much to learn, if we hadn’t done that, we wouldn’t have been able to deliver.

Why is machine learning so exciting to you? What would you like to solve with AI?
I’d like to build machine learning AI to help us perform systems diagnoses and fix defects. That way, we humans could have time to focus on how to make our world better, instead of just making things work correctly.


A fine distinction. Tell me your take on the Private Cloud.
I think it’s a good thing. We’ve been hearing about the importance of public cloud for a long time. Customers  are telling us they will not be ready for the move to public cloud in a short period of time. But they’re also telling us they can’t afford to wait years for security to mature in the cloud. They need machine learning now, behind the firewall, to be competitive.

Private Cloud represents the reality of the market—what our clients are asking for. It means we’re helping them move forward, but without having to face some of the perceived risks of some public clouds.

I also believe innovation in machine learning will happen in Private Cloud. The foundation of machine learning is, of course, data, and as of today, the majority of critical data is still in the hands of customers who keep it behind the firewall. And customers know they have to use machine learning in their core business, which is behind the firewall. They’ll invest in private AI, and we’ll get to build that.

In terms of innovation, what do you see in China that might come to the US from China’s hyper-evolved app technology, or from the broad consumer adoption of virtual reality there?
Honestly, looking at it from a technology perspective, there is nothing new in what China is doing today. I think the key is to move fast. Don’t wait, listen to consumers, be willing to take risks, dare to fail and always move faster than your competitors.

Where did you grow up, and how did you come to be an engineer in Beijing?
I’m from Jinzhou—it’s a city of about 3 million people (small, in Chinese terms) in Liaoning Province in the northeast of China. My father was a bus driver and my mother worked for an oil company. I am very proud of them! When I was 18 I came to Beijing and enrolled in university. In one of my very first classes, a professor helped me write a Pascal program to simulate a game of chess. I was shocked that this was possible—and in a way I never stopped being shocked, excited, I mean, by what we can do with programming.

What do you like to do when you’re not programming?
Read! I love history books. I just finished “Zhang Juzheng,” a biography of a famous Chengxiang – a prime minister – in the Ming Dynasty. I’m looking for a book on European history for my next one. I bought Gibbon’s “The History of the Decline and Fall of the Roman Empire” but a few chapters in I realized I’d better start looking for a thinner version of it.

Sensible.
Realistic. I also love to travel with my wife and my 5-year-old son.  I think the most fun thing in life is being in a beautiful place, in the mountains or in a relaxing beach city like Xiamen, with the people you love most.


What do people talk about in China when they talk technology? Are people concerned about security and privacy? Or Artificial intelligence displacing the traditional work of people in the labor market?
What concerns individual consumers in China far more than security is convenience and lower cost. For example, the latest hot business here is food delivery; people love that they can order food from a nice restaurant through an app and get it within 30 minutes, for less than they’d pay to eat in the restaurant. Part of the reason it’s so low cost right now is because China’s internet companies are investing without regard to cost: they want to lock in users for long-term return.

It’s almost the inverse of the concerns of government or large enterprise: individuals choose convenience over security every time. For example, you can pay by Alipay or Wechat almost everywhere, including places like vegetable markets where POS terminals will probably never exist. Alipay and Wechat are not as secure as credit cards, but they are far simpler to use. It helps that Alipay and Wechat have committed to compensate consumers if they lose money because of any security defects.

As for AI displacing traditional labor, people in China don’t worry about it much. AI is still a pretty new concept to most people in China. What we know is just AlphaGo or Watson. It still sounds like something far off in the future, not close to our daily lives at all. But AI is starting to draw more and more attention, so this could change soon.

Overall, tech in China is booming, and companies like Alibaba and Huawei are knocking at your door. Why stay at IBM, with those kind of opportunities?
Oh, that’s an easy one: the people. In my opinion you can’t find a better company in the world for talent. I have the greatest people around me, and it’s because of them I had the confidence that I could deliver this project on time.

Kewei, having you on the team has been a blessing: you have been a true IBMer, and delivered machine learning on z Systems in such a short time – an impossible task. Companies would probably not buy our products without people like you. Thank you.

You are welcome. I enjoy it. If I worked somewhere else, I’d have to quit and look for a new job every time I wanted to try something new—but not here. At IBM, we have tremendous chances to try new things we like to do.

Name: Kewei Wei
Years at IBM: 11
Lives and works in: Beijing, China
Currently working on: Machine Learning on z Systems
Favorite programming language: PL/X
His top 3 beach books for not-so-light reading: “Three Kingdoms” by Luo Guanzhong, “Blood Remuneration Law” by Li Qiang, and “The Shortest History of Europe” by John Hirst

Dinesh Nirmal,  

Vice President, Analytics Development
Follow me on Twitter @DineshNirmalIBM

Should’ve, Could’ve, Would’ve – Making the Optimal Decision.


In a previous blog I talked about the value of machine learning and how it could help organizations make smarter predictions by continually learning and adapting models as it consumes new interactions, transactions and data. I compared that to how my son embraced learning about the world around him to become gradually smarter, more knowledgeable. But that doesn’t always mean he is going to make the best decision, because he may not have all the information, or be able to foresee outcomes or correlate past events.

So, I guess there are times when we wonder – or get asked by others – whether the decision we just made was the best possible one. Think of when you made your last hi-tech or car purchase. It’s often difficult to judge at the time if it was the best choice, as there are many parameters involved, including (but not limited to) logic, price, value, fit with our needs and wants, emotion, and politics. How many times have you sought to justify that purchase immediately after by doing even more research on reviews by others? Don’t worry. It’s a natural human reaction known as post-purchase cognitive dissonance.

The capability that addresses this – recommending the best course of action, not just predicting what will happen – is called prescriptive analytics, and it relates to descriptive and predictive analytics as shown in Figure 1 below.


Figure 1: Descriptive, predictive and prescriptive analytics relationship.

Making that optimal decision at a business or boardroom level becomes even more exacting and complex, involving many people and large sums of money. Every decision needs to balance risk with cost and with benefit. Quite often there are conflicts of interest, bias, and power struggles involving political and emotional agendas that can result in suboptimal decisions being made for the business. Oh, the frailties and flaws of humankind! The crux of the matter is that there are far too many parameters, data points, correlations and patterns for us humans to be aware of to make the optimal decision for every transaction and interaction. So there are times we need these capabilities to augment and balance our own judgements. These decisions can range from a simple purchase, to where best to distribute warehouse stock, to where to place emergency services ahead of a pending disaster, to financial and risk-based decisions, through to decisions that ultimately help preserve or save lives in the medical field.

Decision Optimization at every transaction.

IBM has been providing decision optimization for many years as part of the business logic within applications, with products such as IBM Decision Optimization and its market-leading CPLEX Optimizer engines. Together these offerings provide a collaborative environment for key personas to deliver powerful solution applications that help line-of-business users make better and faster decisions in planning, scheduling and resource assignment. In short, the aim of Decision Optimization is to remove the guesswork from making a decision by “prescribing” and automating the best decision for you. Just to put your mind at rest: the business user or planner still has the ability to interact with and change that decision – it is a recommendation, and the end user still owns the choice of whether the decision will be operationalized or not.
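To make the idea concrete, here is a minimal sketch of a prescriptive model written against the docplex Python client for the CPLEX Optimizer. The scenario, variable names, profits and capacities are hypothetical, chosen purely for illustration; a real model would be built from your own business data.

    # Minimal prescriptive-model sketch using the docplex client for CPLEX.
    # All numbers below are hypothetical, for illustration only.
    from docplex.mp.model import Model

    m = Model(name="warehouse_stock")

    # Decision variables: how many units of each (hypothetical) product to stock
    units_a = m.integer_var(name="units_a")
    units_b = m.integer_var(name="units_b")

    # Constraints: shared shelf space and a purchasing budget
    m.add_constraint(2 * units_a + 3 * units_b <= 600, "shelf_space")
    m.add_constraint(5 * units_a + 4 * units_b <= 1000, "budget")

    # Objective: maximize expected profit
    m.maximize(40 * units_a + 55 * units_b)

    solution = m.solve()
    if solution:
        print("Recommended stock:", units_a.solution_value, units_b.solution_value)
        print("Expected profit:", m.objective_value)

The solver returns the recommended plan, but as noted above, the planner still decides whether to operationalize it.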

There are three key personas involved in building a decision optimization solution as part of the optimization application development cycle (Figure 2), namely the business analyst, the operations research (OR) expert and the application developer – and the LOB consumer, of course.


Figure 2: Optimization Application Development Cycle.

Together these personas have a broad range of responsibilities and needs such as:

  • Create optimization models and algorithms.
  • Solve models with the IBM CPLEX Optimizers.
  • Analyze results.
  • Use REST APIs to embed optimization in decision-making applications.
  • Work in a collaborative development environment to build and deploy enterprise-scale optimization-based applications.
  • Visualize trade-offs across multiple plans, scenarios and KPIs.
  • Optimize costs versus robustness.
  • Recommend decisions hedged against data uncertainty.

You don’t have to be an Operations Research Expert to use IBM Decision Optimization.

The goal is for decision optimization to be consumable, making this powerful technology available to decision makers who are not necessarily trained in mathematical modeling. Business managers need to be able to use their own business language to define a decision model. To support this, the IBM Decision Optimization R&D team created a proof-of-concept for “Cognitive Optimization”. The goal of Cognitive Optimization is to allow business users to directly create and work with Decision Optimization models, without requiring the intervention of a mathematical whiz. It uses a combination of the business data and the business user’s input, in natural language, to figure out the business intent, to suggest potential decision models, and then to use those models for “what-if” and trade-off analysis.

Similar to what was announced for the Watson Machine Learning service, Optimization as-a-Service will also be available as part of the Data Science Experience (DSX), in addition to standalone cloud, hybrid and on-premises offerings, so the personas mentioned above can experience decision optimization as part of a prescriptive analytics solution or as a stand-alone service. If you consider the high-level flow of Optimization as-a-Service, it looks very similar to machine learning as-a-service, as shown in Figure 3, and the two share many commonalities, which makes DSX an ideal collaborative environment for machine learning and operations research practitioners.


Figure 3: Optimization-as-a-Service high level workflow

Clear business benefits reducing time, risks and costs.

In summary, the combination of IBM Watson Machine Learning and the Optimization as-a-Service capability can help organizations across every industry generate progressively smarter cognitive insights and act on optimized decisions, while reducing the human frailties of emotional, political and personal bias.

Organizations are already experiencing the benefits. For example, a global tire manufacturer uses IBM decision optimization solutions to help:

  • Optimize long-term production planning, considering up to 10 million constraints across all products and plants
  • Predict when major machine bottlenecks are likely to occur, enabling staff to take corrective action early on
  • Drive smarter decision-making with 30 times more what-if scenarios evaluated
  • Save up to 30% of planners’ time, allowing them to focus on value-add initiatives

Your Optimal Decision

Over the next few months this is going to be an exciting space to watch, as decision optimization moves forward with three key design principles – simplicity, collaboration and convergence of the technologies – helping more organizations complete their cognitive journey. So what do YOU do next? What’s YOUR next best decision? Sign up for Optimization as-a-Service as part of the Data Science Experience here.

Dinesh Nirmal,  

Vice President, Analytics Development

Follow me on Twitter @DineshNirmalIBM

Using IBM Machine Learning to Help Solve Real World Business Problems

Billions of connected devices, zettabytes of data, power and brand loyalty now in the hands of the consumer, and businesses having to market and sell to each and every one of us. What just happened? Three driving forces – mobile, cloud and a continuing explosion of data. Everything just got personal.

How can any business make sense of it all? How can they learn, avoid making the same mistakes, and become smarter? Oh – and did I mention much of this needs to happen in real time?

That’s where machine learning, as part of a cognitive strategy, comes into its own. Read my earlier blog on Enterprise-ready Machine Learning.

 

Machine Learning  – Get Smart.

It’s all about doing things smarter.  Regardless of industry, machine learning can greatly assist organizations in making progressively better decisions by constantly adapting and learning.

Below are just a few examples of how organizations have leveraged machine learning capabilities to achieve significant business benefits or provide better service to their consumers and constituents.

 

Healthcare and the Zika virus – a healthcare-focused non-governmental organization (NGO) needed to understand which primates are the most likely reservoirs of the Zika virus and which mosquitoes are most likely to be the carriers. IBM developed a new algorithm to help identify the most likely animals and to support early detection, prevention and management of the virus’s spread. The benefits are very clear – by better understanding the disease, the organization is able to fight it proactively before an epidemic occurs.

 

Government organization addresses long term unemployment – The problem of youth unemployment can be expressed as a statistic but can’t be solved as one. It needs to be solved one person at a time. A European public job agency believed that within its sprawling and complex base of employment data lay the indicators of which young job seekers are most at risk of long-term unemployment and, most importantly, why. Cracking these patterns would help lead to breakthroughs. The organization ran a decade’s worth of deep historical data through machine learning algorithms. The result was a predictive model that not only quantifies long-term unemployment risk but also breaks down the impact that each controllable variable – such as having or not having a driver’s license – has on the big picture. This enabled job counselors to offer job seekers guidance with an unprecedented level of detail, personalization and effectiveness. Traditionally they relied on their own experience and intuition of “what works”. The new solution applies machine learning analysis to the agency’s full record – 10 years’ worth of historical data – to show what really works. Counselors were then able to provide recommendations to job seekers with a high level of confidence. It set the stage for a transformation, both in the depth of its understanding of the youth unemployment issue and in its ability to bring personalized options to young job seekers. Because job counselors have access to granular, personalized insights into the “whys” of youth unemployment – insights surfaced by machine learning and NLP – they can make practical recommendations on closing skill-set and qualification gaps with confidence that it will help make a difference.
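As a purely illustrative sketch of the underlying idea – scoring long-term unemployment risk while exposing how much each controllable variable contributes – the following uses scikit-learn with made-up feature names and data; it is not the agency’s actual system or model.

    # Hypothetical sketch: score long-term unemployment risk and show how much
    # each controllable variable contributes. Features and data are made up.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    features = ["has_drivers_license", "months_since_last_job", "completed_training"]
    X = np.array([[1, 2, 1], [0, 14, 0], [1, 7, 0], [0, 22, 0], [1, 3, 1], [0, 9, 1]])
    y = np.array([0, 1, 0, 1, 0, 1])  # 1 = became long-term unemployed

    model = LogisticRegression().fit(X, y)

    # Risk score for a new job seeker
    new_seeker = np.array([[0, 11, 0]])
    print("Estimated risk:", model.predict_proba(new_seeker)[0, 1])

    # Per-variable impact: the sign and size of each coefficient indicate how a
    # controllable factor (e.g. getting a driver's license) moves the risk
    for name, coef in zip(features, model.coef_[0]):
        print(f"{name}: {coef:+.2f}")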

 

Automotive – improving vehicle uptime. A multinational car manufacturing company based in Europe wanted to improve transport vehicle uptime and avoid unplanned stops so that it could achieve its vision of becoming the world leader in sustainable transport solutions. By building a predictive analytics platform, the company gained the ability to identify the necessary parts and provide repair instructions even before a truck arrives for service. This capability helped reduce diagnostic time by up to 70 percent and cut repair time by more than 20 percent. The client also gained the ability to plan maintenance better by performing preventive maintenance.

Further, the solution applied machine learning techniques to automatically discover patterns and learn from the vast amount of data it collected. Additionally, the company consolidated under one roof the people and systems needed to monitor and respond to vehicle issues in near real-time, including around-the-clock support. The process changes also helped the company maintain its commitment to core values of quality, safety and environmental care.

 

Oil and Gas – increasing outage event detection accuracy by more than 95 percent. For oil and gas companies, powerful and complex equipment such as compressors comprise the heartbeat of production. Keeping them running is a top priority. Companies use sophisticated asset surveillance systems to look for the signs of impending outages. Still, those systems are missing a significant share of outage events because the mix of warning indicators can be simply too complex to discern from the flood of noisy data generated by sensors. One company embraced a new way of measuring the health of its oil production assets. It’s using machine learning algorithms to automatically build complex, multivariate and far more flexible rules that define which changes in vibration patterns, pressure and the like are true anomalies. And because these machine learning algorithms are self-correcting, they get more accurate over time. The result: a quantum leap in outage detection accuracy.

The solution is game-changing for the company because the use of machine learning technology provides a far higher level of predictive accuracy than would be possible with the traditional statistical anomaly detection approaches used by oil companies. Because machine learning discovers complex pre-failure patterns within sensor readings, rather than just seeking out the simplified patterns that were predefined by engineering staff, the solution is far less likely to miss signs of impending equipment failure.
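For a flavor of how multivariate anomaly detection differs from per-sensor thresholds, here is a small sketch using scikit-learn’s IsolationForest on synthetic sensor readings; it illustrates the general technique, not the company’s actual solution.

    # Hypothetical sketch of multivariate anomaly detection on sensor readings
    # (vibration, pressure, temperature). Synthetic data, for illustration only.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Normal operating data: vibration (mm/s), pressure (bar), temperature (C)
    normal = rng.normal(loc=[2.0, 95.0, 60.0], scale=[0.3, 2.0, 1.5], size=(1000, 3))

    # Fit on historical "healthy" readings; the model learns what normal looks like
    detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    # New readings: one healthy, one with a subtle combined drift in vibration
    # and pressure that simple per-sensor thresholds might miss
    new_readings = np.array([[2.1, 94.5, 60.2],
                             [3.1, 89.0, 63.0]])
    print(detector.predict(new_readings))  # 1 = normal, -1 = anomaly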

 

Computer Services – digital image processing. A North American company collects more than two million square kilometers of earth imagery every day from high-resolution satellites, and its imagery is used by customers ranging from urban planners to US and foreign defense and intelligence agencies. Its imagery is also used commercially for navigation technology and web mapping applications. The company developed a new machine learning system designed to help minimize the amount of manual review that is required to detect cloud cover in satellite images of Earth. The system consists of a set of sophisticated image-processing functions using an algorithm that effectively classifies each pixel within an image as either cloudy or non-cloudy by comparing it with the millions of pre-classified examples in the training data set. The company designed the models using IBM machine learning technology and then exported them to its custom-built system, where they are compiled and deployed.

Over a test data set consisting of 600 images, comprising a mix of cloudy and non-cloudy images, the new system reported a true positive rate of 90 percent and an associated false positive rate of just six percent. For the non-cloudy scenes in the data set, which are the most valuable images from the client’s perspective, the system reported a false positive rate of just 0.1 percent, compared to 22.7 percent for its previous system. The company saw the false positive rate for its black-and-white satellite imagery fall to 4.9 percent, from the previous rate of 71.1 percent. The company expects the system to grow ‘smarter’ over time and hopes to reduce the number of images that require manual review by 90 percent.
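To show the general shape of such a per-pixel classifier, here is a hedged sketch with synthetic features and labels; real systems use far richer spectral and texture features and millions of pre-classified examples, and scikit-learn simply stands in for the models the company built with IBM machine learning technology.

    # Hypothetical sketch of per-pixel cloud classification. Features, labels
    # and the review threshold are made up for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)

    # Training set: per-pixel features (e.g. brightness, blue/red ratio, texture)
    # with labels from pre-classified imagery (1 = cloudy, 0 = non-cloudy)
    cloudy = rng.normal([0.9, 1.05, 0.2], 0.05, size=(5000, 3))
    clear = rng.normal([0.4, 0.85, 0.6], 0.05, size=(5000, 3))
    X = np.vstack([cloudy, clear])
    y = np.concatenate([np.ones(5000), np.zeros(5000)])

    clf = LogisticRegression().fit(X, y)

    # Classify every pixel of a new (here tiny) image, then flag the scene for
    # manual review only if the cloudy fraction crosses a threshold
    new_pixels = np.array([[0.88, 1.04, 0.22],
                           [0.41, 0.86, 0.58],
                           [0.92, 1.06, 0.19],
                           [0.39, 0.84, 0.61]])
    cloud_fraction = clf.predict(new_pixels).mean()
    print("Cloudy pixel fraction:", cloud_fraction)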

 

UK-based retail company focuses on loyalty and customer satisfaction – When customers order online, they do so primarily for convenience and cost savings. Most are willing to forego the immediate satisfaction of trying out the item in person, paying for it immediately and walking out the door with the new product in hand. When online shoppers receive merchandise that fails to meet their needs or expectations, their disappointment and the effort required for resolution may sour their impression of the transaction, eliminating both the convenience and the cost savings. Studies demonstrate that 80 percent of first-time customers who return an order will never purchase from a company again. This retailer uses advanced analytics to identify both the products that are returned most frequently and the ineffective marketing tactics that elevate the number of returns. Using statistical and machine learning models, the company takes advantage of rapid alerts and automated information feeds on products that are returned so that it can quickly identify problems with sales and anticipate and help resolve evolving issues.

The solution allowed the retailer to develop a customized response system to automatically contact new customers who returned items, enabling customer service specialists to offer incentives to encourage disaffected buyers to try again. This personalized, near real-time response makes a tremendous difference to customer perceptions of retailers, enabling the company to help improve customer loyalty.

 

Energy and Utilities – integrating renewable energy into the grid. The Vermont Electric Power Company (VELCO) worked with IBM Research to develop an integrated weather forecasting system to help deliver reliable, clean, affordable power to its consumers while integrating renewable energy into the grid. The solution combines high-resolution weather data with multiple forecasting tools based on machine learning. The machine learning models are trained on hindcasts of weather correlated to historical energy production and historical net demand.

The results are some of the most precise and accurate wind and solar generation forecasts in the world. This powerful tool turns multiple streams of data—transmission telemetry, distribution meter data, generation production, highly precise forecast models—into actionable information using leading-edge analytics. A collaborative achievement involving dozens of in-state and regional partners and the formidable intellectual resources of IBM Research, VWAC’s results are significant and its value already demonstrated, even as further benefits continue to emerge. To find out more, and to see the actual business results, watch the video on the VELCO website (courtesy of the Vermont Electric Power Company website and video).
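As a rough sketch of the training idea described above – learning the mapping from weather variables to renewable output from historical pairs, then applying it to a new forecast – the following uses scikit-learn on synthetic data; it is not VELCO’s or IBM Research’s actual system.

    # Hypothetical sketch: learn renewable output from historical weather
    # hindcasts paired with observed production, then apply to a new forecast.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(1)

    # Historical hindcast features: wind speed (m/s), cloud cover (0-1), temp (C)
    weather_hist = rng.uniform([0, 0, -10], [25, 1, 35], size=(2000, 3))
    # Observed wind-farm output (MW), driven mostly by wind speed in this toy data
    output_hist = 4.0 * weather_hist[:, 0] + rng.normal(0, 2.0, 2000)

    model = GradientBoostingRegressor().fit(weather_hist, output_hist)

    # Tomorrow's weather forecast -> expected generation, used for grid planning
    tomorrow = np.array([[12.0, 0.3, 18.0]])
    print("Forecast generation (MW):", model.predict(tomorrow)[0])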

 

Machine Learning – It’s all part of the Data Science Experience.

You don’t have to be a data scientist to use IBM machine learning. But even if you are, we help automate the experience as much as possible, with integrated tools that guide you through a step-by-step process. Or simply use existing, prebuilt machine learning capabilities as a service (MLaaS). There’s a lot happening. If you haven’t already done so, I invite you to sign up for the Data Science Experience at datascience.ibm.com.

 

For more information on IBM’s cognitive strategy and machine learning capabilities, visit ibm.com/outthink.

 

 

Dinesh Nirmal, 

Vice President, Development, Next Generation Platforms, Machine Learning, Big Data & Analytics 

Follow me on Twitter @DineshNirmalIBM

IBM BigInsights Basic Plan ready for a successful journey

It seems to me that more and more companies are adding Apache Hadoop to their mix of technologies in an attempt to spend less time creating a Hadoop infrastructure and more time gaining its benefits. IBM BigInsights (our implementation of Hadoop) is ready to take on that challenge. It provides a complete solution that is not only open and combines Hadoop and Spark, but also provides the right tools to help make Big Data easier and more scalable, while integrating it seamlessly with SQL, NoSQL and other data types.

Community and customer feedback has been consistent: increasing demands for elasticity, automation and ‘pay as you go’ are resulting in a shift in the market towards Hadoop in the cloud. Even large enterprises that successfully run Hadoop on-premises are seeing the cloud as a strategic platform that can help support their Big Data needs. They are looking for a broad range of cloud services and a well-integrated portfolio that helps enable them to have an enterprise-ready solution for their business-critical applications.

IBM as a leader is ready for the challenge. BigInsights is IBM’s premier Big Data cloud solution. It offers customers an opportunity to deliver next-generation insights both quickly and economically. The Hadoop cloud market appears to be gaining momentum across many industries.[1] Prospects become even better when we consider that more and more customers are moving away from an “on-premises only” model and are looking for a cloud solution that fits their needs. According to a recent Forrester Business Technographics® survey, 51% of companies plan to increase their spending on data management in the public cloud. IBM BigInsights plans to answer that need with solutions in both a ‘pay as you go’ model and a dedicated, subscription-based model.

The Forrester Wave report, Big Data Hadoop Cloud Solutions (Q2 2016)[1], evaluated 8 Hadoop cloud vendors based on 37 criteria. Among the eight, IBM was identified as a leader (click to read full report).

I believe the results confirm IBM’s claim of the solidity of the BigInsights offering. Of the eight Big Data Hadoop cloud vendors evaluated, IBM received top scores in five current-offering criteria: solution configuration, data security, data management, development, and cloud platform integration. Similarly, in the strategy category, IBM received top scores in four of six criteria: ability to execute, road map, professional services, and fixes.

But that’s not where the road stops. BigInsights has now released (in beta) the Basic plan, a solution where you pay for the cloud services as you use them. No commitments, no other contracts, just plain Big Data cloud services as needed. The beta version is already showing some impressive results. Based on the usage of 390 users, the add-node action was 100% successful and cluster creation 99.99% successful. Average deployment time varied between 6 and 12 minutes. In other words, a very positive start.

The Big Data shift towards cloud is undoubtedly a great opportunity for IBM. As a ‘more-than-just-Hadoop’ service solution, BigInsights stands out as an end-to-end analytical cloud platform and enables companies to adopt enterprise-scale, scale-out workloads and performance much more effectively. I believe such a robust Hadoop offering can truly help customers address their most business-critical needs.

 

Dinesh Nirmal, 

Vice President, Development, Next Generation Platforms, Big Data & Analytics 

Follow me on Twitter @DineshNirmalIBM

[1] The Forrester Wave: Big Data Hadoop Cloud Solutions, Q2 2016, N. Yuhanna and M. Gualtieri, Forrester Research, Inc., April 22, 2016.

Apache Spark continues to forge ahead with help from IBM’s Spark Technology Center.

 

Apache Spark™ puts both deep and broad advanced analytics capabilities in the hands of the masses. Whether you are a data scientist, data engineer, analytics app developer or citizen analyst, Spark delivers sophisticated analytics more simply, quickly and efficiently than ever before.

Spark is currently one of the most active open source projects for big data. The latest release, Spark 2.0, is the result of nearly 2,500 contributions, with consistently more than 100 contributors per month. The new release is a significant milestone, and builds upon the input of the rapidly growing user and developer community. The Spark 2.0 release has been summarized as easier, faster, and smarter[1]. Notable improvements include streamlined APIs, expanded SQL capabilities, improved performance, and structured streaming. The release solidifies Spark’s leadership position as the premier big data platform.
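For a small taste of the streamlined Spark 2.0 API, here is a minimal PySpark sketch (assuming PySpark 2.x is installed locally; the data is made up): SparkSession provides a single unified entry point, and DataFrames and SQL share the same engine.

    # Minimal Spark 2.0 sketch: SparkSession as the unified entry point,
    # with DataFrames and SQL sharing the same engine. Data is hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("spark2-sketch").getOrCreate()

    df = spark.createDataFrame(
        [("sensor-1", 21.4), ("sensor-2", 35.9), ("sensor-1", 22.1)],
        ["device", "reading"])

    df.createOrReplaceTempView("readings")
    spark.sql("SELECT device, AVG(reading) AS avg_reading "
              "FROM readings GROUP BY device").show()

    spark.stop()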

IBM has had a long tradition of supporting open source projects, and was the very first supporter of the Apache Software Foundation from its inception in 1999. Through IBM’s flagship organization, the Spark Technology Center (STC), IBM’s role in the Apache Spark project is expected to have a massive impact on industry-wide adoption of the technology, which has fed into the excitement felt by the big data community at large about the project.

The STC’s core mission is to contribute to the Spark Community, as well as to expand the core technology to make it enterprise and cloud ready.  It is also fueling the adoption of Spark in the business community.  Through education and outreach, IBM is building data science skills and driving intelligence into business applications.

The commitment of IBM to open source is shown especially in this latest Spark release.  Nearly 18% of all non-trivial Features, Improvements and Bug fixes and 16% of all JIRAs were contributed by the Spark Technology Center, placing IBM as the number two contributor to the Apache Spark Project.

In Machine Learning, the Spark Technology Center contributed no less than 42% of the new features, and 24% of the enhancements.  The STC has contributed 44% of all lines of code (LOC) worldwide to the PySpark component and over 25% of LOC in Spark ML.  Significant code contributions were also made in SparkR, WebUI and many others.  In Spark SQL, Spark’s most active component, IBM leveraged its long-standing SQL experience by resolving 25% of all bug fixes for the new release.

All this makes the level of commitment and contribution to Spark by IBM’s Spark Technology Center undeniable. The future is bright for Apache Spark, and IBM is proud to be an active contributor, and looks forward to continuing the tradition of excellence.

To find out more about the work of the Spark Technology Center, visit spark.tc and follow us at @apachespark_tc.

 

Dinesh Nirmal,

Vice President, Development, Next Generation Platforms, Big Data & Analytics

Follow me on Twitter @DineshNirmalIBM

 

TRADEMARK DISCLAIMER: Apache®, Apache Spark™, and Spark™ are trademarks of the Apache Software Foundation in the United States and/or other countries.

[1] http://goo.gl/zSbFq9