Industry News

Machine learning and AI news covering many topics: the passing of Paul Allen, huge supporter of AI, dead at 65; Steve Ballmer and the L.A. Clippers roll out CourtVision, AI-boosted stat overlays; FAANG report: can you write a model based on FAANG? US driverless cars fail to recognize the UK's uniquely local vehicles; CrowdStrike IPO: can you write an IPO model?

AI & Machine Learning News. 22 October 2018

 “As long as we work together—with both urgency and determination—there are no limits to what we can achieve.” – Paul Allen

10 Times Tech Legend Paul Allen Spoke About Artificial Intelligence

Paul Allen's role in the history of personal computing is well known. In his early years at Microsoft, the company he co-founded, Allen played a pivotal role in steering its fortunes and making it a dominant player in enterprise software. In the second half of his career, he put all his might behind advancing AI. On why computers are easier to understand than the human brain, he once famously said that computers are, at bottom, simple computing elements, and so far easier to comprehend than the brain.
After parting ways with Microsoft, he kick-started many ventures, of which the Allen Institute for Artificial Intelligence (AI2) is one of the most ambitious. He was driven by a dual effort: first, to reverse-engineer the human brain, and second, to build one from scratch through artificial intelligence. "When I founded AI2, I wanted to expand the capabilities of artificial intelligence through high-impact research," he said.
Envisioning a world of perfect AI aligned with human values, he invested considerable resources in advancing artificial intelligence through AI2. Though he is gone, he continues to inspire, and he has given AI visionaries every reason to keep up the pace of innovation in this field. As a tribute, here we list 10 times Paul Allen spoke about artificial intelligence and technology.
2018-10-18 11:03:09+00:00 Read the full story.

Paul G. Allen, 1953-2018: Microsoft co-founder leaves legacy of innovation, philanthropy, bold bets

Paul G. Allen, who founded Microsoft with Bill Gates before making his mark in technology investing, sports ownership, commercial space, global philanthropy, the environment, museums and the arts, has died at the age of 65, two weeks after announcing that he was diagnosed with a recurrence of non-Hodgkin’s lymphoma, according to a statement Monday from his company Vulcan. “Personal computing would not have existed without him,” Gates said in a statement, calling Allen a “true partner and dear friend.”
‘His legacy will live on forever’: Death of Paul Allen brings reaction from the many worlds he touched
Allen’s life and work were remarkable for their depth and breadth, spanning from the formative years of the personal computer in the 1970s to the modern world of 21st century wireless technologies, artificial intelligence, cutting-edge brain science and the promise of space exploration. Allen remained enthralled with the potential of technology throughout his life.
“It really is a golden age of what’s possible,” Allen said in an interview with GeekWire last year, after he made a $40 million gift to the University of Washington’s computer science and engineering program, joining with Microsoft to create a $50 million endowment for what was rechristened the Paul G. Allen School of Computer Science & Engineering.
2018-10-15 22:09:19-07:00 Read the full story.
Insights from Paul Allen
CloudQuant Thoughts… Surprisingly little in our feeds on the passing of a man so influential in the history of computing. A huge proponent and supporter of AI (he founded the AI2 research institute in Seattle), he will be sorely missed.

Future of sports viewing? Steve Ballmer and L.A. Clippers debut new augmented reality NBA experience

Steve Ballmer has launched plenty of products in high-stakes situations, but this takes it to a new level. His NBA team, the L.A. Clippers, is rolling out a new augmented reality viewing experience for basketball fans on opening night for the 2018-19 NBA season at Staples Center.
The new experience, Clippers CourtVision, uses computer vision, artificial intelligence and augmented reality to analyze the action on the court and translate it into on-screen annotations and animations, displayed on screen as the game unfolds. Viewers can see the probability that a player will make a shot, for example, or watch as the play is diagrammed in real time on the basketball court. Ballmer and the company that developed the technology, Second Spectrum, believe it could be the first step toward a radically different viewing experience for professional sports in the future. Ballmer is an investor in Second Spectrum, and has championed the technology among his fellow NBA owners.
2018-10-18 00:00:13-07:00 Read the full story.

‘CourtVision’ Review: We tested Steve Ballmer’s attempt to transform the NBA viewing experience

Augmented reality on sports broadcasts isn’t new. In fact, there’s a gold standard: the first-down line for football games, invented two decades ago. Imagine the chaos and confusion in homes and sports bars across the country if they took that away. More recent additions to the genre include the virtual baseball strike zone, and the digital arc tracing the flight of a golf ball.
So what about basketball? Can augmented reality provide anything so indispensable that NBA fans would howl in protest if it were taken away? Not yet. But the technology is impressive, and the potential is there.
That’s my takeaway after trying out Clippers CourtVision for two games. The experience was unveiled this week by former Microsoft CEO Steve Ballmer and his NBA team, the L.A. Clippers. Developed by technology company Second Spectrum with Ballmer’s backing and encouragement, CourtVision uses computer vision to see what’s happening on court, artificial intelligence to understand the game, and augmented reality to display animations and data on screen.
The stream is currently delayed by two minutes from the live action, to allow time for the required processing, but that’s down significantly from earlier testing, and Ballmer and Second Spectrum say the time gap will get even shorter over time.
2018-10-21 17:21:28-07:00 Read the full story.
CloudQuant Thoughts… First video games started to look more like real life basketball, now real life basketball starts to look more like a video game. I am not sure where we are headed here!

FAANG Report

Apple throws privacy 'shade' at rivals; Amazon hit for anti-union video at Whole Foods; Facebook shows off election 'war room' to fight election manipulation; Google introduces a privacy tweak to Chrome after user gripes, and is revealed to have suffered a major data breach that it did not report for fear of regulation; and Netflix subscriber growth expands at an incredible pace.
2018-10-18 00:00:00 Read the full story.
CloudQuant Thoughts… In case you are not aware, FAANG refers to the five major tech stocks in the US equities market (FB, AAPL, AMZN, NFLX, GOOG/GOOGL). Last year many analysts stated that, if you removed these five companies, the market was basically stagnant and did not move at all; the suggestion was that the entire upward trend of recent years was down to these five companies. As a Data Scientist, can you use this observation as a jumping-off point to create a model at CloudQuant?
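As a starting point, the with-versus-without comparison can be sketched in a few lines of Python. Everything below (the non-FAANG ticker names, the drift and volatility figures) is invented for illustration; it is not real market data.

```python
# Sketch: compare an equal-weight basket's cumulative return with and
# without the FAANG names, on hypothetical daily returns.
import numpy as np

rng = np.random.default_rng(0)

tickers = ["FB", "AAPL", "AMZN", "NFLX", "GOOGL", "XYZ1", "XYZ2", "XYZ3"]
faang = {"FB", "AAPL", "AMZN", "NFLX", "GOOGL"}

# Hypothetical daily returns: FAANG drift upward, the rest are flat on average.
n_days = 252
returns = np.vstack([
    rng.normal(0.001 if t in faang else 0.0, 0.01, n_days) for t in tickers
])

def cumulative_return(ret_matrix):
    """Equal-weight the basket each day and compound the average daily return."""
    daily = ret_matrix.mean(axis=0)
    return float(np.prod(1.0 + daily) - 1.0)

with_faang = cumulative_return(returns)
without_faang = cumulative_return(
    returns[[i for i, t in enumerate(tickers) if t not in faang]]
)
print(f"with FAANG: {with_faang:+.2%}, without: {without_faang:+.2%}")
```

Swapping in real daily returns and capitalisation weights is the actual modelling exercise.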

US driverless cars unsafe as they can’t spot iconic British vehicles like the Routemaster bus and Hackney cab, experts warn

US driverless cars pose a safety risk on the UK's streets because they can't spot iconic British vehicles, such as London's red buses and black cabs: their artificial intelligence has not been taught to notice them on the roads, experts claim. Engineers have noticed that the autonomous cars made in Silicon Valley currently use cameras and software trained only on pictures and videos of US vehicles, meaning they won't detect unique vehicles with more distinct styles, such as the Routemaster bus and the Hackney taxi. This has led UK scientists and politicians to question whether companies like Google and Uber, which have been developing their own autonomous technology, should be able to test driverless cars in Britain.
2018-10-20 00:00:00 Read the full story.
CloudQuant Thoughts… Whoops! Slight oversight there!

CrowdStrike Hires Goldman Sachs To Lead IPO

NEW YORK/SAN FRANCISCO – Cybersecurity software maker CrowdStrike Inc has hired investment bank Goldman Sachs Group to prepare for an initial public offering that could come in the first half of next year, people familiar with the matter said on Friday. CrowdStrike uses artificial intelligence for its Falcon platform to prevent attacks on computers on or off the network. CrowdStrike is trying to stand out from the hundreds of security startups that have sprouted in recent years, promising next-generation technologies to fight cyber criminals, government spies and hacker activists, who have plagued some of the world’s biggest corporations.
2018-10-21 01:59:20-04:00 Read the full story.
CloudQuant Thoughts… IPOs (Initial Public Offerings) are a well-known source of volatility, and where there is volatility there is profit. The interesting thing from a Data Scientist's point of view is the distinct lack of historical data: you are literally learning and making decisions live, starting on IPO day. Can you write a model on CloudQuant to find the best IPOs algorithmically? Can you work out the best number of days to hold?
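A first pass at the holding-period question might look like this sketch, which sweeps hold lengths over hypothetical post-IPO return paths. The "early pop that decays" shape of the drift is an assumption for illustration, not a measured effect.

```python
# Sketch: for a set of hypothetical IPOs, find the holding period (in
# trading days) that maximised the average compounded return.
import numpy as np

rng = np.random.default_rng(42)
n_ipos, max_days = 50, 20

# Hypothetical: an early post-listing pop that decays, plus noise.
drift = 0.01 * np.exp(-np.arange(max_days) / 3.0)
paths = drift + rng.normal(0.0, 0.02, (n_ipos, max_days))

def avg_return_for_hold(paths, days):
    """Average compounded return if every IPO is held `days` sessions."""
    return float(np.mean(np.prod(1.0 + paths[:, :days], axis=1) - 1.0))

results = {d: avg_return_for_hold(paths, d) for d in range(1, max_days + 1)}
best = max(results, key=results.get)
print(f"best holding period: {best} days ({results[best]:+.2%} average)")
```

With real data the same sweep runs over actual post-listing return series, and the small sample size is exactly the difficulty the note above points at.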

Must Read Books on Machine Learning and Artificial Intelligence

In this article, we’ve listed some of the must-read books on Machine Learning and Artificial Intelligence. These books are in no particular rank or order. The motive of this article is not to promote any particular book, but to make you aware of a world which exists beyond video tutorials, blogs and podcasts.
Machine Learning Yearning – Andrew Ng
Programming Collective Intelligence – Toby Segaran
Machine Learning for Hackers – Drew Conway and John Myles White
Machine Learning – Tom M. Mitchell
The Elements of Statistical Learning – Trevor Hastie, Robert Tibshirani and Jerome Friedman
Learning from Data – Yaser S. Abu-Mostafa, Malik Magdon-Ismail and Hsuan-Tien Lin
Pattern Recognition and Machine Learning – Christopher M. Bishop
Natural Language Processing with Python – Steven Bird, Ewan Klein and Edward Loper
Artificial Intelligence: A Modern Approach – Stuart Russell and Peter Norvig
Artificial Intelligence for Humans – Jeff Heaton
Paradigms of Artificial Intelligence Programming – Peter Norvig
Artificial Intelligence: A New Synthesis – Nils J. Nilsson
Superintelligence – Nick Bostrom
The Singularity Is Near – Ray Kurzweil
Life 3.0: Being Human in the Age of Artificial Intelligence – Max Tegmark
The Master Algorithm – Pedro Domingos
2018-10-17 19:00:29+05:30 Read the full story.

Doctor who? UK patients signal support for AI surgeons

A third of patients in the UK are willing to have major invasive surgery performed by AI, according to research conducted by YouGov for PwC. The research found that young people and men are the most supportive, with nearly half of 18-to-24-year-olds and 39% of men willing to go under a knife wielded by AI. Perhaps unexpectedly, it is older people who are the most sceptical: only 24% of over-55s are open to the idea.
As the benefits of AI-powered healthcare become more and more evident, it is only a matter of time before AI infuses into the healthcare system. AI promises to help to alleviate waiting times and surgery delays that currently plague operations, free up staff and resources, and cut costs. But it is no exaggeration to say that patients value the relationship they have with medical professionals more than in any other context, preferring to have sensitive consultations and life-changing decisions made by real-life humans. Even if AI interfaces don’t develop a believable ‘human touch’ – and there is no reason to suspect that this won’t happen sooner or later – having treatment decisions made by a cold chunk of metal and code might be a pill worth swallowing if it means your healthcare is smarter and fast-tracked.
2018-10-18 00:00:00 Read the full story.


Rolls-Royce Partners With Intel For The Next Generation Of Self-Piloting Ships

With all the developments that are occurring involving autonomous vehicles, it’s easy to imagine a future in which commuters sit back, relax, and are shuttled to their destinations by self-driving cars. Continuing developments in artificial intelligence (AI) and advances in computer vision are coming together to make this futuristic dream a reality. Yet the technology won’t be limited to the open road. Rolls-Royce (NASDAQOTH:RYCEY) develops power and propulsion systems for a wide range of transportation assets including aircraft and ocean vessels. As a key player in the international shipping industry, the company is joining forces with Intel (NASDAQ:INTC) to advance its plans to make self-piloting ships a reality.
The company already uses AI to create intelligent systems to make commercial shipping safer, but now Rolls-Royce is collaborating with Intel to process the massive amount of data necessary by turning these ships into floating data centers. Rolls-Royce’s Intelligent Awareness System (IAS) uses sensor fusion and enhanced decision making that combines data from LIDAR, radar, thermal cameras, HD cameras, satellite data, and weather forecasts. Each vessel can capture up to 1 terabyte (or approximately 1 trillion bytes) of data per day — even when the data is compressed.
2018-10-16 01:41:02-04:00 Read the full story.

Speech Recognition Challenge with Deep Learning Studio

We might be on the verge of too many screens. It seems like every day, new versions of common objects are "re-invented" with built-in wifi and bright touch screens. A promising antidote to our screen addiction is the voice interface. Speech recognition technology has become an increasingly popular concept in recent years; from organizations to individuals, it is widely used for the various advantages it provides.
One of the most notable advantages of speech recognition technology includes the dictation ability it provides. With the help of the technology users can easily control devices and create documents by speaking. Speech recognition can allow documents to be created faster because the software generally produces words as fast as they are spoken, which is generally much faster than a person can type. No longer will spelling or writing hold you back. Voice recognition software, as well as being faster to complete tasks, is increasingly accurate when it comes to vocabulary and spelling.
For most of us, the ultimate luxury would be an assistant who always listens for your call, anticipates your every need, and takes action when necessary. That luxury is now available thanks to artificial intelligence assistants, aka voice assistants. Voice assistants come in somewhat small packages and can perform a variety of actions after hearing a wake word or command. They can turn on lights, answer questions, play music, place online orders, etc.
But for independent makers and entrepreneurs, it's hard to build a simple speech detector using free, open data and code. Many voice recognition datasets require preprocessing before a neural network model can be built on them. To help with this, TensorFlow has released the Speech Commands Dataset, which includes 65,000 one-second-long utterances of 30 short words, by thousands of different people.
2018-10-21 22:16:13.912000+00:00 Read the full story.
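As a taste of the preprocessing involved, here is a minimal log-spectrogram feature extractor for one-second, 16 kHz clips like those in the Speech Commands dataset, written in plain numpy. The frame length and hop size are typical choices, not values prescribed by the dataset, and the sine wave stands in for a real recording.

```python
# Sketch: turn a one-second waveform into a log-magnitude spectrogram,
# the kind of 2-D feature a speech-command classifier would consume.
import numpy as np

def log_spectrogram(wave, frame_len=400, hop=160):
    """Frame the waveform, apply a Hann window, take log-magnitude FFTs."""
    n_frames = 1 + (len(wave) - frame_len) // hop
    window = np.hanning(frame_len)
    frames = np.stack([
        wave[i * hop : i * hop + frame_len] * window for i in range(n_frames)
    ])
    mag = np.abs(np.fft.rfft(frames, axis=1))
    return np.log(mag + 1e-6)  # small offset avoids log(0)

# One second of synthetic audio standing in for a spoken command.
sr = 16000
t = np.arange(sr) / sr
wave = np.sin(2 * np.pi * 440 * t)
features = log_spectrogram(wave)
print(features.shape)  # (frames, frequency bins)
```

A convolutional network can then treat the (frames, frequency bins) array much like an image.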

IBM Snags $240 Million AI Deal

International Business Machines (NYSE:IBM) exited the PC business in 2005 by selling its PC division to Lenovo (NASDAQOTH:LNVGY). That deal began a relationship between the two companies, and Lenovo has since become the No. 1 seller of PCs worldwide.
Lenovo is now looking to make its commercial PC business more efficient, and it’s turning to IBM’s artificial intelligence technology for help. IBM announced a multiyear deal with Lenovo on Thursday that aims to use AI to reduce customer service and field service costs. The $240 million pact, covering North America, Europe, the Middle East, Africa, and Latin America, is a win for IBM’s technology support services business.
2018-10-19 02:13:14-04:00 Read the full story.


IBM opening up a series of AI, cloud and security initiatives

Positioning itself as an open vendor, IBM on Oct. 15 announced its new AI OpenScale, MultiCloud Manager and Security Connect platforms. AI OpenScale enables organizations to use multiple artificial intelligence frameworks including TensorFlow, AWS SageMaker, AzureML and others on the IBM cloud. It also has a new system that will use AI to build new AI models.
The new IBM MultiCloud Manager is a service designed to help organizations automate and manage workloads across different cloud providers. Finally, the IBM Security Connect offering provides an open cloud-based platform for integration of multiple security tools.
2018-10-17 00:00:00 Read the full story.

Real-time Data Processing with Serverless Computing

In the Big Data landscape, data engineers are always striving to come up with efficient and accurate methods of handling the volume, variety and velocity of data, so that it can serve as a strong skeleton for the Data Scientists conducting analysis on it. This is necessary since new-age technologies like Machine Learning and Artificial Intelligence depend heavily on Big Data, as mentioned in a recent Forbes post.
However, Big Data deployment in the Cloud brought with it a lot of difficulties, such as under-, over- and otherwise improper utilization of compute resources at various periods. To abstract away these problems, Serverless Architecture came to the rescue. Netflix, Mapbox and the New York Times are some of the prominent organizations using Serverless Architecture and unleashing its potential in real-time data processing.
2018-10-19 00:35:03-07:00 Read the full story.

AI to reshape rather than replace finance workforce

The job market within the financial service sector is set for a significant shift as the revolution in artificial intelligence (AI) reshapes processes and roles, a panel told its audience at this year's Money20/20. For Gregory Simpson, senior vice president and chief technology officer at Synchrony, while the workforce will change there may not be a widescale loss of opportunities in the marketplace. "We've taken a very conscious approach about how to reskill people, because there will be different types of jobs," he said. "We talk about AI as augmented intelligence versus artificial intelligence – and how augmentation can change jobs, and how prepared people are to fill those roles. There will be some jobs that need to be replaced, and in some cases there will be displacement by other types of automation."
“In some cases it’s creating new jobs with people needed to build that automation. Our audit team is talking about hiring data scientists rather than auditors, for instance,” he added.
2018-10-21 00:00:00 Read the full story.


This CEO is paying 600,000 strangers to help him build human-powered AI that’s ‘whole orders of magnitude better than Google’

One of the worst-kept secrets in Silicon Valley is that it takes a whole lot of human labor to make artificial intelligence…intelligent.
The best example: When Google’s reCAPTCHA pages ask you to identify street signs or storefronts in photos before you can log in, you’re proving you’re not a robot, sure. You’re also providing valuable, human insight into what a street sign looks like, which is extremely useful data when you’re trying to train a self-driving car, or a smart security camera. The whole concept was memorably lampooned in an episode last year of HBO’s “Silicon Valley.”
Hive pays 600,000 workers and counting to label photos, at pennies per task. You won't get rich, but as Hive CEO Guo says, it's a simple "game" that makes you money; what other app on your phone can do that? The data is put to use in training AI systems, at a scale that Guo says is unmatched. Hive is a Silicon Valley startup best known for its AI-powered image recognition system, with customers including NASCAR.
2018-10-21 00:00:00 Read the full story.


The ultimate guide to starting AI – Part 2

Many teams try to start an applied AI project by diving into algorithms and data before figuring out desired outputs and objectives. Unfortunately, that’s like raising a puppy in a New York City apartment for a few years, then being surprised that it can’t herd sheep for you. You can’t expect to get anything useful by asking wizards to sprinkle machine learning magic on your business without some effort from you first. Instead, the first step is for the owner — that’s you! — to form a clear vision of what you want from your dog (or ML/AI system) and how you’ll know you’ve trained it successfully.
My previous article discussed the why, now it’s time to dive into how to do this first step for ML/AI, with all its gory little sub-steps. This reference guide is densely-packed and long, so feel free to stick to large fonts and headings for a two-minute crash course. Here’s the table of contents:

  • Figure out who’s in charge
  • Identify the use case
  • Do some reality checks
  • Craft a performance metric wisely
  • Set testing criteria to overcome human biases

2018-10-19 15:42:21.123000+00:00 Read the full story.
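To make the "craft a performance metric wisely" and "set testing criteria" steps concrete, here is a small sketch: computing precision and recall by hand and checking them against a bar agreed before any training starts. The labels and the 80%/70% bars are illustrative, not taken from the article.

```python
# Sketch: a hand-rolled performance metric plus a pre-agreed acceptance test.
def precision_recall(y_true, y_pred):
    """Precision and recall for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical held-out labels and model predictions.
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

p, r = precision_recall(y_true, y_pred)
ship_it = p >= 0.80 and r >= 0.70  # testing criteria fixed up front
print(f"precision={p:.2f} recall={r:.2f} ship={ship_it}")
```

The point of fixing the bar before training is exactly the guide's last bullet: it stops you from rationalising whatever number the model happens to produce.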

Concise Cheat Sheets for Machine Learning with Python (and Maths)

Machine learning is difficult for beginners, and the Python libraries for machine learning can be just as difficult to understand. Over the past few weeks, I have been collecting Machine Learning cheat sheets from different sources, and to make things more interesting and give context, I added excerpts for each major topic.
If you are just getting started with Machine Learning or Data Science, you'll richly benefit from these resources, compiled from our recent publications:
An Overview of Best Machine Learning Cheat Sheets…

  1. Scikit-Learn Cheat Sheet: Python Machine Learning
  2. Python Cheat Sheet for Scikit-learn
  3. Keras Cheat Sheet: Neural Networks in Python
  4. Python SciPy Cheat Sheet
  5. Theano Cheat Sheet

2018-10-22 00:00:08+00:00 Read the full story.

Stop Making These Five Simple Mistakes in Big Data Analytics

Data Scientists, using their formidable skills in math, statistics, and programming, dig out an enormous mass of messy data then clean, manage, and organize it. They use their analytic powers that include industry knowledge, contextual understanding, and skepticism of existing assumptions to help businesses unearth hidden solutions to complex challenges.
But nobody is perfect, and mistakes happen, though that can in fact be a good thing. As the famous Irish novelist James Joyce rightly put it, "Mistakes are the portals of discovery." For Data Scientists, mistakes help them become sharper and discover new data trends, but that doesn't mean mistakes in Big Data Analytics aren't sometimes quite problematic. Experienced Data Scientists rarely commit egregious mistakes, but newcomers to the field can, and those are the ones we are going to discuss here. So let's see what common mistakes Data Analysts/Scientists make and how to avoid them.
Mistake 1: Going Overboard on Tools
Mistake 2: Not Being an Explorer and Visualizer
Mistake 3: Ignoring Possibilities
Mistake 4: Solving Problems Randomly
Mistake 5: Fearing to Communicate and Compete
2018-10-18 00:30:56-07:00 Read the full story.


What is a Decision Tree in Machine Learning? – Hacker Noon

Decision trees are one of the simplest and yet most useful Machine Learning structures. Decision trees, as the name implies, are trees of decisions. You start with a question, usually a yes-or-no (binary; 2 options) question, with two branches (yes and no) leading out of each node. You can have more than 2 options, but for this article we're only using 2.
Trees are weird in computer science: instead of growing from a root upwards, they grow downwards. Think of it as an upside-down tree. The top-most item, in this example "Am I hungry?", is called the root; it's where everything starts. Branches are what we call each line. A leaf is everything that isn't the root or a branch.
Trees are important in machine learning: not only do they let us visualise an algorithm, they are themselves a type of machine learning model.
2018-10-22 Read the full story.
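The article's "Am I hungry?" tree is easy to make runnable. In this sketch the tree is a nested dict, a leaf is a plain string, and the second question is invented to fill out the example.

```python
# Sketch: a hand-built decision tree walked with a small recursive function.
tree = {
    "question": "Am I hungry?",
    "yes": {
        "question": "Do I have $25?",          # invented second question
        "yes": "Go to a restaurant",
        "no": "Make a sandwich",
    },
    "no": "Keep working",
}

def decide(node, answers):
    """Follow yes/no answers down the tree until a leaf (a string) is reached."""
    if isinstance(node, str):
        return node
    branch = node["yes"] if answers[node["question"]] else node["no"]
    return decide(branch, answers)

print(decide(tree, {"Am I hungry?": True, "Do I have $25?": False}))
```

Learned decision trees (e.g. CART) build the same structure automatically, choosing each question to best split the training data instead of having a human write it down.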

National Australia Bank trials AI-powered facial recognition ATMs

The move is part of a push across the bank to use the latest cloud-based systems and will use Microsoft's Azure platform, along with its cognitive services AI software, as it evaluates whether customers like the idea of cardless banking. The proof-of-concept ATMs are being demonstrated at this week's Sibos conference in Sydney, and will only be introduced to the wider public if the bank is satisfied it can address any privacy and security concerns about the use of customer biometrics. The ATMs will recognise a customer's face and then require them to enter their PIN to complete transactions.
2018-10-18 00:00:00 Read the full story.

Predicting Hospital Readmission for Patients with Diabetes Using Scikit-Learn

As the healthcare system moves toward value-based care, CMS has created many programs to improve the quality of care of patients. One of these programs is called the Hospital Readmission Reduction Program (HRRP), which reduces reimbursement to hospitals with above average readmissions. For those hospitals which are currently penalized under this program, one solution is to create interventions to provide additional assistance to patients with increased risk of readmission. But how do we identify these patients? We can use predictive modeling from data science to help prioritize patients.
One patient population that is at increased risk of hospitalization and readmission is that of diabetes. Diabetes is a medical condition that affects approximately 1 in 10 patients in the United States. According to Ostling et al, patients with diabetes have almost double the chance of being hospitalized than the general population (Ostling et al 2017). Therefore, in this article, I will focus on predicting hospital readmission for patients with diabetes.
In this project I will demonstrate how to build a model predicting readmission in Python using the following steps

  • data exploration
  • feature engineering
  • building training/validation/test samples
  • model selection
  • model evaluation

2018-10-21 21:04:34.462000+00:00 Read the full story.
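As a sketch of the "building training/validation/test samples" step above, here is a simple shuffled 70/15/15 split in plain Python; the proportions are a common convention, not taken from the article.

```python
# Sketch: deterministic shuffled split into train / validation / test sets.
import random

def train_valid_test_split(rows, seed=0, frac_train=0.7, frac_valid=0.15):
    """Shuffle rows with a fixed seed, then cut into three contiguous slices."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    n = len(rows)
    n_train = int(n * frac_train)
    n_valid = int(n * frac_valid)
    train = rows[:n_train]
    valid = rows[n_train:n_train + n_valid]
    test = rows[n_train + n_valid:]
    return train, valid, test

encounters = list(range(1000))  # stand-ins for hospital encounters
train, valid, test = train_valid_test_split(encounters)
print(len(train), len(valid), len(test))  # 700 150 150
```

One caution for readmission data specifically: a patient can have several encounters, so in practice the split should be done by patient rather than by encounter, otherwise the same patient's visits straddle train and test and leak information.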

Deloitte: 42% of executives believe AI will be of ‘critical importance’ within 2 years

“Companies are excited about the potential of AI to improve performance and competitiveness — and for good reason,” said Dr. Jeff Loucks, executive director at Deloitte’s Center for Technology, Media, and Telecommunications. “But to reach this potential, companies must engage risk, address talent shortfalls, and execute well. While AI’s upside is significant, haste can leave companies with bridges to nowhere — pilots that don’t scale or projects with no business benefit.”
The report breaks out AI adoption into four categories: machine learning, or the ability of statistical models to develop capabilities and autonomously improve their performance over time; deep learning, a form of machine learning involving neural networks; natural language processing, the ability to parse meaning from text; and computer vision, techniques that extract intent out of visual elements. Natural language processing outstripped all other categories in terms of growth, according to the survey, with 62 percent of companies reporting having adopted it (up from 53 percent a year ago). Machine learning came in second with 58 percent (up 5 percent year-over-year), and computer vision and deep learning followed close behind, with 57 percent and 50 percent adoption, respectively (a 16 percent increase from 2017).
2018-10-21 00:00:00 Read the full story.

A Pioneering Scientist Explains Deep Learning

Buzzwords like “deep learning” and “neural networks” are everywhere, but so much of the popular understanding is misguided, says Terrence Sejnowski, a computational neuroscientist at the Salk Institute for Biological Studies.
Sejnowski, a pioneer in the study of learning algorithms, is the author of The Deep Learning Revolution (out next week from MIT Press). He argues that the hype about killer AI or robots making us obsolete ignores exciting possibilities happening in the fields of computer science and neuroscience, and what can happen when artificial intelligence meets human intelligence.
The Verge spoke to Sejnowski about how "deep learning" suddenly became ubiquitous, what it can and cannot do, and the problem of hype.
2018-10-18 13:07:14+00:00 Read the full story.

Crux Raises $20 mln in Series B Funding Round • Integrity Research

Yesterday, alternative data engineering firm Crux Informatics announced that it had closed a $20 million Series B funding round. Investors in the latest financing round include quantitative hedge fund Two Sigma, plus previous investors Goldman Sachs and Citigroup. Founded in 2017, Crux Informatics provides outsourced data engineering to clean, normalize and transform raw data into formats specific to each client, as well as offering investors an online portal of third-party data sources.
Crux Informatics CEO, Philip Brittan explained the strategic rationale for the latest investment round, “We have built a solution for what has become a significant pain point for financial services firms – ingesting and managing the tremendous amount of data that is now available to them. We are transforming the data industry making it possible for our clients to reap the benefits from improved data flow, which translates into more actionable insights and alpha. This funding will help us continue to drive innovation, which will scale our business and the data set solutions we offer to our clients.”
2018-10-18 15:44:17+00:00 Read the full story.


Cloudera Announces Strategic Alliance with NEC to Support the Utilization of Big Data

A recent press release reports, “Cloudera, Inc., the modern platform for machine learning and analytics optimized for the cloud, announced a strategic partnership with NEC Corporation, a leading global systems integrator, to accelerate the use of big data. Through this strategic partnership, customers will be able to obtain NEC’s advanced technical, consulting and support services, and access to Cloudera’s data management software, Cloudera Enterprise. Businesses today are turning data into a competitive advantage. They are finding that the key to success is to take control of their data and the infrastructure needed to unlock the data value and derive insights to make better, smarter and fact-based decisions. To do so, having a well-structured and centralized data repository is critical. This alliance will provide organizations the ability to integrate and centralize data across different business units through Cloudera’s modern, scalable and secure platform, enabling them to solve many complex and important problems through the application of machine learning and artificial intelligence.”
2018-10-22 00:15:43-07:00 Read the full story.

The 5 Basic Statistics Concepts Data Scientists Need to Know

Statistics can be a powerful tool when performing the art of Data Science (DS). From a high-level view, statistics is the use of mathematics to perform technical analysis of data. A basic visualisation such as a bar chart might give you some high-level information, but with statistics we get to operate on the data in a much more information-driven and targeted way. The math involved helps us form concrete conclusions about our data rather than just guesstimating.

  • Statistical Features
  • Probability Distributions
  • Dimensionality Reduction
  • Over and Under Sampling
  • Bayesian Statistics
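The first concept on the list, statistical features, is the easiest to make concrete. A minimal sketch using only the Python standard library (the sample data is invented for illustration):

```python
# Statistical features: the quartiles, mean, and spread a box plot
# summarizes. Standard library only; the data points are made up.
import statistics

data = [12, 15, 11, 19, 24, 17, 13, 22, 18, 16]

mean = statistics.mean(data)                      # central tendency
stdev = statistics.stdev(data)                    # spread (sample st. dev.)
q1, median, q3 = statistics.quantiles(data, n=4)  # quartile cut points
iqr = q3 - q1                                     # interquartile range

print(f"mean={mean} stdev={stdev:.2f} Q1={q1} median={median} Q3={q3} IQR={iqr}")
```

These few numbers already support the kind of concrete conclusion the article describes, e.g. flagging points outside `[q1 - 1.5*iqr, q3 + 1.5*iqr]` as outliers.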

2018-10-22 02:17:34.099000+00:00 Read the full story.

Deutsche Bank steps up Big Data efforts with new analytics capability for Securities Services

Deutsche Bank is launching its enterprise analytics capability, which collates and analyses millions of lines of data daily to identify opportunities for efficiencies in the bank’s and its clients’ securities settlements.
Deutsche Bank’s Securities Services will be bringing live its new enterprise analytics capability in November 2018, enabling daily data analysis on securities transactions. Starting with the German market and with a wider roll-out planned, the platform represents a key step towards unlocking the value of Deutsche Bank’s huge repository of transaction data, which details the holdings and movements of cash, securities and other instruments in and out of client accounts.
2018-10-22 12:19:00 Read the full story.

Alternative Data Researcher M Science Enhances Flagship Video Game Data • Integrity Research

M Science, the alternative data research provider originally known as Majestic Research, added video game viewership data from Twitch to its recently released analytic platform, M Data Viz, further strengthening the firm’s video game capabilities. M Science has expanded its analytic coverage by over 20% over the last year, primarily in its coverage of technology and European consumer stocks. As of June 2018, the firm had over 30 research staff registered on LinkedIn, the majority of whom (21) were based in Portland, Oregon. The firm is currently recruiting senior analysts in the healthcare, industrials and fintech sectors.
In July 2018, M Science launched an improved version of its transaction data, called SWIPE, which uses machine learning to calculate five metrics derived from transaction data – aggregate spending at a company level, unique customers, number of transactions per shopper, average ticket, and a proprietary estimate of a company’s revenue (labelled the ‘M Science Index’). The firm currently provides revenue estimates for fifty stocks.
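SWIPE’s actual models are proprietary, but four of the five named metrics are straightforward aggregations over a transaction feed. A sketch under that assumption — the records, field names, and ticker are invented:

```python
# Illustrative only: how four of SWIPE's five metrics could be derived
# from raw transaction data. Records and field names are invented.
transactions = [
    {"customer": "c1", "company": "ACME", "amount": 25.00},
    {"customer": "c2", "company": "ACME", "amount": 40.00},
    {"customer": "c1", "company": "ACME", "amount": 35.00},
]

acme = [t for t in transactions if t["company"] == "ACME"]

aggregate_spend = sum(t["amount"] for t in acme)       # company-level spending
unique_customers = len({t["customer"] for t in acme})  # distinct shoppers
txns_per_shopper = len(acme) / unique_customers        # transactions per shopper
average_ticket = aggregate_spend / len(acme)           # mean transaction size

print(aggregate_spend, unique_customers, txns_per_shopper, average_ticket)
```

The fifth metric, the revenue estimate labelled the ‘M Science Index’, is where the firm’s machine learning comes in; nothing public describes how it maps these aggregates to revenue.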
2018-10-19 13:55:46+00:00 Read the full story.


Link data to outcomes to avoid a data deluge, says Hitachi Vantara Australia CTO

Digital transformation projects are failing in Australia because organisations haven’t linked their data strategy to business outcomes, according to Chris Drieberg, director of pre-sales and CTO for Hitachi Vantara ANZ. Worse still, Drieberg says, the digital infrastructure being built today may prove fundamentally inadequate very soon. “Most of [our customers] that we deal with today are building a data warehouse strategy and they’ll only look at a data lake strategy when the business is impacted,”
2018-10-18 13:46:23+11:00 Read the full story.

It’s all about the data

Just continuing on the theme of how different industries can learn from each other, I used to work for NCR. NCR had several major industries they served: retailers, airlines, telcos and banks. The common thread across all of these industries was high frequency customer contact, and the challenge for all of these industries was how to leverage that contact. The highest frequency contact was in banking and telecommunications. Sure, retailers and airlines see customers often, but banks and telecommunications firms touch customers every day. Back in the 1990s, when I was at NCR, I made a prediction as a result. The prediction was that one day banks would become telecoms firms and telecoms firms would become banks.
Today, that prediction is part-true. In some countries, banks are running mobile networks and, in other countries, mobile networks are running banks. But what happens longer term, if that prediction plays out?
2018-10-16 06:52:10+00:00 Read the full story.


NICE Actimize Launches X-Sight, the Industry’s First Financial Crime Risk Management PaaS

According to a recent press release, “NICE Actimize, a NICE business and leader in Autonomous Financial Crime Management, today announced the launch of X-Sight, an advanced machine-learning based Platform-as-a-Service designed to power the industry’s first financial crime risk management marketplace. The NICE Actimize X-Sight Platform-as-a-Service offers a single, unified, cost-effective way for financial service organizations to rapidly innovate and to introduce new services while supporting best-in-class financial crime, risk and compliance management capabilities.”
2018-10-22 00:10:19-07:00 Read the full story.

SEC Launches FinHub Portal

Thursday marked the launch of the Securities and Exchange Commission’s FinHub, which will serve as the agency’s “strategic hub for innovation and technology.” It is accessible to the public, and also will serve as an exchange of ideas for those in the industry and other regulators.
The hub covers four topical areas: 1) Blockchain/distributed ledger, 2) Digital marketplace financing, 3) Automated investment advice, and 4) Artificial intelligence/machine learning. Under each topic there are links to current regulation, investor information and various speeches and presentations. For example, under the Digital Marketplace Financing category is a regulation bar that, when clicked, provides information on the SEC’s crowdfunding final rule as well as compliance details.
2018-10-18 Read the full story (@ ThinkAdvisor).
2018-10-22  Read the full story (@ TheIndustrySpread).

Will Alphabet Earnings Crush Expectations Again? — The Motley Fool

Alphabet (NASDAQ:GOOG) (NASDAQ:GOOGL) is slated to report its third-quarter 2018 results after the market close on Thursday, Oct. 25.
Google’s parent company is going into its report on a solid note. In the second quarter, revenue increased 26% year over year and earnings per share adjusted for one-time factors jumped 32% to $11.75, trouncing the $9.59 Wall Street was looking for. On a GAAP basis, however, EPS declined 9.4%, driven by a 4.34 bil…
2018-10-22 00:00:00 Read the full story.

Database Trends and Applications Magazine: October/November 2018

The Journal of Information Integration and Management – Volume 32, Number 5 • October/November 2018

  • Content Automation Emerges in Cluttered Enterprises
  • Hybrid Clouds Fulfill a Wide Range of Data Enterprise Needs
  • How to Transform Your Back-End Infrastructure into a Platform of the Future
  • Controlling Costs in the Cloud for High-Availability Applications
  • Using Data Lakes to Fuel Self-Service Analytics
  • Revelation Software Offers Primer on OpenInsight 10
  • Pick Cloud and 3phi Partner to Sell Reporting Tools Based on PICK/MultiValue Databases
  • Kore Technologies’ Newest Kourier Update Enhances Cloud Capabilities
  • NEXT-GEN DATA MANAGEMENT: Dangers of Statistical Modeling
  • EMERGING TECHNOLOGIES: Blockchain Enables Unique Digital Proofs
  • IOUG OBSERVATIONS: Considerations Before Converting from an RDBMS to Cloud-Native Services
  • DBA Corner: The Mainframe Does Big Data
  • SQL Server Drill Down: Fly to the Cloud Using the Azure Database Migration Service (DMS)
  • Database Elaborations: Should PII and GDPR Drive Database Design?

2018-10-19 00:00:00 Read the full story.

Data Centre World 2018: Water scarcity could halt data centre growth

At last week’s Data Centre World in Singapore, the largest industry event in Asia, water technology provider Ecolab discussed sustainable solutions to address the freshwater crisis and examples of how data centres have successfully reduced water consumption.
Demand for natural resources is exceeding supply at an alarming rate, not least when it comes to our most fundamental resource – water. By 2030, global demand for fresh water is expected to exceed available supplies by 40 per cent.
Running parallel to our increasing water consumption is the mushrooming of data centres all over the world, which rely on large volumes of water to cool their racks and servers.
2018-10-16 00:00:00 Read the full story.


Is Hadoop Officially Dead?

The merger of Cloudera and Hortonworks was applauded by many people in the big data community, and even Wall Street liked the news initially. But as the confetti from the party clears, some are asking tough questions, like whether the merger signals the death of Hadoop as a viable computer platform moving forward. The answer is probably not. Here’s why.
The October 3 announcement that Hortonworks will join forces with its arch-rival Cloudera to create a single company with about $730 million in annual revenue, 2,500 customers, and a $5.2 billion market valuation took a lot of people by surprise.
2018-10-18 00:00:00 Read the full story.

Top 10 real-life examples of Machine Learning

Machine learning is a modern innovation that has not only enhanced many industrial and professional processes but also advanced everyday living. But what is machine learning? It is a subset of artificial intelligence that focuses on using statistical techniques to build intelligent computer systems that learn from the databases available to them. Machine learning is now used in multiple fields and industries: for example, medical diagnosis, image processing, prediction, classification, learning association and regression.
The intelligent systems built on machine learning algorithms have the capability to learn from past experience or historical data. Machine learning applications provide results on the basis of past experience. In this article, we will discuss 10 real-life examples of how machine learning is helping in creating better technology to power today’s ideas.

  • Image Recognition
  • Speech Recognition
  • Medical diagnosis
  • Statistical Arbitrage
  • Learning associations
  • Classification
  • Prediction
  • Extraction
  • Regression
  • Financial Services
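The “Prediction” and “Regression” entries above are the simplest to show in code: fit a line to historical data, then predict a new value. A minimal ordinary least-squares sketch in plain Python, with invented data points:

```python
# Learning from past data in miniature: fit y = slope*x + intercept by
# ordinary least squares, then predict. The data points are invented.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]  # roughly y = 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form OLS slope and intercept.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

prediction = slope * 6.0 + intercept  # predict y at the unseen x = 6
print(f"y = {slope:.2f}x + {intercept:.2f}; y(6) = {prediction:.2f}")
```

Real applications swap this closed-form fit for richer models, but the pattern — learn parameters from historical data, apply them to new inputs — is the same.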

2018-10-22 Read the full story.

The business case for machine learning

Machine Learning (ML) is one of the most researched topics in computer science. It’s been around for decades. Yet many people consider it just another buzzword or, even worse, confuse it with Artificial Intelligence (AI). The two are not the same.
Machine Learning is the science of a machine learning and improving without being explicitly programmed to do so. Artificial Intelligence is the base technology that makes this possible. Think of ML as a subset of AI. It’s important to keep in mind that all Machine Learning is Artificial Intelligence but not all Artificial Intelligence is Machine Learning. More than clarifying the difference between the two, it’s important to understand why Machine Learning has gained so much attention over the past few years.
2018-10-19 00:00:00 Read the full story.


Here’s the one key advantage Seattle’s tech ecosystem has over Silicon Valley

Silicon Valley may be the startup capital of the world, but Seattle has a leg up when it comes to what is arguably the most impactful technology of the future. The age-old Bay Area vs. Seattle tech ecosystem debate came up again during a venture capital panel discussion at the Seattle Interactive Conference on Thursday that included Heather Redman, managing director at Flying Fish Partners; Hope Cochran, venture partner at Madrona Venture Group; and Shauna Causey, partner at Madrona Venture Labs. They gave the usual responses — Seattle is too humble; there isn’t enough capital flowing; it’s cheaper in Seattle and employees are more loyal here.
But Redman, a veteran of the Seattle tech scene who launched Flying Fish last year, offered a forward-looking take. She said Seattle not only has more artificial intelligence and machine learning talent than the Bay Area, but its innovation culture is one that lends itself to setting the ethical bar for the industry at large. “This is the place where we can thread the needle on how to get the benefits from those technologies without tripping into some of the really nasty potential side effects of having a lot of machines making ethical choices for us,” she said.
2018-10-19 16:35:01-07:00 Read the full story.


AI Weekly: For evidence of academic investment in AI, look no further than Pittsburgh

The Massachusetts Institute of Technology announced this week that it would invest $1 billion in a new college of computer engineering: the Stephen A. Schwarzman College of Computing. It’s the single largest investment in artificial intelligence (AI) by a U.S. academic institution to date. And when the new building hosts its first classes in 2022, it’ll be the largest structural addition to MIT’s campus since the 1950s. But MIT isn’t the only university channeling funds toward AI education.
Yet another AI-forward institution of note is Carnegie Mellon University (CMU), which partnered with Bosch’s Center for Artificial Intelligence on an $8 million research project that goes through 2023. CMU has the additional distinction of being the first university to offer an undergraduate degree in AI, and it neighbors the ARM Institute, a $250 million initiative focused on accelerating the advancement of transformative robotics technologies and education in the U.S. manufacturing industry.
2018-10-19 00:00:00 Read the full story.


The Pixel 3’s dual cameras are a tacit admission that AI can’t do everything — yet

Google’s latest flagship smartphones — the Pixel 3 and Pixel 3 XL — are finally shipping to customers, and the reviews are unanimous: The rear camera and dual selfie cams are best in class. But as good as those cameras might be, they’re a bit puzzling — and sort of paradoxical. Allow me to explain.
The original Pixel and Pixel XL have two cameras: one front and one rear. The Pixel 2 and Pixel 2 XL have two cameras: one front and one rear. And the Pixel 3 and Pixel 3 XL have three cameras: two front and one rear. “The notion of a software-defined camera or computational photography camera is a very promising direction and I think we’re just beginning to scratch the surface,” Google AI researcher Marc Levoy told The Verge in October 2016, shortly after the Pixel and Pixel XL’s debut. “I think the excitement is actually just starting in this area, as we move away from single-shot hardware-dominated photography to this new area of software-defined computational photography.”
In the weeks preceding the launch of the Pixel 2 and Pixel 2 XL, Mario Queiroz — Google’s GM and VP of phones — insisted that the phones’ single front and rear cameras were just as capable as dual-sensor setups from the likes of Apple and Huawei. The Mountain View company even considered bragging about needing one camera in its marketing materials, according to 9to5Google.
I’ve been trying to reconcile the inconsistency since the Pixel 3 was announced last week. Is the addition of a second front camera a tacit admission that AI can’t do everything? That a physical camera is superior? That a hybrid approach might be best?
2018-10-19 00:00:00 Read the full story.


Anki Vector review: Big on heart, not on smarts

It was late Monday afternoon, and Vector was giving me a headache. The darn thing seemed to be ignoring me. Earlier in the month, I received a package containing Anki’s new Vector, which the company calls a “home robot.” It has generated plenty of enthusiasm — in fact, it blew through its $500,000 Kickstarter goal in just a few days, hitting $1.9 million by the end of the campaign — and began shipping to early backers this month.
“It’s truly the first mass-market consumer robot that’s fundamentally powered by the cloud, [and it’s running] the first truly production-grade robotic operating system that third-party developers can use,” Anki cofounder Mark Palatucci told me in a recent phone interview. “We’re leveraging the enormous economies of scale of the modern cloud while maintaining this mass market price.” But if my testing is any indication, Vector could use a tad more R&D.
2018-10-21 00:00:00 Read the full story.


Inside Teradata’s Audacious Plan to Consolidate Analytics

To hear Teradata COO Oliver Ratzesberger explain it, top executives at Fortune 500 firms — and the boards that hold their purse strings — are simply fed up with big analytic investments that haven’t panned out, and they’re turning to Teradata for answers. “The last five to 10 years have been a curse and a blessing,” Ratzesberger tells Datanami in an interview here at Teradata Analytics Universe in Las Vegas, Nevada, where approximately 3,000 Teradata customers, partners, and employees gathered for four-and-a-half days of training, education, and commiserating about failed analytic projects.
“There are few executives left who don’t say ‘I’ve spent billions of dollars. I have 1,500 clusters. I have Vertica there, Hana there, Greenplum there. We bought a couple instances of Netezza. But IBM just de-released Netezza, Vertica just got sold a second time, Greenplum is now this open source thing. And Hadoop – well, that is going away.’ They’re literally coming to us and saying ‘Give us a proposal to clean up the dozens of instances and consolidate them into one,’” Ratzesberger continues.
2018-10-17 00:00:00 Read the full story.


Meta Tagging Shoes with Pytorch CNNs – Towards Data Science

Something that I have been wanting to play around with is generating text to describe images. When posed this way, two pathways come to mind. The first would be to use a combination of CNNs for feature extraction and feed those extracted features to an LSTM, letting it generate descriptions by iterating repeatedly. The second would be to build a multi-label classification model whose output nodes represent specific tags. The first model is good when you want to generate captions that have a grammatical structure to them. The multi-label classifier works in situations where there is a finite number of tags of interest to generate. This number can be large, and as long as there is enough data you can train a model this way.
For this particular post I wanted to try to generate metadata tags for pairs of shoes using only raw images as inputs. As for methodology, I decided to use a multi-label classification model. My first reason for not using the CNN + LSTM route is that I didn’t explicitly need the English-like structure that would be the goal of the LSTM. The second reason is that I felt I would need more data than I was willing to generate by hand to get the model to train nicely for my tailored use case. I hoped that I would be able to leverage pretrained networks to quickly build a model to generate these metadata tags. This turned out to be partially true.
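The post’s model is a PyTorch CNN, but the step that makes it multi-label is framework-agnostic: each output node gets its own sigmoid and an independent threshold, so several tags can fire for one image. A plain-Python sketch of that decision step — the tag names and logit values are invented:

```python
# Framework-agnostic final step of multi-label classification: unlike
# softmax, each tag gets an independent sigmoid and threshold, so one
# image can receive several tags. Tags and logits are invented.
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

TAGS = ["sneaker", "boot", "leather", "laces", "high-top"]

# Raw scores the network might emit for one image (one logit per tag).
logits = [3.2, -4.1, 0.8, 2.5, -1.7]

THRESHOLD = 0.5
predicted_tags = [tag for tag, z in zip(TAGS, logits)
                  if sigmoid(z) > THRESHOLD]

print(predicted_tags)
```

In PyTorch this corresponds to a final linear layer with one output per tag, trained with `BCEWithLogitsLoss`, which applies the same per-tag sigmoid internally.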
2018-10-21 21:02:26.423000+00:00 Read the full story.


Analysts Get A Guiding Hand

It can be tough knowing what you can and cannot do in analytics. While it may be possible to build a query that considers a person’s race during a credit check, it is also illegal (not to mention unethical). Now data catalog provider Alation is stepping up by providing more automation to help users conduct analyses in a responsible manner.
Earlier this year Alation rolled out a new feature in its Compose query building environment called TrustCheck that gives users instant feedback on whether the query they’re writing is kosher or not.
2018-10-17 00:00:00 Read the full story.

Contact Centers Have Moved To Cloud — What’s Next?

As of 2018, the market for contact center technology has fully shifted to cloud. The remaining outliers of demand are large, highly customized implementations that preserve legacy on-premises software or organizations that wish to maintain the technology inside their firewall. Agility, flexibility, and the ability to rapidly shift capacity are all key elements that AD&D pros gain when migrating to cloud contact centers.
Our recently published report, “The Forrester Wave™: Cloud Contact Centers, Q3 2018,” identified the 11 most significant players in the cloud contact center market: 8×8, Aspect, Avaya, Cisco, Enghouse Interactive, Five9, Genesys, NewVoiceMedia, NICE inContact, Serenova, and Talkdesk.
2018-10-15 11:46:36-04:00 Read the full story.


Why Microsoft, IBM, Google and Boeing are taking a giant leap into quantum computing

The small world of quantum physics is a big deal on the frontier of computer science. Microsoft CEO Satya Nadella rates quantum computing as one of three key technologies that will shape his company’s future, along with artificial intelligence and mixed reality. Google and NASA are working with D-Wave Systems to blaze a quantum trail. IBM has its Q initiative, and Boeing’s newly formed Disruptive Computing & Networks unit is targeting quantum as well. There’s been a White House summit on quantum information science, and Congress is considering legislation that’d give quantum computing a $1.3 billion boost over the next 10 years. What’s going on?
Actually, not a whole lot as of yet. Sure, D-Wave’s computer can take on some specific quantum tasks, but that doesn’t mean a general-purpose, universal computer that takes advantage of all the weird properties of quantum physics is just around the corner.
2018-10-19 01:14:36-07:00 Read the full story.


Facebook ‘war room’: Teams gather to fight election manipulation

It’s a windowless room, packed with about two dozen desks, a half-dozen screens showing TV news and Twitter feeds and even more monitors lining the walls tracking trends in Facebook user behavior. This is Facebook’s first ever “war room,” designed to prevent election manipulation by improving data-sharing across the company and enabling quick decision-making. This roughly 900-square-foot room, which Facebook recently showed to journalists, is a visual representation of the company’s commitment to dramatically improving communication and security ahead of the Nov. 6 U.S. midterms.
Facebook has assembled teams from across the company in a 900-square-foot “war room,” to help identify and stop attempts to manipulate elections, including the Nov. 6 U.S. midterms.
WhatsApp, Instagram, operations, software engineering, data science, research operations, legal, policy, communications — they’re all represented in the room.
Ahead of this month’s votes in the Brazilian presidential election, the company identified an effort to suppress voter turnout with fake posts saying the election was delayed due to protests. Facebook was quickly able to shut it down.
2018-10-17 00:00:00 Read the full story.

How Facebook is Using War Room to Fight Election Interference

Facebook announced Oct. 18 that the company has set up a war room at its offices in Menlo Park, Calif., to monitor efforts to interfere with national elections in the United States and Brazil. According to the announcement, the war room is staffed by more than two dozen experts from throughout the company, including from their threat intelligence, legal, data science and software engineering teams. The room includes several large monitors on walls around the room and desks for each of the workers.
The idea of a war room is to get everyone that’s involved in dealing with a threat or some other type of urgent problem into a single room where they can interact instantly to counteract the problem. The workers in the room can see threats as they appear on dashboards on the monitors, and then can work together to investigate and solve each problem.
2018-10-19 00:00:00 Read the full story.

Amazon creates 1,000 new UK research roles as tech giants home in on British talent

Amazon is investing in three regional hubs across the UK, creating more than 1,000 new skilled jobs in a move UK trade secretary Liam Fox hailed as a “signal to the world that the UK is very much open for business”. The internet giant will open a new office in Manchester, to house at least 600 new employees working on software development, machine learning and research and development.
2018-10-19 00:00:00 Read the full story.


Canada installs Chinese underwater monitoring devices next to US nuclear submarine base

While the eyes of the world have been on the strategic tussle between Beijing and Washington in the South China Sea, Chinese scientists, with the help of the Canadian authorities, have succeeded in positioning four monitoring devices in waters just 300km (186 miles) off the United States’ Pacific coast. The instruments, which use hi-tech sensors to monitor the underwater environment, are connected to the Ocean Network Canada (ONC), a grid of marine observatories stretching from the northeast Pacific to the Arctic. While the network is operated by the University of Victoria in British Columbia, its four new additions are the property of the Sanya Institute of Deep-sea Science and Engineering, a unit of the Chinese Academy of Sciences, which also developed and built them.
The devices were placed on the Endeavour segment of the Juan de Fuca Ridge by a remote-controlled submersible owned by the Canadian Coast Guard on June 27. Now fully operational, they can be used to provide real-time streaming of data to the Chinese institute’s control centres in Sanya, a city on the island province of Hainan, and elsewhere.
2018-10-22 01:02:46+08:00 Read the full story.


Driverless trucks, supply chains, AI and Big Data – How does it work?

You would’ve heard of driverless cars. The technology would soon be redefining the way people commute, not only around the world but between and in cities as well. Driverless technology is not only being applied to personal vehicles, but also to commercial transportation. Driverless trucks are the natural evolution of driverless cars. They will cut down on costs, increase efficiency, and generally shake up the shipping industry. Let’s look at how artificial intelligence technology is changing the outlook of the freight industry.
However, at the moment, “driverless trucks” is something of a misnomer. There will still be someone in the cab, monitoring the truck like a conductor rather than actively driving. Much like experimental driverless taxis or the driverless Google cars, there’s someone behind the wheel in case something goes wrong. In the case of shipping, however, it’s also to make sure the truck is on schedule, making the correct deliveries, and more. It’s not something a driver would have to do constantly; rather, they check in every so often to make sure all systems are running properly. The driver is a failsafe more than anything, and perhaps a person to provide a signature upon delivery.
2018-10-17 14:42:03+00:00 Read the full story.


Designing the future with the help of the past with Bill Buxton – Microsoft Research Podcast 32 mins

Episode 46, October 17, 2018
The ancient Chinese philosopher Confucius famously exhorted his pupils to study the past if they would divine the future. In 2018, we get the same advice from a decidedly more modern, but equally philosophical Bill Buxton, Principal Researcher in the HCI group at Microsoft Research. In addition to his pioneering work in computer science and design, Bill Buxton has spent the past several decades amassing a collection of more than a thousand artifacts that chronicle the history of human computer interaction for the very purpose of informing the future of human computer interaction.
Today, in a wide-ranging interview, Bill Buxton explains why Marcel Proust and TS Eliot can be instructive for computer scientists, why the long nose of innovation is essential to success in technology design, why problem-setting is more important than problem-solving, and why we must remember, as we design our technologies, that every technological decision we make is an ethical decision as well.
2018-10-17 08:00:57-07:00 Read the full story.


In a tiny Seattle apartment, the bed is on the ceiling, and that’s part of what makes it a smart home

In a small apartment just a couple blocks from the waterfront in downtown Seattle, windows looking south over the Alaskan Way Viaduct usually afford a view of Mount Rainier. Clouds kept the mountain hidden during a visit this week. The home’s bed and several storage compartments were also out of sight. The one-bedroom apartment on University Street, in a building called Cyrene, looks like you’d imagine many do in Seattle these days — sleek and spare, and kind of tiny. But it’s a little extra minimalist thanks to a smart-home invention that has created space where there would normally be furniture.
San Francisco-based Bumblebee Spaces is in Seattle to demo its signature Bumblebee bed and smart storage cabinets that comprise a system for making “space for what matters.” This is no ordinary furniture showroom. And this is not an old-school Murphy bed, folded up into a cabinet, but rather a modern answer that uses depth sensors, artificial intelligence, machine learning, an app-based control panel, industrial-grade straps and electric motors to raise a bed off the floor and tuck it overhead on the ceiling.
2018-10-19 13:00:12-07:00 Read the full story.


Arm’s Neoverse will span everything from tiny devices to high-end server chips

Arm’s Neoverse is a cloud-to-edge infrastructure that the chip design company hopes will support a world with a trillion intelligent devices. Neoverse is basically Arm’s ecosystem for supporting the chip design and manufacturing firms that will produce those devices, based on the Arm architecture. But it’s also a market-based approach to supporting customers in different segments, like automotive, machine learning, or the internet of things (IoT).
Arm is also stepping up with a more aggressive roadmap for processors that make use of the most advanced manufacturing possible. That means the company is targeting everything from low-power embedded processors to high-end chips for servers. I spoke with Haas about Neoverse and other topics at the Arm TechCon event last week in San Jose, California. Here’s an edited transcript of our interview…
2018-10-21 00:00:00 Read the full story.

Behind a PayWall or Registration Wall…

Which Data Skills Do You Actually Need? This 2×2 Matrix Will Tell You.

Data skills — the skills to turn data into insight and action — are the driver of modern economies. According to the World Economic Forum, computing and mathematically-focused jobs are showing the strongest growth, at the expense of less quantitative roles. So whether it’s to maximize the part we play in data-driven economic growth, or simply to ensure that we and our teams remain relevant and employable, we need to think about transitioning to a…
2018-10-18 16:00:45+00:00 Read the full story.

This news clip post is produced algorithmically based upon CloudQuant’s list of sites and focus items we find interesting. If you would like to add your blog or website to our search crawler, please email us. We welcome all contributors.
This news clip and any CloudQuant comment is for information and illustrative purposes only. It is not, and should not be regarded as “investment advice” or as a “recommendation” regarding a course of action. This information is provided with the understanding that CloudQuant is not acting in a fiduciary or advisory capacity under any contract with you, or any applicable law or regulation. You are responsible to make your own independent decision with respect to any course of action based on the content of this post.