The race is on to create mobile software agents (virtual assistants) that can perform tasks or services for an individual based on user input, location awareness, and the ability to access information from a variety of online sources (such as weather or traffic conditions, news, stock prices, user schedules, and retail prices), and that can carry out ongoing tasks such as schedule management (e.g., sending an alert to a dinner date that a user is running late due to traffic, updating schedules for both parties, and changing the restaurant reservation time) and personal health management (e.g., monitoring caloric intake, heart rate, and exercise regimen, then making recommendations for healthy choices).
Think about the Internet in 1996, before Google existed: that's where we are in mobile now. We need a new Web, a new model of user experience, one that is computable by machines. My view is that it's the right time for a revolution in this space. Data volumes are going to grow by orders of magnitude in the connected world, and my concern is that the signal-to-noise ratio will only get worse unless we can usher in some improvements. We can solve this problem by creating a new Web, separable from the previous layer (Web 2.0). This is Web 3.0, differentiated from the current Web.
If you've ever looked at the Google ads targeted specifically at you, or tried Google Now, you will realize that the predictive powers of AI based on computational statistics are extremely limited. They might be able to vaguely pick topics or products that are loosely related to you, but they are completely unable to pick topics or products that are both related to you and of interest to you RIGHT NOW. So instead of being served ads for random stuff you don't want, you get served ads for slightly less random stuff.
The new Web will be like a typical expert system, consisting of a knowledge base and an inference engine. The knowledge base stores concepts about the world. The inference engine applies logical rules to the knowledge base and deduces new knowledge. This process iterates, as each new concept added to the knowledge base can trigger additional rules in the inference engine. Inference engines work primarily in one of two modes: forward chaining and backward chaining. Forward chaining starts with the known facts and asserts new facts. Backward chaining starts with goals, and works backward to determine which facts must be asserted so that the goals can be achieved.
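The two inference modes above can be sketched in a few lines of Python. This is a minimal illustration, not an implementation of any particular system; the facts and rules (borrowed from the dinner-reservation scenario earlier) are hypothetical placeholders.

```python
# Knowledge base: a set of known facts.
facts = {"user_running_late", "has_reservation"}

# Rules: (set of antecedent facts, consequent fact).
rules = [
    ({"user_running_late", "has_reservation"}, "notify_dinner_date"),
    ({"notify_dinner_date"}, "update_schedules"),
    ({"update_schedules"}, "change_reservation_time"),
]

def forward_chain(facts, rules):
    """Start from known facts and assert new ones until a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True  # a new fact may trigger further rules
    return facts

def backward_chain(goal, facts, rules):
    """Start from a goal and work backward to see if it can be derived.

    Assumes the rule set is acyclic (no recursion guard, for brevity).
    """
    if goal in facts:
        return True
    for antecedents, consequent in rules:
        if consequent == goal and all(
            backward_chain(a, facts, rules) for a in antecedents
        ):
            return True
    return False
```

Running `forward_chain(facts, rules)` fires the rules in sequence until `"change_reservation_time"` is derived; `backward_chain("change_reservation_time", facts, rules)` reaches the same conclusion by recursing from the goal down to the known facts.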