Apple will allow enterprises that have developed their own iOS apps to augment them with artificial intelligence capabilities delivered by IBM Watson.
The two vendors announced an expansion of a 2014 partnership at IBM's Think 2018 conference in Las Vegas.
The first result is two new products: IBM Watson Services for Core ML and the IBM Cloud Developer Console for Apple.
Already, US-based Coca-Cola has been named as a triallist of Watson Services for Core ML.
In a brief statement, IBM said that "Coca-Cola is currently partnering with IBM, working on prototypes for how IBM Watson Services may transform in-field capabilities."
"Initial functionalities being analysed are custom visual recognition problem identification, cognitive diagnosis and augmented reality repair," IBM said.
This reflects many of the early use cases envisioned for the technology: helping mobile field-force workers perform maintenance and visual asset inspection.
The iOS-Watson tie-up is initially limited to image recognition; Apple said developers could use Watson to build apps that can recognise visual content and analyse images for scenes, objects, faces, colours, food and more.
Core ML is an Apple framework that can be used to integrate machine learning into apps.
Watson Services trains, tests and deploys updated models to Core ML, providing for continuous learning over time for iPhone and iPad apps.
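That deployment step maps onto a standard Core ML capability: models can be compiled and loaded on the device at runtime rather than being bundled at build time. The sketch below, using only Apple's Core ML and Foundation APIs, assumes the updated .mlmodel file has already been downloaded (the `downloadedURL` parameter and the function name are illustrative, not part of any IBM SDK):

```swift
import CoreML
import Foundation

// Minimal sketch: compile and load a freshly downloaded .mlmodel at runtime.
// `downloadedURL` is assumed to point at a model file already fetched from a
// hosting service (e.g. an updated Watson-trained model).
func loadUpdatedModel(from downloadedURL: URL) throws -> MLModel {
    // Core ML models must be compiled into .mlmodelc form before use;
    // compileModel(at:) does this on-device (available since iOS 11).
    let compiledURL = try MLModel.compileModel(at: downloadedURL)

    // Move the compiled model to a permanent location so it survives app
    // restarts (the compiler's output lands in a temporary directory).
    let permanentURL = try FileManager.default
        .url(for: .applicationSupportDirectory, in: .userDomainMask,
             appropriateFor: nil, create: true)
        .appendingPathComponent(compiledURL.lastPathComponent)
    _ = try FileManager.default.replaceItemAt(permanentURL,
                                              withItemAt: compiledURL)

    return try MLModel(contentsOf: permanentURL)
}
```

Because compilation happens on the device, an app can swap in a retrained model without an App Store update.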
The image recognition itself runs on the device via Core ML, with no image data sent back to IBM's Watson servers.
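On-device classification with a Core ML model typically goes through Apple's Vision framework. A minimal sketch follows; `WatsonVisualModel` is a hypothetical name standing in for whatever model class Xcode generates from the .mlmodel file added to the project:

```swift
import CoreML
import Vision
import UIKit

// Classify an image entirely on-device using a bundled Core ML model.
// WatsonVisualModel is a placeholder for the Xcode-generated model class.
func classify(image: UIImage) {
    guard let cgImage = image.cgImage,
          let model = try? VNCoreMLModel(for: WatsonVisualModel().model)
    else { return }

    // The completion handler receives ranked classification observations.
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }

    // Vision handles scaling and converting the image to the model's input.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

Because no network call is involved, classification works offline, which matters for field workers at remote sites.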
Watson Services run on the IBM Cloud. iOS developers access them through the IBM Cloud Developer Console for Apple, which, beyond AI features, provides services such as authentication, data and analytics.
Pre-trained models are available on GitHub for iOS developers who want to get started integrating visual recognition into their applications.
Apple's Xcode version 9 and the iOS 11 mobile operating system are required to use Core ML.