Facebook has had a finger in Intel's processor pie, and has collaborated with the processor giant on the design of the upcoming Cooper Lake Xeon part.
At the Open Compute Project Global Summit, Intel announced that the Cooper Lake processor will feature bfloat16 support for deep learning training.
Bfloat16 is a 16-bit floating-point format that improves performance by offering the same dynamic range as the standard 32-bit (FP32) representation in half the bits, and can be used to accelerate the training of artificial intelligence models.
The acceleration can be used to optimise workloads such as speech recognition, image classification, recommendation engines and machine translation, Intel said.
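The article does not spell out the bit layout, but the commonly cited definition of bfloat16 is simply the top 16 bits of an IEEE-754 float32: the sign bit, the full 8-bit exponent (hence the identical dynamic range), and only 7 mantissa bits. A minimal Python sketch of that truncation, assuming this standard layout:

```python
import struct

def f32_to_bf16_bits(x: float) -> int:
    """Truncate an IEEE-754 float32 to bfloat16 by keeping the top 16 bits
    (sign, 8-bit exponent, 7-bit mantissa)."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return bits >> 16

def bf16_bits_to_f32(b: int) -> float:
    """Expand bfloat16 bits back to float32 by zero-filling the low mantissa bits."""
    return struct.unpack('<f', struct.pack('<I', b << 16))[0]

# Precision drops, but the float32 exponent range is preserved:
print(bf16_bits_to_f32(f32_to_bf16_bits(3.14159265)))  # 3.140625
print(bf16_bits_to_f32(f32_to_bf16_bits(1e38)))        # still finite, no overflow
```

Because the exponent field is unchanged, any magnitude representable in FP32 survives the conversion; only mantissa precision is lost, which is what makes the format attractive for training workloads.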
Cooper Lake has been on Intel's roadmap since the middle of last year, and is fabricated with a 14 nanometre process.
Intel and Chinese vendor Inspur also provided an open hardware server motherboard reference design that features four next-generation Xeon processor sockets.
The high-density design packs up to 112 processor cores into a 2U chassis.
Dell, HP, Hyve Solutions, Lenovo, Quanta, Supermicro, Wiwynn and ZT Systems intend to bring out servers based on the reference design this year.