ValidMind, a regulatory compliance platform for AI risk management at banks, raises $8.1 million in seed funding to automate model validation and documentation.
Databricks launches DBRX, a powerful open source AI model that outperforms rivals, challenges big tech, and sets a new standard for enterprise AI efficiency and performance.
Tokenizing time series data and treating it like a language enables a model whose zero-shot performance matches or exceeds that of purpose-built models.
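The tokenization idea above can be sketched in a few lines. This is a minimal, illustrative scheme (an assumption, not the actual tokenizer from the work being covered): scale a real-valued series, then bucket each value into one of a fixed number of quantization bins so the series becomes a sequence of integer token ids a language model can consume.

```python
# Minimal sketch of time-series tokenization (illustrative assumption, not the
# covered model's actual scheme): mean-absolute scaling followed by uniform
# binning into a fixed vocabulary of token ids.

def tokenize_series(values, vocab_size=256, lo=-3.0, hi=3.0):
    """Map a real-valued series to integer token ids in [0, vocab_size)."""
    mean = sum(abs(v) for v in values) / len(values) or 1.0  # mean-absolute scale
    scaled = [v / mean for v in values]
    width = (hi - lo) / vocab_size
    tokens = []
    for v in scaled:
        v = min(max(v, lo), hi - 1e-9)        # clamp into the bin range
        tokens.append(int((v - lo) / width))  # bin index becomes the token id
    return tokens, mean

def detokenize(tokens, mean, vocab_size=256, lo=-3.0, hi=3.0):
    """Invert tokenization approximately, using each bin's center value."""
    width = (hi - lo) / vocab_size
    return [(lo + (t + 0.5) * width) * mean for t in tokens]
```

The round trip is lossy only up to half a bin width, which is what lets a next-token language model over these ids act as a forecaster.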
How can we effectively generate missing data transformations among tables in a data repository? Multiple versions of the same tables are generated as data scientists and machine learning engineers iteratively fine-tune their ML pipelines, making incremental improvements. This…
Zapata Computing, Inc., the Industrial Generative AI company, announced that its scientists, in collaboration with Insilico Medicine, the University of Toronto, and St. Jude Children’s Research Hospital, have demonstrated the first instance of a generative model running on quantum hardware outperforming state-of-the-art classical models in generating viable cancer drug candidates. The research points to a promising future of hybrid quantum generative AI for drug discovery using today’s quantum devices.
For OpenAI CTO Mira Murati, an exclusive Wall Street Journal interview with personal tech columnist Joanna Stern yesterday seemed like a slam dunk. The clips of OpenAI’s Sora text-to-video model, which was shown off in a […]
Anthropic launches Claude 3 Haiku, the fastest and most affordable AI model in its class, featuring advanced vision capabilities and enterprise-grade security for high-volume, latency-sensitive applications.
Cohere, a leading AI startup, unveils powerful new language model Command-R and seeks up to $1 billion in funding to compete with OpenAI and Anthropic in the enterprise AI market.
Sketches have rich spatial information to help the robot carry out its tasks without getting confused by the clutter of realistic images or the ambiguity of natural language instructions.
HP Amplify — NVIDIA and HP Inc. today announced that NVIDIA CUDA-X™ data processing libraries will be integrated with HP AI workstation solutions to turbocharge the data preparation and processing work that forms the foundation of generative AI development.
Abacus AI tested the uncensored LLM on MT-Bench and found it performs slightly better than the best open-source model on the HumanEval leaderboard, Qwen1.5-72B-Chat.
In the HellaSwag LLM benchmark, which evaluates common-sense natural language inference, Danube achieved an accuracy of 69.58%, sitting just behind Stability AI’s Stable LM 2 1.6 billion parameter model.
While it remains to be seen how well StarCoder2 models perform in different coding scenarios, the companies did note that the performance of the smallest 3B model alone matched that of the original 15B StarCoder LLM.
V-JEPA uses the same principle of learning through observation, referred to as “self-supervised learning,” meaning it does not need human-labeled data.
Samba-1 is not a single model like OpenAI's GPT-4; rather, it is a combination of more than 50 high-quality AI models assembled in an approach that SambaNova refers to as a Composition of Experts architecture.
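The Composition-of-Experts idea can be sketched as a router that inspects each request and dispatches it to one specialist model. This is a hypothetical, minimal illustration (the router logic, expert names, and callables are all assumptions, not SambaNova's implementation):

```python
# Minimal sketch of a Composition-of-Experts dispatcher (hypothetical example,
# not SambaNova's actual system): a router matches the prompt against keywords
# and forwards it to the corresponding specialist model, else to a generalist.

def make_router(experts, default):
    """experts: {keyword: callable}; route a prompt by first matching keyword."""
    def route(prompt):
        for keyword, expert in experts.items():
            if keyword in prompt.lower():
                return expert(prompt)
        return default(prompt)
    return route

# Hypothetical stand-in experts for illustration; real ones would be full LLMs.
experts = {
    "sql": lambda p: "sql-expert answer",
    "legal": lambda p: "legal-expert answer",
}
route = make_router(experts, default=lambda p: "general answer")
```

In production such routing is typically learned rather than keyword-based, but the shape is the same: many narrow models behind one interface.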