Most developers have already incorporated AI tools such as Copilot and ChatGPT into their day-to-day work. At the same time, a staggering 79% of devs have privacy and security concerns.
Find out where AI innovation meets security in these handy technical resources from Nylas:
Writer, who is developing a "full-stack" generative AI platform for businesses, has secured $100M in Series B funding led by ICONIQ Growth. The funds will advance its "industry-specific" text-generating AI models. Differentiating itself in a competitive market, Writer focuses on non-copyrighted business writing, cost-effective models, transparency, and ensuring that no training is done using customer data.
Databricks has secured over $500 million in a Series I round, boosting its valuation to $43 billion. Noteworthy investors include T. Rowe Price, Morgan Stanley, and Nvidia. Despite wider market funding slowdowns, Databricks' strong revenue growth and run rate suggest an eventual IPO could exceed its current valuation.
Fine-tuning pre-trained vision models for specific tasks often requires a lot of extra computing power. Researchers have come up with a method called "Salient Channel Tuning" (SCT) that smartly picks which parts of a model to fine-tune, using far fewer additional parameters while still outperforming other methods in almost all tested tasks.
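The write-up above has no code, but the core idea — score channels, tune only the top-scoring ones, freeze the rest — can be sketched in a few lines. This is a toy illustration only: the magnitude-based saliency score and all names here are my own stand-ins, not the paper's actual criterion.

```python
import numpy as np

def select_salient_channels(weight, k):
    """Score each output channel (here: mean absolute weight magnitude,
    a stand-in for a task-driven saliency score) and return the indices
    of the top-k channels chosen for fine-tuning."""
    scores = np.abs(weight).mean(axis=1)          # one score per channel
    return np.argsort(scores)[-k:][::-1]          # top-k, highest first

def masked_update(weight, grad, salient, lr=0.1):
    """Apply a gradient step only to the selected channels;
    every other channel stays frozen at its pre-trained value."""
    mask = np.zeros(weight.shape[0], dtype=bool)
    mask[salient] = True
    new_w = weight.copy()
    new_w[mask] -= lr * grad[mask]
    return new_w

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 4))   # 8 channels x 4 features
g = rng.normal(size=(8, 4))   # pretend per-channel gradients
top = select_salient_channels(w, k=2)
w2 = masked_update(w, g, top)
```

Only the two selected channel rows of `w2` differ from `w`; the other six are untouched, which is where the parameter savings come from.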
Large language models like GPT-3 are smart because they learn from a ton of data, but how do they react when new information conflicts with what they already know? Researchers have found that these models can get confused and give wrong answers, especially when they face conflicting or distracting info, raising concerns about how reliable they can be.
Typically with language models, the input and output embedding layers are tied together. However, if you tie the input embedding to other embeddings within the model and use a contrastive loss, you can get dramatically improved performance, both in wall-clock time and in final benchmark accuracy.
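The summary above names two ingredients, weight tying and a contrastive loss over embedding rows. Here is a rough numpy sketch of both; all shapes, names, and the InfoNCE-style loss are illustrative choices on my part, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 10, 16
# Input embedding table, reused ("tied") as the output projection,
# so there is no separate output weight matrix.
E = rng.normal(size=(vocab, dim))

def tied_logits(h, emb=E):
    """Project hidden states onto the tied embedding rows."""
    return h @ emb.T

def contrastive_loss(hidden, token_ids, emb=E, tau=0.1):
    """InfoNCE-style objective: pull each hidden state toward its
    target token's embedding row, push it away from all other rows."""
    h = hidden / np.linalg.norm(hidden, axis=1, keepdims=True)
    e = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    logits = h @ e.T / tau
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(token_ids)), token_ids].mean()

targets = np.array([1, 3, 5])
aligned = E[targets] + 0.01 * rng.normal(size=(3, dim))   # near targets
random_h = rng.normal(size=(3, dim))                      # unrelated
```

Hidden states that sit close to their target embedding rows (`aligned`) score a much lower loss than unrelated vectors (`random_h`), which is the signal the contrastive term optimizes.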
Open source AI models have significant advantages over proprietary models. For most AI applications, controllability, customizability, transparency, and trust are more important than advanced reasoning abilities. AI-native companies should focus on owning their core models rather than outsourcing their intelligence layer to external providers.
Event cameras have some cool benefits like low power use and quick response times, but they struggle with creating detailed 3D models of scenes. Researchers have introduced Robust e-NeRF, a new method that makes these cameras much better at building 3D models, even in challenging conditions like fast motion or varying light.
Are you developing your next AI project? You need the right domain name to support it. Porkbun offers 500+ domain extensions including .ai, .cloud, .app, and .dev, all backed by powerful web hosting solutions and incredible support 365 days/year.
This article argues that we should wait to regulate AI, since regulation frequently reduces competition and production, pushes economic benefits overseas, and produces a government-centric rather than a user-centric industry.
Despite concerns that AI would render human creativity obsolete, we continue to appreciate human endeavors, like chess, even when AI performs better. This paradox arises because humans value imperfection, surprises, and shared experiences, which AI, with its constant optimization, cannot replicate.
GE Health is set to create an AI-assisted ultrasound imaging tool designed for easy use by healthcare providers without specialized training. The device aims to improve medical imaging and will primarily focus on maternal, fetal, and pediatric lung health to address preventable maternal and child mortality.