TLDR AI 2023-09-06

Hugging Face training cluster as a service 💻, improving math skills in LLMs ➕, fine-tuning zero-shot models 0️⃣

🚀
Headlines & Launches

X’s privacy policy confirms it will use public data to train AI models (3 minute read)

X has updated its privacy policy to collect biometric, job, and education data from users. Notably, X will use this data and other public info to train its AI models. Alex Ivanovs suggests Elon Musk might leverage X's data for his AI venture, xAI. Musk clarifies only public data will be used.

Hugging Face training cluster as a service (3 minute read)

Hugging Face launched a new service that lets you train a model without managing the complicated infrastructure yourself.

Kindo raises $7M for AI-enhanced productivity tools (4 minute read)

Led by Riot Ventures, Kindo aims to create a central set of tools for managing complex enterprise AI processes.
🧠
Research & Innovation

A Smarter Way for Self-Driving Cars and Robots to See More Clearly (14 minute read)

The authors have created a new tool called SQLdepth that helps self-driving cars and robots understand their surroundings in great detail.

Improving Math Skills in Big Language Models (21 minute read)

This research looks into how well LLMs can solve math problems and which factors affect their abilities. The authors find that a method called Rejection sampling Fine-Tuning (RFT), which fine-tunes on correct solutions sampled from the model itself, improves mathematical reasoning, especially when samples from several models are combined.
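The RFT recipe can be sketched as a simple filter over sampled solutions: generate several candidates per problem, keep only those whose final answer matches the ground truth, deduplicate, and fine-tune on what remains. A minimal illustration follows; the `generate_candidates` stub and the toy answer parser are hypothetical placeholders, not the paper's code:

```python
# Sketch of Rejection sampling Fine-Tuning (RFT) data selection.
# `generate_candidates` stands in for sampling k solutions from an LLM;
# it is a hypothetical stub so the example is self-contained.

def generate_candidates(problem: str, k: int) -> list[str]:
    # In practice: sample k chain-of-thought solutions from the model.
    return [f"... the answer is {n}" for n in (41, 42, 42, 7)][:k]

def extract_answer(solution: str) -> str:
    # Toy parser: treat the final token as the answer.
    return solution.split()[-1]

def rft_filter(dataset: list[tuple[str, str]], k: int = 4) -> list[tuple[str, str]]:
    """Keep only sampled solutions whose final answer matches the ground
    truth, deduplicated, to form the RFT fine-tuning set."""
    kept = []
    for problem, gold in dataset:
        seen = set()
        for sol in generate_candidates(problem, k):
            if extract_answer(sol) == gold and sol not in seen:
                seen.add(sol)
                kept.append((problem, sol))
    return kept

pairs = rft_filter([("What is 6*7?", "42")])
print(len(pairs))
```

Of the four sampled candidates only the correct, deduplicated solution survives; the resulting (problem, solution) pairs become the fine-tuning set.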

Robust fine-tuning of zero-shot models (28 minute read)

Zero-shot models such as CLIP perform robustly across a broad range of data distributions. Fine-tuning them on a narrow task, however, tends to erode that robustness even as in-task accuracy improves. This paper presents a method for fine-tuning such models that preserves their broad generalization while still improving task-specific performance.
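The method centers on weight-space ensembling: linearly interpolate the zero-shot and fine-tuned checkpoints, with the mixing coefficient alpha trading robustness against task accuracy. A minimal sketch, where dicts of plain lists stand in for real model tensors:

```python
def interpolate_weights(theta_zs: dict, theta_ft: dict, alpha: float) -> dict:
    """Weight-space ensembling of two checkpoints.
    alpha=0 recovers the robust zero-shot model; alpha=1 the fine-tuned one."""
    return {
        name: [(1 - alpha) * z + alpha * f
               for z, f in zip(theta_zs[name], theta_ft[name])]
        for name in theta_zs
    }

# Toy two-parameter "model" (real checkpoints are large tensors).
zs = {"w": [1.0, 0.0]}
ft = {"w": [0.0, 2.0]}
mixed = interpolate_weights(zs, ft, alpha=0.5)
print(mixed["w"])  # [0.5, 1.0]
```

Because the interpolation happens once in weight space, inference cost is identical to a single model, unlike output-space ensembles that run both models.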
👨‍💻
Engineering & Resources

Batch LoRAs on the same GPU (GitHub Repo)

LoRAs are small parameter adapters that change the behavior of an underlying model by plugging into certain layers. Generation for the shared base model can be batched to improve throughput, and LoRA-modified variants can be batched together as well, letting you serve differently adapted models to different users on the same GPU.

Making AI Bots Better at Real-Life Conversations (GitHub Repo)

Many AI chatbots struggle to hold the kind of natural, wide-ranging conversations humans have. This project introduces 'Topical-Chat', a dataset of human conversations spanning 8 broad topics, built to teach models to converse more naturally without fixed speaker roles.
🎁
Miscellaneous

Ban or Embrace? Colleges Wrestle With AI-Generated Admissions Essays (7 minute read)

Colleges grapple with the rise of AI-generated admissions essays. With chatbots like ChatGPT able to produce college essays, officials fear a potential for generic or plagiarized content. While some schools discourage AI use, Georgia Tech advises students to use it responsibly for brainstorming and refining, but not for direct content creation.

All 50 states call on Congress to address AI-generated CSAM (2 minute read)

US attorneys general from all 50 states urge Congress to create a commission investigating AI's role in child exploitation. They're concerned about AI being used to generate child sexual abuse material (CSAM) and deepfakes. Led by South Carolina's Attorney General Alan Wilson, the push aims to expand CSAM restrictions to cover AI-generated content.

Unraveling The Ethical Complexities Of AI (18 minute read)

An exploration of the central ethical dilemmas posed by artificial intelligence, with the goal of illuminating how AI may shape our collective destiny and the power we have to guide it down a path aligned with human values.
⚡️
Quick Links

SAM.cpp (GitHub Repo)

Meta's Segment Anything model running in pure C++, powered by GGML.

Kula (Product)

Your AI assistant to hire top talent.

1.6B parameter model reaches 32% on HumanEval (8 minute read)

Best-in-class, lightning-fast code generation model, trained from scratch on 1.2T tokens of half code, half natural language data.