Projected Language Models: A Large Model Pre-Segmented Into Smaller Ones

This paper has been accepted at the Foundation Models in the Wild workshop at ICML 2024.
Large language models are versatile tools, but they are not suitable for small inference budgets. Small models offer more efficient inference, but their lower capacity means they perform well only when their scope is limited to a specialized domain. This paper explores how to obtain a small language model with good specialized accuracy, even when the specialization data is unknown during pretraining. We propose a novel architecture, projected networks (PN). A PN is a high-capacity network whose parameters can be linearly projected into a small network for fine-tuning. We assess the empirical effectiveness of our solution compared to small-model training, distillation, and hard mixtures of experts.
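The core idea of the abstract, projecting the parameters of a large pretrained network into a small one before fine-tuning, can be illustrated with a minimal sketch. The names (`project_weights`, `P_out`, `P_in`), sizes, and the use of plain NumPy are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

# Hypothetical sketch: a large pretrained weight matrix is linearly
# projected into a small one, which is then fine-tuned on the
# specialized domain. Shapes and projection matrices are assumptions.

rng = np.random.default_rng(0)

D_large, D_small = 512, 64                     # hidden sizes of the large and small models
W_large = rng.normal(size=(D_large, D_large))  # one pretrained layer of the large model

# Fixed linear maps taking the large layer's input/output spaces
# down to the small model's dimensionality.
P_out = rng.normal(size=(D_small, D_large)) / np.sqrt(D_large)
P_in = rng.normal(size=(D_large, D_small)) / np.sqrt(D_large)

def project_weights(W, P_out, P_in):
    """Linearly project a large weight matrix into a small one."""
    return P_out @ W @ P_in

W_small = project_weights(W_large, P_out, P_in)
print(W_small.shape)  # (64, 64): a small layer ready for specialized fine-tuning
```

After projection, only the small network's parameters are trained on the specialized data, so inference runs at the small model's cost while still inheriting structure from the large pretrained weights.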
