SAELens
ArthurConmyGDM committed
Commit 8f91220
1 Parent(s): ed67fee

Update README.md

Files changed (1)
  1. README.md +9 -19
README.md CHANGED
@@ -1,30 +1,20 @@
 ---
-license: apache-2.0
+license: cc-by-4.0
 ---
 
-# 1. GemmaScope
+# 1. Gemma Scope
 
-Gemmascope is TODO
+Gemma Scope is a comprehensive, open suite of sparse autoencoders for Gemma 2 9B and 2B. Sparse autoencoders are a "microscope" of sorts that can help us break down a model’s internal activations into the underlying concepts, just as biologists use microscopes to study the individual cells of plants and animals.
 
-# 2. What Is `gemmascope-9b-pt-mlp`?
+See our [landing page](https://huggingface.co/google/gemma-scope) for details on the whole suite. This is a specific set of SAEs:
 
-- `gemmascope-`: see 1.
-- `9b-pt-`: These SAEs were trained on the Gemma v2 9B base model (TODO link)
-- `mlp`: These SAEs were trained on the outputs of MLP layers.
+# 2. What Is `gemma-scope-9b-pt-mlp`?
 
-## 3. GTM FAQ (TODO(conmy): delete for main rollout)
+- `gemma-scope-`: See 1.
+- `9b-pt-`: These SAEs were trained on the Gemma v2 9B base model.
+- `mlp`: These SAEs were trained on the model's MLP sublayer outputs.
 
-Q1: Why does this model exist in `gg-hf`?
-
-A1: See https://docs.google.com/document/d/1bKaOw2mJPJDYhgFQGGVOyBB3M4Bm_Q3PMrfQeqeYi0M (Google internal only).
-
-Q2: What does "SAE" mean?
-
-A2: Sparse Autoencoder. See https://docs.google.com/document/d/1roMgCPMPEQgaNbCu15CGo966xRLToulCBQUVKVGvcfM (should be available to trusted HuggingFace collaborators, and Google too).
-
-TODO(conmy): remove this when making the main repo.
-
-## 4. Point of Contact
+## 3. Point of Contact
 
 Point of contact: Arthur Conmy
 
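As background for the "sparse autoencoder" paragraph added in this commit: the Gemma Scope release describes its SAEs as JumpReLU SAEs, which project an activation vector into a much wider, sparse latent space and then reconstruct it. Below is a minimal PyTorch sketch of that encode/decode step; the class and parameter names (`JumpReLUSAE`, `W_enc`, `W_dec`, `b_enc`, `b_dec`, `threshold`) are illustrative assumptions, not this repository's actual loading code.

```python
import torch
import torch.nn as nn


class JumpReLUSAE(nn.Module):
    """Illustrative JumpReLU sparse autoencoder (a sketch, not the official loader)."""

    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        # Encoder/decoder parameters; trained checkpoints would supply real values.
        self.W_enc = nn.Parameter(torch.zeros(d_model, d_sae))
        self.W_dec = nn.Parameter(torch.zeros(d_sae, d_model))
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.b_dec = nn.Parameter(torch.zeros(d_model))
        # Per-latent JumpReLU threshold: pre-activations at or below it are zeroed.
        self.threshold = nn.Parameter(torch.zeros(d_sae))

    def encode(self, acts: torch.Tensor) -> torch.Tensor:
        # Project activations (e.g. an MLP sublayer output) into the wider latent
        # space, keeping only latents whose pre-activation clears their threshold.
        pre = acts @ self.W_enc + self.b_enc
        return torch.where(pre > self.threshold, pre, torch.zeros_like(pre))

    def decode(self, latents: torch.Tensor) -> torch.Tensor:
        # Reconstruct the original activation vector from the sparse latents.
        return latents @ self.W_dec + self.b_dec

    def forward(self, acts: torch.Tensor) -> torch.Tensor:
        return self.decode(self.encode(acts))
```

Each surviving latent is intended to correspond to a human-interpretable concept, which is the "microscope" framing in the README: `encode` breaks an MLP activation into concepts, and `decode` measures how much of the activation those concepts explain.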