Frequently Asked Questions

First version: April 25, 2026. Last update: April 25, 2026.

What is Deep Research?

Deep Research is a technique that gives modern LLMs access to search, allowing them to cite sources based on your directions or intent, producing higher-quality outputs tied to verifiable sources. A deep research run is an extended session that uses a reasoning model, and most major LLM labs currently offer one.

What models do you use?

Whatever is best for the task. ChatGPT Deep Research, Claude with extended thinking, Gemini Deep Research, and others. The model is listed on each entry.

What will I learn from these articles?

The text is the source; read it.

Is this beginner-friendly?

Who is beginner, and why is he looking for friends? Knowledge is better than friends.

Does this assume prior knowledge?

There is no guarantee. Some lists are made for me; some reimagine how I would learn a topic from scratch, more efficiently, if I could go back. Others are designed around my aspirations of teaching certain topics and developing curricula.

Can I apply this to coding?

Where it already applies.

Can I apply this to research?

That is the goal.

Can I apply this to thinking better?

If it holds, you will not need to ask.

Was this written by AI?

Yes. This project is specifically centered on AI-accelerated learning, not new original research. The deep research capabilities of these models are much better suited to simpler tasks: creating curricula, finding resources, and providing a decent synopsis of a field to point you toward where to start your own rabbit hole.

Are these actual citations or just references?

Both, depending on the article. Deep research models work by reading pages on the web and pulling information from them. Each numbered citation in an article maps to an entry in the sources panel, which links to the original page the model consulted. The model reads these sources as input tokens, reasons over them, then produces the article as output tokens. These are not hallucinated URLs; the model actually visited and read each source during its research session. Whether a given citation is a direct quote, a paraphrase, or a general reference depends on how the model used that source in context.

How do you verify the output?

Every entry includes its sources. The research models cite their work. If a claim lacks a source, treat it with appropriate skepticism. The sources panel on each report links to everything the model referenced.

Can I see the prompts?

The prompts database is included in the repository. Each entry's slug maps to the exact prompt that generated it.
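As a minimal sketch of that slug-to-prompt lookup (assuming the database is a JSON file of slug-to-prompt pairs; the filename and schema here are hypothetical, not the repository's actual layout):

```python
import json

# Assumed schema: a flat JSON object mapping each entry's slug to the
# exact prompt text that generated it, e.g. {"some-article": "Research ..."}.

def load_prompts(path: str) -> dict[str, str]:
    """Load the slug -> prompt mapping from a JSON file (hypothetical name)."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def prompt_for(slug: str, prompts: dict[str, str]) -> str:
    """Return the exact prompt that generated the entry with this slug."""
    try:
        return prompts[slug]
    except KeyError:
        raise KeyError(f"No prompt recorded for slug: {slug}") from None
```
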

Where do I start?

Wherever you please.

Are some articles more important than others?

Each was important enough to tickle my fancy that I spent time reading it. Maybe someday, if this grows large enough or relevant enough, I will rank them.

Are these worth reading?

It is not my place to decide what is worth it for you.

Why read these instead of using ChatGPT, Perplexity, etc. directly?

See the about page. More answers to come.

Can I use these reports?

Do whatever you want with them. If they help you learn something, that is the entire point.