Deepseek

We haven't received many problem reports for Deepseek yet.
If Deepseek is down for you, you might be one of the first to encounter an outage - please consider checking back later. It's also possible that the issue is local, or related to your own network or device.

Deepseek reports

Deepseek is an AI chatbot. When Deepseek is down, the service may be unable to respond to prompts, the API may be unreachable, or login may fail.
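
One quick way to tell a service-side outage from a local problem is to call the API directly. The sketch below assumes DeepSeek's OpenAI-compatible HTTP API at https://api.deepseek.com with the deepseek-chat model and a placeholder API key; adjust for your own setup.

    import requests  # third-party package: pip install requests

    API_KEY = "YOUR_DEEPSEEK_API_KEY"  # placeholder, use your own key

    # Send a minimal chat request; a 200 response means the service answered,
    # while repeated 5xx errors or timeouts point to a problem on DeepSeek's side.
    resp = requests.post(
        "https://api.deepseek.com/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "deepseek-chat",
            "messages": [{"role": "user", "content": "ping"}],
            "max_tokens": 1,
        },
        timeout=30,
    )
    print(resp.status_code, resp.text[:200])

If this request succeeds while the web chat does not, the outage is likely limited to the website or your login session rather than the model service itself.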

Reports about problems with Deepseek

What might be more astonishing is that a lot of separately generated sequences are exactly the same. For example, when I use Deepseek V3 with temperature 1.0, the sequence HTTHHTTHTH appears 6 times, which has a probability of about 10 to the negative 14, so it essentially should not have happened.
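
For context on the arithmetic in that report: under a fair-coin model, any specific 10-flip sequence has probability 2^-10 (about 0.001), and the chance of the same sequence recurring depends on how many sequences were generated, which the report does not state. The sketch below uses an assumed count of 100 generations purely for illustration.

    from math import comb

    # Probability of one specific 10-flip sequence under a fair-coin model (~9.8e-4).
    p_seq = 0.5 ** 10

    # Probability of that exact sequence appearing at least 6 times in n independent
    # generations; n = 100 is an assumption, since the report doesn't give the count.
    n = 100
    p_repeat = sum(comb(n, k) * p_seq**k * (1 - p_seq)**(n - k) for k in range(6, n + 1))
    print(p_seq, p_repeat)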
@chsmrrll Jensen was right when he said that Deepseek would deepen the market, but only in the short term. In the long term, such independent building of code and products, in isolation from the rules imposed by the software environment, will revolutionize the market.
DeepSeek-R1-0528-Qwen3-8B is completely and utterly unusable to me. Got it in LM Studio, got the recommended version for my VRAM, and it's not able to complete an answer. Is there some secret to getting it to work properly?
DeepSeek R1 0528: Not So Minor After All. The latest DeepSeek update might wear the label of a “minor release,” but the impact is anything but. Under the surface, this drop speaks volumes about China’s AI trajectory, geopolitical pressure, and the slow erosion of Western tech
@iamDCinvestor If anything you'd need more GPUs worldwide once DeepSeek's vision of decentralizing LLM servers is realized. Running DeepSeek for 1 user requires 8 full GPUs. Running DeepSeek for 100 users requires about 100-200 GPUs if the private LLM system is scaled efficiently. By contrast
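
Taking the figures in that post at face value (they are the poster's estimates, not measured numbers), the implied per-user GPU count drops sharply once requests are batched across users:

    # Per-user GPU counts implied by the post above (poster's estimates, not measurements).
    single_user = 8 / 1        # 8 GPUs serving one user
    scaled_low = 100 / 100     # 1 GPU per user at the low end of the 100-200 range
    scaled_high = 200 / 100    # 2 GPUs per user at the high end
    print(single_user, scaled_low, scaled_high)  # 8.0 vs 1.0-2.0 GPUs per user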
DeepSeek R1-v2 created a generative model whose theory I don't fully understand. I told it to predict the successor of the transformer. Sonnet 4 fixed some errors within the training pipeline. What a crazy time we live in...
Deepseek-R1 (128K context): Strong at reasoning and math problems (in case you're using Cline for homework). Excellent bang for your buck ($0.45 in / $2.70 out per M tokens).
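
As a worked example of what those rates mean in practice (the token counts below are made up for illustration):

    # Cost at the quoted rates: $0.45 per million input tokens, $2.70 per million output tokens.
    input_tokens = 500_000    # assumed usage, for illustration only
    output_tokens = 100_000

    cost = input_tokens / 1e6 * 0.45 + output_tokens / 1e6 * 2.70
    print(f"${cost:.3f}")     # 0.225 + 0.270 = $0.495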