1.
Keith Haring
2.
A Drag-and-Drop library for all JavaScript frameworks
3.
(Only to showcase the Astra UI)
Connecting Mongoose to Astra
Retrieval-Augmented Generation (RAG)
JSON API
GenAI
MongooseJS ODM
With the introduction of the JSON API, Astra DB arguably becomes the go-to choice for JavaScript developers building AI applications.
Embeddings are a specific vector representation of content or a query, created by machine learning models that capture the semantic meaning of text or of other content such as images. Natural-language machine learning models are trained on large amounts of data and learn to recognize patterns and relationships between words. During training, they attempt to represent any input as a vector of real numbers in an intermediate step called the encoder. After training, these language models can be modified so that this intermediate vector representation becomes the model's output. The resulting embeddings are high-dimensional vectors in which words with similar meanings sit closer together in the vector space, as described in Understand embeddings (Azure OpenAI).
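As a concrete illustration, here is a minimal sketch of producing embeddings and comparing them by cosine similarity. It assumes the open-source sentence-transformers package and the all-MiniLM-L6-v2 checkpoint; both are illustrative choices, not part of the Azure OpenAI setup described above.

```python
# Minimal sketch: embed sentences and compare them with cosine similarity.
# Assumes: pip install sentence-transformers numpy (illustrative choices).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "The cat sat on the mat.",
    "A kitten rests on a rug.",
    "Quarterly revenue grew 12%.",
]
# encode() returns one high-dimensional vector per sentence.
embeddings = model.encode(sentences)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantically similar sentences score higher than unrelated ones.
print(cosine(embeddings[0], embeddings[1]))  # high: cat/mat vs kitten/rug
print(cosine(embeddings[0], embeddings[2]))  # low: unrelated finance text
```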
4.
Faiss
An open-source library that provides local, file-based storage
Vector indexes are stored in the machine learning workspace's storage account
This library is a good fit if you only have vector data
Because storage is local, cost is minimal, which makes it well suited to development and testing
Faiss is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting code for evaluation and parameter tuning. The vectors can be compared with L2 (Euclidean) distances or dot products.
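A minimal sketch of the exact L2 search just described, assuming the faiss-cpu package; the vectors here are random placeholders.

```python
# Minimal sketch: exact nearest-neighbour search with Faiss.
# Assumes: pip install faiss-cpu numpy; data is random placeholder vectors.
import numpy as np
import faiss

d = 64                                            # vector dimensionality
rng = np.random.default_rng(0)
xb = rng.random((10_000, d), dtype=np.float32)    # database vectors
xq = rng.random((5, d), dtype=np.float32)         # query vectors

index = faiss.IndexFlatL2(d)    # exact search with L2 (Euclidean) distance
index.add(xb)                   # the flat index stores vectors in memory
D, I = index.search(xq, 4)      # 4 nearest neighbours per query
print(I)                        # row i: ids of the neighbours of query i
print(D)                        # matching squared L2 distances
```

IndexFlatL2 does brute-force exact search; Faiss also offers approximate indexes (IVF, HNSW, etc.) that trade a little recall for much lower latency on large collections.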
This is already the second day spent on this set of articles.
5.
Newly started.
Standard instruction tuning with multi-task supervised fine-tuning (commonly known as SFT) may still not result in models that follow human intentions while being safe, ethical, and harmless, and can be further improved with Reinforcement Learning from Human Feedback (RLHF): human annotators rank outputs from the fine-tuned model, and these rankings are used to fine-tune the model again with reinforcement learning (Ouyang et al., 2022b). Recent work showed that human feedback may be replaced with feedback from an LLM, a process referred to as Reinforcement Learning from AI Feedback (RLAIF) (Bai et al., 2022b). Direct Preference Optimization (DPO) bypasses the need to fit a reward model to human preferences as in RLHF and instead directly fine-tunes the policy with a cross-entropy objective, achieving more efficient alignment of the LLM to human preferences.
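To make the DPO objective concrete, here is a minimal PyTorch sketch of its loss. It assumes per-sequence log-probabilities have already been computed under the policy and a frozen reference model; the function name dpo_loss and the toy batch are hypothetical, not from any particular codebase.

```python
# Minimal sketch of the DPO loss (Rafailov et al., 2023). Assumes: pip install torch.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Each argument is a tensor of summed per-sequence log-probabilities
    log pi(y|x) for the chosen (preferred) or rejected completion, under
    either the policy being trained or the frozen reference model."""
    # Implicit reward for each completion: beta * log(pi / pi_ref).
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Cross-entropy on the reward margin: push the policy to prefer the
    # chosen completion over the rejected one, without a learned reward model.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with dummy log-probabilities for a batch of 3 preference pairs.
torch.manual_seed(0)
loss = dpo_loss(torch.randn(3), torch.randn(3), torch.randn(3), torch.randn(3))
print(loss)
```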
3.5.3 Medical Benchmarks
One desirable capability of LLMs is contributing to medical tasks so as to make affordable, high-quality healthcare more accessible to the broader public. For mental health, the IMHI benchmark (Yang et al., 2023c) is constructed from 10 existing mental-health analysis datasets, covering mental-health detection: DR (Pirina & Çöltekin, 2018), CLP (Coppersmith et al., 2015), Dreaddit (Turcan & McKeown, 2019), loneliness, SWMH, and T-SID (Ji et al., 2022); mental-health cause detection: SAD (Mauriello et al., 2021), CAMS (Garg et al., 2022); and mental-risk-factor detection: MultiWD (SATHVIK & Garg, 2023), IRF (Garg et al., 2023). For radiology, the OpenI (Demner-Fushman et al., 2016) and MIMIC-CXR (Johnson et al., 2019) datasets both contain radiology reports with findings and impressions text.
Models. For mental health, MentalLlama-chat-13B (Yang et al., 2023c) fine-tunes a Llama-chat-13B model on the IMHI training set. With zero-shot prompting, MentalLlama-chat-13B outperforms ChatGPT with few-shot or zero-shot prompting on 9 of the 10 IMHI tasks. Liu et al. (2023) propose fine-tuning a Llama checkpoint to generate impression text given radiology report findings. The resulting Radiology-Llama-2 model outperforms ChatGPT and GPT-4 by a large margin on both MIMIC-CXR and OpenI.
6.
Skipped.
the author
7.
This is the best professional English-language website about menus I have ever seen, but I have explored very little in this area.
8.
9.
10.