24.1.11

Date: Jan 11, 2024
1. Bristol Myers Squibb - Pharmaceutical industry company
2. Edinburgh Park Arena
3. Ambient photonics cells
4. Euler diagram for P, NP, NP-complete, and NP-hard sets of problems. The left side is valid under the assumption that P≠NP, while the right side is valid under the assumption that P=NP (except that the empty language and its complement are never NP-complete, and in general, not every problem in P or NP is NP-complete).
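For reference, the containments the diagram encodes can be written out explicitly; the relations below are standard complexity-theory facts, not part of the clipped caption:

```latex
% Set relations summarized by the Euler diagram (standard definitions/facts).
\[
  \mathrm{NP\text{-}complete} \;=\; \mathrm{NP} \,\cap\, \mathrm{NP\text{-}hard}
\]
\[
  \text{If } \mathrm{P} \neq \mathrm{NP}:\quad
  \mathrm{P} \subsetneq \mathrm{NP}
  \quad\text{and}\quad
  \mathrm{P} \cap \mathrm{NP\text{-}complete} = \emptyset .
\]
\[
  \text{If } \mathrm{P} = \mathrm{NP}:\quad
  \mathrm{NP\text{-}complete} \;=\; \mathrm{P} \setminus \{\emptyset, \Sigma^{*}\} .
\]
```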
5. Orchard Brae
6. Samsung Music Frame
7. 🖤
“we provide theoretical evidence that the complexity class of a model determines its ability to generalize, that transformers are not Turing complete, and that Find+Replace transformers are. Accordingly, Find+Replace transformers should generalize better to difficult tasks on which existing state-of-the-art transformers fail. In this section, we run experiments to verify that this is true and to prove that Find+Replace transformers can still be trained efficiently despite being Turing complete.” (“Turing Complete Transformers: Two Transformers Are More Powerful Than One”, 2023, p. 8)
8. [PDF] Sparks of Artificial General Intelligence: Early experiments with GPT-4 | Semantic Scholar
It is argued that (this early version of) GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google's PaLM for example) that exhibit more general intelligence than previous AI models, and the rising capabilities and implications of these models are discussed. Artificial intelligence (AI) researchers have been developing and refining large language models (LLMs) that exhibit remarkable capabilities across a variety of domains and tasks, challenging our understanding of learning and cognition. The latest model developed by OpenAI, GPT-4, was trained using an unprecedented scale of compute and data. In this paper, we report on our investigation of an early version of GPT-4, when it was still in active development by OpenAI. We contend that (this early version of) GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google's PaLM for example) that exhibit more general intelligence than previous AI models. We discuss the rising capabilities and implications of these models. We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4's performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system. In our exploration of GPT-4, we put special emphasis on discovering its limitations, and we discuss the challenges ahead for advancing towards deeper and more comprehensive versions of AGI, including the possible need for pursuing a new paradigm that moves beyond next-word prediction. We conclude with reflections on societal influences of the recent technological leap and future research directions.
by Microsoft
9. Stable Diffusion
Variational Autoencoder (VAE), diffusion model (DM)
Noise predictor (U-Net): the noise predictor is the key to image denoising. Stable Diffusion uses a U-Net model to perform the denoising. U-Net is a convolutional neural network originally developed for image segmentation in biomedicine. In particular, Stable Diffusion uses the residual neural network (ResNet) model developed for computer vision.
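To make the latent-space pipeline above concrete, here is a minimal PyTorch sketch of how a VAE and a U-Net-style noise predictor fit together; the toy modules, shapes, 50-step loop, and 0.02 step size are illustrative stand-ins, not the real Stable Diffusion networks or scheduler:

```python
# Minimal sketch of the Stable Diffusion layout described above: a VAE maps
# images to/from a small latent space, and a U-Net-style noise predictor is
# called repeatedly to denoise a latent. Toy modules only, not the real nets.
import torch
import torch.nn as nn

class ToyVAE(nn.Module):
    """Stand-in for the VAE: encode to a 4-channel latent, decode back to pixels."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Conv2d(3, 4, kernel_size=8, stride=8)        # 512x512 -> 64x64 latent
        self.decoder = nn.ConvTranspose2d(4, 3, kernel_size=8, stride=8)

class ToyNoisePredictor(nn.Module):
    """Stand-in for the U-Net: predicts the noise present in a latent."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(4, 4, kernel_size=3, padding=1)
    def forward(self, latent, t):
        # A real U-Net also conditions on the timestep t and the text embedding.
        return self.net(latent)

vae, unet = ToyVAE(), ToyNoisePredictor()

# Reverse diffusion: start from pure noise in latent space and repeatedly
# subtract a fraction of the predicted noise (a crude stand-in for a real
# scheduler such as DDIM/PNDM).
latent = torch.randn(1, 4, 64, 64)
for t in reversed(range(50)):
    with torch.no_grad():
        predicted_noise = unet(latent, t)
    latent = latent - 0.02 * predicted_noise

image = vae.decoder(latent)   # decode the cleaned-up latent back to pixel space
print(image.shape)            # torch.Size([1, 3, 512, 512])
```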