Search results for "opencl": 1 - 3 of 3

  • GitHub - ggerganov/llama.cpp: LLM inference in C/C++

    The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide variety of hardware - locally and in the cloud.

      • Plain C/C++ implementation without any dependencies
      • Apple silicon is a first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks
      • AVX, AVX2 and AVX512 support for x86 architectures
      • 1.5-bit, 2-bit, 3-bit, 4-bit, 5-bit,

  • D3D12 GPU Video acceleration in the Windows Subsystem for Linux now available!

    This list is illustrative of a GPU and vendor driver supporting all possible entrypoints/profiles using mesa 22.3. The actual capabilities reported in vaQueryConfigProfiles, vaQueryConfigEntrypoints, vaQueryConfigAttributes, vaQueryVideoProcPipelineCaps and others are dynamically queried from the underlying GPU and might vary between platforms and driver versions. The vainfo utility will list the

  • OpenGL/OpenCL still run on Apple Silicon Macs, but they remain deprecated and migration to Metal is recommended.
