  apt -y install \
    intel-basekit intel-aikit intel-oneapi-pytorch intel-oneapi-tensorflow \
    intel-opencl-icd intel-level-zero-gpu level-zero \
    intel-media-va-driver-non-free libmfx1 libmfxgen1 libvpl2 \
    libegl-mesa0 libegl1-mesa libegl1-mesa-dev libgbm1 libgl1-mesa-dev libgl1-mesa-dri \
    libglapi-mesa libgles2-mesa-dev libglx-mesa0 libigdgmm12 libxatracker2 mesa-va-drivers \
    mesa-vdpau-drivers mesa-vulkan-drivers va-driver-all vainfo hwinfo clinfo \
    libigc-dev intel-igc-cm libigdfcl-dev libigfxcmrt-dev \
    intel-fw-gpu intel-i915-dkms xpu-smi
A reboot is required.
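Before going further, it is worth a quick check that the driver stack actually sees the card after the reboot. This is not from the original notes; it is a minimal sanity-check sketch that only assumes clinfo and xpu-smi (installed above) ended up on the PATH.

  # Post-reboot sanity check (sketch; assumes the packages above installed cleanly)
  ls /dev/dri/        # the A770 should appear as card*/renderD* nodes
  clinfo -l           # short OpenCL platform/device listing
  xpu-smi discovery   # Intel GPU inventory from the xpu-smi package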
Holy crap this takes a long time. But now I can inst--hang on, I'm not sure why this is necessary:
  conda activate textgen
  conda install intel-extension-for-pytorch=2.1.10 pytorch=2.1.0 -c intel -c conda-forge
Holy crap this takes a long time. IPEX and pytorch weren't…
  python -c "…"
  2.1.0a0+cxx11.abi
  2.1.10+xpu
  [0]: _DeviceProperties(name='…
Interestingly, OpenCL sees both the CPU and the A770:

  (textgen) root@sadness:…
  Platform #0: Intel(R) OpenCL
    `-- Device #0: Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz
  Platform #1: Intel(R) OpenCL Graphics
    `-- Device #0: Intel(R) Arc(TM) A770 Graphics
And llama.cpp... I have no explanation for this output.

  GGML_SYCL_DEBUG=0 …
  ggml_init_sycl: …
  ggml_init_sycl: …
  found 3 SYCL devices:
    Device 0: Intel(R) Arc(TM) A770 Graphics, max compute_units 512, max work group size 1024, max sub group size 32, global mem size 255012864
    Device 1: Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz, max compute_units 8, max work group size 8192, max sub group size 64, global mem size 33517965312
    Device 2: Intel(R) Arc(TM) A770 Graphics, max compute_units 512, max work group size 1024, max sub group size 32, global mem size 255012864
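One plausible reading, and it is only an assumption here: the runtime exposes the A770 through the Level Zero backend and again through OpenCL, plus the CPU's OpenCL device, which would add up to three SYCL devices. If so, the standard oneAPI ONEAPI_DEVICE_SELECTOR variable can narrow what SYCL programs (llama.cpp included) are allowed to see. The binary path, model name, and flags below are illustrative, not the exact command used on this page.

  # Expose only the Level Zero view of the first GPU to SYCL programs.
  export ONEAPI_DEVICE_SELECTOR=level_zero:0
  # Illustrative llama.cpp invocation; adjust the path and model to the local build.
  ./build/bin/main -m models/model.gguf -ngl 99 -p "hello"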
====llama-cpp-python====
  pip install rich accelerate gradio==3.50.* markdown transformers datasets peft
Actually using requirements.txt creates a conflict. I haven't…
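If the conflict is the obvious one, and that is an assumption (requirements.txt pinning CUDA builds of torch that would replace the XPU build installed earlier), one workaround is to install everything except the torch pins:

  # Sketch, not from the original notes: drop the torch pins so pip keeps the XPU build,
  # then install the rest of Ooba's requirements.
  grep -viE '^torch' requirements.txt > requirements-xpu.txt
  pip install -r requirements-xpu.txt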
=====State=====
I can launch Ooba and it behaves as expected: