Repository: slashml/amd_inference
Branch: main
Commit: a419f91dde2a
Files: 14
Total size: 23.4 KB
Directory structure:
gitextract_lsk01owt/
├── Aptfile
├── Dockerfile
├── LICENSE
├── README.md
├── examples/
│ ├── question_answering.py
│ └── text_generation.py
├── requirements.txt
├── run-docker-amd.sh
├── run_inference.py
└── src/
├── __init__.py
├── amd_setup.py
├── engine.py
├── model.py
└── utils.py
================================================
FILE CONTENTS
================================================
================================================
FILE: Aptfile
================================================
rocm-dev
rocm-libs
rocm-cmake
miopen-hip
rocblas
================================================
FILE: Dockerfile
================================================
# Start from a ROCm base image (ROCm 6.0 on Ubuntu 22.04)
FROM rocm/dev-ubuntu-22.04:6.0-complete
# Set environment variables
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y \
git \
&& rm -rf /var/lib/apt/lists/*
# Install rocm-smi
RUN apt-get update && apt-get install -y rocm-smi
# Set up a new user
RUN useradd -m -s /bin/bash user
USER user
WORKDIR /home/user/app
# Set up Python environment
ENV PATH="/home/user/.local/bin:${PATH}"
RUN python3 -m pip install --user --upgrade pip
COPY --chown=user:user requirements.txt .
RUN pip install --user --no-cache-dir -r requirements.txt
# Copy the application code
COPY --chown=user:user . /home/user/app/
# Set an argument for the model path
ARG MODEL_PATH
ENV MODEL_PATH=${MODEL_PATH}
# Set the entry point to run the inference script
ENTRYPOINT ["python3", "src/run_inference.py"]
================================================
FILE: LICENSE
================================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# ROCm Dependency Notice
This project uses ROCm, which is licensed under the MIT License.
See https://rocm.docs.amd.com/en/latest/about/license.html for details.
================================================
FILE: README.md
================================================
# ⚙️ AMD GPU Inference
[License](https://github.com/bentoml/OpenLLM/blob/main/LICENSE)
[Twitter](https://twitter.com/slash_ml)
[Discord](https://discord.com/invite/EXJkWygF)
This project provides a Docker-based inference engine for running Large Language Models (LLMs) on AMD GPUs. It's designed to work with models from Hugging Face, with a focus on the LLaMA model family.
## Prerequisites
- AMD GPU with ROCm support
- Docker installed on your system
- ROCm installed on your host system (a ROCm 6.x release, matching the `rocm/dev-ubuntu-22.04:6.0-complete` base image and the `rocm6.1` PyTorch wheels this project uses) — you can verify your setup with the snippet below
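To sanity-check that the ROCm build of PyTorch can actually see your GPU, the following minimal sketch uses the same `torch.version.hip` and `torch.cuda` calls this project relies on in `src/utils.py`:

```
import torch

# torch.version.hip is set only on ROCm (HIP) builds of PyTorch;
# the torch.cuda API works for both CUDA and ROCm backends.
print("PyTorch:", torch.__version__)
print("ROCm (HIP) build:", torch.version.hip is not None)
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
else:
    print("No GPU visible to PyTorch - check your ROCm installation.")
```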
## Project Structure
```
amd_inference/
├── Aptfile
├── Dockerfile
├── LICENSE
├── README.md
├── examples/
│   ├── question_answering.py
│   └── text_generation.py
├── requirements.txt
├── run-docker-amd.sh
├── run_inference.py
└── src/
    ├── __init__.py
    ├── amd_setup.py
    ├── engine.py
    ├── model.py
    └── utils.py
```
## Quick Start
1. Clone this repository:
```
git clone https://github.com/slashml/amd_inference.git
cd amd_inference
```
2. Make the run script executable:
```
chmod +x run-docker-amd.sh
```
3. Run the inference engine with a specified model and prompt:
```
./run-docker-amd.sh "meta-llama/Llama-2-7b-chat-hf" "Translate the following English text to French: 'Hello, how are you?'"
```
Replace `"meta-llama/Llama-2-7b-chat-hf"` with the Hugging Face model you want to use, and provide your own prompt.
## Detailed Usage
### Aptfile
The project includes an `Aptfile` listing the ROCm packages the inference engine depends on (`rocm-dev`, `rocm-libs`, `rocm-cmake`, `miopen-hip`, `rocblas`). The provided Dockerfile does not install from this file; it starts from the `rocm/dev-ubuntu-22.04:6.0-complete` base image, which already ships these libraries. The Aptfile mainly serves as a reference for the packages required when running outside that image.
### Building the Docker Image
The `run-docker-amd.sh` script builds the Docker image automatically. If you want to build it manually, use:
```
docker build -t amd-gpu-inference .
```
### Running the Container
The `run-docker-amd.sh` script handles running the container with the necessary AMD GPU flags. If you want to run it manually:
```
docker run --rm -it \
--device=/dev/kfd \
--device=/dev/dri \
--group-add=video \
--cap-add=SYS_PTRACE \
--security-opt seccomp=unconfined \
amd-gpu-inference "model_name" "your prompt here"
```
Replace `"model_name"` with the Hugging Face model you want to use, and `"your prompt here"` with your input text.
## Customization
### Changing the Model
You can use any model available on Hugging Face by specifying its repository name when running the container. For example:
```
./run-docker-amd.sh "facebook/opt-1.3b" "Your prompt here"
```
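You can also call the engine directly from Python, outside Docker. A minimal sketch, assuming the ROCm PyTorch wheels from `requirements.txt` are installed locally and you run it from the repository root (so that `src` is importable):

```
from src.engine import InferenceEngine

# Any causal-LM repository id from Hugging Face works here;
# facebook/opt-1.3b is small enough for a quick smoke test.
engine = InferenceEngine("facebook/opt-1.3b")
result = engine.run_inference("Your prompt here", max_length=100)
print(result["output"])
print(result["inference_time"])
```

Gated models such as the Llama family additionally require accepting the model license on Hugging Face and authenticating (for example with `huggingface-cli login`) before the weights can be downloaded.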
### Modifying the Inference Logic
If you need to change how inference is performed, modify `run_inference.py` (the entry point) or the generation code in `src/engine.py` and `src/model.py`; a sketch of a small change follows below. Remember to rebuild the Docker image after making changes.
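For example, to experiment with different sampling settings you could adapt the entry script along these lines. This is a sketch: `max_length` and `temperature` are the only generation parameters `InferenceEngine.run_inference` currently accepts; anything else (e.g. `top_k`) would need to be threaded through `src/engine.py` and `src/model.py`:

```
from src.engine import InferenceEngine

def run_custom_inference(model_name, prompt):
    # A fixed seed makes sampled output reproducible (see set_seed in src/utils.py).
    engine = InferenceEngine(model_name, seed=123)
    # Lower temperature -> less random output; max_length caps the total token count.
    result = engine.run_inference(prompt, max_length=256, temperature=0.3)
    print(result["output"])
    print(f"Took {result['inference_time']}")

if __name__ == "__main__":
    run_custom_inference("facebook/opt-1.3b", "Summarize what ROCm is in one sentence.")
```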
## Troubleshooting
- Ensure that your AMD GPU drivers and ROCm are correctly installed and configured on your host system.
- If you encounter "out of memory" errors, try using a smaller model or reducing the input/output length (see the memory-check sketch below).
- For model-specific issues, refer to the model's documentation on Hugging Face.
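For the out-of-memory case, the helpers in `src/utils.py` can show how much GPU memory the loaded model already occupies, which helps decide between switching to a smaller model and shortening `max_length`. A minimal sketch:

```
from src.engine import InferenceEngine
from src.utils import print_gpu_utilization

# Try a smaller model first if a large one does not fit.
engine = InferenceEngine("facebook/opt-1.3b")
print_gpu_utilization()  # memory used after loading the weights

# Shorter outputs need less activation and KV-cache memory.
result = engine.run_inference("Hello", max_length=50)
print_gpu_utilization()
```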
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## Acknowledgements
- This project uses the Hugging Face Transformers library.
- ROCm is developed by AMD and licensed under the MIT License. See https://rocm.docs.amd.com/en/latest/about/license.html for details.
For any questions or issues, please open an issue in the GitHub repository.
================================================
FILE: examples/question_answering.py
================================================
from src.engine import InferenceEngine
def question_answering_example():
model_path = "models/llama-2-1b"
engine = InferenceEngine(model_path)
context = "The Great Wall of China is an ancient wall in China. The wall is 6,259 km long and was built to protect the Chinese states and empires against nomadic invasions from the north. It was built from the 3rd century BC to the 17th century AD."
question = "How long is the Great Wall of China?"
prompt = f"Context: {context}\n\nQuestion: {question}\n\nAnswer:"
result = engine.run_inference(prompt, max_length=50)
print("Question Answering Example:")
print(f"Context: {context}")
print(f"Question: {question}")
print(f"Answer: {result['output']}")
print(f"Inference Time: {result['inference_time']}")
if __name__ == "__main__":
question_answering_example()
================================================
FILE: examples/text_generation.py
================================================
from src.engine import InferenceEngine
def text_generation_example():
    # You can change this to any supported LLaMA model on Hugging Face
    model_name = "meta-llama/Meta-Llama-3-8B"
engine = InferenceEngine(model_name)
prompt = "Write a short story about a robot learning to paint:"
result = engine.run_inference(prompt, max_length=200)
print("Text Generation Example:")
print(f"Model: {model_name}")
print(f"Prompt: {prompt}")
print(f"Generated Story: {result['output']}")
print(f"Inference Time: {result['inference_time']}")
if __name__ == "__main__":
text_generation_example()
================================================
FILE: requirements.txt
================================================
--extra-index-url https://download.pytorch.org/whl/rocm6.1
torch==2.4.0+rocm6.1
torchvision==0.19.0+rocm6.1
torchaudio==2.4.0+rocm6.1
transformers==4.37.2
numpy==1.26.3
pandas==2.1.4
scipy==1.11.4
scikit-learn==1.3.2
tqdm==4.66.1
matplotlib==3.8.2
jupyter==1.0.0
ipython==8.20.0
fastapi==0.109.0
uvicorn==0.25.0
pytest==7.4.4
loguru==0.7.2
python-dotenv==1.0.0
================================================
FILE: run-docker-amd.sh
================================================
#!/bin/bash
set -e
# Check if model name and prompt are provided
if [ "$#" -ne 2 ]; then
echo "Usage: $0 <model_name> <prompt>"
echo "Example: $0 meta-llama/Llama-2-7b-chat-hf \"Translate the following English text to French: 'Hello, how are you?'\""
exit 1
fi
MODEL_NAME="$1"
PROMPT="$2"
# Build the Docker image
echo "Building Docker image..."
docker build -t amd-gpu-inference .
# Run the container
echo "Running container with AMD GPU support..."
docker run --rm -it \
--device=/dev/kfd \
--device=/dev/dri \
--group-add=video \
--cap-add=SYS_PTRACE \
--security-opt seccomp=unconfined \
amd-gpu-inference "$MODEL_NAME" "$PROMPT"
echo "Container execution completed."
================================================
FILE: run_inference.py
================================================
import os
import sys
from src.engine import InferenceEngine
from src.utils import print_gpu_utilization, get_gpu_info
def run_inference(model_name, prompt):
    print(f"GPU Info: {get_gpu_info()}")
    print_gpu_utilization()
    print("Initializing inference engine...")
    engine = InferenceEngine(model_name)
result = engine.run_inference(prompt, max_length=200)
print(f"Input: {result['input']}")
print(f"Output: {result['output']}")
print(f"Inference Time: {result['inference_time']}")
print_gpu_utilization()
if __name__ == "__main__":
    if len(sys.argv) >= 3:
        # Usage: python3 run_inference.py <model_name> <prompt...>
        model_name = sys.argv[1]
        prompt = " ".join(sys.argv[2:])
    else:
        model_name = "meta-llama/Llama-2-7b-chat-hf"
        prompt = "Explain the concept of machine learning in simple terms."
    run_inference(model_name, prompt)
================================================
FILE: src/__init__.py
================================================
================================================
FILE: src/amd_setup.py
================================================
import os
import torch
def setup_amd_environment():
if torch.version.hip is None:
print("This is not an AMD GPU environment. No setup needed.")
return
# Set environment variables for ROCm
os.environ['HIP_VISIBLE_DEVICES'] = '0' # Use the first GPU
os.environ['HSA_OVERRIDE_GFX_VERSION'] = '11.0.0' # Updated for newer AMD GPUs
# Check if ROCm is properly set up
try:
assert torch.cuda.is_available()
print(f"ROCm is properly set up. Using GPU: {torch.cuda.get_device_name(0)}")
except AssertionError:
print("Error: ROCm is not properly set up or no AMD GPU is available.")
print("Please ensure that ROCm 6.2 is installed and configured correctly.")
def optimize_for_amd():
if torch.version.hip is None:
return
# Set benchmark mode
torch.backends.cudnn.benchmark = True
# Enable TF32 for improved performance (if supported by the GPU)
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
print("Applied optimizations for AMD GPU with ROCm 6.2.")
# Call these functions when importing this module
setup_amd_environment()
optimize_for_amd()
================================================
FILE: src/engine.py
================================================
import time
import torch
from .model import LlamaModel
from .utils import set_seed, format_time
class InferenceEngine:
def __init__(self, model_path, seed=42):
set_seed(seed)
self.model = LlamaModel(model_path)
self.is_amd_gpu = torch.version.hip is not None
    def run_inference(self, prompt, max_length=100, temperature=0.7):
        # CUDA events measure GPU time on both ROCm (HIP) and CUDA builds of PyTorch;
        # fall back to wall-clock time when no GPU is available.
        use_gpu_events = torch.cuda.is_available()
        if use_gpu_events:
            start_event = torch.cuda.Event(enable_timing=True)
            end_event = torch.cuda.Event(enable_timing=True)
            start_event.record()
        else:
            start_time = time.time()
        output = self.model(prompt, max_length=max_length, temperature=temperature)
        if use_gpu_events:
            end_event.record()
            torch.cuda.synchronize()
            inference_time = start_event.elapsed_time(end_event)  # milliseconds
        else:
            inference_time = (time.time() - start_time) * 1000  # milliseconds
return {
"input": prompt,
"output": output,
"inference_time": format_time(inference_time)
}
def batch_inference(self, prompts, **kwargs):
results = []
for prompt in prompts:
results.append(self.run_inference(prompt, **kwargs))
return results
================================================
FILE: src/model.py
================================================
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
class LlamaModel:
def __init__(self, model_name):
self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if self.device.type == "cuda" and torch.version.hip is not None:
print("Using AMD GPU with ROCm")
elif self.device.type == "cuda":
print("Using NVIDIA GPU")
else:
print("Using CPU")
print(f"Loading model {model_name} from Hugging Face...")
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16).to(self.device)
print("Model loaded successfully.")
def generate(self, prompt, max_length=100, temperature=0.7):
input_ids = self.tokenizer.encode(prompt, return_tensors="pt").to(self.device)
with torch.no_grad():
output = self.model.generate(
input_ids,
max_length=max_length,
temperature=temperature,
num_return_sequences=1,
do_sample=True,
top_p=0.95,
)
generated_text = self.tokenizer.decode(output[0], skip_special_tokens=True)
return generated_text
def __call__(self, prompt, **kwargs):
return self.generate(prompt, **kwargs)
================================================
FILE: src/utils.py
================================================
import random
import torch
import numpy as np
def set_seed(seed):
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
if torch.cuda.is_available():
torch.cuda.manual_seed_all(seed)
def format_time(time_ms):
"""Convert time from milliseconds to a human-readable format."""
if time_ms < 1000:
return f"{time_ms:.2f}ms"
elif time_ms < 60000:
return f"{time_ms/1000:.2f}s"
else:
minutes = int(time_ms / 60000)
seconds = (time_ms % 60000) / 1000
return f"{minutes}m {seconds:.2f}s"
def get_gpu_memory_usage():
"""Get the current GPU memory usage."""
if torch.cuda.is_available():
return torch.cuda.memory_allocated() / 1024**2 # Convert to MB
else:
return 0
def print_gpu_utilization():
"""Print current GPU utilization."""
if torch.cuda.is_available():
if torch.version.hip is not None:
print(f"AMD GPU Memory Usage: {get_gpu_memory_usage():.2f} MB")
else:
print(f"NVIDIA GPU Memory Usage: {get_gpu_memory_usage():.2f} MB")
else:
print("CUDA is not available. Running on CPU.")
def is_amd_gpu():
"""Check if the current GPU is an AMD GPU."""
return torch.cuda.is_available() and torch.version.hip is not None
def get_gpu_info():
"""Get information about the current GPU."""
if not torch.cuda.is_available():
return "No GPU available"
if is_amd_gpu():
return f"AMD GPU: {torch.cuda.get_device_name(0)}"
else:
return f"NVIDIA GPU: {torch.cuda.get_device_name(0)}"
SYMBOL INDEX (19 symbols across 7 files)
FILE: examples/question_answering.py
function question_answering_example (line 3) | def question_answering_example():
FILE: examples/text_generation.py
function text_generation_example (line 3) | def text_generation_example():
FILE: run_inference.py
function run_inference (line 6) | def run_inference(model_name, prompt):
FILE: src/amd_setup.py
function setup_amd_environment (line 4) | def setup_amd_environment():
function optimize_for_amd (line 21) | def optimize_for_amd():
FILE: src/engine.py
class InferenceEngine (line 5) | class InferenceEngine:
method __init__ (line 6) | def __init__(self, model_path, seed=42):
method run_inference (line 11) | def run_inference(self, prompt, max_length=100, temperature=0.7):
method batch_inference (line 38) | def batch_inference(self, prompts, **kwargs):
FILE: src/model.py
class LlamaModel (line 4) | class LlamaModel:
method __init__ (line 5) | def __init__(self, model_name):
method generate (line 19) | def generate(self, prompt, max_length=100, temperature=0.7):
method __call__ (line 35) | def __call__(self, prompt, **kwargs):
FILE: src/utils.py
function set_seed (line 5) | def set_seed(seed):
function format_time (line 12) | def format_time(time_ms):
function get_gpu_memory_usage (line 23) | def get_gpu_memory_usage():
function print_gpu_utilization (line 30) | def print_gpu_utilization():
function is_amd_gpu (line 40) | def is_amd_gpu():
function get_gpu_info (line 44) | def get_gpu_info():