Title: convert-hf-to-ggml.py CUDA out of memory · Issue #35 · bigcode-project/starcoder.cpp · GitHub
URL: https://github.com/bigcode-project/starcoder.cpp/issues/35
Posted by Alex20129 on 2024-03-01.

I've tried to convert a model from HF to GGML format:

```
python3 convert-hf-to-ggml.py ../starcoderbase_int8
```

and got an error:

```
Loading model: ../starcoderbase_int8
Loading checkpoint shards: ...
Traceback (most recent call last):
  File "/home/alex/starcoder/starcoder.cpp/convert-hf-to-ggml.py", line 58, in <module>
    model = AutoModelForCausalLM.from_pretrained(model_name, config=config, torch_dtype=torch.float16 if use_f16 else torch.float32, low_cpu_mem_usage=True, trust_remote_code=True, offload_state_dict=True)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py", line 493, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 2901, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 3258, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
  File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 725, in _load_state_dict_into_meta_model
    set_module_quantized_tensor_to_device(model, param_name, param_device, value=param, fp16_statistics=fp16_statistics)
  File "/usr/local/lib/python3.10/dist-packages/transformers/utils/bitsandbytes.py", line 109, in set_module_quantized_tensor_to_device
    new_value = value.to(device)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 576.00 MiB (GPU 0; 10.90 GiB total capacity; 9.21 GiB already allocated; 568.69 MiB free; 9.74 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```

Next, I tried to force it to run on the CPU:

```
export CUDA_VISIBLE_DEVICES=""
python3 convert-hf-to-ggml.py ../starcoderbase_int8
```

Then I got this:

```
Loading model: ../starcoderbase_int8
Traceback (most recent call last):
  File "/home/alex/starcoder/starcoder.cpp/convert-hf-to-ggml.py", line 58, in <module>
    model = AutoModelForCausalLM.from_pretrained(model_name, config=config, torch_dtype=torch.float16 if use_f16 else torch.float32, low_cpu_mem_usage=True, trust_remote_code=True, offload_state_dict=True)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py", line 493, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 2370, in from_pretrained
    raise RuntimeError("No GPU found. A GPU is needed for quantization.")
RuntimeError: No GPU found. A GPU is needed for quantization.
```

For me, the main reason to go with the GGML implementation is that I can't fit the model on my GPU. I thought I could perform both the conversion and the inference using only the CPU and system RAM. Am I doing something specific wrong, or did I get it wrong in general?
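A likely explanation, inferred from the traceback rather than confirmed by the repo: the directory name `../starcoderbase_int8` and the frames in `transformers/utils/bitsandbytes.py` suggest the checkpoint was saved with a bitsandbytes `quantization_config` in its `config.json`. When `from_pretrained` sees that key, it routes loading through bitsandbytes, which insists on placing weights on a CUDA device, producing the OOM on an 11 GiB GPU and the "No GPU found" error once CUDA is hidden. A minimal, self-contained sketch of that diagnosis (the fabricated `config.json` below is a guess at what an int8 save looks like, not the actual file):

```python
import json
import os
import tempfile

def is_bnb_quantized(model_dir: str) -> bool:
    """Return True if the checkpoint's config.json carries a
    quantization_config entry, which makes transformers load it
    through bitsandbytes (and therefore require a GPU)."""
    with open(os.path.join(model_dir, "config.json")) as f:
        return "quantization_config" in json.load(f)

# Demo with a fabricated config.json mimicking an int8-saved checkpoint.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "config.json"), "w") as f:
        json.dump({"model_type": "gpt_bigcode",
                   "quantization_config": {"load_in_8bit": True}}, f)
    result = is_bnb_quantized(d)

print(result)  # → True for a bitsandbytes-quantized save
```

If this check comes back `True` for `../starcoderbase_int8`, the checkpoint itself is the problem: its weights exist only in bitsandbytes int8 form, which cannot be loaded on CPU.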
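To the closing question: CPU-only conversion and inference is exactly what the GGML route is for, but the input should be the original unquantized HF checkpoint, not a bitsandbytes int8 save. A hedged sketch of that workflow — the source path, output filenames, and the `quantize` tool's arguments are assumptions based on the ggml example convention this repo follows, not verified against this repo:

```shell
# Assumption: ../starcoderbase holds the original fp16/fp32 HF checkpoint
# (e.g. downloaded from bigcode/starcoderbase), NOT the int8 save.
export CUDA_VISIBLE_DEVICES=""   # conversion itself never needs a GPU
python3 convert-hf-to-ggml.py ../starcoderbase

# Quantization then happens on the GGML side, also CPU-only, using the
# repo's quantize binary (built by `make`). The filenames and the trailing
# type id ("2" = q4_0 in the ggml examples) are assumptions.
./quantize ../starcoderbase-ggml.bin ../starcoderbase-ggml-q4_0.bin 2
```

The key point is that GGML does its own quantization after conversion, entirely in system RAM, so the GPU-bound bitsandbytes step never runs.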