Title: Added n_gpu_layers param by moejay · Pull Request #203 · abetlen/llama-cpp-python · GitHub
Description: This commit ggml-org/llama.cpp@905d87b adds support for some GPU acceleration. This PR adds those parameters to the bindings.
URL: https://github.com/abetlen/llama-cpp-python/pull/203
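
A minimal usage sketch of the parameter this PR exposes, assuming a llama-cpp-python build with GPU support (e.g. cuBLAS) and a local GGML model; the model path and layer count below are placeholders, not part of the PR:

```python
from llama_cpp import Llama

# n_gpu_layers controls how many transformer layers are offloaded to the GPU;
# 0 keeps the whole model on the CPU (previous behaviour).
llm = Llama(
    model_path="./models/7B/ggml-model-q4_0.bin",  # placeholder path
    n_gpu_layers=32,
)

output = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(output["choices"][0]["text"])
```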