Caching the pipeline outputs · Issue #3 · TensorStack-AI/TensorStack
{"@context":"https://schema.org","@type":"DiscussionForumPosting","headline":"Caching the pipeline outputs","articleBody":"I'd like to get your thoughts on how to cache the various outputs from the different stages of the pipeline, primarily the output from the text encoder. That step seems to be a substantial amount of time for a generation. But from a users view, I can imagine a scenario where they'd want to regenerate with the same prompt.\nI haven't given the implementation part much thought, but probably default to some kind of interface and in memory storage.","author":{"url":"https://github.com/jdluzen","@type":"Person","name":"jdluzen"},"datePublished":"2025-11-18T03:58:56.000Z","interactionStatistic":{"@type":"InteractionCounter","interactionType":"https://schema.org/CommentAction","userInteractionCount":3},"url":"https://github.com/3/TensorStack/issues/3"}