Title: Reading variable sized data (a small entry followed by a large entry) leads to IndexOutOfBoundsException · Issue #183 · lmdbjava/lmdbjava · GitHub
URL: https://github.com/lmdbjava/lmdbjava/issues/183
{"@context":"https://schema.org","@type":"DiscussionForumPosting","headline":"Reading variable sized data (a small entry followed by a large entry) leads to IndexOutOfBoundsException","articleBody":"This is a bit of a doozy.\r\n\r\nWe're in the process of moving an application to lmdbjava. We observe that when running the tests the following occurs:\r\n\r\n1. If the tests are all run in the same JVM process then we consistently get very strange errors related to ByteBuf validation constraints:\r\n\r\nFor example:\r\n\r\n```\r\nCaused by: java.lang.IndexOutOfBoundsException: readerIndex: 1297, writerIndex: 1095 (expected: 0 \u003c= readerIndex \u003c= writerIndex \u003c= capacity(1095))\r\n at io.netty.buffer.AbstractByteBuf.checkIndexBounds(AbstractByteBuf.java:112)\r\n at io.netty.buffer.AbstractByteBuf.writerIndex(AbstractByteBuf.java:135)\r\n at org.lmdbjava.ByteBufProxy.out(ByteBufProxy.java:166)\r\n at org.lmdbjava.ByteBufProxy.out(ByteBufProxy.java:41)\r\n at org.lmdbjava.KeyVal.valOut(KeyVal.java:134)\r\n at org.lmdbjava.Cursor.seek(Cursor.java:377)\r\n at org.lmdbjava.Cursor.first(Cursor.java:126)\r\n at org.lmdbjava.CursorIterable.executeCursorOp(CursorIterable.java:125)\r\n at org.lmdbjava.CursorIterable.update(CursorIterable.java:172)\r\n at org.lmdbjava.CursorIterable.access$100(CursorIterable.java:52)\r\n at org.lmdbjava.CursorIterable$1.hasNext(CursorIterable.java:99)\r\n```\r\n\r\nThis occurs using the very standard code to iterate over all the keys in a table:\r\n\r\n```java\r\ntry (final CursorIterable\u003cByteBuf\u003e cursor = table.iterate(txn)) {\r\n\t\t\t\r\n\t\t\tfor (final KeyVal\u003cByteBuf\u003e kv : cursor) {\r\n\t\t\t\tfinal ByteBuf keyBuffer = kv.key();\r\n\t\t\t\tfinal ByteBuf valueBuffer = kv.val();\r\n```\r\n\r\n2. If the maven build is run using -DforkCount=1C -DreuseForks=false then all the tests run perfectly fine. [These system properties tell maven to run each test class in its own JVM](https://www.baeldung.com/maven-junit-parallel-tests) and it eliminates all errors in the build at the expense of a much slower build.\r\n\r\nAdditionally if we run a single test using the -Dtest=$TestName syntax the test passes fine.\r\n\r\nClearly there is some erorr related to static state that is leading to inter-test dependencies. If we eliminate static state by running each Test in its own JVM all the tests pass. If we run the tests run the tests in the same JVM fails.\r\n\r\nI've tried to dig into this and get some idea of what's happening and I doi believe the problem is related to the ByteBufProxy class. There are some red flags in this class:\r\n\r\n1. This class maintains significant static state in a threadlocal BUFFERS cache. There seems to be no way to reset or clear this BUFFERS. As far I can tell the cache persists even if you call Env.close() and it might explain why we're getting ByteBufs whose readIndex is larger than their writeIndex.\r\n\r\n2. The ByteBufProxy class is also hardcoded to use PooledByteBufAllocator.DEFAULT. We've had problems in the past with multiple components (netty, netty extensions) all mucking around with PooledByteBufAllocator.DEFAULT. We tend to **avoid** using PooledByteBufAllocator.DEFAULT and instead having each component use its own explicit PooledByteBufAllocator. Unfortunately there's no way to configure ByteBufProxy to use a different ByteBufAllocator.\r\n\r\n3. The ByteBufProxy class makes several questionable assumptions. 
Clearly there is some error related to static state that is leading to inter-test dependencies. If we eliminate static state by running each test class in its own JVM, all the tests pass. If we run the tests in the same JVM, they fail.

I've tried to dig into this to get some idea of what's happening, and I do believe the problem is related to the ByteBufProxy class. There are some red flags in this class:

1. The class maintains significant static state in a thread-local BUFFERS cache. There seems to be no way to reset or clear BUFFERS. As far as I can tell the cache persists even after you call Env.close(), which might explain why we're getting ByteBufs whose readerIndex is larger than their writerIndex.

2. The ByteBufProxy class is also hardcoded to use PooledByteBufAllocator.DEFAULT. We've had problems in the past with multiple components (Netty, Netty extensions) all mucking around with PooledByteBufAllocator.DEFAULT. We tend to **avoid** using PooledByteBufAllocator.DEFAULT and instead have each component use its own explicit PooledByteBufAllocator. Unfortunately, there's no way to configure ByteBufProxy to use a different ByteBufAllocator (see the sketch after this list for one way around the Netty proxy entirely).

3. The ByteBufProxy class makes several questionable assumptions: ByteBufs are allocated but never released, and no attempt is made to clear() ByteBufs returned from the pool.
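Not part of the original report: one possible way to avoid the hardcoded Netty allocator entirely, assuming switching buffer types is acceptable for the application, is to build the environment with lmdbjava's stock java.nio ByteBuffer proxy instead of ByteBufProxy. A minimal sketch (the path and database name are hypothetical):

```java
import java.io.File;
import java.nio.ByteBuffer;
import org.lmdbjava.ByteBufferProxy;
import org.lmdbjava.Dbi;
import org.lmdbjava.DbiFlags;
import org.lmdbjava.Env;

public class ByteBufferEnvSketch {
    public static void main(String[] args) {
        // Hypothetical on-disk location, purely for illustration.
        final File path = new File("/tmp/lmdb-bytebuffer-sketch");
        path.mkdirs();

        // Env.create(BufferProxy) selects the buffer implementation up front;
        // the built-in ByteBuffer proxy does not touch Netty's
        // PooledByteBufAllocator or the ByteBufProxy BUFFERS cache.
        try (Env<ByteBuffer> env = Env.create(ByteBufferProxy.PROXY_OPTIMAL)
                .setMapSize(10_485_760)
                .setMaxDbs(1)
                .open(path)) {
            final Dbi<ByteBuffer> db = env.openDbi("test", DbiFlags.MDB_CREATE);
            // ... iterate with CursorIterable<ByteBuffer> instead of ByteBuf ...
        }
    }
}
```

This sidesteps rather than fixes the proxy's static state; the underlying concerns about the BUFFERS cache and the hardcoded allocator remain as described above.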