i can connect without error, but it so slow.. · Issue #17 · mmmmmm44/VTuber-Python-Unity
https://github.com/mmmmmm44/VTuber-Python-Unity/issues/17
{"@context":"https://schema.org","@type":"DiscussionForumPosting","headline":"i can connect without error, but it so slow..","articleBody":"i tryed print log. so, I knew where it slowed down.\r\n\r\n\r\n\r\ni started 5:50:00\r\nbut that is time for opened window.\r\n\r\nthis is my main.py code.\r\ni changed something.\r\n\r\noriginal code and my code take too long to open the camera. (2min 28sec)\r\n\r\n```\r\n\"\"\"\r\nMain program to run the detection and TCP\r\n\"\"\"\r\n\r\nfrom argparse import ArgumentParser\r\nimport cv2\r\nimport mediapipe as mp\r\nimport numpy as np\r\n\r\n# for TCP connection with unity\r\nimport socket\r\n\r\n# face detection and facial landmark\r\nfrom facial_landmark import FaceMeshDetector\r\n\r\n# pose estimation and stablization\r\nfrom pose_estimator import PoseEstimator\r\nfrom stabilizer import Stabilizer\r\n\r\n# Miscellaneous detections (eyes/ mouth...)\r\nfrom facial_features import FacialFeatures, Eyes\r\n\r\nimport sys\r\n\r\n# global variable\r\nport = 5066 # have to be same as unity\r\n\r\n# init TCP connection with unity\r\n# return the socket connected\r\ndef init_TCP():\r\n port = args.port\r\n\r\n # '127.0.0.1' = 'localhost' = your computer internal data transmission IP\r\n # address = ('127.0.0.1', port)\r\n # address = ('121.160.178.145', port)\r\n address = ('172.30.1.31', port)\r\n # address = ('192.168.0.107', port)\r\n\r\n try:\r\n s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\r\n s.connect(address)\r\n # print(socket.gethostbyname(socket.gethostname()) + \"::\" + str(port))\r\n print(\"Connected to address:\", socket.gethostbyname(socket.gethostname()) + \":\" + str(port))\r\n return s\r\n except OSError as e:\r\n print(\"Error while connecting :: %s\" % e)\r\n \r\n # quit the script if connection fails (e.g. Unity server side quits suddenly)\r\n sys.exit()\r\n\r\n # s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\r\n # # print(socket.gethostbyname(socket.gethostname()))\r\n # s.connect(address)\r\n # return s\r\n\r\ndef send_info_to_unity(s, args):\r\n msg = '%.4f ' * len(args) % args\r\n\r\n try:\r\n s.send(bytes(msg, \"utf-8\"))\r\n except socket.error as e:\r\n print(\"error while sending :: \" + str(e))\r\n\r\n # quit the script if connection fails (e.g. 
Unity server side quits suddenly)\r\n sys.exit()\r\n\r\ndef print_debug_msg(args):\r\n msg = '%.4f ' * len(args) % args\r\n print(msg)\r\n\r\ndef main():\r\n\r\n print(\"start\")\r\n # use internal webcam/ USB camera\r\n cap = cv2.VideoCapture(1)\r\n print(\"cap include success\")\r\n\r\n # IP cam (android only), with the app \"IP Webcam\"\r\n # url = 'http://192.168.0.102:4747/video'\r\n # url = 'https://192.168.0.102:8080/video'\r\n # cap = cv2.VideoCapture(url)\r\n\r\n # Facemesh\r\n detector = FaceMeshDetector()\r\n print(\"detector include success\")\r\n\r\n # get a sample frame for pose estimation img\r\n success, img = cap.read()\r\n print(\"cap read success\")\r\n\r\n # Pose estimation related\r\n pose_estimator = PoseEstimator((img.shape[0], img.shape[1]))\r\n print(\"pose_estimator include success\")\r\n image_points = np.zeros((pose_estimator.model_points_full.shape[0], 2))\r\n print(\"image_points include success\")\r\n\r\n # extra 10 points due to new attention model (in iris detection)\r\n iris_image_points = np.zeros((10, 2))\r\n print(\"iris_image_points include success\")\r\n\r\n # Introduce scalar stabilizers for pose.\r\n pose_stabilizers = [Stabilizer(\r\n state_num=2,\r\n measure_num=1,\r\n cov_process=0.1,\r\n cov_measure=0.1) for _ in range(6)]\r\n print(\"pose_stabilizers include success\")\r\n\r\n # for eyes\r\n eyes_stabilizers = [Stabilizer(\r\n state_num=2,\r\n measure_num=1,\r\n cov_process=0.1,\r\n cov_measure=0.1) for _ in range(6)]\r\n print(\"eyes_stabilizers include success\")\r\n\r\n # for mouth_dist\r\n mouth_dist_stabilizer = Stabilizer(\r\n state_num=2,\r\n measure_num=1,\r\n cov_process=0.1,\r\n cov_measure=0.1\r\n )\r\n print(\"mouth_dist_stabilizer include success\")\r\n\r\n\r\n # Initialize TCP connection\r\n if args.connect:\r\n socket = init_TCP()\r\n print(\"socket init success\")\r\n\r\n while cap.isOpened():\r\n success, img = cap.read()\r\n\r\n if not success:\r\n print(\"Ignoring empty camera frame.\")\r\n continue\r\n\r\n # Pose estimation by 3 steps:\r\n # 1. detect face;\r\n # 2. detect landmarks;\r\n # 3. 
estimate pose\r\n\r\n # first two steps\r\n img_facemesh, faces = detector.findFaceMesh(img)\r\n\r\n # flip the input image so that it matches the facemesh stuff\r\n img = cv2.flip(img, 1)\r\n\r\n # if there is any face detected\r\n if faces:\r\n # only get the first face\r\n for i in range(len(image_points)):\r\n image_points[i, 0] = faces[0][i][0]\r\n image_points[i, 1] = faces[0][i][1]\r\n \r\n # for refined landmarks around iris\r\n for j in range(len(iris_image_points)):\r\n iris_image_points[j, 0] = faces[0][j + 468][0]\r\n iris_image_points[j, 1] = faces[0][j + 468][1]\r\n\r\n # The third step: pose estimation\r\n # pose: [[rvec], [tvec]]\r\n pose = pose_estimator.solve_pose_by_all_points(image_points)\r\n\r\n x_ratio_left, y_ratio_left = FacialFeatures.detect_iris(image_points, iris_image_points, Eyes.LEFT)\r\n x_ratio_right, y_ratio_right = FacialFeatures.detect_iris(image_points, iris_image_points, Eyes.RIGHT)\r\n\r\n\r\n ear_left = FacialFeatures.eye_aspect_ratio(image_points, Eyes.LEFT)\r\n ear_right = FacialFeatures.eye_aspect_ratio(image_points, Eyes.RIGHT)\r\n\r\n pose_eye = [ear_left, ear_right, x_ratio_left, y_ratio_left, x_ratio_right, y_ratio_right]\r\n\r\n mar = FacialFeatures.mouth_aspect_ratio(image_points)\r\n mouth_distance = FacialFeatures.mouth_distance(image_points)\r\n\r\n # print(\"left eye: %.2f, %.2f\" % (x_ratio_left, y_ratio_left))\r\n # print(\"right eye: %.2f, %.2f\" % (x_ratio_right, y_ratio_right))\r\n\r\n # print(\"rvec (y) = (%f): \" % (pose[0][1]))\r\n # print(\"rvec (x, y, z) = (%f, %f, %f): \" % (pose[0][0], pose[0][1], pose[0][2]))\r\n # print(\"tvec (x, y, z) = (%f, %f, %f): \" % (pose[1][0], pose[1][1], pose[1][2]))\r\n\r\n # Stabilize the pose.\r\n steady_pose = []\r\n pose_np = np.array(pose).flatten()\r\n\r\n for value, ps_stb in zip(pose_np, pose_stabilizers):\r\n ps_stb.update([value])\r\n steady_pose.append(ps_stb.state[0])\r\n\r\n steady_pose = np.reshape(steady_pose, (-1, 3))\r\n\r\n # stabilize the eyes value\r\n steady_pose_eye = []\r\n for value, ps_stb in zip(pose_eye, eyes_stabilizers):\r\n ps_stb.update([value])\r\n steady_pose_eye.append(ps_stb.state[0])\r\n\r\n mouth_dist_stabilizer.update([mouth_distance])\r\n steady_mouth_dist = mouth_dist_stabilizer.state[0]\r\n\r\n # uncomment the rvec line to check the raw values\r\n # print(\"rvec steady (x, y, z) = (%f, %f, %f): \" % (steady_pose[0][0], steady_pose[0][1], steady_pose[0][2]))\r\n # print(\"tvec steady (x, y, z) = (%f, %f, %f): \" % (steady_pose[1][0], steady_pose[1][1], steady_pose[1][2]))\r\n\r\n # calculate the roll/ pitch/ yaw\r\n # roll: +ve when the axis pointing upward\r\n # pitch: +ve when we look upward\r\n # yaw: +ve when we look left\r\n roll = np.clip(np.degrees(steady_pose[0][1]), -90, 90)\r\n pitch = np.clip(-(180 + np.degrees(steady_pose[0][0])), -90, 90)\r\n yaw = np.clip(np.degrees(steady_pose[0][2]), -90, 90)\r\n\r\n # print(\"Roll: %.2f, Pitch: %.2f, Yaw: %.2f\" % (roll, pitch, yaw))\r\n # print(\"left eye: %.2f, %.2f; right eye %.2f, %.2f\"\r\n # % (steady_pose_eye[0], steady_pose_eye[1], steady_pose_eye[2], steady_pose_eye[3]))\r\n # print(\"EAR_LEFT: %.2f; EAR_RIGHT: %.2f\" % (ear_left, ear_right))\r\n # print(\"MAR: %.2f; Mouth Distance: %.2f\" % (mar, steady_mouth_dist))\r\n\r\n # send info to unity\r\n if args.connect:\r\n\r\n # for sending to live2d model (Hiyori)\r\n send_info_to_unity(socket,\r\n (roll, pitch, yaw,\r\n ear_left, ear_right, x_ratio_left, y_ratio_left, x_ratio_right, y_ratio_right,\r\n mar, mouth_distance)\r\n )\r\n\r\n # print the 
sent values in the terminal\r\n if args.debug:\r\n print_debug_msg((roll, pitch, yaw,\r\n ear_left, ear_right, x_ratio_left, y_ratio_left, x_ratio_right, y_ratio_right,\r\n mar, mouth_distance))\r\n\r\n\r\n # pose_estimator.draw_annotation_box(img, pose[0], pose[1], color=(255, 128, 128))\r\n\r\n # pose_estimator.draw_axis(img, pose[0], pose[1])\r\n\r\n pose_estimator.draw_axes(img_facemesh, steady_pose[0], steady_pose[1])\r\n\r\n else:\r\n # reset our pose estimator\r\n pose_estimator = PoseEstimator((img_facemesh.shape[0], img_facemesh.shape[1]))\r\n\r\n cv2.imshow('Facial landmark', img_facemesh)\r\n \r\n # press \"q\" to leave\r\n if cv2.waitKey(1) \u0026 0xFF == ord('q'):\r\n break\r\n\r\n cap.release()\r\n\r\n\r\nif __name__ == \"__main__\":\r\n\r\n parser = ArgumentParser()\r\n\r\n parser.add_argument(\"--connect\", action=\"store_true\",\r\n help=\"connect to unity character\",\r\n default=False)\r\n\r\n parser.add_argument(\"--port\", type=int, \r\n help=\"specify the port of the connection to unity. Have to be the same as in Unity\", \r\n default=5066)\r\n\r\n parser.add_argument(\"--cam\", type=int,\r\n help=\"specify the camera number if you have multiple cameras\",\r\n default=1)\r\n\r\n parser.add_argument(\"--debug\", action=\"store_true\",\r\n help=\"showing raw values of detection in the terminal\",\r\n default=False)\r\n\r\n args = parser.parse_args()\r\n\r\n # demo code\r\n main()\r\n\r\n```\r\n\r\nWhat should I do to make it faster?","author":{"url":"https://github.com/sy-project","@type":"Person","name":"sy-project"},"datePublished":"2022-08-03T09:02:08.000Z","interactionStatistic":{"@type":"InteractionCounter","interactionType":"https://schema.org/CommentAction","userInteractionCount":0},"url":"https://github.com/17/VTuber-Python-Unity/issues/17"}
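A multi-minute stall in `cv2.VideoCapture()` is usually a capture-backend or driver issue rather than anything in this script. Below is a minimal sketch, assuming a Windows machine where OpenCV's default MSMF backend is the slow path (the OS is an assumption; the issue does not state it). It times how long each backend takes to open the camera, trying DirectShow (`cv2.CAP_DSHOW`) before the default:

```python
# Sketch: time the camera open and try an explicit OpenCV capture backend.
# cv2.CAP_DSHOW is DirectShow (Windows-only); cv2.CAP_ANY is the default.
# Whether this helps depends on OS and camera driver, so treat it as an experiment.
import time
import cv2

def open_camera(index: int) -> cv2.VideoCapture:
    """Try DirectShow first, then fall back to the default backend."""
    for backend in (cv2.CAP_DSHOW, cv2.CAP_ANY):
        t0 = time.perf_counter()
        cap = cv2.VideoCapture(index, backend)
        elapsed = time.perf_counter() - t0
        print("backend %d: opened=%s in %.2fs" % (backend, cap.isOpened(), elapsed))
        if cap.isOpened():
            return cap
        cap.release()
    raise RuntimeError("could not open camera %d" % index)

if __name__ == "__main__":
    cap = open_camera(1)  # same camera index the issue's main.py uses
    cap.release()
```

If the DirectShow open turns out to be fast, the same two-argument call can replace `cv2.VideoCapture(args.cam)` in main.py; requesting a smaller frame size with `cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)` and `cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)` is another low-risk knob for per-frame speed, though it does not affect the open delay itself.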