Title: 一文囊括序列预测方法(源码) (A Round-up of Sequence Prediction Methods, with Source Code) · Issue #7 · aialgorithm/Blog · GitHub
URL: https://github.com/aialgorithm/Blog/issues/7
{"@context":"https://schema.org","@type":"DiscussionForumPosting","headline":"一文囊括序列预测方法(源码)","articleBody":"\r\n## 1 时间序列 \r\n时间序列是指将某种现象某一个统计指标在不同时间上的各个数值,按时间先后顺序排列而形成的序列。典型的时间序列问题,例如股价预测、制造业中的电力预测、传统消费品行业的销售预测、客户日活跃量预测等等,本文以客户日活跃量预测为例。\r\n\r\n\r\n\r\n\r\n\r\n## 2 预测方法\r\n时间序列的预测方法可以归纳为三类:\r\n1、时间序列基本规则法-周期因子法; \r\n2、传统序列预测方法,如均值回归、ARIMA等线性模型;\r\n3、机器学习方法,将序列预测转为有监督建模预测,如XGBOOST集成学习方法,LSTM长短期记忆神经网络模型。\r\n### 2.1 周期因子法\r\n当序列存在周期性时,通过加工出数据的周期性特征预测。这种比较麻烦,简述下流程不做展开。\r\n1、计算周期的[因子 。](https://www.jianshu.com/p/31e20f00c26f?spm=5176.12282029.0.0.36241491UUhnZE)\r\n2、计算base \r\n3、预测结果=周期因子*base\r\n\r\n### 2.2 Ariama\r\nARIMA模型,差分整合移动平均自回归模型,又称整合移动平均自回归模型\r\n(移动也可称作滑动),时间序列预测分析方法之一。ARIMA(p,d,q)中,\r\nAR是\"自回归\",p为自回归项数;MA为\"滑动平均\",q为滑动平均项数,\r\nd为使之成为平稳序列所做的差分次数(阶数)。\r\n建模的主要步骤是:\r\n1、数据需要先做平稳法处理:采用(对数变换或差分)平稳化后,并检验符合平稳非白噪声序列;\r\n2、观察PACF和ACF截尾/信息准则定阶确定(p, q);\r\n3、 建立ARIMA(p,d,q)模型做预测;\r\n\r\n\r\n```python\r\n\"\"\"\r\nARIMA( 差分自回归移动平均模型)预测DAU指标\r\n\"\"\"\r\nimport numpy as np\r\nimport pandas as pd\r\nimport seaborn as sns\r\nimport scipy\r\nimport matplotlib.pyplot as plt\r\nimport statsmodels.api as sm\r\nimport warnings\r\nfrom math import sqrt\r\nfrom pandas import Series\r\nfrom sklearn.metrics import mean_squared_error\r\nfrom statsmodels.graphics.tsaplots import acf, pacf,plot_acf, plot_pacf\r\nfrom statsmodels.stats.diagnostic import acorr_ljungbox\r\nfrom statsmodels.tsa.arima_model import ARIMA, ARMA, ARIMAResults\r\nfrom statsmodels.tsa.stattools import adfuller as ADF\r\nfrom keras.models import load_model\r\n\r\nwarnings.filterwarnings(\"ignore\")\r\n\r\ndf = pd.read_csv('DAU.csv')\r\ndau = df[\"dau\"]\r\n\r\n\r\n# 折线图\r\ndf.plot()\r\nplt.show()\r\n\r\n# 箱线图\r\nax = sns.boxplot(y=dau)\r\nplt.show()\r\n\r\n\r\n# pearsonr时间相关性\r\n# a = df['dau']\r\n# b = df.index\r\n# print(scipy.stats.pearsonr(a,b))\r\n# 自相关性\r\nplot_acf(dau)\r\nplot_pacf(dau)\r\nplt.show()\r\nprint('raw序列的ADF')\r\n# p值大于0.05为非平衡时间序列\r\nprint(ADF(dau))\r\n\r\n#对数变换平稳处理 \r\n# dau_log = np.log(dau)\r\n\r\n# dau_log = dau_log.ewm(com=0.5, span=12).mean()\r\n# plot_acf(dau_log)\r\n# plot_pacf(dau_log)\r\n# plt.show()\r\n# print('log序列的ADF')\r\n# print(ADF(dau_log))\r\n\r\n# print('log序列的白噪声检验结果')\r\n# # 大于0.05为白噪声序列\r\n# print(acorr_ljungbox(dau_log, lags=1))\r\n\r\n\r\n#差分平稳处理\r\ndiff_1_df = dau.diff(1).dropna(how=any)\r\ndiff_1_df = diff_1_df\r\ndiff_1_df.plot()\r\nplot_acf(diff_1_df)\r\nplot_pacf(diff_1_df)\r\nplt.show()\r\n\r\nprint('差分序列的ADF')\r\nprint(ADF(diff_1_df))\r\n\r\nprint('差分序列的白噪声检验结果')\r\n# 大于0.05为白噪声序列\r\nprint(acorr_ljungbox(diff_1_df, lags=1))\r\n\r\n\r\n# # 在满足检验条件后,给出最优p q值 ()\r\nr, rac, Q = sm.tsa.acf(diff_1_df, qstat=True)\r\nprac = pacf(diff_1_df, method='ywmle')\r\ntable_data = np.c_[range(1,len(r)), r[1:], rac, prac[1:len(rac)+1], Q]\r\ntable = pd.DataFrame(table_data, columns=['lag', \"AC\",\"Q\", \"PAC\", \"Prob(\u003eQ)\"])\r\norder = sm.tsa.arma_order_select_ic(diff_1_df, max_ar=7, max_ma=7, ic=['aic', 'bic', 'hqic'])\r\np, q =order.bic_min_order\r\nprint(\"p,q\")\r\nprint(p, q)\r\n\r\n# 建立ARIMA(p, d, q)模型 d=1\r\norder = (p, 1, q)\r\ntrain_X = diff_1_df[:]\r\narima_model = ARIMA(train_X, order).fit()\r\n\r\n# 模型报告\r\n# print(arima_model.summary2())\r\n\r\n# 保存模型\r\narima_model.save('./data/arima_model.h5')\r\n\r\n# # load model\r\narima_model = ARIMAResults.load('./data/arima_model.h5')\r\n\r\n\r\n# 预测未来两天数据\r\npredict_data_02 = arima_model.predict(start=len(train_X), end=len(train_X) + 1, dynamic = False)\r\n\r\n# 预测历史数据\r\npredict_data = arima_model.predict(dynamic = False)\r\n\r\n# 
### 2.2 ARIMA

ARIMA, the AutoRegressive Integrated Moving Average model (also called the integrated moving-average autoregressive model), is one of the classical time-series forecasting methods. In ARIMA(p, d, q), AR is the autoregressive part and p the number of autoregressive terms; MA is the moving-average part and q the number of moving-average terms; d is the number of differencing steps needed to make the series stationary.

The main modeling steps are:
1. Make the series stationary (log transform and/or differencing) and check that the result is a stationary, non-white-noise series;
2. Choose (p, q) from the ACF/PACF cut-off patterns or an information criterion;
3. Fit ARIMA(p, d, q) and forecast.

```python
"""
Forecasting the DAU metric with ARIMA (differenced autoregressive moving-average model).
Note: this uses the legacy statsmodels.tsa.arima_model API (statsmodels < 0.13).
"""
import warnings
from math import sqrt

import numpy as np
import pandas as pd
import scipy.stats
import seaborn as sns
import matplotlib.pyplot as plt
import statsmodels.api as sm
from sklearn.metrics import mean_squared_error
from statsmodels.graphics.tsaplots import pacf, plot_acf, plot_pacf
from statsmodels.stats.diagnostic import acorr_ljungbox
from statsmodels.tsa.arima_model import ARIMA, ARIMAResults
from statsmodels.tsa.stattools import adfuller as ADF

warnings.filterwarnings("ignore")

df = pd.read_csv('DAU.csv')
dau = df["dau"]

# line plot
df.plot()
plt.show()

# box plot
ax = sns.boxplot(y=dau)
plt.show()

# Pearson correlation between DAU and time (optional exploration)
# a = df['dau']
# b = df.index
# print(scipy.stats.pearsonr(a, b))

# autocorrelation of the raw series
plot_acf(dau)
plot_pacf(dau)
plt.show()
print('ADF test of the raw series')
# p-value > 0.05 means the series is non-stationary
print(ADF(dau))

# log transform for stationarity (optional alternative to differencing)
# dau_log = np.log(dau)
# dau_log = dau_log.ewm(com=0.5, span=12).mean()
# plot_acf(dau_log)
# plot_pacf(dau_log)
# plt.show()
# print('ADF test of the log series')
# print(ADF(dau_log))
# print('Ljung-Box white-noise test of the log series')
# # p-value > 0.05 means white noise
# print(acorr_ljungbox(dau_log, lags=1))

# first-order differencing for stationarity
diff_1_df = dau.diff(1).dropna()
diff_1_df.plot()
plot_acf(diff_1_df)
plot_pacf(diff_1_df)
plt.show()

print('ADF test of the differenced series')
print(ADF(diff_1_df))

print('Ljung-Box white-noise test of the differenced series')
# p-value > 0.05 means white noise
print(acorr_ljungbox(diff_1_df, lags=1))

# once the checks pass, pick the best (p, q)
r, rac, Q = sm.tsa.acf(diff_1_df, qstat=True)  # r: ACF, rac: Q statistics, Q: p-values
prac = pacf(diff_1_df, method='ywmle')
table_data = np.c_[range(1, len(r)), r[1:], rac, prac[1:len(rac)+1], Q]
table = pd.DataFrame(table_data, columns=['lag', 'AC', 'Q', 'PAC', 'Prob(>Q)'])
order = sm.tsa.arma_order_select_ic(diff_1_df, max_ar=7, max_ma=7, ic=['aic', 'bic', 'hqic'])
p, q = order.bic_min_order
print("p, q")
print(p, q)

# fit ARIMA(p, d, q) with d=1
order = (p, 1, q)
train_X = diff_1_df[:]
arima_model = ARIMA(train_X, order).fit()

# model report
# print(arima_model.summary2())

# save the model (statsmodels pickles the results object)
arima_model.save('./data/arima_model.h5')

# load the model
arima_model = ARIMAResults.load('./data/arima_model.h5')

# forecast the next two days
predict_data_02 = arima_model.predict(start=len(train_X), end=len(train_X) + 1, dynamic=False)

# in-sample predictions
predict_data = arima_model.predict(dynamic=False)

# invert the log transform (if it was used)
# original_series = np.exp(train_X.values[1:] + np.log(dau.values[1:-1]))
# predict_series = np.exp(predict_data.values + np.log(dau.values[1:-1]))
# invert the differencing
original_series = train_X.values[1:] + dau.values[1:-1]
predict_series = predict_data.values + dau.values[1:-1]

split_num = int(len(dau.values) / 3) or 1
rmse = sqrt(mean_squared_error(original_series[-split_num:], predict_series[-split_num:]))
print('Test RMSE: %.3f' % rmse)

plt.title('ARIMA RMSE: %.3f' % rmse)
plt.plot(original_series[-split_num:], label="original_series")
plt.plot(predict_series[-split_num:], label="predict_series")
plt.legend()
plt.show()
```
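Note that the script above relies on the legacy statsmodels.tsa.arima_model API, which was removed in statsmodels 0.13. As a rough sketch, the same fit-and-forecast step with the current statsmodels.tsa.arima.model API (letting the model apply d=1 internally instead of differencing by hand) might look like this; the order (1, 1, 1) is a placeholder, not a tuned value.

```python
# Minimal sketch with the current statsmodels API (>= 0.13); order (1, 1, 1) is a placeholder.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

dau = pd.read_csv('DAU.csv')["dau"]

# d=1 inside the model, so no manual differencing / un-differencing is needed
model = ARIMA(dau, order=(1, 1, 1)).fit()

in_sample = model.predict(start=1, end=len(dau) - 1)  # in-sample, already on the original scale
next_two = model.forecast(steps=2)                    # forecast the next two days
print(next_two)
```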
### 2.3 LSTM

A Long Short-Term Memory (LSTM) network is a special kind of RNN that can learn long-range dependencies in sequence data. What distinguishes an LSTM from a plain RNN is a "processor" that decides whether a piece of information is useful; the structure it acts on is called a cell. Each cell contains three gates: an input gate, a forget gate, and an output gate. As information flows through the network, only what passes the gates is kept; the rest is discarded through the forget gate. This mechanism reduces the risk of exploding/vanishing gradients.

The main modeling steps are:
1. Data preparation: make the series stationary by differencing, scale it with max-min normalization, and build a supervised training set (for LSTM, differencing and scaling are not strictly necessary);
2. Train the model and forecast.

```python
"""
Forecasting the DAU metric with an LSTM.
"""
from datetime import datetime
from math import sqrt

import numpy as np
from pandas import DataFrame, Series, concat, read_csv
from matplotlib import pyplot as plt
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential, load_model
from keras.layers import Dense, LSTM
from statsmodels.stats.diagnostic import acorr_ljungbox
from statsmodels.tsa.stattools import adfuller as ADF

# convert date strings
def parser(x):
    return datetime.strptime(x, "%Y-%m-%d")

# turn the series into a supervised-learning frame: lagged columns plus the target
def timeseries_to_supervised(data, lag=1):
    df = DataFrame(data)
    columns = [df.shift(i) for i in range(1, lag + 1)]
    columns.append(df)
    df = concat(columns, axis=1)
    df.fillna(0, inplace=True)
    return df

# difference the series
def difference(dataset, interval=1):
    diff = list()
    for i in range(interval, len(dataset)):
        value = dataset[i] - dataset[i - interval]
        diff.append(value)
    return Series(diff)

# invert a differenced value
def inverse_difference(history, yhat, interval=1):
    return yhat + history[-interval]

# max-min scaling to [-1, 1]
def scale(train, test):
    # fit the scaler on the training data only
    scaler = MinMaxScaler(feature_range=(-1, 1))
    scaler = scaler.fit(train)
    # transform train
    train = train.reshape(train.shape[0], train.shape[1])
    train_scaled = scaler.transform(train)
    # transform test
    test = test.reshape(test.shape[0], test.shape[1])
    test_scaled = scaler.transform(test)
    return scaler, train_scaled, test_scaled

# invert the scaling for a single forecast
def invert_scale(scaler, X, value):
    new_row = [x for x in X] + [value]
    array = np.array(new_row)
    array = array.reshape(1, len(array))
    inverted = scaler.inverse_transform(array)
    return inverted[0, -1]

# train the model
def fit_lstm(train, batch_size, nb_epoch, neurons):
    X, y = train[:, 0:-1], train[:, -1]
    # reshape to (samples, time steps, features)
    X = X.reshape(X.shape[0], 1, X.shape[1])
    model = Sequential()
    # stateful LSTM; input shape is (batch rows, time steps = 1, features = 1 observation)
    # return_sequences=False because a single Dense(1) head follows
    model.add(LSTM(neurons,
                   batch_input_shape=(batch_size, X.shape[1], X.shape[2]),
                   stateful=True, return_sequences=False,
                   dropout=0.2))
    model.add(Dense(1))
    model.compile(loss="mean_squared_error", optimizer="adam")
    # train epoch by epoch so the state can be reset manually
    train_loss = []
    val_loss = []
    for i in range(nb_epoch):
        # shuffle=False to preserve the temporal order
        history = model.fit(X, y, batch_size=batch_size, epochs=1, verbose=0,
                            shuffle=False, validation_split=0.3)
        train_loss.append(history.history['loss'][0])
        val_loss.append(history.history['val_loss'][0])
        # clear the LSTM state between epochs
        model.reset_states()
        # simple early stopping
        if i > 50 and sum(val_loss[-10:]) < 0.3:
            print(sum(val_loss[-10:]))
            print("better epoch", i)
            break

    plt.plot(train_loss)
    plt.plot(val_loss)
    plt.title('model train vs validation loss')
    plt.ylabel('loss')
    plt.xlabel('epoch')
    plt.legend(['train', 'validation'], loc='upper right')
    plt.show()
    return model

# one-step forecast
def forecast_lstm(model, batch_size, X):
    X = X.reshape(1, 1, len(X))
    yhat = model.predict(X, batch_size=batch_size)
    return yhat[0, 0]

# load the data
series = read_csv('DAU.csv')["dau"]
print(series.head())
series.plot()
plt.show()

# make the series stationary
raw_values = series.values
diff_values = difference(raw_values, 1)
print(diff_values.head())
plt.plot(raw_values, label="raw")
plt.plot(diff_values, label="diff")
plt.legend()
plt.show()
print('ADF test of the differenced series')
print(ADF(diff_values)[1])
print('Ljung-Box white-noise test of the differenced series')
# (array([13.95689179]), array([0.00018705]))
print(acorr_ljungbox(diff_values, lags=1)[1][0])

# turn the series into supervised data
supervised = timeseries_to_supervised(diff_values, 1)
print(supervised.head())
supervised_values = supervised.values

# split into train and test
split_num = int(len(supervised_values) / 3) or 1
train, test = supervised_values[0:-split_num], supervised_values[-split_num:]

# scale
scaler, train_scaled, test_scaled = scale(train, test)

# fit the model
lstm_model = fit_lstm(train_scaled, 1, 200, 5)
train_reshaped = train_scaled[:, 0].reshape(len(train_scaled), 1, 1)
train_predict = lstm_model.predict(train_reshaped, batch_size=1)
train_raw = train_scaled[:, 0]

# # train RMSE plot
# train_raw = raw_values[0:-split_num]
# predictions = list()
# for i in range(len(train_scaled)):
#     # make one-step forecast
#     X, y = train_scaled[i, 0:-1], train_scaled[i, -1]
#     yhat = forecast_lstm(lstm_model, 1, X)
#     # invert scaling
#     yhat = invert_scale(scaler, X, yhat)
#     # invert differencing
#     yhat = inverse_difference(raw_values, yhat, len(train_scaled)+1-i)
#     # store forecast
#     predictions.append(yhat)
#     expected = train_raw[i]
#     mae = abs(yhat - expected)
#     print('data=%d, Predicted=%f, Expected=%f, mae=%.3f' % (i+1, yhat, expected, mae))
# plt.plot(train_raw, label="train_raw")
# plt.plot(predictions, label="predict")
# plt.legend()
# plt.show()

# save the model
lstm_model.save('./data/lstm_model_epoch50.h5')
# load the model
lstm_model = load_model('./data/lstm_model_epoch50.h5')

# validation on the test split
predictions = list()
for i in range(len(test_scaled)):
    # make one-step forecast
    X, y = test_scaled[i, 0:-1], test_scaled[i, -1]
    yhat = forecast_lstm(lstm_model, 1, X)
    # invert scaling
    yhat = invert_scale(scaler, X, yhat)
    # invert differencing
    yhat = inverse_difference(raw_values, yhat, len(test_scaled)+1-i)
    # store forecast
    predictions.append(yhat)
    expected = raw_values[len(train) + i + 1]
    mae = abs(yhat - expected)
    print('data=%d, Predicted=%f, Expected=%f, mae=%.3f' % (i+1, yhat, expected, mae))
mae = np.average(abs(np.array(predictions) - raw_values[-split_num:]))
print("Test MAE: %.3f" % mae)
# report performance
rmse = sqrt(mean_squared_error(raw_values[-split_num:], predictions))
print('Test RMSE: %.3f' % rmse)
# line plot of observed vs predicted
plt.plot(raw_values[-split_num:], label="raw")
plt.plot(predictions, label="predict")
plt.title('LSTM Test RMSE: %.3f' % rmse)
plt.legend()
plt.show()
```
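The overview in section 2 also names XGBoost-style ensemble learning as a third option, which the post does not demonstrate. As a hedged sketch only, recasting the same DAU series as supervised learning with lag features and an XGBRegressor could look like the following; the xgboost dependency, the three-lag feature window, and the hyperparameters are illustrative assumptions, not choices from the original post.

```python
# Minimal sketch: recasting the DAU series as supervised learning with XGBoost.
# Assumes the xgboost package is installed; the lag features and split mirror the LSTM setup above.
from math import sqrt

import pandas as pd
from sklearn.metrics import mean_squared_error
from xgboost import XGBRegressor

series = pd.read_csv('DAU.csv')["dau"]

# lag features: predict dau[t] from dau[t-3], dau[t-2], dau[t-1]
n_lags = 3
frame = pd.concat([series.shift(i) for i in range(n_lags, 0, -1)] + [series], axis=1).dropna()
X, y = frame.values[:, :-1], frame.values[:, -1]

# time-ordered split: last third as the test set
split_num = int(len(y) / 3) or 1
X_train, X_test = X[:-split_num], X[-split_num:]
y_train, y_test = y[:-split_num], y[-split_num:]

model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X_train, y_train)

pred = model.predict(X_test)
rmse = sqrt(mean_squared_error(y_test, pred))
print('XGBoost Test RMSE: %.3f' % rmse)
```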
------

```csv
# Appendix: dataset DAU.csv
log_date,dau
2018-06-01,257488
2018-06-02,286612
2018-06-03,287405
2018-06-04,246955
2018-06-05,249926
2018-06-06,252951
2018-06-07,255467
2018-06-08,262498
2018-06-09,288368
2018-06-10,288440
2018-06-11,255447
2018-06-12,251316
2018-06-13,251654
2018-06-14,250515
2018-06-15,262155
2018-06-16,288844
2018-06-17,296143
2018-06-18,298142
2018-06-19,264124
2018-06-20,262992
2018-06-21,263549
2018-06-22,271631
2018-06-23,296452
2018-06-24,296986
2018-06-25,271197
2018-06-26,270546
2018-06-27,271208
2018-06-28,275496
2018-06-29,284218
2018-06-30,307498
2018-07-01,316097
2018-07-02,295106
2018-07-03,290675
2018-07-04,292231
2018-07-05,297510
2018-07-06,298839
2018-07-07,302083
2018-07-08,301238
2018-07-09,296398
2018-07-10,300986
2018-07-11,301459
2018-07-12,299865
2018-07-13,289830
2018-07-14,297501
2018-07-15,297443
2018-07-16,293097
2018-07-17,293866
2018-07-18,292902
2018-07-19,292368
2018-07-20,290766
2018-07-21,294669
2018-07-22,295811
2018-07-23,297514
2018-07-24,297392
2018-07-25,298957
2018-07-26,298101
2018-07-27,298740
2018-07-28,304086
2018-07-29,305269
2018-07-30,304827
2018-07-31,299689
2018-08-01,300526
2018-08-02,59321
2018-08-03,31731
2018-08-04,36838
2018-08-05,42043
2018-08-06,42366
2018-08-07,37209
```