This project aims to analyze and predict trends in Taiwan's financial markets.
- fix the problem mentioned yesterday
- turn the xAxis label on the chart from float to string
- next
- try linear regression using x: FF Diff, y: TAIEX Diff (see the sketch after this list)
- add TAIEX Open Diff to the chart
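A minimal sketch of that regression, assuming the two diff series are already merged by date into one pandas DataFrame; the file name and the FF_Diff / TAIEX_Diff column names are assumptions, not the repo's actual schema.

```python
# Minimal sketch: fit TAIEX daily diff against Foreign-Fund daily diff.
# "merged_daily.csv", "FF_Diff" and "TAIEX_Diff" are hypothetical names.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("merged_daily.csv")

X = df[["FF_Diff"]].values      # feature: foreign-fund daily difference
y = df["TAIEX_Diff"].values     # target: TAIEX daily difference

model = LinearRegression().fit(X, y)
print("coef:", model.coef_[0], "intercept:", model.intercept_)
print("in-sample R^2:", model.score(X, y))
```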
- found a problem
- Friday's data gets scraped 3 times; the try/except in get_TAIEX_daily.py probably has an issue (expected behavior: no scraping on weekends)
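One way to rule out weekend runs is an explicit weekday guard instead of relying on a try/except around the request; a sketch of the idea (not the actual get_TAIEX_daily.py code):

```python
# Sketch of a weekend guard for the daily scraping job; the real
# get_TAIEX_daily.py may structure this differently.
from datetime import date
from typing import Optional

def should_scrape(today: Optional[date] = None) -> bool:
    today = today or date.today()
    # Monday=0 ... Sunday=6; TWSE does not trade on weekends,
    # so skip the run entirely rather than catching a failed request.
    return today.weekday() < 5

if should_scrape():
    print("Weekday: run the scraper.")
else:
    print("Weekend: skip scraping.")
```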
- add a tab in the frontend
- long time no see
- succeeded in using serve.js to serve the production build of the React frontend
- when calling the API directly by URL, Flask needs to run with --host=0.0.0.0 (see the sketch below)
- failed to call the backend API by embedding the URL directly in the homepage without setupProxy
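For reference, a minimal sketch of binding the Flask backend to all interfaces so the API is reachable by URL from other machines; the route name and port are assumptions.

```python
# Minimal sketch: expose the Flask API beyond localhost.
# The /api/taiex route and port 5000 are assumptions for illustration.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/taiex")
def taiex():
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    # Equivalent to `flask run --host=0.0.0.0`: bind to all interfaces.
    app.run(host="0.0.0.0", port=5000)
```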
- Set up the scraping routine for TAIEX and Foreign-Fund data on linux8
- No need to deploy an advanced server on linux8; npm start is sufficient
- Know how to automatically build and run the frontend
- Switch the server code (including the Jenkins server) to linux8.
- Next to do
- deploy the server using advanced server tools (e.g. Apache).
- Successfully handled the webhook that fires on a push to the repo and triggers the build on the Jenkins server.
- Finish the deployment of the Jenkins pipeline on the Oasis3 server; it automatically updates the git repo when the pipeline runs.
- Use the last 7 days' diffs as features to predict (tomorrow's Open - today's Close); results are still not good (see the sketch after this list)
- tried to train a linear regression model to predict the difference between today's Close and tomorrow's Open, but performance is not good yet
- need to collect more features
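A sketch of how those lag features and the label could be built with pandas, assuming a daily TAIEX DataFrame with Open and Close columns; the file name and feature naming are assumptions.

```python
# Sketch: last-7-day diff features and the (tomorrow Open - today Close) label.
# "taiex_daily.csv" and the column/feature names are hypothetical.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("taiex_daily.csv")
df["diff"] = df["Close"].diff()                 # day-over-day close difference

# Lag the diff by 1..7 days so each row only uses past information.
for k in range(1, 8):
    df[f"diff_lag_{k}"] = df["diff"].shift(k)

# Label: tomorrow's Open minus today's Close.
df["label"] = df["Open"].shift(-1) - df["Close"]

data = df.dropna()
X = data[[f"diff_lag_{k}" for k in range(1, 8)]]
y = data["label"]

model = LinearRegression().fit(X, y)
print("in-sample R^2:", model.score(X, y))
```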
- build the TAIEX-Open vs. Foreign-Fund-Diff figure on the frontend
- generate the label for predicting tomorrow's Open
- add get-TAIEX-from-twse.py to get TAIEX data from 20200601 to 20210630 (see the sketch below)
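A small sketch of enumerating that date range as business days (weekends excluded up front); whether the real script queries TWSE per day or per month is an assumption here.

```python
# Sketch: business days between 2020-06-01 and 2021-06-30, formatted as
# YYYYMMDD strings (the format the TWSE report endpoints are assumed to take).
import pandas as pd

dates = [d.strftime("%Y%m%d") for d in pd.bdate_range("2020-06-01", "2021-06-30")]
print(len(dates), "business days, from", dates[0], "to", dates[-1])
```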
- Plan to build a web app that can visualize some analysis of Taiwan's finance
- add main.py for the backend
- complete the automatic data-file download; add several measures to avoid anti-crawler detection (see the sketch after this list):
- random delay
- add header
- random user-agent
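A sketch of those measures combined into one request helper; the user-agent strings and delay range are illustrative assumptions.

```python
# Sketch of the measures listed above: random delay, explicit headers,
# and a randomly chosen User-Agent. Values are illustrative assumptions.
import random
import time

import requests

USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
    "Mozilla/5.0 (X11; Linux x86_64)",
]

def polite_get(url: str, **kwargs) -> requests.Response:
    time.sleep(random.uniform(3, 10))   # random delay between requests
    headers = {
        "User-Agent": random.choice(USER_AGENTS),
        "Accept": "text/csv,application/json;q=0.9,*/*;q=0.8",
    }
    return requests.get(url, headers=headers, **kwargs)
```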
- add the automatic TWSE data download function in "get_data_from_twse.py"
- got the data directly from the TWSE website, but was banned due to overly frequent requests.
- Should download the file locally and read the CSV, instead of fetching the data directly from the web URL (see the sketch below).
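A sketch of that approach: save the response to a local CSV once, then parse the cached file with pandas. The TWSE endpoint URL, its parameters, and the CP950 encoding are assumptions to verify against get_data_from_twse.py.

```python
# Sketch: download the CSV to disk first, then read the local file,
# instead of hitting the web URL on every run.
from pathlib import Path

import pandas as pd
import requests

def download_csv(url: str, dest: Path) -> Path:
    dest.parent.mkdir(parents=True, exist_ok=True)
    if not dest.exists():               # reuse the cached file if present
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        dest.write_bytes(resp.content)
    return dest

# Assumed TWSE daily-index report URL; check the real one in get_data_from_twse.py.
url = "https://www.twse.com.tw/exchangeReport/MI_INDEX?response=csv&date=20210630&type=IND"
path = download_csv(url, Path("data/MI_INDEX_20210630.csv"))

# Parsing options depend on the report layout; TWSE CSV exports are
# commonly Big5/CP950-encoded, hence the assumed encoding here.
df = pd.read_csv(path, encoding="cp950", on_bad_lines="skip")
print(df.head())
```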