dc.contributor.author | Rahman, Md. Hishamur | |
dc.date.accessioned | 2021-02-12T06:16:44Z | |
dc.date.available | 2021-02-12T06:16:44Z | |
dc.date.issued | 2020-11-15 | |
dc.identifier.citation | [1] X. (Michael) Chen, M. Zahiri, and S. Zhang, “Understanding ridesplitting behavior of on-demand ride services: An ensemble learning approach,” Transp. Res. Part C Emerg. Technol., vol. 76, pp. 51–70, Mar. 2017. [2] R. R. Clewlow and G. S. Mishra, “Disruptive Transportation: The Adoption, Utilization, and Impacts of Ride-Hailing in the United States,” Oct. 2017. [3] “DiDi Labs – Intelligent Transportation Technology and Security.” [Online]. Available: http://www.didi-labs.com/. [4] J. Ke, H. Zheng, H. Yang, and X. (Michael) Chen, “Short-term forecasting of passenger demand under on-demand ride services: A spatio-temporal deep learning approach,” Transp. Res. Part C Emerg. Technol., vol. 85, pp. 591–608, 2017. [5] L. Moreira-Matias, J. Gama, M. Ferreira, J. Mendes-Moreira, and L. Damas, “Predicting taxi-passenger demand using streaming data,” IEEE Trans. Intell. Transp. Syst., vol. 14, no. 3, pp. 1393–1402, Sep. 2013. [6] M.-F. Chiang, T.-A. Hoang, and E.-P. Lim, “Where are the passengers?: A grid-based Gaussian mixture model for taxi bookings,” in Proceedings of the 23rd SIGSPATIAL International Conference on Advances in Geographic Information Systems - GIS ’15, 2015, pp. 1–10. [7] D. Wang, W. Cao, J. Li, and J. Ye, “DeepSD: Supply-demand prediction for online car-hailing services using deep neural networks,” in Proceedings - International Conference on Data Engineering, 2017, pp. 243–254. [8] J. Ke et al., “Hexagon-Based Convolutional Neural Network for Supply-Demand Forecasting of Ride-Sourcing Services,” IEEE Trans. Intell. Transp. Syst., vol. 20, no. 11, pp. 4160–4173, 2019. [9] X. Geng et al., “Spatiotemporal Multi-Graph Convolution Network for Ride-Hailing Demand Forecasting,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2019, vol. 33, pp. 3656–3663. [10] M. G. McNally, “The Four-Step Model,” in Handbook of Transport Modelling, Emerald Group Publishing Limited, 2007, pp. 35–53. [11] A.
Talvitie, “A direct demand model for downtown work trips,” Transportation (Amst)., vol. 2, no. 2, pp. 121–152, Jul. 1973. [12] L. B. Lave, “Demand for Intercity Passenger Transportation,” Reg. Sci. J., vol. 12, no. 1, Apr. 1972. [13] J. Choi, Y. J. Lee, T. Kim, and K. Sohn, “An analysis of Metro ridership at the station-to-station level in Seoul,” Transportation (Amst)., vol. 39, no. 3, pp. 705–722, May 2012. [14] J. de D. Ortúzar and L. Willumsen, Modelling transport. John Wiley & Sons, 2011. [15] M. G. Dagenais and M. J. I. Gaudry, “Can Aggregate Direct Travel Demand Models Work?,” Universite de Montreal, Departement de sciences economiques, 1986. [16] X. Yan, X. Liu, and X. Zhao, “Using machine learning for direct demand modeling of ridesourcing services in Chicago,” J. Transp. Geogr., vol. 83, p. 102661, Feb. 2020. [17] C. Ding, X. Cao, and C. Liu, “How does the station-area built environment influence Metrorail ridership? Using gradient boosting decision trees to identify non-linear thresholds,” J. Transp. Geogr., vol. 77, pp. 70–78, May 2019. [18] H. Iseki, C. Liu, and G. Knaap, “The determinants of travel demand between rail stations: A direct transit demand model using multilevel analysis for the Washington D.C. Metrorail system,” Transp. Res. Part A Policy Pract., vol. 116, pp. 635–649, Oct. 2018. [19] L. Cheng, X. Chen, J. De Vos, X. Lai, and F. Witlox, “Applying a random forest method approach to model travel mode choice behavior,” Travel Behav. Soc., vol. 14, pp. 1–10, Jan. 2019. [20] X. Ma, J. Zhang, B. Du, C. Ding, and L. Sun, “Parallel Architecture of Convolutional Bi-Directional LSTM Neural Networks for Network-Wide Metro Ridership Prediction,” IEEE Trans. Intell. Transp. Syst., vol. 20, no. 6, pp. 2278–2288, Jun. 2019. [21] E. I. Vlahogianni, M. G. Karlaftis, and J. C. Golias, “Short-term traffic forecasting: Where we are and where we’re going,” Transp. Res. Part C Emerg. Technol., vol. 43, pp. 3–19, Jun. 2014. [22] Y. Lecun, Y. Bengio, and G.
Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, May 2015. [23] J. Baek and K. Sohn, “Deep-Learning Architectures to Forecast Bus Ridership at the Stop and Stop-To-Stop Levels for Dense and Crowded Bus Networks,” Appl. Artif. Intell., vol. 30, no. 9, pp. 861–885, Oct. 2016. [24] J. Zhang, Y. Zheng, D. Qi, R. Li, and X. Yi, “DNN-based prediction model for spatiotemporal data,” in GIS: Proceedings of the ACM International Symposium on Advances in Geographic Information Systems, 2016, pp. 1–4. [25] J. Zhang, Y. Zheng, D. Qi, R. Li, X. Yi, and T. Li, “Predicting citywide crowd flows using deep spatio-temporal residual networks,” Artif. Intell., vol. 259, pp. 147–166, Jun. 2018. [26] X. Ouyang, C. Zhang, P. Zhou, H. Jiang, and S. Gong, “DeepSpace: An Online Deep Learning Framework for Mobile Big Data to Understand Human Mobility Patterns,” arXiv Prepr. arXiv1610.07009, Oct. 2016. [27] D. Varshneya and G. Srinivasaraghavan, “Human trajectory prediction using spatially aware deep attention models,” arXiv Prepr. arXiv1705.09436, May 2017. [28] Q. Chen, X. Song, H. Yamada, and R. Shibasaki, “Learning Deep Representation from Big and Heterogeneous Data for Traffic Accident Inference,” in Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, 2016, pp. 338–344. [29] S. Alkheder, M. Taamneh, and S. Taamneh, “Severity Prediction of Traffic Accident Using an Artificial Neural Network,” J. Forecast., vol. 36, no. 1, pp. 100–108, Jan. 2017. [30] Y. Y. Chen, Y. Lv, Z. Li, and F. Y. Wang, “Long short-term memory model for traffic congestion prediction with online open data,” in IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC, 2016, pp. 132–137. [31] X. Ma, H. Yu, Y. Wang, and Y. Wang, “Large-Scale Transportation Network Congestion Evolution Prediction Using Deep Learning Theory,” PLoS One, vol. 10, no. 3, p. e0119044, Mar. 2015. [32] W. Huang, G. Song, H. Hong, and K.
Xie, “Deep architecture for traffic flow prediction: Deep belief networks with multitask learning,” IEEE Trans. Intell. Transp. Syst., vol. 15, no. 5, pp. 2191–2201, Oct. 2014. [33] Y. Lv, Y. Duan, W. Kang, Z. Li, and F. Y. Wang, “Traffic Flow Prediction with Big Data: A Deep Learning Approach,” IEEE Trans. Intell. Transp. Syst., vol. 16, no. 2, pp. 865–873, 2015. [34] R. Fu, Z. Zhang, and L. Li, “Using LSTM and GRU neural network methods for traffic flow prediction,” in Proceedings - 2016 31st Youth Academic Annual Conference of Chinese Association of Automation, YAC 2016, 2017, pp. 324–328. [35] Z. Zhao, W. Chen, X. Wu, P. C. Y. Chen, and J. Liu, “LSTM network: A deep learning approach for short-term traffic forecast,” IET Intell. Transp. Syst., vol. 11, no. 2, pp. 68–75, Mar. 2017. [36] N. G. Polson and V. O. Sokolov, “Deep learning for short-term traffic flow prediction,” Transp. Res. Part C Emerg. Technol., vol. 79, pp. 1–17, Jun. 2017. [37] X. Ma, Z. Tao, Y. Wang, H. Yu, and Y. Wang, “Long short-term memory neural network for traffic speed prediction using remote microwave sensor data,” Transp. Res. Part C Emerg. Technol., vol. 54, pp. 187–197, May 2015. [38] Y. Jia, J. Wu, and Y. Du, “Traffic speed prediction using deep learning method,” in IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC, 2016, pp. 1217–1222. [39] X. Ma, Z. Dai, Z. He, J. Ma, Y. Wang, and Y. Wang, “Learning traffic as images: A deep convolutional neural network for large-scale transportation network speed prediction,” Sensors (Switzerland), vol. 17, no. 4, p. 818, Apr. 2017. [40] C. Siripanpornchana, S. Panichpapiboon, and P. Chaovalit, “Travel-time prediction with deep learning,” in IEEE Region 10 Annual International Conference, Proceedings/TENCON, 2017, pp. 1859–1862. [41] X.
Gang et al., “Continuous Travel Time Prediction for Transit Signal Priority Based on a Deep Network,” in IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC, 2015, vol. 2015-October, pp. 523–528. [42] Y. LeCun, P. Haffner, L. Bottou, and Y. Bengio, “Object recognition with gradient-based learning,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 1681, Springer, Berlin, Heidelberg, 1999, pp. 319–345. [43] R. J. Williams and D. Zipser, “A Learning Algorithm for Continually Running Fully Recurrent Neural Networks,” Neural Comput., vol. 1, no. 2, pp. 270–280, Jun. 1989. [44] I. Sutskever, G. Hinton, and G. Taylor, “The recurrent temporal restricted Boltzmann machine,” in Advances in Neural Information Processing Systems 21 - Proceedings of the 2008 Conference, 2009, pp. 1601–1608. [45] S. Hochreiter and J. Schmidhuber, “Long Short-Term Memory,” Neural Comput., vol. 9, no. 8, pp. 1735–1780, Nov. 1997. [46] H. Yao et al., “Deep multi-view spatial-temporal network for taxi demand prediction,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2018, vol. 32, no. 1. [47] X. Shi, Z. Chen, H. Wang, D. Y. Yeung, W. K. Wong, and W. C. Woo, “Convolutional LSTM network: A machine learning approach for precipitation nowcasting,” in Advances in Neural Information Processing Systems, 2015, vol. 2015-January, pp. 802–810. [48] H. Sak, A. Senior, and F. Beaufays, “Long Short-Term Memory Based Recurrent Neural Network Architectures for Large Vocabulary Speech Recognition,” arXiv Prepr. arXiv1402.1128, Feb. 2014. [49] J. Collins, J. Sohl-Dickstein, and D. Sussillo, “Capacity and trainability in recurrent neural networks,” in 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings, 2017, pp. 1–17. [50] W. Brendel and M.
Bethge, “Approximating CNNs with bag-of-local-features models works surprisingly well on ImageNet,” in Seventh International Conference on Learning Representations, 2019. [51] S. Li, W. Li, C. Cook, C. Zhu, and Y. Gao, “Independently Recurrent Neural Network (IndRNN): Building A Longer and Deeper RNN,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2018, pp. 5457–5466. [52] R. Collobert and J. Weston, “A unified architecture for natural language processing,” in Proceedings of the 25th international conference on Machine learning - ICML ’08, 2008, pp. 160–167. [53] M. Johnson et al., “Google’s Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation,” Trans. Assoc. Comput. Linguist., vol. 5, no. 1, pp. 339–351, 2017. [54] M. L. Seltzer and J. Droppo, “Multi-task learning in deep neural networks for improved phoneme recognition,” in ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings, 2013, pp. 6965–6969. [55] Z. Zhang, P. Luo, C. C. Loy, and X. Tang, “Facial landmark detection by deep multi-task learning,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 8694 LNCS, no. PART 6, Springer, Cham, 2014, pp. 94–108. [56] Ł. Kaiser et al., “One model to learn them all,” arXiv Prepr. arXiv1706.05137, Jun. 2017. [57] J. Ma, Z. Zhao, X. Yi, J. Chen, L. Hong, and E. H. Chi, “Modeling task relationships in multi-task learning with multi-gate mixture-of-experts,” in Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2018, pp. 1930–1939. [58] X. Li et al., “Prediction of urban human mobility using large-scale taxi traces and its applications,” Front. Comput. Sci. China, vol. 6, no. 1, pp. 111–121, 2012. [59] F. Wu, H. Wang, and Z.
Li, “Interpreting traffic dynamics using ubiquitous urban data,” in GIS: Proceedings of the ACM International Symposium on Advances in Geographic Information Systems, 2016, pp. 1–4. [60] Y. Tong et al., “The Simpler the Better: A unified approach to predicting original taxi demands based on large-scale online platforms,” in Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2017, pp. 1653–1662. [61] H.-W. Chang, Y.-C. Tai, and Y.-J. J. Hsu, “Context-aware taxi demand hotspots prediction,” Int. J. Bus. Intell. Data Min., vol. 5, no. 1, pp. 3–18, 2010. [62] K. Zhao, D. Khryashchev, J. Freire, C. Silva, and H. Vo, “Predicting taxi demand at high spatial resolution: Approaching the limit of predictability,” in Proceedings - 2016 IEEE International Conference on Big Data, Big Data 2016, 2016, pp. 833–842. [63] J. Liu, E. Cui, H. Hu, X. Chen, X. M. Chen, and F. Chen, “Short-term forecasting of emerging on-demand ride services,” in 2017 4th International Conference on Transportation Information and Safety, ICTIS 2017 - Proceedings, 2017, pp. 489–495. [64] H. Wei, Y. Wang, T. Wo, Y. Liu, and J. Xu, “ZEST: A hybrid model on predicting passenger demand for chauffeured car service,” in International Conference on Information and Knowledge Management, Proceedings, 2016, vol. 24-28-October, pp. 2203–2208. [65] Y. Li, J. Lu, L. Zhang, and Y. Zhao, “Taxi booking mobile app order demand prediction based on short-term traffic forecasting,” Transp. Res. Rec., vol. 2634, no. 1, pp. 57–68, Jan. 2017. [66] X. Qian, S. V. Ukkusuri, C. Yang, and F. Yan, “Forecasting short-term taxi demand using boosting-GCRF,” in The 6th International Workshop on Urban Computing (UrbComp 2017), 2017. [67] Y. Liu, Z. Liu, C. Lyu, and J. Ye, “Attention-Based Deep Ensemble Net for Large-Scale Online Taxi-Hailing Demand Prediction,” IEEE Trans. Intell. Transp. Syst., vol. 21, no. 11, pp. 4798–4807, Oct. 2020. [68] C. Wang, P. Hao, G. Wu, X. Qi, and M.
Barth, “Predicting the Number of Uber Pickups by Deep Learning,” in Transportation Research Board 97th Annual Meeting, 2018. [69] J. Xu, R. Rahmatizadeh, L. Boloni, and D. Turgut, “Real-Time prediction of taxi demand using recurrent neural networks,” IEEE Trans. Intell. Transp. Syst., vol. 19, no. 8, pp. 2572–2581, Aug. 2018. [70] W. Jiang and L. Zhang, “Geospatial data to images: A deep-learning framework for traffic forecasting,” Tsinghua Sci. Technol., vol. 24, no. 1, pp. 52–64, 2019. [71] C. Wang, Y. Hou, and M. Barth, “Data-Driven Multi-step Demand Prediction for Ride-Hailing Services Using Convolutional Neural Network,” in Advances in Intelligent Systems and Computing, vol. 944, Las Vegas, Nevada, 2020, pp. 11–22. [72] X. Zhou, Y. Shen, Y. Zhu, and L. Huang, “Predicting multi-step citywide passenger demands using attention-based neural networks,” in WSDM 2018 - Proceedings of the 11th ACM International Conference on Web Search and Data Mining, 2018, vol. 2018-February, pp. 736–744. [73] L. Bai, X. Wang, L. Yao, W. Liu, S. S. Kanhere, and Z. Yang, “Spatio-temporal graph convolutional and recurrent networks for citywide passenger demand prediction,” in International Conference on Information and Knowledge Management, Proceedings, 2019, pp. 2293–2296. [74] Y. Zhou, J. Li, H. Chen, Y. Wu, J. Wu, and L. Chen, “A spatiotemporal attention mechanism-based model for multi-step citywide passenger demand prediction,” Inf. Sci. (Ny)., vol. 513, pp. 372–385, 2020. [75] Y. Wang, T. Wo, H. Yin, J. Xu, H. Chen, and K. Zheng, “Origin-destination matrix prediction via graph convolution: A new perspective of passenger demand modeling,” in Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2019, pp. 1227–1235. [76] Y. Xu and D. Li, “Incorporating graph attention and recurrent architectures for citywide taxi demand prediction,” ISPRS Int. J. Geo-Information, vol. 8, no. 9, 2019. [77] G. Jin, Z. Xi, H. Sha, Y. Feng, and J.
Huang, “Deep Multi-View Spatiotemporal Virtual Graph Neural Network for Significant Citywide Ride-hailing Demand Prediction,” arXiv Prepr. arXiv2007.15189, 2020. [78] W. Pian and Y. Wu, “Spatial-Temporal Dynamic Graph Attention Networks for Ride-hailing Demand Prediction,” arXiv Prepr. arXiv2006.05905, 2020. [79] G. Jin, Y. Cui, L. Zeng, H. Tang, Y. Feng, and J. Huang, “Urban ride-hailing demand prediction with multiple spatio-temporal information fusion network,” Transp. Res. Part C Emerg. Technol., vol. 117, p. 102665, 2020. [80] X. Zhang, X. Wang, W. Chen, J. Tao, W. Huang, and T. Wang, “A Taxi Gap Prediction Method via Double Ensemble Gradient Boosting Decision Tree,” in Proceedings - 3rd IEEE International Conference on Big Data Security on Cloud, BigDataSecurity 2017, 3rd IEEE International Conference on High Performance and Smart Computing, HPSC 2017 and 2nd IEEE International Conference on Intelligent Data and Security, 2017, pp. 255–260. [81] R. Wang, “Supply-demand Forecasting For a Ride-Hailing System,” University of California, Irvine, 2017. [82] L. Ling, X. Lai, and L. Feng, “Forecasting the Gap Between Demand and Supply of E-hailing Vehicle in Large Scale of Network Based on Two-stage Model,” in 2019 IEEE Intelligent Transportation Systems Conference, ITSC 2019, 2019, pp. 3880–3885. [83] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2016, vol. 2016-December, pp. 770–778. [84] J. Li and Z. Wang, “Online car-hailing dispatch: Deep supply-demand gap forecast on spark,” in 2017 IEEE 2nd International Conference on Big Data Analysis, ICBDA 2017, 2017, pp. 811–815. [85] A. Ben Said and A. Erradi, “Multiview topological data analysis for crowdsourced service supply-demand gap prediction,” in 2020 International Wireless Communications and Mobile Computing, IWCMC 2020, 2020, pp. 1818–1823. [86] Z. Zhang, Y.
Li, and H. Dong, “Multiple-Feature-Based Vehicle Supply-Demand Difference Prediction Method for Social Transportation,” IEEE Trans. Comput. Soc. Syst., vol. 7, no. 4, pp. 1095–1103, Aug. 2020. [87] L. Bai, L. Yao, S. S. Kanhere, Z. Yang, J. Chu, and X. Wang, “Passenger demand forecasting with multi-task convolutional recurrent neural networks,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 11440 LNAI, Springer Verlag, 2019, pp. 29–42. [88] K. Zhang, Z. Liu, and L. Zheng, “Short-Term Prediction of Passenger Demand in Multi-Zone Level: Temporal Convolutional Neural Network with Multi-Task Learning,” IEEE Trans. Intell. Transp. Syst., vol. 21, no. 4, pp. 1480–1490, 2020. [89] J. Feng, M. Gluzman, and J. G. Dai, “Scalable deep reinforcement learning for ride-hailing,” IEEE Control Syst. Lett., Sep. 2020. [90] T. Kim, S. Sharda, X. Zhou, and R. M. Pendyala, “A stepwise interpretable machine learning framework using linear regression (LR) and long short-term memory (LSTM): City-wide demand-side prediction of yellow taxi and for-hire vehicle (FHV) service,” Transp. Res. Part C Emerg. Technol., vol. 120, p. 102786, 2020. [91] L. Kuang, X. Yan, X. Tan, S. Li, and X. Yang, “Predicting taxi demand based on 3D convolutional neural network and multi-task learning,” Remote Sens., vol. 11, no. 11, p. 1265, 2019. [92] H. Luo, J. Cai, K. Zhang, R. Xie, and L. Zheng, “A multi-task deep learning model for short-term taxi demand forecasting considering spatiotemporal dependences,” J. Traffic Transp. Eng. (English Ed.), pp. 1–12, 2020. [93] F. A. Gers, J. Schmidhuber, and F. Cummins, “Learning to Forget: Continual Prediction with LSTM,” Neural Comput., vol. 12, no. 10, pp. 2451–2471, 2000. [94] B. Wang, Y. Lei, T. Yan, N. Li, and L.
Guo, “Recurrent convolutional neural network: A new framework for remaining useful life prediction of machinery,” Neurocomputing, vol. 379, pp. 117–129, Feb. 2020. [95] Z. Yuan, X. Zhou, and T. Yang, “Hetero-ConvLSTM: A deep learning approach to traffic accident prediction on heterogeneous spatio-temporal data,” in Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2018, vol. 18, pp. 984–992. [96] Didi, “GAIA Open Dataset,” DiDi Chuxing GAIA Open Dataset Initiative, 2018. [Online]. Available: https://gaia.didichuxing.com. [97] Apple, “Dark Sky API,” Dark Sky by Apple, 2020. [Online]. Available: https://darksky.net/dev. [Accessed: 26-Sep-2020]. [98] S. I. Center, “Map POI (Point of Interest) data,” Peking University Open Research Data Platform, 2017. [Online]. Available: https://doi.org/10.18170/DVN/WSXCNM. [Accessed: 26-Sep-2020]. [99] D. P. Kingma and J. L. Ba, “Adam: A method for stochastic optimization,” in 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings, 2015. [100] J. H. Friedman, “Greedy Function Approximation: A Gradient Boosting Machine,” Ann. Stat., vol. 29, no. 5, pp. 1189–1232, 2001. [101] T. Chen and C. Guestrin, “XGBoost: A scalable tree boosting system,” in Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, vol. 13-17-August, pp. 785–794. [102] A. Liaw and M. Wiener, “Classification and Regression by randomForest,” R News, vol. 2, no. 3, pp. 18–22, 2002. [103] P. Geurts, D. Ernst, and L. Wehenkel, “Extremely randomized trees,” Mach. Learn., vol. 63, no. 1, pp. 3–42, Apr. 2006. [104] J. A. Nelder and R. W. M. Wedderburn, “Generalized Linear Models,” J. R. Stat. Soc. Ser. A, vol. 135, no. 3, pp. 370–384, May 1972. [105] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature, vol. 323, no. 6088, pp. 533–536, Oct. 1986. [106] F.
Chollet, “Keras: the Python deep learning API,” 2015. [Online]. Available: https://keras.io/. [Accessed: 04-Jul-2020]. [107] M. Abadi et al., “TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems,” 2015. [Online]. Available: www.tensorflow.org. [Accessed: 04-Jul-2020]. [108] H2O.ai, “H2O AutoML.” 2017. [109] J. T. Heaton, Introduction to Neural Networks with Java. Heaton Research, Inc., 2005. [110] V. Borisov, J. Haug, and G. Kasneci, “CancelOut: A Layer for Feature Selection in Deep Neural Networks,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 11728 LNCS, Springer Verlag, 2019, pp. 72–83. [111] Y. Y. Lu, Y. Fan, J. Lv, and W. S. Noble, “DeepPINK: Reproducible feature selection in deep neural networks,” in Advances in Neural Information Processing Systems, 2018, pp. 8676–8686. [112] Y. Li, C. Y. Chen, and W. W. Wasserman, “Deep feature selection: Theory and application to identify enhancers and promoters,” J. Comput. Biol., vol. 23, no. 5, pp. 322–336, 2016. [113] B. Shen, M. Liu, X. Liang, W. Zheng, Y. Ouyang, and K. M. Carley, “StepDeep: A novel spatial-temporal mobility event prediction framework based on deep neural network,” in Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2018, pp. 724–733. [114] I. Sutskever, O. Vinyals, and Q. V. Le, “Sequence to sequence learning with neural networks,” in Advances in Neural Information Processing Systems, 2014, pp. 3104–3112. [115] A. LeNail, “NN-SVG: Publication-Ready Neural Network Architecture Schematics,” J. Open Source Softw., vol. 4, no. 33, p. 747, Jan. 2019. [116] Didi, “Didi Di-Tech Challenge Algorithm Competition,” Didi, 2016. [Online]. Available: http://web.archive.org/web/20170311212917/http://research.xiaojukeji.com/competition/main.action?competitionId=DiTech2016. [117] R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E.
Hinton, “Adaptive Mixtures of Local Experts,” Neural Comput., vol. 3, no. 1, pp. 79–87, Feb. 1991. [118] D. Eigen, M. A. Ranzato, and I. Sutskever, “Learning factored representations in a deep mixture of experts,” in 2nd International Conference on Learning Representations, ICLR 2014 - Workshop Track Proceedings, 2014. [119] R. Caruana, “Multitask Learning,” Mach. Learn., vol. 28, no. 1, pp. 41–75, 1997. [120] S. Ruder, “An Overview of Multi-Task Learning in Deep Neural Networks,” arXiv Prepr. arXiv1706.05098, Jun. 2017. [121] K. Cho et al., “Learning phrase representations using RNN encoder-decoder for statistical machine translation,” in EMNLP 2014 - 2014 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference, 2014, pp. 1724–1734. [122] R. Jozefowicz, W. Zaremba, and I. Sutskever, “An empirical exploration of Recurrent Network architectures,” in 32nd International Conference on Machine Learning, ICML 2015, 2015, vol. 3, pp. 2332–2340. [123] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, “Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling,” in NIPS 2014 Workshop on Deep Learning, 2014. | en_US |
dc.identifier.uri | http://hdl.handle.net/123456789/806 | |
dc.description | Supervised by Dr. Shakil Mohammad Rifaat, Professor, Department of Civil and Environmental Engineering (CEE), Islamic University of Technology (IUT) | en_US |
dc.description.abstract | Transportation network companies (TNCs) provide on-demand, door-to-door ride-hailing services in many cities around the world. To reduce passenger waiting time and driver search friction, TNCs need to conduct spatio-temporal forecasting of demand and of the supply-demand gap. However, because of the spatio-temporal dependencies inherent in a ride-hailing system, producing accurate forecasts of demand and the supply-demand gap is difficult. Although spatio-temporal deep learning methods have recently proven successful at detecting these dependencies, several challenges in applying them require further attention. One assumption in spatio-temporal online taxi-hailing demand forecasting with deep learning is that spatio-temporal dependencies follow the spatial structure, so zone-wise historical data are processed as image pixels. However, the dependencies among zones are complex and do not follow the regular patterns of image pixels. Furthermore, the commonly applied combination of two-dimensional convolution and long short-term memory (LSTM) networks for detecting spatio-temporal patterns in zone-wise historical data increases model complexity without justification against a spatio-temporal deep learning baseline. Therefore, this study applies one-dimensional convolution to historical data with flattened zones to investigate the effectiveness of preserving spatial structure in spatio-temporal forecasting of on-demand taxis, and develops a baseline spatio-temporal deep learning architecture containing one-dimensional convolutional recurrent layers to justify the increased model complexity of architectures containing two-dimensional convolutional recurrent layers. 
Experiments with real-world online taxi-hailing data of Didi Chuxing from Chengdu show that a convolutional recurrent model implementing one-dimensional convolution in a vanilla recurrent neural network with rectified linear activation outperforms convolutional recurrent models combining two-dimensional convolution and LSTM networks. The findings indicate that preserving spatial structure in online taxi-hailing demand forecasting can sometimes be redundant, and that complex spatio-temporal deep learning models should be compared against a spatio-temporal deep learning baseline in order to build more computationally efficient forecasting architectures. Due to confidentiality and privacy concerns, ride-hailing data are sometimes released to researchers with the spatial adjacency information of the zones removed, which hinders the detection of spatio-temporal dependencies in deep learning. To that end, this study proposes a novel spatio-temporal deep learning architecture for forecasting demand and the supply-demand gap in a ride-hailing system with anonymized spatial adjacency information; it integrates a feature importance layer with a spatio-temporal deep learning architecture containing a one-dimensional convolutional neural network (CNN) and a zone-distributed independently recurrent neural network (IndRNN). The developed architecture is tested with real-world datasets of Didi Chuxing, and models built on it outperform machine learning models (e.g., gradient boosting machine, distributed random forest, generalized linear model, artificial neural network). Additionally, the feature importance layer provides an interpretation of the model by revealing the contribution of each input feature to the prediction. Designing and maintaining spatio-temporal forecasting models separately in a task-wise and city-wise manner is challenging for continuously expanding TNCs. 
Therefore, this study proposes a deep multi-task learning architecture, a gated ensemble of spatio-temporal mixture of experts network (GESME-Net) containing convolutional recurrent neural network (CRNN), convolutional neural network (CNN), and recurrent neural network (RNN) experts, which can simultaneously forecast different spatio-temporal tasks in one city as well as the same task across different cities. The proposed architecture is tested with real-world data of Didi Chuxing for: (i) simultaneous forecasting of demand and the supply-demand gap in Beijing, and (ii) simultaneous forecasting of demand for Chengdu and Xi'an. In both scenarios, models built on the proposed architecture outperformed the multi-task learning benchmarks (e.g., the shared-bottom model and the single-gate spatio-temporal mixture of experts), task-wise and city-wise spatio-temporal deep learning models, and machine learning algorithms (e.g., gradient boosting, random forest, and generalized linear models). The developed architecture provides a basis for spatio-temporal multi-task learning in smart cities. The developments made in this study bridge the gap between applying advanced deep learning methods and maintaining the privacy of TNCs' data, provide a way to reduce the computational cost of spatio-temporal deep learning models for TNCs, and, importantly, demonstrate the viability of a spatio-temporal multi-task learning architecture for jointly forecasting demand and the supply-demand gap in a ride-hailing system. | en_US |
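The abstract's contrast between image-style and flattened-zone processing can be illustrated with a minimal NumPy sketch. This is not the thesis's implementation: the grid size, number of time steps, and smoothing kernel are all hypothetical, and a plain linear filter stands in for a trained one-dimensional convolutional layer.

```python
import numpy as np

# Hypothetical setup: demand history for a 4x4 grid of zones over 8 time steps.
# Sizes and the kernel are illustrative, not the thesis's actual configuration.
rng = np.random.default_rng(0)
n_rows, n_cols, n_steps = 4, 4, 8
history = rng.random((n_steps, n_rows, n_cols))   # image-like zone layout

# Flatten the spatial grid: each time step becomes a vector of 16 zones,
# discarding the 2D adjacency structure that image-style (2D conv) models keep.
flat = history.reshape(n_steps, n_rows * n_cols)  # shape (8, 16)

def conv1d_valid(x, kernel):
    """1D 'valid' convolution (cross-correlation) across the flattened zone axis."""
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])

kernel = np.array([0.25, 0.5, 0.25])              # illustrative smoothing filter
features = np.stack([conv1d_valid(flat[t], kernel) for t in range(n_steps)])
print(features.shape)                              # one feature map per time step
```

In a full model, such per-step feature maps would feed a recurrent layer, which is the shape of the one-dimensional convolutional recurrent baseline the abstract describes.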
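The gated mixture-of-experts idea behind GESME-Net can be sketched in a few lines: shared experts produce candidate outputs, and a per-task gating function mixes them. This is a hedged toy version only; the real experts are CRNN/CNN/RNN subnetworks and the weights are learned, whereas here simple random linear maps and untrained gates are used purely to show the wiring.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1D array."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
n_features, n_experts, n_tasks = 16, 3, 2           # illustrative sizes

# Shared experts: linear maps standing in for CRNN/CNN/RNN expert networks.
experts = [rng.standard_normal(n_features) for _ in range(n_experts)]
# One gating network per task (multi-gate mixture of experts).
gates = [rng.standard_normal((n_features, n_experts)) for _ in range(n_tasks)]

x = rng.random(n_features)                          # one input sample
expert_out = np.array([w @ x for w in experts])     # each expert's scalar output

predictions = []
for task in range(n_tasks):
    g = softmax(x @ gates[task])                    # task-specific mixing weights
    predictions.append(g @ expert_out)              # gated combination of experts

print(len(predictions))                             # one prediction per task
```

Because each task has its own gate over the shared experts, one network can serve demand and supply-demand-gap forecasting (or the same task in two cities) simultaneously, which is the multi-task setting the abstract evaluates.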
dc.language.iso | en | en_US |
dc.publisher | Department of Civil and Environmental Engineering, Islamic University of Technology, Gazipur, Bangladesh | en_US |
dc.title | Deep Multi-task Learning Models for Short-term Supply-demand Forecasting in Ride-hailing System | en_US |
dc.type | Thesis | en_US |