dc.identifier.citation |
[1] M. Andrychowicz, M. Denil, S. Gomez, et al., “Learning to learn by gradient descent by gradient descent,” arXiv preprint arXiv:1606.04474, 2016. [Online]. Available: https://arxiv.org/abs/1606.04474.
[2] D. Argueso, A. Picon, U. Irusta, et al., “Few-shot learning approach for plant disease classification using images taken in the field,” Computers and Electronics in Agriculture, vol. 175, p. 105542, 2020. doi: 10.1016/j.compag.2020.105542.
[3] P. Bhavsar, “An ultimate guide to transfer learning in NLP,” TOPBOTS, 2019. Accessed: 2021-10-13. [Online]. Available: https://www.topbots.com/transfer-learning-in-nlp/.
[4] T. B. Brown, B. Mann, N. Ryder, et al., Language models are few-shot learners, 2020. arXiv: 2005.14165 [cs.CL]. [Online]. Available: https://arxiv.org/abs/2005.14165.
[5] J. Brownlee, “Transfer learning for deep learning,” Machine Learning Mastery. Accessed: July 11, 2024. [Online]. Available: https://machinelearningmastery.com/transfer-learning-for-deep-learning/.
[6] S. Chen, Y. Ogawa, and Y. Sekimoto, “Large-scale individual building extraction from open-source satellite imagery via super-resolution-based instance segmentation approach,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 195, pp. 129–152, Jan. 2023. doi: 10.1016/j.isprsjprs.2022.11.006. [Online]. Available: https://www.researchgate.net/publication/365870304_Large-scale_individual_building_extraction_from_open-source_satellite_imagery_via_super-resolution-based_instance_segmentation_approach.
[7] W.-Y. Chen, Y.-C. Liu, Z. Kira, Y.-C. F. Wang, and J.-B. Huang, “A closer look at few-shot classification,” arXiv preprint arXiv:1904.04232, 2019.
[8] A. Chowdhury, M. Jiang, S. Chaudhuri, and C. Jermaine, “Few-shot image classification: Just use a library of pre-trained feature extractors and a simple classifier,” in IEEE/CVF International Conference on Computer Vision, 2021, pp. 9445–9454.
[9] Y. Duan, J. Schulman, X. Chen, P. L. Bartlett, I. Sutskever, and P. Abbeel, “RL2: Fast reinforcement learning via slow reinforcement learning,” arXiv preprint arXiv:1611.02779, 2016. [Online]. Available: https://arxiv.org/abs/1611.02779.
[10] N. Dvornik, C. Schmid, and J. Mairal, “Selecting relevant features from a multi-domain representation for few-shot classification,” in Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part X, Springer, 2020, pp. 769–786.
[11] C. Finn, P. Abbeel, and S. Levine, “Model-agnostic meta-learning for fast adaptation of deep networks,” in Proceedings of the 34th International Conference on Machine Learning, vol. 70, 2017, pp. 1126–1135. [Online]. Available: http://proceedings.mlr.press/v70/finn17a.html.
[12] V. Garcia and J. Bruna, “Few-shot learning with graph neural networks,” arXiv preprint arXiv:1711.04043, 2017.
[13] T. George Karimpanal and R. Bouffanais, “Self-organizing maps for storage and transfer of knowledge in reinforcement learning,” Adaptive Behavior, vol. 27, no. 2, pp. 111–126, 2019.
[14] H. Gharoun, F. Momenifar, F. Chen, and A. H. Gandomi, “Meta-learning approaches for few-shot learning: A survey of recent advances,” arXiv preprint arXiv:2303.07502, Mar. 2023. doi: 10.48550/arXiv.2303.07502.
[15] M. J. Hasan, M. S. Alom, U. F. Dina, and M. H. Moon, “Maize diseases image identification and classification by combining CNN with bi-directional long short-term memory model,” in 2020 IEEE Region 10 Symposium (TENSYMP), 2020, pp. 1804–1807. doi: 10.1109/TENSYMP50017.2020.9230796.
[16] M. F. Hossain, “Dhan-Shomadhan: A dataset of rice leaf disease classification for Bangladeshi local rice,” arXiv preprint arXiv:2309.07515, 2023. doi: 10.17632/znsxdctwtt.1. [Online]. Available: https://data.mendeley.com/datasets/znsxdctwtt/1.
[17] A. Howard, M. Sandler, G. Chu, et al., “Searching for MobileNetV3,” arXiv preprint arXiv:1905.02244, 2019.
[18] A. Howard, M. Sandler, G. Chu, et al., “Searching for MobileNetV3,” CoRR, vol. abs/1905.02244, 2019. doi: 10.1109/ICCV.2019.00140. arXiv: 1905.02244. [Online]. Available: http://arxiv.org/abs/1905.02244.
[19] G. Hu, H. Wu, Y. Zhang, and M. Wan, “Data for: A low shot learning method for tea leaf’s disease identification,” Mendeley Data, 2019. doi: 10.17632/dbjyfkn6jr.1. [Online]. Available: https://doi.org/10.17632/dbjyfkn6jr.1.
[20] D. P. Hughes and M. Salathé, “An open access repository of images on plant health to enable the development of mobile disease diagnostics,” CoRR, vol. abs/1511.08060, 2015. eprint: 1511.08060. [Online]. Available: https://data.mendeley.com/datasets/tywbtsjrjv/1.
[21] G. Koch, R. Zemel, and R. Salakhutdinov, “Siamese neural networks for one-shot image recognition,” in Proceedings of the 32nd International Conference on Machine Learning, JMLR, 2015, pp. 1472–1480.
[22] K. Li and J. Malik, “Learning to optimize neural nets,” arXiv preprint arXiv:1703.00441, 2017. [Online]. Available: https://arxiv.org/abs/1703.00441.
[23] Y. Li and J. Yang, “Meta-learning baselines and database for few-shot classification in agriculture,” Computers and Electronics in Agriculture, vol. 182, p. 106055, 2021. doi: 10.1016/j.compag.2021.106055.
[24] H. Lin, R. Tse, S.-K. Tang, Z.-P. Qiang, and G. Pau, “The positive effect of attention module in few-shot learning for plant disease recognition,” in 2022 5th International Conference on Pattern Recognition and Artificial Intelligence (PRAI), 2022, pp. 114–120. doi: 10.1109/PRAI55851.2022.9904046.
[25] B. Liu, C. Tan, S. Li, J. He, and H. Wang, “A data augmentation method based on generative adversarial networks for grape leaf disease identification,” IEEE Access, vol. 8, pp. 102188–102198, 2020. doi: 10.1109/ACCESS.2020.2999816. [Online]. Available: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9104723.
[26] X. Liu, W. Min, S. Mei, L. Wang, and S. Jiang, “Plant disease recognition: A large-scale benchmark dataset and a visual region and loss reweighting approach,” IEEE Transactions on Image Processing, vol. 30, pp. 2003–2015, 2021.
[27] Z. Liu, Y. Lin, Y. Cao, et al., “Swin Transformer: Hierarchical vision transformer using shifted windows,” CoRR, vol. abs/2103.14030, 2021. doi: 10.48550/arXiv.2103.14030. arXiv: 2103.14030. [Online]. Available: https://arxiv.org/abs/2103.14030.
[28] B. Min, T. Kim, D. Shin, and D. Shin, “Data augmentation method for plant leaf disease recognition,” Applied Sciences, vol. 13, no. 3, 2023, issn: 2076-3417. doi: 10.3390/app13031465. [Online]. Available: https://www.mdpi.com/2076-3417/13/3/1465.
[29] T. Munkhdalai and H. Yu, “Meta networks,” arXiv preprint arXiv:1703.00837, 2017. [Online]. Available: https://arxiv.org/abs/1703.00837.
[30] S. Nesteruk, D. Shadrin, and M. Pukalchik, Image augmentation for multitask few-shot learning: Agricultural domain use-case, 2021. arXiv: 2102.12295 [cs.CV].
[31] N. Ragu and J. Teo, “Object detection and classification using few-shot learning in smart agriculture: A scoping mini review,” Frontiers in Sustainable Food Systems, vol. 6, Jan. 2023. doi: 10.3389/fsufs.2022.1039299. [Online]. Available: https://www.frontiersin.org/journals/sustainable-food-systems/articles/10.3389/fsufs.2022.1039299/full.
[32] M. Sandler, A. G. Howard, M. Zhu, A. Zhmoginov, and L. Chen, “Inverted residuals and linear bottlenecks: Mobile networks for classification, detection and segmentation,” CoRR, vol. abs/1801.04381, 2018. doi: 10.1109/CVPR.2019.00274. arXiv: 1801.04381. [Online]. Available: http://arxiv.org/abs/1801.04381.
[33] A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap, “Meta-learning with memory-augmented neural networks,” in Proceedings of The 33rd International Conference on Machine Learning, vol. 48, 2016, pp. 1842–1850. [Online]. Available: https://proceedings.mlr.press/v48/santoro16.pdf.
[34] J. Snell, K. Swersky, and R. Zemel, “Prototypical networks for few-shot learning,” in Advances in Neural Information Processing Systems, I. Guyon, U. V. Luxburg, S. Bengio, et al., Eds., Curran Associates, Inc., vol. 30, 2017, pp. 4077–4087.
[35] F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. S. Torr, and T. M. Hospedales, “Learning to compare: Relation network for few-shot learning,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 1199–1208.
[36] Y. Tian, Y. Wang, D. Krishnan, J. B. Tenenbaum, and P. Isola, “Rethinking few-shot image classification: A good embedding is all you need?” in Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XIV, Springer, 2020, pp. 266–282.
[37] V. Tiwari, R. C. Joshi, and M. K. Dutta, “Dense convolutional neural networks based multiclass plant disease detection and classification using leaf images,” Ecological Informatics, vol. 63, p. 101289, 2021, issn: 1574-9541. doi: 10.1016/j.ecoinf.2021.101289. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S1574954121000807.
[38] Z. Tu, H. Talebi, H. Zhang, et al., “MaxViT: Multi-axis vision transformer,” in European Conference on Computer Vision, Springer, 2022, pp. 459–479. doi: 10.1007/978-3-031-20053-3_27. [Online]. Available: https://arxiv.org/pdf/2204.01697.
[39] M. Turkoglu, D. Hanbay, and A. Sengur, “Multi-model LSTM-based convolutional neural networks for detection of apple diseases and pests,” Journal of Ambient Intelligence and Humanized Computing, vol. 13, pp. 3335–3345, 2022. doi: 10.1007/s12652-019-01591-w. [Online]. Available: https://doi.org/10.1007/s12652-019-01591-w.
[40] P. Uskaner Hepsağ, “Efficient plant disease identification using few-shot learning: A transfer learning approach,” Multimedia Tools and Applications, pp. 1–16, 2023.
[41] A. Vaswani, N. Shazeer, N. Parmar, et al., “Attention is all you need,” arXiv preprint arXiv:1706.03762, 2017.
[42] O. Vinyals, C. Blundell, T. Lillicrap, K. Kavukcuoglu, and D. Wierstra, “Matching networks for one shot learning,” in Advances in Neural Information Processing Systems, D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett, Eds., vol. 29, Curran Associates, Inc., 2016.
[43] B. Wang and D. Wang, “Plant leaves classification: A few-shot learning method based on Siamese network,” IEEE Access, vol. 7, pp. 151754–151763, 2019.
[44] Y. Wang and S. Wang, “IMAL: An improved meta-learning approach for few-shot classification of plant diseases,” in 2021 IEEE 21st International Conference on Bioinformatics and Bioengineering (BIBE), IEEE, 2021, pp. 1–7. doi: 10.1109/BIBE52308.2021.9635575.
[45] X. Wu, H. Deng, Q. Wang, L. Lei, Y. Gao, and G. Hao, “Meta-learning shows great potential in plant disease recognition under few available samples,” The Plant Journal, vol. 114, no. 4, pp. 767–782, Apr. 2023. doi: 10.1111/tpj.16176.
[46] J. Yang, X. Guo, Y. Li, F. Marinello, S. Ercisli, and Z. Zhang, “A survey of few-shot learning in smart agriculture: Developments, applications, and challenges,” Plant Methods, vol. 18, no. 1, p. 28, 2022. [Online]. Available: https://plantmethods.biomedcentral.com/articles/10.1186/s13007-022-00866-2.
[47] Y. Zhao, RNN, LSTM, and bidirectional LSTM: Complete guide, 2024. [Online]. Available: https://dagshub.com/blog/rnn-lstm-bidirectional-lstm |
en_US |