Compare commits

..

59 Commits

Author SHA1 Message Date
2c42441f0e
Add files 2021-08-20 12:22:04 +02:00
d05ab73303
Update draft 2021-06-22 17:36:57 +02:00
6811642d58
Update draft 2021-06-19 09:42:55 +02:00
5b3f00e5af
Reduce section title 2021-06-18 21:37:53 +02:00
ac3e22a19c
Add fixes 2021-06-18 21:24:04 +02:00
6a0e234031
Update plots 2021-06-18 17:19:07 +02:00
8f5bdcaa33
Add general conclusions 2021-06-18 16:38:52 +02:00
1e7cf62988
Add new future work 2021-06-18 16:34:05 +02:00
42f0c834ac
Add Threats to validity 2021-06-18 16:28:34 +02:00
0ea84c79c1
Fix result of RQ1 2021-06-18 16:20:46 +02:00
c98ef3c4a7
Remove strict & base 2021-06-18 16:18:20 +02:00
d875f60721
Add introduction to chapter 3 2021-06-18 16:13:26 +02:00
1b3d56be02
Fix acronyms 2021-06-18 16:07:33 +02:00
6a4bf1a67c
Update draft 2021-06-17 10:43:55 +02:00
10d9db36fd
Add references 2021-06-17 10:42:10 +02:00
550058d36e
General fixes 2021-06-17 10:39:04 +02:00
89750f2f55
Add threats to validity 2021-06-16 17:38:46 +02:00
1829ac57b7
Add recap boxes 2021-06-16 14:07:48 +02:00
bd36b6a5b7
Fix stato dell'arte (state of the art) 2021-06-16 11:39:51 +02:00
63b72b6d2e
Add related works 2021-06-15 22:16:51 +02:00
96f66ec340
Fix chapter 3 2021-06-15 14:27:04 +02:00
ff81331f6b
Add extreme cases analysis 2021-06-15 12:55:18 +02:00
a1671640be
Fix sec labels 2021-06-14 19:54:04 +02:00
11b18d668f
Move related works to chapter 2 2021-06-14 19:49:45 +02:00
ef54268718
Refactor chapter 4 2021-06-14 19:47:39 +02:00
5a101736e6
Refactor RQ5 2021-06-14 18:54:07 +02:00
6619d5ec75
Refactor RQ4 2021-06-14 18:46:18 +02:00
8f94bd7731
Refactor RQ3 2021-06-14 18:41:05 +02:00
9eebbe7d7d
Refactor RQ2 2021-06-14 18:27:29 +02:00
074a496d82
Refactor RQ1 2021-06-14 18:16:05 +02:00
5c8637b5f1
Add methodology 2021-06-14 17:43:32 +02:00
db68c4baf1
Refactor chapter 3 2021-06-14 17:36:05 +02:00
bd39451614
Move chapter 2 to chapter 3 2021-06-14 15:43:02 +02:00
7ece08dba9
Fix section title 2021-06-14 15:31:00 +02:00
59048ecfb6
Add goals 2021-06-14 15:27:40 +02:00
149b0caf47
Add roadmap 2021-06-14 15:16:47 +02:00
f9ab69ad78
Update draft 2021-06-14 11:59:15 +02:00
224dba4a53
Merge branch 'chapter_4' into draft 2021-06-14 11:49:58 +02:00
1477cfe516
Change plots to use better color coding 2021-06-14 11:40:27 +02:00
ccdc932390
Fix 2021-06-13 10:42:32 +02:00
d95724831a
Add conclusion 2021-06-11 12:55:52 +02:00
a2ef42c747
Update draft with chapter 1 2021-06-11 10:41:53 +02:00
9a83d91651
Merge branch 'chapter_1' into draft 2021-06-11 10:36:56 +02:00
85f23480ac
Add tests 2021-06-10 18:26:07 +02:00
0dbf8da5a3
Refactor relate works 2021-06-10 16:01:45 +02:00
2bb005a9fa
Add research questions 2021-06-10 13:29:03 +02:00
917b3fb1fc
Add introduction 2021-06-10 12:24:09 +02:00
2d4773138d
Update draft 2021-06-09 10:16:25 +02:00
b9224c6959
General fix 2021-06-09 10:13:33 +02:00
97c5ab4f6e
Fix RQ1 2021-06-08 21:15:10 +02:00
4ed2559b5d
Re-generate all images 2021-06-08 16:33:25 +02:00
57c0394721
Add RQ1 2021-06-08 16:33:05 +02:00
404fabe83b
Refactor some phrases 2021-06-08 10:53:00 +02:00
1fa7839b6b
Add RQ5 2021-06-07 15:36:32 +02:00
c4068dc2bc
Add RQ4 2021-06-07 12:59:56 +02:00
5420e9cec7
Add RQ3 2021-06-07 12:20:15 +02:00
cf91cd0f71
Add RQ2 2021-06-07 11:15:15 +02:00
ca6c7e96f9
Add draft 2021-06-05 18:26:42 +02:00
f23321cf5f
Update bibliography 2021-06-05 18:12:53 +02:00
47 changed files with 11602 additions and 103 deletions


@@ -7,6 +7,7 @@
urldate = {2021-06-03},
abstract = {In statistics, naive Bayes classifiers are a family of simple "probabilistic classifiers" based on applying Bayes' theorem with strong (na\"ive) independence assumptions between the features (see Bayes classifier). They are among the simplest Bayesian network models, but coupled with kernel density estimation, they can achieve higher accuracy levels.Na\"ive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. Maximum-likelihood training can be done by evaluating a closed-form expression, which takes linear time, rather than by expensive iterative approximation as used for many other types of classifiers. In the statistics and computer science literature, naive Bayes models are known under a variety of names, including simple Bayes and independence Bayes. All these names reference the use of Bayes' theorem in the classifier's decision rule, but na\"ive Bayes is not (necessarily) a Bayesian method.},
annotation = {Page Version ID: 1024247473},
file = {/home/norangebit/Documenti/10-personal/12-organization/06-zotero/storage/5T4T73X4/index.html},
langid = {english}
}
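The naive Bayes entry above notes that training reduces to a closed-form maximum-likelihood estimate that is linear in the number of features. As a minimal illustrative sketch of that idea (not code from this repository; all names are hypothetical), a Gaussian naive Bayes classifier can be fit and applied like this:

```python
import math

def fit_gaussian_nb(X, y):
    """Closed-form MLE: per-class prior, per-feature mean and variance."""
    classes = sorted(set(y))
    params = {}
    for c in classes:
        rows = [x for x, label in zip(X, y) if label == c]
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        # Small epsilon guards against zero variance on constant features.
        vars_ = [sum((v - m) ** 2 for v in col) / n + 1e-9
                 for col, m in zip(zip(*rows), means)]
        params[c] = (n / len(y), means, vars_)
    return params

def predict(params, x):
    """Pick the class maximizing log prior + sum of per-feature log likelihoods
    (the independence assumption turns the joint likelihood into a sum)."""
    def log_score(c):
        prior, means, vars_ = params[c]
        s = math.log(prior)
        for v, m, var in zip(x, means, vars_):
            s += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
        return s
    return max(params, key=log_score)
```

The single pass over the data to compute means and variances is what makes training linear time, in contrast to the iterative optimization most other classifiers require.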
@@ -22,7 +23,7 @@
doi = {10.1109/ESEM.2019.8870187},
abstract = {Method: We conduct an empirical study of ML-related developer posts on Stack Overflow. We perform in-depth quantitative and qualitative analyses focusing on a series of research questions related to the challenges of developing ML applications and the directions to address them. Results: Our findings include: (1) ML questions suffer from a much higher percentage of unanswered questions on Stack Overflow than other domains; (2) there is a lack of ML experts in the Stack Overflow QA community; (3) the data preprocessing and model deployment phases are where most of the challenges lay; and (4) addressing most of these challenges require more ML implementation knowledge than ML conceptual knowledge. Conclusions: Our findings suggest that most challenges are under the data preparation and model deployment phases, i.e., early and late stages. Also, the implementation aspect of ML shows much higher difficulty level among developers than the conceptual aspect.},
eventtitle = {2019 {{ACM}}/{{IEEE International Symposium}} on {{Empirical Software Engineering}} and {{Measurement}} ({{ESEM}})},
file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/IEEE/Alshangiti_2019_Why is Developing Machine Learning Applications Challenging.pdf},
isbn = {978-1-72812-968-6},
langid = {english}
}
@@ -39,7 +40,7 @@
doi = {10.1109/ICSE-SEIP.2019.00042},
abstract = {Recent advances in machine learning have stimulated widespread interest within the Information Technology sector on integrating AI capabilities into software and services. This goal has forced organizations to evolve their development processes. We report on a study that we conducted on observing software teams at Microsoft as they develop AI-based applications. We consider a nine-stage workflow process informed by prior experiences developing AI applications (e.g., search and NLP) and data science tools (e.g. application diagnostics and bug reporting). We found that various Microsoft teams have united this workflow into preexisting, well-evolved, Agile-like software engineering processes, providing insights about several essential engineering challenges that organizations may face in creating large-scale AI solutions for the marketplace. We collected some best practices from Microsoft teams to address these challenges. In addition, we have identified three aspects of the AI domain that make it fundamentally different from prior software application domains: 1) discovering, managing, and versioning the data needed for machine learning applications is much more complex and difficult than other types of software engineering, 2) model customization and model reuse require very different skills than are typically found in software teams, and 3) AI components are more difficult to handle as distinct modules than traditional software components \textemdash{} models may be ``entangled'' in complex ways and experience non-monotonic error behavior. We believe that the lessons learned by Microsoft teams will be valuable to other organizations.},
eventtitle = {2019 {{IEEE}}/{{ACM}} 41st {{International Conference}} on {{Software Engineering}}: {{Software Engineering}} in {{Practice}} ({{ICSE}}-{{SEIP}})},
file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/IEEE/Amershi_2019_Software Engineering for Machine Learning.pdf},
isbn = {978-1-72811-760-7},
langid = {english}
}
@@ -56,7 +57,7 @@
doi = {10.1109/MSR.2019.00052},
abstract = {Machine learning, a branch of Artificial Intelligence, is now popular in software engineering community and is successfully used for problems like bug prediction, and software development effort estimation. Developers' understanding of machine learning, however, is not clear, and we require investigation to understand what educators should focus on, and how different online programming discussion communities can be more helpful. We conduct a study on Stack Overflow (SO) machine learning related posts using the SOTorrent dataset. We found that some machine learning topics are significantly more discussed than others, and others need more attention. We also found that topic generation with Latent Dirichlet Allocation (LDA) can suggest more appropriate tags that can make a machine learning post more visible and thus can help in receiving immediate feedback from sites like SO.},
eventtitle = {2019 {{IEEE}}/{{ACM}} 16th {{International Conference}} on {{Mining Software Repositories}} ({{MSR}})},
file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/IEEE/Bangash_2019_What do Developers Know About Machine Learning.pdf},
isbn = {978-1-72813-412-3},
langid = {english}
}
@@ -72,7 +73,7 @@
archiveprefix = {arXiv},
eprint = {1606.04984},
eprinttype = {arxiv},
file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/undefined/Borges_2016_Understanding the Factors that Impact the Popularity of GitHub Repositories.pdf},
keywords = {Computer Science - Social and Information Networks,Computer Science - Software Engineering},
langid = {english}
}
@@ -84,7 +85,7 @@
url = {https://medium.com/analytics-vidhya/text-classification-using-word-embeddings-and-deep-learning-in-python-classifying-tweets-from-6fe644fcfc81},
urldate = {2021-05-21},
abstract = {The purpose of this article is to help a reader understand how to leverage word embeddings and deep learning when creating a text\ldots},
file = {/home/norangebit/Documenti/10-personal/12-organization/06-zotero/storage/BDS956UP/text-classification-using-word-embeddings-and-deep-learning-in-python-classifying-tweets-from-6.html},
langid = {english},
organization = {{Medium}}
}
@@ -100,11 +101,22 @@
doi = {10.1145/3106237.3106285},
abstract = {Bug reports document unexpected software behaviors experienced by users. To be effective, they should allow bug triagers to easily understand and reproduce the potential reported bugs, by clearly describing the Observed Behavior (OB), the Steps to Reproduce (S2R), and the Expected Behavior (EB). Unfortunately, while considered extremely useful, reporters often miss such pieces of information in bug reports and, to date, there is no effective way to automatically check and enforce their presence. We manually analyzed nearly 3k bug reports to understand to what extent OB, EB, and S2R are reported in bug reports and what discourse patterns reporters use to describe such information. We found that (i) while most reports contain OB (i.e., 93.5\%), only 35.2\% and 51.4\% explicitly describe EB and S2R, respectively; and (ii) reporters recurrently use 154 discourse patterns to describe such content. Based on these findings, we designed and evaluated an automated approach to detect the absence (or presence) of EB and S2R in bug descriptions. With its best setting, our approach is able to detect missing EB (S2R) with 85.9\% (69.2\%) average precision and 93.2\% (83\%) average recall. Our approach intends to improve bug descriptions quality by alerting reporters about missing EB and S2R at reporting time.},
eventtitle = {{{ESEC}}/{{FSE}}'17: {{Joint Meeting}} of the {{European Software Engineering Conference}} and the {{ACM SIGSOFT Symposium}} on the {{Foundations}} of {{Software Engineering}}},
file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/ACM/Chaparro_2017_Detecting missing information in bug descriptions.pdf},
isbn = {978-1-4503-5105-8},
langid = {english}
}
@inproceedings{chen2015deepdrivinglearningaffordance,
title = {{{DeepDriving}}: {{Learning Affordance}} for {{Direct Perception}} in {{Autonomous Driving}}},
shorttitle = {{{DeepDriving}}},
author = {Chen, Chenyi and Seff, Ari and Kornhauser, Alain and Xiao, Jianxiong},
date = {2015},
pages = {2722--2730},
url = {https://openaccess.thecvf.com/content_iccv_2015/html/Chen_DeepDriving_Learning_Affordance_ICCV_2015_paper.html},
urldate = {2021-06-09},
eventtitle = {Proceedings of the {{IEEE International Conference}} on {{Computer Vision}}}
}
@article{deboom2016representationlearningvery,
title = {Representation Learning for Very Short Texts Using Weighted Word Embedding Aggregation},
author = {De Boom, Cedric and Van Canneyt, Steven and Demeester, Thomas and Dhoedt, Bart},
@@ -119,7 +131,7 @@
archiveprefix = {arXiv},
eprint = {1607.00570},
eprinttype = {arxiv},
file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/undefined/De Boom_2016_Representation learning for very short texts using weighted word embedding.pdf},
keywords = {Computer Science - Computation and Language,Computer Science - Information Retrieval},
langid = {english}
}
@@ -151,7 +163,7 @@
issn = {1382-3256, 1573-7616},
doi = {10.1007/s10664-020-09916-6},
abstract = {Many AI researchers are publishing code, data and other resources that accompany their papers in GitHub repositories. In this paper, we refer to these repositories as academic AI repositories. Our preliminary study shows that highly cited papers are more likely to have popular academic AI repositories (and vice versa). Hence, in this study, we perform an empirical study on academic AI repositories to highlight good software engineering practices of popular academic AI repositories for AI researchers. We collect 1,149 academic AI repositories, in which we label the top 20\% repositories that have the most number of stars as popular, and we label the bottom 70\% repositories as unpopular. The remaining 10\% repositories are set as a gap between popular and unpopular academic AI repositories. We propose 21 features to characterize the software engineering practices of academic AI repositories. Our experimental results show that popular and unpopular academic AI repositories are statistically significantly different in 11 of the studied features\textemdash indicating that the two groups of repositories have significantly different software engineering practices. Furthermore, we find that the number of links to other GitHub repositories in the README file, the number of images in the README file and the inclusion of a license are the most important features for differentiating the two groups of academic AI repositories. Our dataset and code are made publicly available to share with the community.},
file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/undefined/Fan_2021_What makes a popular academic AI repository.pdf},
langid = {english},
number = {1}
}
@@ -163,7 +175,7 @@
date = {2019-09-05},
publisher = {{"O'Reilly Media, Inc."}},
abstract = {Through a series of recent breakthroughs, deep learning has boosted the entire field of machine learning. Now, even programmers who know close to nothing about this technology can use simple, efficient tools to implement programs capable of learning from data. This practical book shows you how.By using concrete examples, minimal theory, and two production-ready Python frameworks\textemdash Scikit-Learn and TensorFlow\textemdash author Aur\'elien G\'eron helps you gain an intuitive understanding of the concepts and tools for building intelligent systems. You'll learn a range of techniques, starting with simple linear regression and progressing to deep neural networks. With exercises in each chapter to help you apply what you've learned, all you need is programming experience to get started.Explore the machine learning landscape, particularly neural netsUse Scikit-Learn to track an example machine-learning project end-to-endExplore several training models, including support vector machines, decision trees, random forests, and ensemble methodsUse the TensorFlow library to build and train neural netsDive into neural net architectures, including convolutional nets, recurrent nets, and deep reinforcement learningLearn techniques for training and scaling deep neural nets},
file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/O'Reilly Media, Inc./Geron_2019_Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow.pdf},
isbn = {978-1-4920-3259-5},
keywords = {Computers / Computer Vision & Pattern Recognition,Computers / Data Processing,Computers / Intelligence (AI) & Semantics,Computers / Natural Language Processing,Computers / Neural Networks,Computers / Programming Languages / Python},
langid = {english},
@@ -182,7 +194,7 @@
doi = {10.1145/3379597.3387473},
abstract = {In the last few years, artificial intelligence (AI) and machine learning (ML) have become ubiquitous terms. These powerful techniques have escaped obscurity in academic communities with the recent onslaught of AI \& ML tools, frameworks, and libraries that make these techniques accessible to a wider audience of developers. As a result, applying AI \& ML to solve existing and emergent problems is an increasingly popular practice. However, little is known about this domain from the software engineering perspective. Many AI \& ML tools and applications are open source, hosted on platforms such as GitHub that provide rich tools for large-scale distributed software development. Despite widespread use and popularity, these repositories have never been examined as a community to identify unique properties, development patterns, and trends.},
eventtitle = {{{MSR}} '20: 17th {{International Conference}} on {{Mining Software Repositories}}},
file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/ACM/Gonzalez_2020_The State of the ML-universe.pdf},
isbn = {978-1-4503-7517-7},
langid = {english}
}
doi = {10.1109/ICSME46990.2020.00058},
abstract = {The role of machine learning frameworks in software applications has exploded in recent years. Similar to non-machine learning frameworks, those frameworks need to evolve to incorporate new features, optimizations, etc., yet their evolution is impacted by the interdisciplinary development teams needed to develop them: scientists and developers. One concrete way in which this shows is through the use of multiple programming languages in their code base, enabling the scientists to write optimized low-level code while developers can integrate the latter into a robust framework. Since multi-language code bases have been shown to impact the development process, this paper empirically compares ten large open-source multi-language machine learning frameworks and ten large open-source multi-language traditional systems in terms of the volume of pull requests, their acceptance ratio, i.e., the percentage of accepted pull requests among all the received pull requests, review process duration, i.e., the period taken to accept or reject a pull request, and bug-proneness. We find that multi-language pull request contributions present a challenge for both machine learning and traditional systems. Our main findings show that in both machine learning and traditional systems, multi-language pull requests are likely to be less accepted than mono-language pull requests; it also takes longer for both multi- and mono-language pull requests to be rejected than accepted. Machine learning frameworks take longer to accept/reject a multi-language pull request than traditional systems. Finally, we find that mono-language pull requests in machine learning frameworks are more bug-prone than traditional systems.},
eventtitle = {2020 {{IEEE International Conference}} on {{Software Maintenance}} and {{Evolution}} ({{ICSME}})},
file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/IEEE/Grichi_2020_On the Impact of Multi-language Development in Machine Learning Frameworks.pdf},
isbn = {978-1-72815-619-4},
langid = {english}
}
doi = {10.1109/ICSME46990.2020.00116},
abstract = {Deep Learning techniques have been prevalent in various domains, and more and more open source projects in GitHub rely on deep learning libraries to implement their algorithms. To that end, they should always keep pace with the latest versions of deep learning libraries to make the best use of deep learning libraries. Aptly managing the versions of deep learning libraries can help projects avoid crashes or security issues caused by deep learning libraries. Unfortunately, very few studies have been done on the dependency networks of deep learning libraries. In this paper, we take the first step to perform an exploratory study on the dependency networks of deep learning libraries, namely, Tensorflow, PyTorch, and Theano. We study the project purposes, application domains, dependency degrees, update behaviors and reasons as well as version distributions of deep learning projects that depend on Tensorflow, PyTorch, and Theano. Our study unveils some commonalities in various aspects (e.g., purposes, application domains, dependency degrees) of deep learning libraries and reveals some discrepancies as for the update behaviors, update reasons, and the version distributions. Our findings highlight some directions for researchers and also provide suggestions for deep learning developers and users.},
eventtitle = {2020 {{IEEE International Conference}} on {{Software Maintenance}} and {{Evolution}} ({{ICSME}})},
file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/IEEE/Han_2020_An Empirical Study of the Dependency Networks of Deep Learning Libraries2.pdf},
isbn = {978-1-72815-619-4},
langid = {english}
}
pages = {2694--2747},
issn = {1382-3256, 1573-7616},
doi = {10.1007/s10664-020-09819-6},
file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/undefined/Han_2020_What do Programmers Discuss about Deep Learning Frameworks.pdf},
langid = {english},
number = {4}
}
@online{hannun2014deepspeechscaling,
title = {Deep {{Speech}}: {{Scaling}} up End-to-End Speech Recognition},
shorttitle = {Deep {{Speech}}},
author = {Hannun, Awni and Case, Carl and Casper, Jared and Catanzaro, Bryan and Diamos, Greg and Elsen, Erich and Prenger, Ryan and Satheesh, Sanjeev and Sengupta, Shubho and Coates, Adam and Ng, Andrew Y.},
date = {2014-12-19},
url = {http://arxiv.org/abs/1412.5567},
urldate = {2021-06-09},
abstract = {We present a state-of-the-art speech recognition system developed using end-to-end deep learning. Our architecture is significantly simpler than traditional speech systems, which rely on laboriously engineered processing pipelines; these traditional systems also tend to perform poorly when used in noisy environments. In contrast, our system does not need hand-designed components to model background noise, reverberation, or speaker variation, but instead directly learns a function that is robust to such effects. We do not need a phoneme dictionary, nor even the concept of a "phoneme." Key to our approach is a well-optimized RNN training system that uses multiple GPUs, as well as a set of novel data synthesis techniques that allow us to efficiently obtain a large amount of varied data for training. Our system, called Deep Speech, outperforms previously published results on the widely studied Switchboard Hub5'00, achieving 16.0\% error on the full test set. Deep Speech also handles challenging noisy environments better than widely used, state-of-the-art commercial speech systems.},
archiveprefix = {arXiv},
eprint = {1412.5567},
eprinttype = {arxiv},
keywords = {Computer Science - Computation and Language,Computer Science - Machine Learning,Computer Science - Neural and Evolutionary Computing},
primaryclass = {cs}
}
@book{harrington2012machinelearningaction,
title = {Machine {{Learning}} in {{Action}}},
author = {Harrington, Peter},
date = {2012-04-19},
publisher = {{Manning Publications}},
abstract = {Summary: Machine Learning in Action is a unique book that blends the foundational theories of machine learning with the practical realities of building tools for everyday data analysis. You'll use the flexible Python programming language to build programs that implement algorithms for data classification, forecasting, recommendations, and higher-level features like summarization and simplification. About the Book: A machine is said to learn when its performance improves with experience. Learning requires algorithms and programs that capture data and ferret out the interesting or useful patterns. Once the specialized domain of analysts and mathematicians, machine learning is becoming a skill needed by many. Machine Learning in Action is a clearly written tutorial for developers. It avoids academic language and takes you straight to the techniques you'll use in your day-to-day work. Many (Python) examples present the core algorithms of statistical data processing, data analysis, and data visualization in code you can reuse. You'll understand the concepts and how they fit in with tactical tasks like classification, forecasting, recommendations, and higher-level features like summarization and simplification. Readers need no prior experience with machine learning or statistical processing. Familiarity with Python is helpful. Purchase of the print book comes with an offer of a free PDF, ePub, and Kindle eBook from Manning. Also available is all code from the book. What's Inside: A no-nonsense introduction; examples showing common ML tasks; everyday data analysis; implementing classic algorithms like Apriori and AdaBoost. Table of Contents: PART 1 CLASSIFICATION: Machine learning basics; Classifying with k-Nearest Neighbors; Splitting datasets one feature at a time: decision trees; Classifying with probability theory: na\"ive Bayes; Logistic regression; Support vector machines; Improving classification with the AdaBoost meta-algorithm. PART 2 FORECASTING NUMERIC VALUES WITH REGRESSION: Predicting numeric values: regression; Tree-based regression. PART 3 UNSUPERVISED LEARNING: Grouping unlabeled items using k-means clustering; Association analysis with the Apriori algorithm; Efficiently finding frequent itemsets with FP-growth. PART 4 ADDITIONAL TOOLS: Using principal component analysis to simplify data; Simplifying data with the singular value decomposition; Big data and MapReduce},
file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/Manning Publications/Harrington_2012_Machine Learning in Action.pdf},
isbn = {978-1-61729-018-3},
keywords = {Computers / Computer Science,Computers / Data Processing,Computers / Databases / Data Mining,Computers / Intelligence (AI) \& Semantics,Computers / Mathematical \& Statistical Software,Computers / Programming / Algorithms,Computers / Programming / Open Source,Computers / Programming Languages / Python},
langid = {english},
doi = {10.1109/ICSE.2009.5070510},
abstract = {Predicting the incidence of faults in code has been commonly associated with measuring complexity. In this paper, we propose complexity metrics that are based on the code change process instead of on the code. We conjecture that a complex code change process negatively affects its product, i.e., the software system. We validate our hypothesis empirically through a case study using data derived from the change history for six large open source projects. Our case study shows that our change complexity metrics are better predictors of fault potential in comparison to other well-known historical predictors of faults, i.e., prior modifications and prior faults.},
eventtitle = {2009 {{IEEE}} 31st {{International Conference}} on {{Software Engineering}}},
file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/IEEE/Hassan_2009_Predicting faults using the complexity of code changes.pdf},
isbn = {978-1-4244-3453-4},
langid = {english}
}
@inproceedings{he2016deepresiduallearning,
title = {Deep {{Residual Learning}} for {{Image Recognition}}},
author = {He, Kaiming and Zhang, Xiangyu and Ren, Shaoqing and Sun, Jian},
date = {2016},
pages = {770--778},
url = {https://openaccess.thecvf.com/content_cvpr_2016/html/He_Deep_Residual_Learning_CVPR_2016_paper.html},
urldate = {2021-06-09},
eventtitle = {Proceedings of the {{IEEE Conference}} on {{Computer Vision}} and {{Pattern Recognition}}}
}
@article{hirschberg2015advancesnaturallanguage,
title = {Advances in Natural Language Processing},
author = {Hirschberg, Julia and Manning, Christopher D.},
date = {2015-07-17},
journaltitle = {Science},
volume = {349},
pages = {261--266},
publisher = {{American Association for the Advancement of Science}},
issn = {0036-8075, 1095-9203},
doi = {10.1126/science.aaa8685},
abstract = {Natural language processing employs computational techniques for the purpose of learning, understanding, and producing human language content. Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and developing basic technologies such as machine translation, speech recognition, and speech synthesis. Today's researchers refine and make use of such tools in real-world applications, creating spoken dialogue systems and speech-to-speech translation engines, mining social media for information about health or finance, and identifying sentiment and emotion toward products and services. We describe successes and challenges in this rapidly advancing area.},
eprint = {26185244},
eprinttype = {pmid},
langid = {english},
number = {6245}
}
@online{humbatova-2019-taxonomyrealfaults,
title = {Taxonomy of {{Real Faults}} in {{Deep Learning Systems}}},
author = {Humbatova, Nargiz and Jahangirova, Gunel and Bavota, Gabriele and Riccio, Vincenzo and Stocco, Andrea and Tonella, Paolo},
archiveprefix = {arXiv},
eprint = {1910.11015},
eprinttype = {arxiv},
file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/undefined/Humbatova_2019_Taxonomy of Real Faults in Deep Learning Systems.pdf},
keywords = {Computer Science - Artificial Intelligence,Computer Science - Machine Learning,Computer Science - Software Engineering},
langid = {english},
primaryclass = {cs}
}
@online{huval2015empiricalevaluationdeep,
title = {An {{Empirical Evaluation}} of {{Deep Learning}} on {{Highway Driving}}},
author = {Huval, Brody and Wang, Tao and Tandon, Sameep and Kiske, Jeff and Song, Will and Pazhayampallil, Joel and Andriluka, Mykhaylo and Rajpurkar, Pranav and Migimatsu, Toki and Cheng-Yue, Royce and Mujica, Fernando and Coates, Adam and Ng, Andrew Y.},
date = {2015-04-16},
url = {http://arxiv.org/abs/1504.01716},
urldate = {2021-06-09},
abstract = {Numerous groups have applied a variety of deep learning techniques to computer vision problems in highway perception scenarios. In this paper, we presented a number of empirical evaluations of recent deep learning advances. Computer vision, combined with deep learning, has the potential to bring about a relatively inexpensive, robust solution to autonomous driving. To prepare deep learning for industry uptake and practical applications, neural networks will require large data sets that represent all possible driving environments and scenarios. We collect a large data set of highway data and apply deep learning and computer vision algorithms to problems such as car and lane detection. We show how existing convolutional neural networks (CNNs) can be used to perform lane and vehicle detection while running at frame rates required for a real-time system. Our results lend credence to the hypothesis that deep learning holds promise for autonomous driving.},
archiveprefix = {arXiv},
eprint = {1504.01716},
eprinttype = {arxiv},
keywords = {Computer Science - Computer Vision and Pattern Recognition,Computer Science - Robotics},
primaryclass = {cs}
}
@article{liu2020deeplearningsystem,
title = {A Deep Learning System for Differential Diagnosis of Skin Diseases},
author = {Liu, Yuan and Jain, Ayush and Eng, Clara and Way, David H. and Lee, Kang and Bui, Peggy and Kanada, Kimberly and de Oliveira Marinho, Guilherme and Gallegos, Jessica and Gabriele, Sara and Gupta, Vishakha and Singh, Nalini and Natarajan, Vivek and Hofmann-Wellenhof, Rainer and Corrado, Greg S. and Peng, Lily H. and Webster, Dale R. and Ai, Dennis and Huang, Susan J. and Liu, Yun and Dunn, R. Carter and Coz, David},
date = {2020-06},
journaltitle = {Nature Medicine},
shortjournal = {Nat Med},
volume = {26},
pages = {900--908},
publisher = {{Nature Publishing Group}},
issn = {1546-170X},
doi = {10.1038/s41591-020-0842-3},
abstract = {Skin conditions affect 1.9 billion people. Because of a shortage of dermatologists, most cases are seen instead by general practitioners with lower diagnostic accuracy. We present a deep learning system (DLS) to provide a differential diagnosis of skin conditions using 16,114 de-identified cases (photographs and clinical data) from a teledermatology practice serving 17 sites. The DLS distinguishes between 26 common skin conditions, representing 80\% of cases seen in primary care, while also providing a secondary prediction covering 419 skin conditions. On 963 validation cases, where a rotating panel of three board-certified dermatologists defined the reference standard, the DLS was non-inferior to six other dermatologists and superior to six primary care physicians (PCPs) and six nurse practitioners (NPs) (top-1 accuracy: 0.66 DLS, 0.63 dermatologists, 0.44 PCPs and 0.40 NPs). These results highlight the potential of the DLS to assist general practitioners in diagnosing skin conditions.},
issue = {6},
langid = {english},
number = {6},
options = {useprefix=true}
}
@article{liu2021exploratorystudyintroduction,
title = {An {{Exploratory Study}} on the {{Introduction}} and {{Removal}} of {{Different Types}} of {{Technical Debt}}},
author = {Liu, Jiakun and Huang, Qiao and Xia, Xin and Shihab, Emad and Lo, David and Li, Shanping},
archiveprefix = {arXiv},
eprint = {2101.03730},
eprinttype = {arxiv},
file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/undefined/Liu_2021_An Exploratory Study on the Introduction and Removal of Different Types of.pdf},
keywords = {Computer Science - Software Engineering},
langid = {english},
number = {2}
}
@online{multicolumndeepneural,
title = {Multi-Column Deep Neural Networks for Image Classification},
url = {https://ieeexplore.ieee.org/abstract/document/6248110/},
urldate = {2021-06-09},
abstract = {Traditional methods of computer vision and machine learning cannot match human performance on tasks such as the recognition of handwritten digits or traffic signs. Our biologically plausible, wide and deep artificial neural network architectures can. Small (often minimal) receptive fields of convolutional winner-take-all neurons yield large network depth, resulting in roughly as many sparsely connected neural layers as found in mammals between retina and visual cortex. Only winner neurons are trained. Several deep neural columns become experts on inputs preprocessed in different ways; their predictions are averaged. Graphics cards allow for fast training. On the very competitive MNIST handwriting benchmark, our method is the first to achieve near-human performance. On a traffic sign recognition benchmark it outperforms humans by a factor of two. We also improve the state-of-the-art on a plethora of common image classification benchmarks.},
file = {/home/norangebit/Documenti/10-personal/12-organization/06-zotero/storage/2R4ZFR6C/6248110.html},
langid = {american}
}
@online{naturallanguagetoolkit,
title = {Natural {{Language Toolkit}} \textemdash{} {{NLTK}} 3.5 Documentation},
url = {https://www.nltk.org/},
urldate = {2021-03-30},
file = {/home/norangebit/Documenti/10-personal/12-organization/06-zotero/storage/VKI2452L/www.nltk.org.html}
}
@online{navlanilatentsemanticindexing,
url = {https://machinelearninggeek.com/latent-semantic-indexing-using-scikit-learn/},
urldate = {2021-05-17},
abstract = {In this tutorial, we will focus on Latent Semantic Indexing or Latent Semantic Analysis and perform topic modeling using Scikit-learn.},
file = {/home/norangebit/Documenti/10-personal/12-organization/06-zotero/storage/MB9PJVXP/latent-semantic-indexing-using-scikit-learn.html},
langid = {american}
}
@online{oktay2018attentionunetlearning,
title = {Attention {{U}}-{{Net}}: {{Learning Where}} to {{Look}} for the {{Pancreas}}},
shorttitle = {Attention {{U}}-{{Net}}},
author = {Oktay, Ozan and Schlemper, Jo and Folgoc, Loic Le and Lee, Matthew and Heinrich, Mattias and Misawa, Kazunari and Mori, Kensaku and McDonagh, Steven and Hammerla, Nils Y. and Kainz, Bernhard and Glocker, Ben and Rueckert, Daniel},
date = {2018-05-20},
url = {http://arxiv.org/abs/1804.03999},
urldate = {2021-06-09},
abstract = {We propose a novel attention gate (AG) model for medical imaging that automatically learns to focus on target structures of varying shapes and sizes. Models trained with AGs implicitly learn to suppress irrelevant regions in an input image while highlighting salient features useful for a specific task. This enables us to eliminate the necessity of using explicit external tissue/organ localisation modules of cascaded convolutional neural networks (CNNs). AGs can be easily integrated into standard CNN architectures such as the U-Net model with minimal computational overhead while increasing the model sensitivity and prediction accuracy. The proposed Attention U-Net architecture is evaluated on two large CT abdominal datasets for multi-class image segmentation. Experimental results show that AGs consistently improve the prediction performance of U-Net across different datasets and training sizes while preserving computational efficiency. The code for the proposed architecture is publicly available.},
archiveprefix = {arXiv},
eprint = {1804.03999},
eprinttype = {arxiv},
keywords = {Computer Science - Computer Vision and Pattern Recognition},
primaryclass = {cs}
}
@article{rochkind1975sourcecodecontrol,
title = {The Source Code Control System},
author = {Rochkind, Marc J.},
date = {1975-12},
journaltitle = {IEEE Transactions on Software Engineering},
volume = {SE-1},
pages = {364--370},
issn = {1939-3520},
doi = {10.1109/TSE.1975.6312866},
abstract = {The Source Code Control System (SCCS) is a software tool designed to help programming projects control changes to source code. It provides facilities for storing, updating, and retrieving all versions of modules, for controlling updating privileges for identifying load modules by version number, and for recording who made each software change, when and where it was made, and why. This paper discusses the SCCS approach to source code control, shows how it is used and explains how it is implemented.},
eventtitle = {{{IEEE Transactions}} on {{Software Engineering}}},
file = {/home/norangebit/Documenti/10-personal/12-organization/06-zotero/storage/8KN2BXLY/6312866.html},
keywords = {Configuration management,Control systems,Documentation,Laboratories,Libraries,Process control,program maintenance,Software,software control,software project management},
number = {4}
}
@article{scalabrino2019listeningcrowdrelease,
title = {Listening to the {{Crowd}} for the {{Release Planning}} of {{Mobile Apps}}},
author = {Scalabrino, Simone and Russo, Barbara and Oliveto, Rocco},
@ -324,9 +450,40 @@
volume = {45}, volume = {45},
pages = {19}, pages = {19},
abstract = {The market for mobile apps is getting bigger and bigger, and it is expected to be worth over 100 Billion dollars in 2020. To have a chance to succeed in such a competitive environment, developers need to build and maintain high-quality apps, continuously astonishing their users with the coolest new features. Mobile app marketplaces allow users to release reviews. Despite reviews are aimed at recommending apps among users, they also contain precious information for developers, reporting bugs and suggesting new features. To exploit such a source of information, developers are supposed to manually read user reviews, something not doable when hundreds of them are collected per day. To help developers dealing with such a task, we developed CLAP (Crowd Listener for releAse Planning), a web application able to (i) categorize user reviews based on the information they carry out, (ii) cluster together related reviews, and (iii) prioritize the clusters of reviews to be implemented when planning the subsequent app release. We evaluated all the steps behind CLAP, showing its high accuracy in categorizing and clustering reviews and the meaningfulness of the recommended prioritizations. Also, given the availability of CLAP as a working tool, we assessed its applicability in industrial environments.}, abstract = {The market for mobile apps is getting bigger and bigger, and it is expected to be worth over 100 Billion dollars in 2020. To have a chance to succeed in such a competitive environment, developers need to build and maintain high-quality apps, continuously astonishing their users with the coolest new features. Mobile app marketplaces allow users to release reviews. Despite reviews are aimed at recommending apps among users, they also contain precious information for developers, reporting bugs and suggesting new features. 
To exploit such a source of information, developers are supposed to manually read user reviews, something not doable when hundreds of them are collected per day. To help developers dealing with such a task, we developed CLAP (Crowd Listener for releAse Planning), a web application able to (i) categorize user reviews based on the information they carry out, (ii) cluster together related reviews, and (iii) prioritize the clusters of reviews to be implemented when planning the subsequent app release. We evaluated all the steps behind CLAP, showing its high accuracy in categorizing and clustering reviews and the meaningfulness of the recommended prioritizations. Also, given the availability of CLAP as a working tool, we assessed its applicability in industrial environments.},
file = {/home/norangebit/Documenti/10-personal/12-organizzation/07-zotero-attachments/undefined/Scalabrino_2019_Listening to the Crowd for the Release Planning of Mobile Apps.pdf}, file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/undefined/Scalabrino_2019_Listening to the Crowd for the Release Planning of Mobile Apps.pdf},
langid = {english}, langid = {english},
number = {1} number = {1}
} }
@article{shannon1948mathematicaltheorycommunication,
title = {A Mathematical Theory of Communication},
author = {Shannon, C. E.},
date = {1948-07},
journaltitle = {The Bell System Technical Journal},
volume = {27},
pages = {379--423},
issn = {0005-8580},
doi = {10.1002/j.1538-7305.1948.tb01338.x},
abstract = {The recent development of various methods of modulation such as PCM and PPM which exchange bandwidth for signal-to-noise ratio has intensified the interest in a general theory of communication. A basis for such a theory is contained in the important papers of Nyquist1 and Hartley2 on this subject. In the present paper we will extend the theory to include a number of new factors, in particular the effect of noise in the channel, and the savings possible due to the statistical structure of the original message and due to the nature of the final destination of the information.},
eventtitle = {The {{Bell System Technical Journal}}},
file = {/home/norangebit/Documenti/10-personal/12-organization/06-zotero/storage/ZLGCL7V5/6773024.html},
number = {3}
}
@inproceedings{zhang2018empiricalstudytensorflow,
title = {An Empirical Study on {{TensorFlow}} Program Bugs},
booktitle = {Proceedings of the 27th {{ACM SIGSOFT International Symposium}} on {{Software Testing}} and {{Analysis}}},
author = {Zhang, Yuhao and Chen, Yifan and Cheung, Shing-Chi and Xiong, Yingfei and Zhang, Lu},
date = {2018-07-12},
pages = {129--140},
publisher = {{ACM}},
location = {{Amsterdam Netherlands}},
doi = {10.1145/3213846.3213866},
abstract = {Deep learning applications become increasingly popular in important domains such as self-driving systems and facial identity systems. Defective deep learning applications may lead to catastrophic consequences. Although recent research efforts were made on testing and debugging deep learning applications, the characteristics of deep learning defects have never been studied. To fill this gap, we studied deep learning applications built on top of TensorFlow and collected program bugs related to TensorFlow from StackOverflow QA pages and Github projects. We extracted information from QA pages, commit messages, pull request messages, and issue discussions to examine the root causes and symptoms of these bugs. We also studied the strategies deployed by TensorFlow users for bug detection and localization. These findings help researchers and TensorFlow users to gain a better understanding of coding defects in TensorFlow programs and point out a new direction for future research.},
eventtitle = {{{ISSTA}} '18: {{International Symposium}} on {{Software Testing}} and {{Analysis}}},
file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/ACM/Zhang_2018_An empirical study on TensorFlow program bugs.pdf},
isbn = {978-1-4503-5699-2},
langid = {english}
}

draft.pdf (new binary file, not shown)

thesis: src/* out ieee.csl
	--csl ieee.csl \
	--bibliography bibliography.bib

draft: src/* ieee.csl
	pandoc src/*.md src/metadata.yaml \
	-o draft.pdf \
	--template latekiss \
	--resource-path src \
	--top-level-division chapter \
	-F pandoc-crossref \
	--citeproc \
	--csl ieee.csl \
	--bibliography bibliography.bib

out:
	mkdir out

src/chapter_1.md (new file)
# Introduction

Software development has gone through several shifts in its dominant applications.
In the eighties the dominant paradigm was the personal computer; then came the Internet, followed by the birth of the Web at \ac{CERN}.
In 2007, the announcement of the first iPhone opened the era of *mobile computing*, followed by that of *cloud computing*.
In recent years industry has not stood by, bringing out more and more products that make use of \ac{AI} and \ac{ML}.
Tools and software based on these technologies are now part of our everyday life and pervade the most diverse fields, certainly including image recognition, disease diagnosis, \ac{NLP}, autonomous driving, and speech recognition.
The growing production of \ac{ML}-based software has also given a strong impulse to research.
Attention has not been focused solely on the study of new models and architectures, but also on the development process of these products, so as to evaluate the various problems from an engineering point of view.
The literature does not lack studies highlighting the differences between \ac{ML} projects and traditional ones [@gonzalez2020statemluniverse10], nor comparisons of projects with respect to the dependencies and libraries they use [@han2020empiricalstudydependency].
Many studies focus instead on the problems encountered when developing \ac{ML} applications.
In some cases the analysis targets a specific library [@zhang2018empiricalstudytensorflow]; in other cases the focus is on \ac{SO} discussions [@hassan2009predictingfaultsusing; @shannon1948mathematicaltheorycommunication].
In still other cases attention is devoted to specific problems such as \ac{SATD} [@liu2021exploratorystudyintroduction].
This work, too, concentrates on the defects found in \ac{ML} applications.
Here, however, the search for differences concerns *issue fixing* interventions related to \ac{ML} as opposed to generic fixing interventions.
## Goals of the thesis {#sec:goals}

This study aims to verify whether, within \ac{ML} projects, there are differences in how \ac{ML}-related *issues* and generic ones are handled.
In particular, we want to investigate how the resolution of these problems impacts the architecture, both in terms of modified modules and in terms of generated entropy.
We also want to find out whether some phases of the development process are more critical than others.
Finally, we want to understand whether all *issues* are treated the same way with regard to the amount of discussion and the time needed for their resolution.
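The entropy referred to here is Shannon entropy [@shannon1948mathematicaltheorycommunication] computed over the distribution of changes across files, in the spirit of Hassan's change-complexity metric [@hassan2009predictingfaultsusing]. A minimal sketch, where the function name and the input shape (a hypothetical mapping from file to number of modifications in a period) are illustrative:

```python
import math

def change_entropy(file_changes):
    """Shannon entropy of a change period.

    file_changes: hypothetical mapping file -> number of modifications
    observed in the period (the exact input shape is illustrative).
    """
    total = sum(file_changes.values())
    probabilities = [n / total for n in file_changes.values() if n > 0]
    # H = -sum(p_i * log2(p_i)); maximal when changes are spread evenly.
    return -sum(p * math.log2(p) for p in probabilities)
```

Changes spread evenly over four files yield 2 bits of entropy, while changes confined to a single file yield 0.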
## Structure of the thesis

Chapter [-@sec:related-works] gives an overview of the state of the art.
Chapter [-@sec:methodology] presents the \ac{RQ}s and describes the procedure used to collect commits and issues and how they were classified.
It also illustrates the analysis methodology employed for the study of each *\ac{RQ}*.
The results of the analyses and a qualitative discussion of some *extreme cases* are reported in Chapter [-@sec:results].
Finally, Chapter [-@sec:conclusions] closes this thesis.

# State of the art {#sec:related-works}

This chapter presents several research works that this study builds upon.
Although all centred on \ac{ML}, they explore different aspects.
In some cases the main focus is on the difficulties and problems encountered by developers.
In other cases \ac{ML} projects are compared with generic projects, or projects using different \ac{ML} frameworks are compared with one another.
Finally, a work on the complexity of the software change process and its effects on the introduction of defects is also presented.

\section[Comparison between ML projects and generic projects]{Comparison between machine learning projects and generic projects}

The study by Gonzalez *et al.* [@gonzalez2020statemluniverse10] presents the main differences between \ac{ML} repositories and traditional projects.
The data for the study were retrieved through the \ac{API} provided by GitHub, which made it possible to collect data on 9325 open source projects, grouped as follows:

- 5224 projects related to \ac{AI} and \ac{ML}, in turn divided into:
  - 700 tools and frameworks
  - 4524 applications
- 4101 generic projects

The aspects considered by the study are many and of various kinds.
A first analysis concerns when the various repositories were created.
It identifies 2017 as the year of the strong growth of \ac{AI} & \ac{ML} repositories: this was the first year in which more \ac{ML}-related projects were created than generic ones.
A second analysis shows how participation varies across the projects.
To carry it out, contributors were divided into:

- *external*: their contributions are limited to opening *issues* and commenting on discussions.
- *internal*: besides performing the tasks listed above, they have also closed issues or pushed commits to the project.

Based on this division, \ac{ML} tools turn out to have more internal contributors than generic projects, while the latter enjoy greater external participation.
If the analysis instead considers commit authors only, generic projects have more *contributors* on average, but the top 4 repositories with the most committers are all related to the \ac{ML} world.

A further analysis was carried out on the languages the various projects are written in.
For both \ac{ML} applications and tools the most popular language is Python, while the second place differs: for tools it is held by C++, for applications by Jupyter Notebooks.
In generic projects, instead, Python only ranks third in popularity, with the first two places held by JavaScript and Java.
## Analysis by framework used

The study by Han *et al.* [@han2020empiricalstudydependency] considered 1150 GitHub projects, distributed as follows:

- 708 projects depending on `TensorFlow`.
- 339 projects depending on `PyTorch`.
- 103 projects depending on `Theano`.

To classify the projects manually, the authors considered the project name, its description, its labels, and the content of the readme.
The classification was carried out both with respect to the goal of the project and with respect to its application domain.
The project goals considered were:

- *Competitions*: projects built to take part in competitions or challenges.
- *Learning & Teaching*: projects built for books and/or tutorials, or for practice.
- *Paper Experiments*: projects built for research purposes.
- *Software Development*: libraries, plug-ins, tools, etc.
- *Other*

The ranking of the most used libraries remained essentially unchanged across all categories: first place is held by `TensorFlow`, followed by `PyTorch` and `Theano`.
The only exception concerns projects built for research purposes, where `TensorFlow` and `PyTorch` swap positions.
The classification by application domain is equally stable: regardless of the library used, the most frequent projects are those dealing with video and images and with \ac{NLP}.

A further \ac{RQ} evaluated the type of dependency, distinguishing between direct and indirect dependencies.
For all three libraries a direct dependency turns out to be more likely than an indirect one.
`PyTorch` is the library most frequently imported directly, while `Theano` is almost as likely to be imported directly as indirectly.
Another analysis measured how frequently projects update their dependencies or perform downgrades.
Here projects based on `TensorFlow` and `PyTorch` turn out to update their dependencies much more frequently than projects based on `Theano`, while the downgrade rate is essentially equivalent.
For projects depending on `TensorFlow`, most downgrades are explained by the wish to avoid the new \ac{API} introduced in version 2.0 of the library.
Still looking at the library version in use, projects based on `Theano` are those that most frequently use the latest available release.

In another work, Han *et al.* [@han2020whatprogrammersdiscuss] shifted the focus to discussion topics and to how they vary with the framework used.
In this case the datasets include not only data retrieved from GitHub but also discussions on \ac{SO}.
This study made it possible to highlight differences and similarities in the discussions generated around the three frameworks of interest.
In particular, the most discussed phases turn out to be *model training* and *preliminary preparation*, while the least discussed is *model tuning*.
As for the differences, the study shows that `TensorFlow` and `PyTorch` have entirely comparable discussion topics; besides the topics mentioned above, *data preparation* is also widely discussed for these frameworks.
Discussions about `Theano`, instead, are almost exclusively concentrated on *model training*.

These two studies reveal a strong similarity between `TensorFlow` and `PyTorch`.
The main difference concerns their fields of application, with `TensorFlow` generally preferred except in research settings, while `Theano` differs markedly both in its uses and in its discussions.
\section[Analysis of multi-language ML projects]{Analysis of multi-language machine learning projects}

The study by Grichi *et al.* [@grichi2020impactmultilanguagedevelopment] focuses on *multi-language* systems.
It asks whether \ac{ML} systems are more prone to being built with several different languages.
Moreover, by analysing the \ac{PR}s written in more than one language, it investigates whether these are accepted as frequently as *mono-language* ones and whether they are equally prone to defects.

The analysis was carried out on 27 open source projects hosted on GitHub.
The projects were then classified into three categories:

- Cat I: includes 10 *multi-language* \ac{ML} systems.
- Cat II: includes 10 generic *multi-language* systems.
- Cat III: includes 7 *mono-language* \ac{ML} systems.

Subsequently, the \ac{PR}s of every project considered were downloaded.
The \ac{PR}s were categorized to identify the accepted and the rejected ones.
They were also categorized by the number of languages used, making it possible to distinguish *mono-language* from *multi-language* \ac{PR}s.
Finally, for every \ac{PR} the time needed for its acceptance or closure and the defects it introduced were determined.
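The mono- versus multi-language categorization of a \ac{PR} can be approximated from the extensions of the files it touches. A minimal sketch, where the extension-to-language table and the function name are illustrative rather than the authors' actual implementation:

```python
import os

# Illustrative extension-to-language table, not the one used in the study.
EXT_TO_LANG = {
    ".py": "Python",
    ".c": "C/C++", ".h": "C/C++", ".cc": "C/C++", ".cpp": "C/C++",
    ".java": "Java",
}

def pr_language_kind(changed_files):
    """Classify a pull request as mono- or multi-language based on the
    languages inferred from the extensions of its changed files."""
    languages = {
        EXT_TO_LANG[os.path.splitext(path)[1]]
        for path in changed_files
        if os.path.splitext(path)[1] in EXT_TO_LANG
    }
    if not languages:
        return "unknown"
    return "mono-language" if len(languages) == 1 else "multi-language"
```

For instance, a \ac{PR} touching only `train.py` and `utils.py` is mono-language, while one touching `model.py` and `ops.cpp` is multi-language.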
As for the share of programming languages used, the projects of categories I and II are comparable.
The main difference concerns which languages are used: in *multi-language* \ac{ML} projects the most common pairing is Python and C/C++, while in generic projects the most common pair is Java and C/C++.
Category I and II projects are also comparable in their number of \ac{PR}s and of *multi-language* \ac{PR}s.

The study showed that within \ac{ML} projects *mono-language* \ac{PR}s are accepted much more easily than *multi-language* ones.
Moreover, even when the latter are accepted, their acceptance takes longer.
Finally, for *multi-language* \ac{PR}s no difference in *bug* introduction emerged between category I and II projects, whereas \ac{PR}s involving a single language appear to be more bug-prone in \ac{ML} systems.
\section[Problems characteristic of ML]{Problems characteristic of machine learning}

The literature also includes works that analyse the problems and *bugs* found in \ac{ML} applications.
In the study by Zhang *et al.* [@zhang2018empiricalstudytensorflow] the attention is directed solely at problems related to `TensorFlow`.
For the study, `TensorFlow` *bugs* were retrieved both from GitHub projects (88 items) and from \ac{SO} questions (87 items).
To identify the causes of the *bugs* and their symptoms, the authors had to analyse the dataset items manually.
For *bugs* discussed on \ac{SO}, the information was recovered from the discussion; for *bugs* retrieved from GitHub, it was recovered by studying the fixing intervention and its associated message.
This made it possible to identify three symptoms:

- *Error*: an error traceable to `TensorFlow` is raised during execution.
- *Low Effectiveness*: the program exhibits extremely poor values of *accuracy*, *loss*, etc.
- *Low Efficiency*: the program runs too slowly.

As for the causes, six were identified:

- *Incorrect Model Parameter or Structure*: the *bug* is traceable to a bad use of the model's parameters or to its structure.
- *Unaligned Tensor*: occurs whenever the *shape* of the input does not match the expected one.
- *Confusion with TensorFlow Computation Model*: these *bugs* occur when users are unfamiliar with the computational model used by `TensorFlow`.
- *TensorFlow \ac{API} Change*: the *bug* stems from a change in the `TensorFlow` \ac{API}.
- *TensorFlow \ac{API} Misuse*: the *bug* is traceable to an incorrect use of the `TensorFlow` \ac{API}.
- *Structure Inefficiency*: this category can be seen as a *softer* version of the first one, since here the structural problem does not produce an error but only inefficiencies.

The study by Humbatova *et al.* [@humbatova-2019-taxonomyrealfaults] also aims to analyse \ac{ML}-related problems.
Here, however, the view is broader and is not limited to a single library; moreover, the ultimate goal of the work is the construction of a taxonomy of \ac{ML} problems.
Again the data were retrieved from both \ac{SO} and GitHub.
In addition, for this study 20 researchers and developers in the \ac{ML} field were interviewed.
Starting from these data, a taxonomy was built through a *bottom-up* approach.
The taxonomy comprises 5 *top-level* categories, 3 of which are divided into subcategories.
The first-level categories include:

- *Model*: all the problems concerning the structure and properties of the model.
- *Tensors & Inputs*: all the problems concerning the *shape* of the data and their format.
- *Training*: the largest category of the taxonomy; it covers the quality and preprocessing of the data used for learning, the *tuning* of the *hyperparameters*, the choice of the most appropriate loss function, etc.
- *GPU Usage*: all the problems in the use of the \ac{GPU}.
- *API*: all the problems caused by an incorrect use of the \ac{ML} framework's \ac{API}.

As can be seen, apart from the specificity of the first work, the categories of problems identified by the two studies are strongly similar.
\section[Study of Stack Overflow discussions about ML]{Study of Stack Overflow discussions about machine learning}
The study by Bangash *et al.* [@bangash2019whatdevelopersknow] analyzes the \ac{ML} topics most frequently discussed by developers.
Unlike the study by Han *et al.* [@han2020whatprogrammersdiscuss] discussed above, no distinction is made based on the library used.
Furthermore, this study relies solely on information retrieved from \ac{SO}, whereas the other work combined \ac{SO} questions with the discussion generated inside GitHub repositories.
Here the most frequently discussed topic concerns the presence of errors in the code, followed by discussions about learning algorithms and data training.
The study also highlighted that many discussions concern \ac{ML} libraries and frameworks such as `numpy`, `pandas`, `keras`, `Scikit-Learn`, etc.
All these discussions were grouped under the *framework* topic.
The work by Alshangiti *et al.* [@alshangiti2019whydevelopingmachine] also analyzes the questions posted on the \ac{SO} platform.
In this case, however, besides a qualitative analysis of the content of these discussions, a comparative analysis between \ac{ML}-related discussions and the others was carried out.
To perform this analysis the authors started from the \ac{SO} database dump and identified three samples:
- *Quantitative Study Sample*: consists of 86983 \ac{ML}-related questions, with their answers.
The posts were identified through a list of 50 tags used on \ac{SO} for \ac{ML} questions.
- *Qualitative Study Sample*: contains 684 posts written by 50 users.
This sample was obtained through further sampling of the previous one.
- *Baseline Sample*: consists of posts unrelated to \ac{ML}.
It is used to compare \ac{ML} questions with generic ones.
The first *\ac{RQ}* of the study asks whether answering an \ac{ML}-related question is harder.
To assess answering difficulty, the authors counted the questions without any answer, the questions without an accepted answer, and the median time needed for a question to receive an accepted answer.
Comparing the first and third samples on these metrics showed that \ac{ML} posts are more likely to have no answers or no accepted answers.
Moreover, on average \ac{ML} questions need ten times longer to receive an accepted answer.
An explanation of this phenomenon is provided by the second *\ac{RQ}*, which shows a shortage of \ac{ML} experts within the \ac{SO} community [^expertise-rank].
[^expertise-rank]: Experts were identified through the *ExpertiseRank* approach.
This approach builds a directed graph, in which nodes represent users and edges represent a helping relationship, from which the expertise of users can be determined.
For example, if user B helped user A, B's expertise is considered higher than A's.
If user C then answers a question by B, C's expertise is considered higher than both A's and B's, since C was able to help a user (B) who had in turn proved to be an expert (by answering A).
The study was also able to identify the phases in which developers encounter most problems.
Overall, the greatest difficulties were found in *data preprocessing*, in setting up the development environment, and in model deployment.
As for \ac{DL}-specific tasks, most problems concern \ac{NLP} applications and object recognition.
Finally, the study showed that, despite its wide adoption, many users run into problems when using the `TensorFlow` \ac{API}.
## Entropy of a change {#sec:entropy}
The study by Hassan [@hassan2009predictingfaultsusing] investigates how the complexity of the software change process impacts the introduction of defects into the codebase.
To assess the complexity of the change process, the concept of entropy [@shannon1948mathematicaltheorycommunication] was *borrowed* from communication theory.
The study was conducted on six large open-source projects.
Through *version control* systems and the lexical analysis of change messages, three types of change were identified.
- *Fault Repairing modification*: includes the changes made to fix a defect in the software product.
This category of modifications was not used to compute the entropy, but to validate the study.
- *General Maintenance modification*: includes maintenance changes that do not affect the behavior of the code.
This category covers code re-indentation, changes to the copyright notice, etc.
These changes were excluded from the study.
- *Feature Introduction modification*: includes all the changes that alter the behavior of the code.
These changes were identified by exclusion and were used to compute the entropy.
The study defines three models for computing the complexity of the software change process.
- *Basic Code Change model*: the first model presented; it assumes a constant period for the entropy computation and considers the number of files in the project to be constant.
- *Extended Code Change model*: an evolution of the basic model that makes it more flexible.
- *File Code Change model*: the previous models provide an overall entropy value for the whole project.
This model makes it possible to evaluate the entropy of each file separately.
The study showed that, for large systems, the complexity of the change process can predict the occurrence of faults.
It also showed that prediction based on process complexity is more accurate than prediction based on code complexity.
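As a pointer to the underlying definition (a sketch based on Shannon's formula; the exact normalization adopted by Hassan is described in the cited paper), the entropy of a period in which $n$ files receive changes with probabilities $p_1, \dots, p_n$ is:

$$H(P) = -\sum_{k=1}^{n} p_k \log_2 p_k$$

where $p_k$ is the fraction of the period's changes that touch file $k$; entropy is maximal when changes are spread evenly across the files and zero when they are all concentrated in a single file.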

# Dataset construction and methodology {#sec:methodology}
The goal of this thesis is to verify whether, within \ac{ML} projects, *issue fixing* interventions related to \ac{ML} are treated differently from generic ones.
The focus is on the impact of the interventions on the system architecture, on the time needed for their resolution, and on the level of discussion of these defects.
We also want to understand whether some phases of the development process are more critical than others.
## Research Questions
The goals of this thesis are summarized in the five \ac{RQ}s listed below.
- **RQ1**: *how is machine learning distributed over the projects' architecture?*
This *\ac{RQ}* investigates the architecture of the projects.
In particular, the focus is on the files and directories modified during *issue fixing* interventions.
A further goal of this question is to identify the percentage of files whose imports can be traced back to \ac{ML} libraries and frameworks.
- **RQ2**: *how are bugs distributed over the different machine learning phases?*
The typical workflow for developing an \ac{ML} application consists of several phases.
The goal of this *\ac{RQ}* is to identify the most critical phases with respect to the introduction of defects into the software product.
- **RQ3**: *is there a difference in change entropy between machine learning bugs and other bugs?*
Starting from previous work on change entropy, we investigate whether there is a difference in the entropy generated by fixes of \ac{ML}-related defects and by fixes of other defects.
- **RQ4**: *how does the level of discussion vary between machine learning bugs and other bugs?*
This *\ac{RQ}* concerns the level of discussion of *bugs*.
In particular, we want to understand whether, within \ac{ML} projects, generic bugs are discussed as thoroughly as \ac{ML}-specific ones.
- **RQ5**: *how does the time-to-fix vary between machine learning bugs and other bugs?*
Another characteristic aspect of a *fix* is the time needed to carry it out.
This *\ac{RQ}* aims to verify the existence of differences between generic *bugs* and \ac{ML} ones.
## Project selection
The projects to analyze were identified with the help of the \ac{API} provided by GitHub.
In particular, a query was executed to obtain a list of repositories using \ac{ML} libraries and frameworks such as `TensorFlow`, `Pytorch`, and `scikit-learn`.
This produced a list of $26758$ repositories, which was then filtered to keep only the projects of interest for this study.
The filtering was carried out in two stages: a first automatic one and a second manual one.
The first stage aimed at selecting only *popular* repositories.
In most cases the number of stars is used as an index of a project's popularity [@borges2016understandingfactorsthat], but for this work we preferred to give more weight to other aspects, such as the number of forks, the number of *contributors*, and the number of closed issues.
This choice was driven by the need to select repositories that are not only popular, but also characterized by strong community participation.
The projects that passed this first selection had to:
- be original works, so all forks were excluded.
- have at least one hundred closed issues.
- have at least ten contributors.
- have at least twenty-five forks.
At the end of this first selection the number of repositories dropped to sixty-six; these were analyzed manually to remove listings associated with books and/or tutorials, projects not in English, and libraries.
At the end of this second stage the number of projects dropped to thirty.
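The automatic stage can be sketched as the following filter; the field names (`fork`, `closed_issues`, `contributors`, `forks`) are illustrative stand-ins for values obtained via the GitHub \ac{API}, not its exact schema:

```python
def is_candidate(repo):
    """Keep only original, community-driven repositories."""
    return (
        not repo["fork"]                  # original works only
        and repo["closed_issues"] >= 100  # at least 100 closed issues
        and repo["contributors"] >= 10    # at least 10 contributors
        and repo["forks"] >= 25           # at least 25 forks
    )

# Tiny demo on fake repository records.
repos = [
    {"name": "a", "fork": False, "closed_issues": 150, "contributors": 12, "forks": 40},
    {"name": "b", "fork": True,  "closed_issues": 900, "contributors": 50, "forks": 99},
    {"name": "c", "fork": False, "closed_issues": 20,  "contributors": 3,  "forks": 5},
]
candidates = [r["name"] for r in repos if is_candidate(r)]
```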
## Fetching issues and commits
Once the projects to analyze had been identified, it was necessary to retrieve their entire history and the issues associated with them.
The *perceval* tool [@duenas2018percevalsoftwareproject] was used for both operations.
In the case of issues, since this information is not directly contained in the `git` repository, it was necessary to use the GitHub \ac{API} again.
Since the calls associated with a single *token* are rate-limited, *perceval* was configured to automatically introduce a delay whenever the limit was reached.
Moreover, the code was deployed on a \ac{VPS} so that the fetch could run without keeping a physical machine active.
With the process described above it was possible to retrieve:
- $34180$ commits.
- $15267$ issues and pull requests.
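Conceptually, the rate-limit handling that *perceval* automates can be sketched as follows; the `fetch_page` and `rate_limited` callables are hypothetical stand-ins for the GitHub \ac{API} client, since *perceval* itself handles this through its own configuration:

```python
import time

def fetch_all(fetch_page, rate_limited, wait_seconds=60):
    """Drain a paginated API, sleeping whenever the token's quota is spent.

    fetch_page() returns a list of items, or None when exhausted;
    rate_limited() reports whether the rate limit is currently reached.
    """
    items = []
    while True:
        if rate_limited():
            time.sleep(wait_seconds)  # wait for the quota to reset
            continue
        page = fetch_page()
        if page is None:
            return items
        items.extend(page)

# Tiny demo with fake callables: one rate-limit hit, then two pages.
limits = iter([True, False, False, False])
pages = iter([[1, 2], [3], None])
result = fetch_all(lambda: next(pages), lambda: next(limits), wait_seconds=0)
```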
## Data classification
### Issue classification {#sec:classificazione-issues}
In order to compare \ac{ML} and *generic* fixes, it was necessary to classify both the issues and the commits.
The large number of elements makes manual classification impractical, so an automatic classification was chosen.
For the issues, a text-based classification was adopted, considering the title and body of the issue but excluding the reply comments, so as not to make the data too noisy.
To this end, two classifiers were implemented and analyzed, one supervised and one unsupervised.
The two models considered are:
- a static classifier based on a list of \ac{ML}-typical terms.
- a *naïve Bayes* model [@2021naivebayesclassifier; @harrington2012machinelearningaction].
Classification with the static classifier does not require manual data *labeling*, but it does require the definition of the \ac{ML}-typical vocabulary.
The list of \ac{ML}-characteristic terms was not built from scratch, but is based on the work of Humbatova *et al.* [@humbatova-2019-taxonomyrealfaults].
In this way, all issues using at least one \ac{ML}-typical term were classified as \ac{ML} issues.
In the case of the *naïve Bayes* model, being a supervised learning algorithm, a manual classification of the issues was required.
To this end, a sample of $376$ issues, stratified by project of origin, was drawn, split between two raters, and labeled.
The label of each *issue* was determined by analyzing its title, body, and comments.
During labeling, \ac{ML} issues were further classified in order to identify the phase in which the problem manifested itself.
The definition of the phases is based on the work of Amershi *et al.* [@amershi-2019-softwareengineeringmachine] carried out at *Microsoft*.
The phases considered are:
- *Model Requirements*: includes all discussions about choosing the most suitable model, the functionality it must expose, and how to adapt an existing model to a different task.
- *Data Collection*: includes the operations aimed at defining a dataset.
Both the search for existing datasets and the construction of new ones fall into this phase.
- *Data Labeling*: this phase is needed whenever supervised learning models are used.
- *Data Cleaning*: includes not only strict data-cleaning operations, such as the removal of noisy or incomplete records, but all transformations performed on the data, hence also standardization, image flipping, etc.
- *Feature Engineering*: identifies the transformations to apply to the data and the best *hyperparameter* configurations in order to improve the model.
- *Model Training*: covers the actual training of the model.
- *Model Evaluation*: in this phase the performance of the model is assessed using standard metrics such as *precision* and *recall*, but also by comparing the results with those of other models or with experience[^esperienza].
- *Model Deployment*: concerns the deployment of the model on the target device.
- *Model Monitoring*: once deployed, the model must be continuously monitored to ensure correct behavior on real-world data as well.
[^esperienza]: It is not always possible to evaluate a model objectively; in some contexts, such as the generation of *deep fakes*, a human assessment is still needed to judge the quality of the result.
From the *labeled* dataset it was possible to build a training and a test set, with which the Bayesian model was trained and evaluated.
The performance of the first model, instead, was evaluated on the whole dataset.
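A minimal sketch of this static classifier follows; the vocabulary shown is a tiny illustrative subset, not the actual list derived from Humbatova *et al.*:

```python
# Illustrative subset of ML-typical terms (not the actual list).
ML_VOCABULARY = {"tensor", "layer", "epoch", "loss", "gradient", "training"}

def is_ml_issue(title, body):
    """An issue is labeled ML if title or body contains an ML-typical term."""
    words = set((title + " " + body).lower().split())
    return not words.isdisjoint(ML_VOCABULARY)

flag_ml = is_ml_issue("Wrong loss value", "the loss explodes after one epoch")
flag_other = is_ml_issue("Broken link", "the readme points to a missing page")
```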
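As a sketch of the supervised approach (the thesis relies on a *naïve Bayes* classifier; the hand-rolled multinomial version below, with Laplace smoothing and toy data, only illustrates the mechanism and is not the actual implementation):

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (text, label). Returns label priors, per-label word counts, vocab."""
    priors, counts, vocab = Counter(), {}, set()
    for text, label in docs:
        words = text.lower().split()
        priors[label] += 1
        counts.setdefault(label, Counter()).update(words)
        vocab.update(words)
    return priors, counts, vocab

def predict(model, text):
    """Return the label with the highest log-posterior for `text`."""
    priors, counts, vocab = model
    total = sum(priors.values())
    best, best_lp = None, -math.inf
    for label in priors:
        n = sum(counts[label].values())
        lp = math.log(priors[label] / total)
        for w in text.lower().split():
            # Laplace smoothing avoids zero probabilities for unseen words
            lp += math.log((counts[label][w] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = train([
    ("loss diverges during training", "ml"),
    ("gradient explodes in the model", "ml"),
    ("typo in the readme file", "other"),
    ("broken link in documentation", "other"),
])
label = predict(model, "training loss is wrong")
```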
\begin{figure}[!ht]
\subfloat[Number of issues by type\label{fig:labeling-type}]{%
\includegraphics[width=0.45\textwidth]{src/figures/count-type.pdf}
}
\hfill
\subfloat[Number of issues by phase\label{fig:labeling-phases}]{%
\includegraphics[width=0.45\textwidth]{src/figures/count-phases.pdf}
}
\caption{Results of the manual classification of the issues}
\label{fig:labeling}
\end{figure}
To compare the two models, the *precision* and *recall* metrics were used.
As shown in @tbl:confronto-modelli-classificazione-issues, the vocabulary-based model is slightly more precise than the Bayesian model, but has a markedly lower *recall*.
@fig:labeling-type shows that \ac{ML} issues are a minority compared to generic issues; for this reason the naïve Bayes model was preferred, so as to lose as few instances as possible, even at the cost of a slightly lower precision.
| | Static classifier | naïve Bayes |
|-----------|-------------------|-------------|
| precision | 0.46 | 0.41 |
| recall | 0.74 | 0.94 |
: Comparison of the two models for issue classification. {#tbl:confronto-modelli-classificazione-issues}
### Commit classification {#sec:classificazione-commit}
Before classifying the commits, a further filtering step was needed to separate *issue fixing* commits from generic ones.
All commits referencing an *issue* through the *"#"* notation were considered *fix* commits.
This operation reduced the commit dataset to $3321$ units, whose distribution by type is reported in @fig:count-commit.
From each commit the information relevant to the analyses was extracted.
In particular, the following were retained:
- The project it belongs to.
- The commit hash.
- The commit date.
- The commit author.
- The list of modified files.
- The modified lines.
- The list of cited *issues*.
\newpage
At this point it was possible to separate \ac{ML} *fixes* from generic ones.
The classification was carried out through the list of issues cited in the *commit message*: all commits referencing at least one \ac{ML} issue were considered \ac{ML} commits.
![Result of the commit classification](figures/count-commit.pdf){#fig:count-commit width=80%}
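The reference-based filtering and classification can be sketched as follows (`ml_issues` is a hypothetical set of issue numbers labeled as \ac{ML}):

```python
import re

ISSUE_REF = re.compile(r"#(\d+)")

def referenced_issues(message):
    """Issue numbers cited in a commit message through the '#' notation."""
    return [int(n) for n in ISSUE_REF.findall(message)]

def is_ml_fix(message, ml_issues):
    """A fix commit is ML if it references at least one ML issue."""
    return any(n in ml_issues for n in referenced_issues(message))

refs = referenced_issues("Fix NaN loss (closes #12, see also #345)")
flag = is_ml_fix("Fix NaN loss (closes #12, see also #345)", ml_issues={12})
```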
## Methodology
### RQ1: how is machine learning distributed over the projects' architecture?
This first question investigates how large the *surface* of the project modified during *fix* interventions is, distinguishing between \ac{ML}-related fixes and generic ones.
We also want to understand how many files import \ac{ML}-typical libraries.
For the first analysis it was necessary to identify the total number of files modified by generic *fixes* and by \ac{ML}-specific *fixes*.
To this end, the commits were grouped by project and by type of change (\ac{ML}, non-\ac{ML}).
Within each group, the lists of modified files were concatenated.
Since we are not interested in how many times each file was modified, the lists were turned into sets to remove repetitions.
The output of this phase is, for each project:
- the set of files modified by \ac{ML} *fixes*
- the set of files modified by generic fixes
Finally, the union of these two sets gives the total set of files modified during *fixes*.
At this point, for each project, the percentage of files modified during \ac{ML} *fix* interventions (`ml_file_ratio`) and the percentage modified during generic *fixes* (`no_ml_file_ratio`) were computed.
Through the Python library function `os.path.dirname`, the three sets above were also obtained at the directory level.
In the same way, the percentage of directories modified during \ac{ML} interventions (`ml_dirs_ratio`) and generic interventions (`no_ml_dirs_ratio`) was computed.
These distributions were analyzed graphically with boxplots.
For the second analysis it was necessary to know, for each file, the list of imports used.
This information was retrieved through a script that, given a project as input, returns the list of files together with the list of imports used within each file.
\ac{ML} files were identified by defining two groups of \ac{ML}-typical libraries.
- Group 1: \ac{ML}-specific libraries such as `keras`, `TensorFlow`, and `Pytorch`.
- Group 2: libraries used in the \ac{ML} field, but also in other contexts. Libraries such as `numpy`, `scipy`, and `pandas` belong to this group.
Each file was classified as \ac{ML} or not at two levels.
In the first case, denoted *all*, importing at least one library from either of the two groups was enough for a file to count as using \ac{ML} libraries.
In the second case, denoted *wo_pandas_numpy_scipy*, importing at least one library from the first group was required.
For both classifications, the percentage of \ac{ML} files in each project was evaluated.
These distributions, too, were analyzed with a boxplot.
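The aggregation described above can be sketched as follows (file paths are illustrative):

```python
import os.path

# Lists of files touched by each fix commit, grouped by fix type.
ml_fixes = [["model/train.py", "model/net.py"], ["model/train.py"]]
generic_fixes = [["docs/readme.md"], ["model/utils.py"]]

# Sets remove repeated modifications of the same file.
ml_files = set().union(*ml_fixes)
no_ml_files = set().union(*generic_fixes)
all_files = ml_files | no_ml_files

ml_file_ratio = len(ml_files) / len(all_files)        # 2 / 4
no_ml_file_ratio = len(no_ml_files) / len(all_files)  # 2 / 4

# The same sets at directory granularity, via os.path.dirname.
ml_dirs = {os.path.dirname(f) for f in ml_files}
```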
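The import-extraction step can be sketched with Python's `ast` module; the library groups below are a subset of those used in the study, and the actual script may differ:

```python
import ast

def imported_modules(source):
    """Top-level modules imported by a Python source text."""
    mods = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            mods.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module.split(".")[0])
    return mods

GROUP1 = {"keras", "tensorflow", "torch"}  # ML-specific (import names)
GROUP2 = {"numpy", "scipy", "pandas"}      # ML-adjacent

mods = imported_modules("import numpy as np\nfrom torch.nn import Linear\n")
is_ml_strict = bool(mods & GROUP1)          # *wo_pandas_numpy_scipy* level
is_ml_all = bool(mods & (GROUP1 | GROUP2))  # *all* level
```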
### RQ2: how are bugs distributed over the different machine learning phases?
As illustrated in @sec:classificazione-commit, the nature of an *issue fix* was determined through the classification of the *issues* associated with it.
Most of the *issues* were classified automatically, but a portion still had to be classified manually to obtain a train/test set.
As mentioned above, for the manually classified *issues*, besides the type (\ac{ML}, non-\ac{ML}), the phase in which the problem manifested itself was also identified (see @sec:classificazione-issues).
This *\ac{RQ}* evaluates how this additional phase information is *projected* onto the *fix* commits.
To carry out this analysis, the data on *fix* commits must be crossed with the issue classification.
Starting from the *issue* dataset, an *issue* $\rightarrow$ phase dictionary was created for each project.
Then, for each commit, the phase was determined through this auxiliary dictionary.
In particular, a commit could cite:
- no *issue* included in the dictionary. In this case the phase of the commit cannot be determined.
- one *issue* present in the dictionary. In this case the commit is assigned the phase of that *issue*.
- more than one *issue* present in the dictionary. In this case the commit is assigned multiple phases[^multi-phases].
[^multi-phases]: No *fix* commit in the dataset fell into this category.
The quantitative analysis was carried out through a barplot reporting only the commits to which at least one phase could be assigned.
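The projection can be sketched as a dictionary lookup (issue numbers and phases below are illustrative):

```python
# issue number -> phase, built from the manually labeled issues
phase_of = {12: "Model Training", 34: "Data Cleaning"}

def commit_phases(cited_issues, phase_of):
    """Phases of the labeled issues a fix commit references (may be empty)."""
    return {phase_of[n] for n in cited_issues if n in phase_of}

phases = commit_phases([12, 99], phase_of)  # issue 99 was never labeled
no_phase = commit_phases([7], phase_of)     # no labeled issue cited
```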
### RQ3: is there a difference in change entropy between machine learning bugs and other bugs?
The next analysis aims to verify whether the entropy of a *fix* differs according to its nature.
This analysis is based on the *BCC* model discussed in @sec:entropy.
The analysis was carried out both at the file level and at the line level, so for each commit in the dataset it was necessary to determine both the number of modified files and the number of altered lines, counting both additions and removals.
The data on modified lines is already present in the initial dataset (see @sec:classificazione-commit), while the number of modified files can be derived from the list of files modified in the commit.
Moreover, to compute the probability of a change, the total number of files and lines of each project had to be known as well.
These values were computed through the `git` history of the `master` branch[^branch-master].
For each commit, the files added ($+1$) and removed ($-1$) were identified in order to compute the commit's change delta.
Summing this delta over all commits yields the total number of files in the project.
The same procedure was applied to lines.
[^branch-master]: Besides the `master` branch, the `main` branch, which became very common after the Black Lives Matter protests, was also considered, as well as the `master-V2` branch, the only branch used by one project.
The two distributions were evaluated graphically through a boxplot.
Statistical tests (*Wilcoxon ranksum* and *Cliff's delta*) were also performed to verify the significance of the differences.
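As a simplified sketch of the entropy computation (Shannon entropy over the distribution of a fix's changes across files; the exact normalization of the *BCC* model is described in @sec:entropy and in the cited paper):

```python
import math

def change_entropy(changes):
    """Shannon entropy (bits) of a list of per-file change counts."""
    total = sum(changes)
    probs = [c / total for c in changes if c > 0]
    return -sum(p * math.log2(p) for p in probs)

h_even = change_entropy([5, 5])    # change evenly split over two files
h_single = change_entropy([10])    # all change concentrated in one file
```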
### RQ4: how does the level of discussion vary between machine learning bugs and other bugs?
To answer this question it was necessary to evaluate the number of comments in each issue.
This data is not present in the commit dataset generated initially (see @sec:classificazione-commit), but can be derived from the list of cited *issues*.
Given a commit, the list of cited *issues* was considered, and the number of comments was computed for each of them.
Since a single commit may reference several *issues*, the mean number of comments also had to be computed.
The level of discussion is determined not only by the number of comments, but also by their length.
Therefore, for each *issue*, the mean number of words per comment was also computed.
The data of both distributions were evaluated graphically with a boxplot and through the statistical tests described above.
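The two metrics can be sketched as follows (issue numbers and comments are illustrative):

```python
# comments of each issue, keyed by issue number
comments_of = {
    12: ["looks like a shape mismatch", "fixed in the new PR"],
    34: [],
}
cited = [12, 34]  # issues referenced by one fix commit

# mean number of comments over the cited issues
mean_comments = sum(len(comments_of[n]) for n in cited) / len(cited)

# mean words per comment, per issue (0 when the issue has no comments)
words_per_comment = {
    n: sum(len(c.split()) for c in comments_of[n]) / len(comments_of[n])
    if comments_of[n] else 0.0
    for n in cited
}
```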
### RQ5: how does the time-to-fix vary between machine learning bugs and other bugs?
This last analysis evaluates whether there is a difference in the time needed to carry out the *fix*.
Here too, to answer the question, the commit data must be crossed with the *issue* data through the list of cited *issues*.
For each *issue*, the opening and closing dates were identified.
When a commit is associated with several *issues*, the minimum of the opening dates was taken as the opening date and, analogously, the closing dates were aggregated through the `max` function.
Once the opening and closing instants of the problem were known, the number of days between the two could be computed.
The resulting distributions were once again analyzed through a *boxplot*, the *Wilcoxon ranksum* test, and the *Cliff's delta* test.
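The aggregation can be sketched as follows (dates are illustrative):

```python
from datetime import date

# opening/closing dates of the issues cited by one fix commit
opened = [date(2021, 3, 1), date(2021, 3, 5)]
closed = [date(2021, 3, 10), date(2021, 3, 8)]

# earliest opening, latest closing, then the days in between
time_to_fix = (max(closed) - min(opened)).days
```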

# Results {#sec:results}
\hypertarget{sec:rq1}{%
\section[RQ1: how is ML distributed over the projects' architecture?]{RQ1: how is machine learning distributed over the projects' architecture?}\label{sec:rq1}}
From @fig:files-directories it can be seen that generic changes impact a larger surface of the system, whether the analysis is carried out at the file or at the directory level.
Another interesting aspect concerns the variance of the distributions: regardless of the granularity of the analysis, the data for \ac{ML} changes show a higher variance.
![Percentage of files and directories modified by type of change](figures/files-and-directories.pdf){#fig:files-directories width=100%}
The boxplot in @fig:imports reports the results on the use of \ac{ML} imports.
It can be seen that, regardless of the level of analysis, the percentage of files using \ac{ML} libraries shows a strong variance.
This indicates that the projects included in the study are of varied nature and that some are more \ac{ML}-centric than others.
Moreover, considering the *strict* analysis, only $25\%$ of the projects have a percentage of \ac{ML} files above $45\%$.
![Percentage of files using ML libraries](figures/imports.pdf){#fig:imports width=80%}
In relation to the *wo_pandas_numpy_scipy* analysis, the five most \ac{ML}-*intensive* projects were then analyzed to look for common characteristics in their application domains.
As shown in @tbl:ml-intensive, the projects address different problems, but almost all of them involve extracting information from images.
The only exception is *jdb78/pytorch-forecasting*, which deals with time-series *forecasting*.
| Project | Application domain |
|-----------------------------|---------------------------|
| *davidsandberg/facenet* | Face recognition |
| *jdb78/pytorch-forecasting* | Time series forecasting |
| *tianzhi0549/FCOS* | Object detection |
| *emedvedev/attention-ocr* | Text recognition |
| *Tianxiaomo/pytorch-YOLOv4* | Object detection |
: Application domains of the projects with the heaviest use of \ac{ML} libraries {#tbl:ml-intensive}
\begin{tcolorbox}[colback=white, boxrule=0.3mm]
Whether the analysis is carried out on the modified files or on the imports, the \ac{ML} data show a strong variance.
This means that the varied nature of the projects considered in the study produces different architectural characteristics.
\end{tcolorbox}
\newpage
\hypertarget{sec:rq2}{%
\section[RQ2: how are bugs distributed over the different ML phases?]{RQ2: how are bugs distributed over the different machine learning phases?}\label{sec:rq2}}
Comparing the distribution of phases over the commits (@fig:count-fix-phases) with the distribution over the issues (@fig:labeling-phases), the disappearance of the *data collection* phase can be observed.
The reduction of *model training* occurrences and the growing importance of the *model requirements* and *model deployment* phases are also evident.
Unfortunately, the data available for this analysis are very limited (a phase could be recovered for only forty *fixes*), so deeper analyses were not possible.
![Fix instances by phase](figures/count-fix-phases.pdf){#fig:count-fix-phases width=70%}
\hypertarget{sec:rq3}{%
\section[RQ3: esiste una differenza di entropia del cambiamento tra ML bug e altri bug?]{RQ3: esiste una differenza di entropia del cambiamento tra machine learning bug e altri bug?}\label{sec:rq3}}
Dal boxplot[^boxplot-entropy] in @fig:files-entropy è possibile notare una distribuzione equivalente per le due tipologie di fix.
Una situazione analoga si riscontra anche nell'analisi sulle linee (@fig:lines-entropy) anche se in questo caso è possibile notare che i valori di entropia associati ai fix di \ac{ML} sono shiftati leggermente verso l'alto.
[^boxplot-entropy]: Per ragioni di visualizzazione è stato scelto il $95$-$esimo$ quantile come limite superiore di entrambi i grafici.
\begin{figure}[!ht]
\subfloat[Entropia calcolata sui file\label{fig:files-entropy}]{%
\includegraphics[width=0.45\textwidth]{src/figures/files-entropy.pdf}
}
\hfill
\subfloat[Entropy computed on lines\label{fig:lines-entropy}]{%
\includegraphics[width=0.45\textwidth]{src/figures/lines-entropy.pdf}
}
\caption{Entropy by fix type}
\label{fig:entropy}
\end{figure}
To check the statistical significance of this difference, the *Wilcoxon rank-sum* test and *Cliff's delta* were computed; the results are reported in @tbl:test-entropy.
For the change entropy computed on files the difference is marginal, since the *p-value* is close to $0.05$, while for the entropy computed on lines the difference is confirmed by the test.
In both cases, however, the *effect size* is negligible, a sign that the complexity of the change does not vary with the type of fix.
|       | Wilcoxon rank-sum p-value | Cliff's delta |
|-------|:----------------:|:-------------:|
| files | 0.059 | 0.044 |
| lines | 5.932e-06 | 0.105 |
: Results of the statistical tests on change entropy {#tbl:test-entropy}
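The test pair above can be reproduced in outline with `scipy` and a pairwise Cliff's delta (the repository's `util/cliffsDelta.py` implements a faster run-based variant). The data below is synthetic, purely to show the mechanics:

```python
import numpy as np
from scipy.stats import ranksums

def cliffs_delta(a, b) -> float:
    """Cliff's delta: P(a > b) - P(a < b) over all pairs (O(n*m) sketch)."""
    a, b = np.asarray(a), np.asarray(b)
    more = sum((x > b).sum() for x in a)
    less = sum((x < b).sum() for x in a)
    return (more - less) / (len(a) * len(b))

rng = np.random.default_rng(0)
ml = rng.normal(0.2, 1.0, 300)     # placeholder for ML-fix metric values
other = rng.normal(0.0, 1.0, 300)  # placeholder for generic-fix values

stat, p = ranksums(ml, other)      # non-parametric test of distribution shift
d = cliffs_delta(ml, other)
print(f"p-value = {p:.3g}, delta = {d:.3f}")
```

By convention (Hess and Kromrey, 2004, as used in `util/cliffsDelta.py`), $|\delta| < 0.147$ is negligible, $< 0.33$ small, $< 0.474$ medium, and large above that.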
\begin{tcolorbox}[colback=white, boxrule=0.3mm]
No statistically significant differences emerged in the complexity of the change process.
\end{tcolorbox}
\hypertarget{sec:rq4}{%
\section[RQ4: how does the discussion vary between ML bugs and other bugs?]{RQ4: how does the level of discussion vary between machine learning bugs and other bugs?}\label{sec:rq4}}
The boxplot[^boxplot-discussion] in @fig:discussion-comments, instead, shows a much more marked difference between the two distributions.
In particular, \ac{ML} *issue fixes* exhibit both more discussion and higher variance.
Looking at the interquartile range, which excludes any outliers entirely, for generic *fixes* it spans from zero to one:
the central $50\%$ of those issues have either no comments or a single one.
The interquartile range of \ac{ML} *fixes*, instead, lies between one and five, so in the central $50\%$ every issue has at least one reply.
[^boxplot-discussion]: In this case the upper limit is the $97$th percentile.
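The interquartile comparison can be sketched with pandas; the comment counts below are hypothetical stand-ins (the real analysis reads `commit_analysis.csv`):

```python
import pandas as pd

# Hypothetical comment counts per issue, one outlier in each group.
generic = pd.Series([0, 0, 1, 0, 1, 2, 0, 1, 0, 8])
ml = pd.Series([1, 3, 5, 2, 4, 1, 5, 2, 30, 4])

# The interquartile range (Q3 - Q1) ignores the extreme 25% on each
# side, so outliers such as 8 or 30 do not affect it.
for name, s in [("generic", generic), ("ML", ml)]:
    q1, q3 = s.quantile(0.25), s.quantile(0.75)
    print(f"{name}: Q1={q1}, Q3={q3}, IQR={q3 - q1}")
```

With these toy values the generic group's IQR is $[0, 1]$ while the ML group's starts above zero, mirroring the pattern described in the text.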
\newpage
\begin{figure}[!ht]
\subfloat[Mean number of comments\label{fig:discussion-comments}]{%
\includegraphics[width=0.45\textwidth]{src/figures/comments.pdf}
}
\hfill
\subfloat[Mean words per comment\label{fig:discussion-words}]{%
\includegraphics[width=0.45\textwidth]{src/figures/words.pdf}
}
\caption{Discussion level by fix type}
\label{fig:discussion}
\end{figure}
The results of the analysis of the mean number of words per comment are reported in @fig:discussion-words.
Here too, the distribution for \ac{ML} *fixes* shows higher values and greater variance.
So not only is there more discussion around \ac{ML} *fixes*, the discussion is also *denser*.
The statistical tests were run in this case as well.
@tbl:test-discussion shows that for both metrics the *p-value* is well below the $0.05$ threshold, confirming the difference between the two distributions observed in the boxplots.
Moreover, for both metrics the *effect size* is medium.
|                   | Wilcoxon rank-sum p-value | Cliff's delta |
|-------------------|:----------------:|:-------------:|
| mean comments     | 9.053e-75 | 0.425 |
| words per comment | 2.889e-59 | 0.377 |
: Results of the statistical tests on the discussion level {#tbl:test-discussion}
Finally, for both metrics, some extreme cases were analyzed.
Issue 96 of the *BrikerMan/Kashgari* project reports a drastic drop in performance when the fit is executed with one method rather than another.
In the comments, several project *contributors* exchange candidate architectures, code *snippets*, and metrics to compare the different models produced.
Here the breadth of the discussion is clearly due to the difficulty of pinpointing the problem.
Issue 27 of the *ljvmiranda921/pyswarms* project is a request for help from the author to improve the implementation of the search used for hyperparameter tuning.
In this case the discussion runs for more than thirty comments and centers on the requirements of the implementation and on how to implement it in line with the project guidelines.
This change was the user's first contribution not just to this project but to the whole GitHub community,
and that inexperience may have contributed to widening the discussion.
The same analysis was carried out for the issues with a high mean number of words per comment.
In this case, a very high value of the metric can often be traced back to shared blocks of code.
Examples are the issue already discussed for the comment count, but also issue 125, again from *BrikerMan/Kashgari*.
Other factors that help explain the figure are the presence of error traces (*mittagessen/kraken/206*) or of log messages useful for locating the origin of the problem (*robertmartin8/PyPortfolioOpt/177*).
\begin{tcolorbox}[colback=white, boxrule=0.3mm]
\ac{ML} \emph{issues} are characterized by more discussion.
A very high number of words per comment can indicate a heavy exchange of code \emph{snippets}, error logs, and environment configurations within the discussion.
\end{tcolorbox}
\hypertarget{sec:rq5}{%
\section[RQ5: how does the time-to-fix vary between ML bugs and other bugs?]{RQ5: how does the time-to-fix vary between machine learning bugs and other bugs?}\label{sec:rq5}}
Here too, @fig:day-to-fix shows a clear difference between \ac{ML} *fixes* and the others.
In particular, \ac{ML} bugs take longer on average to be resolved and show higher variance.
The median is also not centered in the box but shifted downwards:
the lower $50\%$ of \ac{ML} *bugs* is still resolved quickly (about two days), while the other $50\%$ can take considerably longer.
![Days needed for the fix](figures/day-to-fix.pdf){#fig:day-to-fix width=70%}
Further evidence of the longer time needed to resolve \ac{ML} problems comes from the outliers.
A generic issue is considered *anomalous* if it takes more than five days to resolve,
whereas an \ac{ML} *fix* must have a *time-to-fix* above thirty-five days to be considered an outlier.
The longer time needed to apply the correction indicates that \ac{ML} *bugs* are harder to locate and fix than generic ones.
This result also helps explain the finding of the previous section, since identifying the source of the problem appears to require a deeper discussion.
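The outlier thresholds quoted here come from the boxplot's upper whisker, conventionally placed at $Q_3 + 1.5 \cdot IQR$. A minimal sketch with hypothetical time-to-fix samples:

```python
import pandas as pd

def outlier_threshold(days: pd.Series) -> float:
    """Upper whisker of a boxplot: values above Q3 + 1.5*IQR count as outliers."""
    q1, q3 = days.quantile(0.25), days.quantile(0.75)
    return q3 + 1.5 * (q3 - q1)

# Hypothetical time-to-fix values in days, not the thesis data.
generic = pd.Series([0.1, 0.5, 1, 1, 2, 2, 3])
print(outlier_threshold(generic))
```

Because the threshold is derived from the quartiles, a distribution with a longer tail (as for the \ac{ML} fixes) automatically pushes the outlier cutoff far higher.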
For *fixes* that took an extremely long time, further factors can play a role.
In the *CamDavidsonPilon/lifelines* project, issue 507 reports an *overflow* problem in operations on the dataset.
By the project author's own admission the problem is trivial to solve, yet it still took a couple of months before the correction reached the main branch.
Other issues took a long time to resolve because they were considered low priority.
In these cases a *workaround* that mitigates the problem is usually provided,
and its presence probably lowers the priority of the *issue* even further, stretching the timeline again.
Examples of this behavior are issue 135 of the *robertmartin8/PyPortfolioOpt* project, which took about seven months to resolve, and issue 98 of the *mittagessen/kraken* project, which took almost two years.
For this last *RQ* as well, the statistical tests described earlier were run.
The results in @tbl:test-time-to-fix show a *p-value* below $0.05$ and a medium *effect size*.
These results not only confirm the difference observed in the boxplot, but also show that the impact on the metric is not negligible.
|            | Wilcoxon rank-sum p-value | Cliff's delta |
|------------|:----------------:|:-------------:|
| day-to-fix | 7.354e-53 | 0.355 |
: Results of the statistical tests on the time-to-fix {#tbl:test-time-to-fix}
\begin{tcolorbox}[colback=white, boxrule=0.3mm]
\ac{ML} problems take longer to resolve.
The low priority of an \emph{issue} and the presence of \emph{workarounds} are factors that delay the \emph{fix}.
\end{tcolorbox}
## Threats to validity
The most critical threat to validity of this work is a *construct* threat concerning the classification of the *issues*.
The classification was performed automatically by a *naïve Bayes* model.
Although the classifier has very high *recall*, its *precision* is only fair, so it is quite likely that generic *issues* were included among the \ac{ML} *issues*.
Moreover, since the classification of *issue fixing* changes depends on the classification of the *issues*, misclassifications propagate to this second classification as well.
As for internal threats to validity, the interpretation given to the *time-to-fix* must be noted.
In this work the *time-to-fix* was computed as the difference between the closing and opening timestamps of the *issue*.
This approximation is certainly simplistic, as the interval includes other sub-intervals such as *time-to-response*, *time-to-assign*, and so on.
Regarding external threats to validity, the results of this work generalize only to the thirty projects included in the dataset.
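The time-to-fix computation described above reduces to a timestamp difference. A minimal sketch, with field names mimicking the GitHub API (`created_at`/`closed_at`) and a hypothetical issue:

```python
from datetime import datetime

# Hypothetical issue record; timestamps follow the GitHub API format.
issue = {
    "created_at": "2020-03-01T09:00:00Z",
    "closed_at": "2020-03-08T15:30:00Z",
}

# Parse the ISO-8601 timestamps (the trailing "Z" becomes a UTC offset).
opened = datetime.fromisoformat(issue["created_at"].replace("Z", "+00:00"))
closed = datetime.fromisoformat(issue["closed_at"].replace("Z", "+00:00"))

# Time-to-fix in days: closing instant minus opening instant.
days_to_fix = (closed - opened).total_seconds() / 86400
print(round(days_to_fix, 2))
```

As the text notes, this single interval silently bundles together triage, assignment, and actual repair time.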

38
src/chapter_5.md Normal file

@ -0,0 +1,38 @@
# Conclusions {#sec:conclusions}
*RQ1* (@sec:rq1) allowed us to characterize the nature of the projects considered in this study.
The analysis of the imports showed that the use of \ac{ML} libraries varies from project to project.
This indicates that the projects in the dataset differ from one another and that some are more \ac{ML}-centric than others.
It also emerged that only a small share of projects have more than $45\%$ of their *source files* devoted to \ac{ML}.
Likewise, the portion of the system impacted by the changes turned out to be highly variable.
*RQ3*, *RQ4*, and *RQ5* (from @sec:rq3) assessed the specific differences in entropy, discussion, and *time-to-fix* between generic *issue fixing* changes and \ac{ML}-specific ones.
These analyses show that the two types of change share both similarities and differences.
For the entropy, that is the complexity of the software change process, no relevant differences emerged.
This suggests that the change process does not vary with the type of fix but remains constant.
For the discussion level and the *time-to-fix*, differences did emerge and were confirmed by the statistical tests.
In both cases, being an \ac{ML}-related *fix* pushed the metric upwards.
For the exchanged messages, not only was the mean number of messages higher, the messages were also longer on average.
This may depend both on the longer time required to locate and correct \ac{ML} problems and on a greater interest in these problems compared to the others.
In summary, this work revealed both similarities and differences in *fix* activity within \ac{ML} projects.
The main differences concern the discussion level, markedly higher for \ac{ML} *issues*, and the time needed to correct the defects, again higher for \ac{ML}.
No relevant differences emerged in the entropy generated by the changes.
Finally, the impact of \ac{ML} components on the architecture was shown to reflect the nature of the projects.
## Future work
In *RQ2*, unfortunately, a deeper analysis was not possible due to the lack of data.
A possible future development is a *multi-label* classifier able to identify the phase in which the problem occurred.
This would make it possible not only to know the phase of every *fix*, but also to define new analyses.
For example, differences in entropy, discussion, and *time-to-fix* could be investigated by the phase in which the *issue* arose.
For the entropy evaluation, the single commit was chosen as the reference time window.
With this configuration no noteworthy difference was found.
A possible future development is to evaluate the entropy over wider time windows and check whether differences appear in that case.
Finally, one aspect not considered in this work concerns the *contributors*.
A first analysis could assess whether there is an overlap between those who make generic *fixes* and those who handle \ac{ML}-related ones.
Differences by contributor type (internal, external) could also be investigated.

BIN src/figures/comments.pdf Normal file (binary file not shown)
BIN src/figures/day-to-fix.pdf Normal file (binary file not shown)
BIN src/figures/imports.pdf Normal file (binary file not shown)
BIN src/figures/words.pdf Normal file (binary file not shown)
(several additional binary figure files not shown)

@ -12,6 +12,21 @@ supervisor:
cosupervisor:
  title: Dott.ssa
  name: Vittoria Nardone
abstract: |
  In recent years the development of machine learning (ML) projects has grown strongly, and this growth is also reflected in research.
  The literature contains several studies that compare ML projects with generic projects, or that compare ML projects built with different tools and frameworks.
  This thesis investigates whether there are differences between ML issue fixes and generic issues within open-source ML projects written in Python.
  In particular, the focus is on:
  - the impact of the changes on the system architecture;
  - the distribution of the issues across the phases of an ML workflow;
  - the change entropy generated by the fixes;
  - the discussion level of the issues;
  - the time-to-fix of the problems.
  This study shows that no relevant differences exist in terms of change entropy, while significant differences are present in the time-to-fix and in the discussion level.
  It also shows that the differing nature of the projects is reflected in the architecture of the systems considered.
#############
babel: italian
lang: it-IT
@ -25,14 +40,34 @@ numbersections: true
eulerchapternumber: true
floatnumbering: true
link-citations: true
header-includes: |
  \usepackage{tcolorbox}
#############
ac-onlyused: true
ac-title: Acronimi
acronym:
  - short: AI
    long: Artificial Intelligence
  - short: API
    long: Application Program Interface
  - short: CERN
    long: European Council for Nuclear Research
  - short: DL
    long: Deep Learning
  - short: GPU
    long: Graphics Processing Unit
  - short: ML
    long: Machine Learning
  - short: NLP
    long: Natural Language Processing
  - short: PR
    long: Pull Request
  - short: RQ
    long: Research Question
  - short: SATD
    long: Self-Admitted Technical Debt
  - short: SO
    long: Stack Overflow
  - short: VPS
    long: Virtual Private Server
##### crossref #####

8
util/.idea/.gitignore vendored Executable file

@ -0,0 +1,8 @@
# Default ignored files
/shelf/
/workspace.xml
# Datasource local storage ignored files
/dataSources/
/dataSources.local.xml
# Editor-based HTTP Client requests
/httpRequests/


@ -0,0 +1,162 @@
<component name="InspectionProjectProfileManager">
<profile version="1.0">
<option name="myName" value="Project Default" />
<inspection_tool class="PyPep8Inspection" enabled="true" level="WEAK WARNING" enabled_by_default="true">
<option name="ignoredErrors">
<list>
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
<option value="W29" />
<option value="E501" />
</list>
</option>
</inspection_tool>
<inspection_tool class="PyPep8NamingInspection" enabled="true" level="WEAK WARNING" enabled_by_default="true">
<option name="ignoredErrors">
<list>
<option value="N806" />
</list>
</option>
</inspection_tool>
</profile>
</component>


@ -0,0 +1,6 @@
<component name="InspectionProjectProfileManager">
<settings>
<option name="USE_PROJECT_PROFILE" value="false" />
<version value="1.0" />
</settings>
</component>

4
util/.idea/misc.xml Executable file

@ -0,0 +1,4 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="ProjectRootManager" version="2" project-jdk-name="Python 3.8" project-jdk-type="Python SDK" />
</project>

8
util/.idea/modules.xml Executable file

@ -0,0 +1,8 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="ProjectModuleManager">
<modules>
<module fileurl="file://$PROJECT_DIR$/.idea/util.iml" filepath="$PROJECT_DIR$/.idea/util.iml" />
</modules>
</component>
</project>

11
util/.idea/util.iml Executable file

@ -0,0 +1,11 @@
<?xml version="1.0" encoding="UTF-8"?>
<module type="PYTHON_MODULE" version="4">
<component name="NewModuleRootManager">
<content url="file://$MODULE_DIR$" />
<orderEntry type="inheritedJdk" />
<orderEntry type="sourceFolder" forTests="false" />
</component>
<component name="TestRunnerService">
<option name="PROJECT_TEST_RUNNER" value="Nosetests" />
</component>
</module>

6
util/.idea/vcs.xml Executable file

@ -0,0 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="VcsDirectoryMappings">
<mapping directory="$PROJECT_DIR$/.." vcs="Git" />
</component>
</project>


@ -0,0 +1,126 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 21,
"metadata": {
"collapsed": true
},
"outputs": [
{
"data": {
"text/plain": [
"model training 128\n",
"model evaluation 91\n",
"model deployment 75\n",
"data cleaning 59\n",
"model requirements 47\n",
"feature engineering 36\n",
"data collection 25\n",
"Name: classification, dtype: int64"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import pandas as pd\n",
"\n",
"data = pd.read_csv('sampling_nb - sampling_nb.csv')\n",
"\n",
"data.drop(['second', 'url'], inplace=True, axis=1)\n",
"\n",
"data = data[~data['classification'].isin(['?', '', 'no pipeline', 'page not found', 'chinese', 'data labeling'])]\n",
"\n",
"data['classification'].value_counts()"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [
{
"data": {
"text/plain": [
"classification L2 class\n",
"data cleaning DP-DF 8\n",
" DP-LD 1\n",
" DP-O 17\n",
" DP-P 3\n",
" DP-R 13\n",
" DP-TE 9\n",
" DP-TM 2\n",
" DP-UT 6\n",
"data collection DC-DC 13\n",
" DC-DF 4\n",
" DC-F 3\n",
" DC-NS 1\n",
" DC-O 1\n",
" DC-S 3\n",
"feature engineering FE-BC 8\n",
" FE-CP 8\n",
" FE-H 10\n",
" FE-O 4\n",
" FE-T 6\n",
"model deployment MD-CI 44\n",
" MD-LR 6\n",
" MD-O 10\n",
" MD-SM 14\n",
" ME-O 1\n",
"model evaluation ME-AR 30\n",
" ME-C 29\n",
" ME-O 20\n",
" ME-RQ 8\n",
" ME-TP 4\n",
"model requirements MR-AM 18\n",
" MR-FR 25\n",
" MR-NM 2\n",
" MR-O 2\n",
"model training MT-BL 28\n",
" MT-GPU 19\n",
" MT-O 49\n",
" MT-RU 10\n",
" MT-TT 16\n",
" loss 6\n",
"dtype: int64"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data.groupby(['classification', 'L2 class']).size()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.5"
}
},
"nbformat": 4,
"nbformat_minor": 1
}


25
util/barplot-commit.py Executable file

@ -0,0 +1,25 @@
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
if __name__ == '__main__':
data = pd.read_csv('commit.csv')
data['type'] = data['is_ml'].apply(lambda x: 'ML' if x else 'No ML')
g = sns.catplot(x="type", kind="count", data=data)\
.set(title='Istanze dei commit in base al tipo')\
.set(xlabel='tipo')
ax = g.facet_axis(0, 0)
for p in ax.patches:
ax.text(
p.get_x() + p.get_width() * 0.39,
p.get_height() + 10,
p.get_height(),
color='black', rotation='horizontal', size='large')
plt.tight_layout()
#plt.show()
plt.savefig('../src/figures/count-commit.pdf')

47
util/barplot-issues-labelled.py Executable file

@ -0,0 +1,47 @@
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
if __name__ == '__main__':
data = pd.read_csv('sampling_all.csv')
data['label'] = data['Classification'].apply(lambda x: x.split(';')[0].strip())
data = data[~data['label'].isin(['?', 'chinese', 'page not found'])]
data['on_pipe'] = data['label'].apply(lambda x: 'No ML' if x == 'no pipeline' else 'ML')
g = sns.catplot(x="on_pipe", kind="count", data=data)\
.set(title='Istanze delle issues in base al tipo')\
.set(xlabel='tipo')
ax = g.facet_axis(0, 0)
for p in ax.patches:
ax.text(
p.get_x() + p.get_width() * 0.43,
p.get_height() + 3,
p.get_height(),
color='black', rotation='horizontal', size='large')
plt.tight_layout()
plt.savefig('../src/figures/count-type.pdf')
#plt.show()
plt.close()
data = data[data['label'] != 'no pipeline']
g = sns.catplot(y='label', kind='count', data=data, color='green')\
.set(title='Istanze delle issues in base alla fase') \
.set(ylabel='fase')
ax = g.facet_axis(0, 0)
for p in ax.patches:
ax.text(
p.get_width() + 0.25,
p.get_y() + p.get_height() / 2,
p.get_width(),
color='black', rotation='horizontal', size='large')
plt.tight_layout()
plt.savefig('../src/figures/count-phases.pdf')

49
util/cliffsDelta.py Executable file

@ -0,0 +1,49 @@
from __future__ import division
def cliffsDelta(lst1, lst2, **dull):
"""Returns delta and true if there are more than 'dull' differences"""
if not dull:
dull = {'small': 0.147, 'medium': 0.33, 'large': 0.474} # effect sizes from (Hess and Kromrey, 2004)
m, n = len(lst1), len(lst2)
lst2 = sorted(lst2)
j = more = less = 0
for repeats, x in runs(sorted(lst1)):
while j <= (n - 1) and lst2[j] < x:
j += 1
more += j*repeats
while j <= (n - 1) and lst2[j] == x:
j += 1
less += (n - j)*repeats
d = (more - less) / (m*n)
size = lookup_size(d, dull)
return d, size
def lookup_size(delta: float, dull: dict) -> str:
"""
:type delta: float
:type dull: dict, a dictionary of small, medium, large thresholds.
"""
delta = abs(delta)
if delta < dull['small']:
return 'negligible'
if dull['small'] <= delta < dull['medium']:
return 'small'
if dull['medium'] <= delta < dull['large']:
return 'medium'
if delta >= dull['large']:
return 'large'
def runs(lst):
"""Iterator, chunks repeated values"""
for j, two in enumerate(lst):
if j == 0:
one, i = two, 0
if one != two:
yield j - i, one
i = j
one = two
yield j - i + 1, two

3322
util/commit.csv Executable file

File diff suppressed because one or more lines are too long

3136
util/commit_analysis.csv Executable file

File diff suppressed because one or more lines are too long

31
util/commit_files.csv Executable file

File diff suppressed because one or more lines are too long

25
util/count-phases-on-commit.py Executable file

@ -0,0 +1,25 @@
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
if __name__ == '__main__':
data = pd.read_csv('commit_analysis.csv')
data.dropna(inplace=True)
g = sns.catplot(y="phases", kind="count", data=data, color='green') \
.set(title='Istanze dei fix in base alla fase') \
.set(ylabel='fase')
ax = g.facet_axis(0, 0)
for p in ax.patches:
ax.text(
p.get_width() + 0.2,
p.get_y() + p.get_height() / 2,
p.get_width(),
color='black', rotation='horizontal', size='large')
plt.tight_layout()
plt.savefig('../src/figures/count-fix-phases.pdf')
#plt.show()

29
util/discussion.py Executable file

@ -0,0 +1,29 @@
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
if __name__ == '__main__':
data = pd.read_csv('commit_analysis.csv')
data['type'] = data['is_ml'].apply(lambda x: 'ML' if x else 'No ML')
ylim = data['n_comments'].quantile(0.97)
sns.catplot(x='type', y='n_comments', kind='box', data=data) \
.set(title='Commenti in base al tipo di issue') \
.set(xlabel='tipo') \
.set(ylabel='numero di commenti') \
.set(ylim=(0, ylim))
plt.tight_layout()
plt.savefig('../src/figures/comments.pdf')
plt.close()
ylim = data['words_mean'].quantile(0.97)
sns.catplot(x='type', y='words_mean', kind='box', data=data) \
.set(title='Parole medie in un commento') \
.set(xlabel='tipo') \
.set(ylabel='parole medie') \
.set(ylim=(0, ylim))
plt.tight_layout()
plt.savefig('../src/figures/words.pdf')

28
util/entropy.py Executable file

@ -0,0 +1,28 @@
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
if __name__ == '__main__':
data = pd.read_csv('commit_analysis.csv')
data['type'] = data['is_ml'].apply(lambda x: 'ML' if x else 'No ML')
ylim = data['file_entropy'].quantile(0.95)
sns.catplot(x='type', y='file_entropy', kind='box', data=data) \
.set(title='Entropia del cambiamento calcolata sui file') \
.set(xlabel='tipo') \
.set(ylabel='entropia') \
.set(ylim=(0, ylim))
plt.tight_layout()
plt.savefig('../src/figures/files-entropy.pdf')
plt.close()
ylim = data['line_entropy'].quantile(0.95)
sns.catplot(x='type', y='line_entropy', kind='box', data=data) \
.set(title='Entropia del cambiamento calcolata sulle linee') \
.set(xlabel='tipo') \
.set(ylabel='entropia') \
.set(ylim=(0, ylim))
plt.tight_layout()
plt.savefig('../src/figures/lines-entropy.pdf')

74
util/extreme_cases.ipynb Executable file

File diff suppressed because one or more lines are too long

35
util/files-and-dirs.py Executable file

@ -0,0 +1,35 @@
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

if __name__ == '__main__':
    data = pd.read_csv('commit_files.csv')

    # Reshape the four per-project ratio columns into long form for seaborn;
    # the original row-by-row DataFrame.append was removed in pandas 2.0.
    long_df = data.melt(
        id_vars='project',
        value_vars=['no_ml_files_ratio', 'ml_files_ratio',
                    'no_ml_dirs_ratio', 'ml_dirs_ratio'],
        var_name='measure', value_name='value')
    long_df['type'] = long_df['measure'].str.startswith('no_ml') \
        .map({True: 'No ML', False: 'ML'})
    long_df['files/dirs'] = long_df['measure'].str.contains('files') \
        .map({True: 'Files', False: 'Directories'})

    plot = sns.boxplot(x='files/dirs', y='value', hue='type', data=long_df)
    plot.set_title('Percentage of modified files and directories')
    plot.set_ylabel('')
    plt.tight_layout()
    plt.savefig('../src/figures/files-and-directories.pdf')
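The ratio columns consumed from `commit_files.csv` (`ml_files_ratio`, `no_ml_dirs_ratio`, …) are computed upstream of this script. A hedged sketch of how such per-project ratios could be derived, assuming a per-commit table with the hypothetical columns `is_ml`, `n_files`, and `n_dirs` (only the output column names are taken from the diff):

```python
import pandas as pd

# Hypothetical per-commit records; column names other than the output
# ratios are assumptions for illustration.
commits = pd.DataFrame({
    'project': ['p1', 'p1', 'p1', 'p2', 'p2'],
    'is_ml':   [True, False, True, False, False],
    'n_files': [4, 2, 6, 3, 1],
    'n_dirs':  [2, 1, 3, 2, 1],
})

totals = commits.groupby('project')[['n_files', 'n_dirs']].sum()
ml_only = commits[commits['is_ml']].groupby('project')[['n_files', 'n_dirs']].sum()

# Fraction of touched files/directories that belong to ML commits;
# projects with no ML commits fall out of ml_only and get a ratio of 0.
ratios = (ml_only / totals).fillna(0.0).rename(
    columns={'n_files': 'ml_files_ratio', 'n_dirs': 'ml_dirs_ratio'})
print(ratios)
```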

44
util/import.py Executable file

@@ -0,0 +1,44 @@
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

def get(project, series) -> int:
    """Count of files flagged True for `project`; 0 if the group is absent."""
    try:
        return series[(project, True)]
    except KeyError:
        return 0


if __name__ == '__main__':
    data = pd.read_csv('imports_data.csv')
    total_files = data.groupby('project').size()
    ml = data.groupby(['project', 'is_ml']).size()
    ml_strict = data.groupby(['project', 'is_ml_strict']).size()

    # Collect records in a list and build the frame once;
    # DataFrame.append was removed in pandas 2.0.
    records = []
    for project in data['project'].unique():
        tot_files = total_files[project]
        records.append({'project': project, 'type': 'all',
                        'value': get(project, ml) / tot_files})
        records.append({'project': project, 'type': 'wo_pandas_numpy_scipy',
                        'value': get(project, ml_strict) / tot_files})
    help_df = pd.DataFrame(records)

    colors = ['#cab2d6', '#6a3d9a']
    sns.set_palette(sns.color_palette(colors))
    sns.catplot(x='type', y='value', kind='box', data=help_df) \
        .set(title='Percentage of files with ML imports',
             xlabel='ML libraries', ylabel='')
    plt.tight_layout()
    plt.savefig('../src/figures/imports.pdf')
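`imports_data.csv` arrives with `is_ml` and `is_ml_strict` already set; the extraction step is outside this diff. A minimal sketch of how per-file import flags could be computed with the standard `ast` module — the package lists are assumptions, chosen only to mirror the `wo_pandas_numpy_scipy` label above:

```python
import ast

# Hypothetical ML package lists; the real ones used to build
# imports_data.csv are not shown in this diff.
ML_PACKAGES = {'tensorflow', 'torch', 'keras', 'sklearn',
               'pandas', 'numpy', 'scipy'}
ML_PACKAGES_STRICT = ML_PACKAGES - {'pandas', 'numpy', 'scipy'}

def imported_packages(source):
    """Top-level package names imported by a Python source file."""
    pkgs = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            pkgs.update(alias.name.split('.')[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            pkgs.add(node.module.split('.')[0])
    return pkgs

src = "import numpy as np\nfrom torch import nn\nimport os.path\n"
pkgs = imported_packages(src)
print(bool(pkgs & ML_PACKAGES))         # True  -> is_ml
print(bool(pkgs & ML_PACKAGES_STRICT))  # True  -> is_ml_strict
```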

1804
util/imports_data.csv Executable file

File diff suppressed because it is too large

126
util/l2.ipynb Executable file

@@ -0,0 +1,126 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 21,
"metadata": {
"collapsed": true
},
"outputs": [
{
"data": {
"text/plain": [
"model training 128\n",
"model evaluation 91\n",
"model deployment 75\n",
"data cleaning 59\n",
"model requirements 47\n",
"feature engineering 36\n",
"data collection 25\n",
"Name: classification, dtype: int64"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import pandas as pd\n",
"\n",
"data = pd.read_csv('sampling_nb - sampling_nb.csv')\n",
"\n",
"data.drop(['second', 'url'], inplace=True, axis=1)\n",
"\n",
"data = data[~data['classification'].isin(['?', '', 'no pipeline', 'page not found', 'chinese', 'data labeling'])]\n",
"\n",
"data['classification'].value_counts()"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [
{
"data": {
"text/plain": [
"classification L2 class\n",
"data cleaning DP-DF 8\n",
" DP-LD 1\n",
" DP-O 17\n",
" DP-P 3\n",
" DP-R 13\n",
" DP-TE 9\n",
" DP-TM 2\n",
" DP-UT 6\n",
"data collection DC-DC 13\n",
" DC-DF 4\n",
" DC-F 3\n",
" DC-NS 1\n",
" DC-O 1\n",
" DC-S 3\n",
"feature engineering FE-BC 8\n",
" FE-CP 8\n",
" FE-H 10\n",
" FE-O 4\n",
" FE-T 6\n",
"model deployment MD-CI 44\n",
" MD-LR 6\n",
" MD-O 10\n",
" MD-SM 14\n",
" ME-O 1\n",
"model evaluation ME-AR 30\n",
" ME-C 29\n",
" ME-O 20\n",
" ME-RQ 8\n",
" ME-TP 4\n",
"model requirements MR-AM 18\n",
" MR-FR 25\n",
" MR-NM 2\n",
" MR-O 2\n",
"model training MT-BL 28\n",
" MT-GPU 19\n",
" MT-O 49\n",
" MT-RU 10\n",
" MT-TT 16\n",
" loss 6\n",
"dtype: int64"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"data.groupby(['classification', 'L2 class']).size()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.5"
}
},
"nbformat": 4,
"nbformat_minor": 1
}
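The notebook above tallies level-2 labels inside each pipeline stage. To compare stages of different sizes, the raw counts can be turned into within-stage shares; a small sketch on a few of the numbers from the output above:

```python
import pandas as pd

# A subset of the (stage, L2 class) counts from the notebook output.
counts = pd.Series({
    ('model training', 'MT-BL'): 28,
    ('model training', 'MT-GPU'): 19,
    ('model evaluation', 'ME-AR'): 30,
    ('model evaluation', 'ME-C'): 29,
})
counts.index.names = ['classification', 'L2 class']

# Share of each L2 class within its own stage.
share = counts / counts.groupby(level='classification').transform('sum')
print(share.round(3))
```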

376
util/sampling_all.csv Executable file

@@ -0,0 +1,376 @@
Project,Issue,Url,Labels,Classification,Is ML
davidsandberg/facenet,951,https://github.com/davidsandberg/facenet/issues/951,,no pipeline,False
deepfakes/faceswap,964,https://github.com/deepfakes/faceswap/issues/964,,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,968,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/968,,no pipeline,True
Tianxiaomo/pytorch-YOLOv4,136,https://github.com/Tianxiaomo/pytorch-YOLOv4/pull/136,,model evaluation,True
mittagessen/kraken,146,https://github.com/mittagessen/kraken/issues/146,,no pipeline,False
1adrianb/face-alignment,148,https://github.com/1adrianb/face-alignment/issues/148,,no pipeline,False
Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB,82,https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/issues/82,,model requirements,False
suragnair/alpha-zero-general,175,https://github.com/suragnair/alpha-zero-general/issues/175,,feature engineering;model training,True
deepfakes/faceswap,176,https://github.com/deepfakes/faceswap/issues/176,,no pipeline,True
BrikerMan/Kashgari,88,https://github.com/BrikerMan/Kashgari/issues/88,question,model requirements,True
BrikerMan/Kashgari,374,https://github.com/BrikerMan/Kashgari/pull/374,,model deployment,False
deepfakes/faceswap,443,https://github.com/deepfakes/faceswap/pull/443,,no pipeline,False
hanxiao/bert-as-service,561,https://github.com/hanxiao/bert-as-service/issues/561,,no pipeline,True
jhpyle/docassemble,325,https://github.com/jhpyle/docassemble/issues/325,,no pipeline,False
1adrianb/face-alignment,111,https://github.com/1adrianb/face-alignment/issues/111,,no pipeline,True
deepfakes/faceswap,7,https://github.com/deepfakes/faceswap/issues/7,dev;opencv,data cleaning,True
jrkerns/pylinac,67,https://github.com/jrkerns/pylinac/pull/67,,no pipeline,True
nextgenusfs/funannotate,180,https://github.com/nextgenusfs/funannotate/issues/180,,data cleaning,False
gboeing/osmnx,515,https://github.com/gboeing/osmnx/issues/515,,no pipeline,False
thtrieu/darkflow,876,https://github.com/thtrieu/darkflow/issues/876,,model training,True
regel/loudml,544,https://github.com/regel/loudml/issues/544,,no pipeline,False
davidsandberg/facenet,786,https://github.com/davidsandberg/facenet/issues/786,,no pipeline,False
davidsandberg/facenet,772,https://github.com/davidsandberg/facenet/issues/772,,feature engineering;model training;data collection,True
tianzhi0549/FCOS,230,https://github.com/tianzhi0549/FCOS/issues/230,,feature engineering;model training,True
regel/loudml,370,https://github.com/regel/loudml/issues/370,,model deployment,False
deepfakes/faceswap,431,https://github.com/deepfakes/faceswap/pull/431,,no pipeline,True
regel/loudml,334,https://github.com/regel/loudml/pull/334,dependencies,no pipeline,False
emedvedev/attention-ocr,143,https://github.com/emedvedev/attention-ocr/issues/143,,no pipeline,True
nextgenusfs/funannotate,290,https://github.com/nextgenusfs/funannotate/issues/290,,data cleaning,True
thtrieu/darkflow,1193,https://github.com/thtrieu/darkflow/issues/1193,,no pipeline,False
thtrieu/darkflow,332,https://github.com/thtrieu/darkflow/pull/332,,model requirements;model training,True
suragnair/alpha-zero-general,177,https://github.com/suragnair/alpha-zero-general/pull/177,,no pipeline,True
dpinney/omf,345,https://github.com/dpinney/omf/pull/345,,no pipeline,False
thtrieu/darkflow,1081,https://github.com/thtrieu/darkflow/issues/1081,,no pipeline,False
thtrieu/darkflow,330,https://github.com/thtrieu/darkflow/issues/330,,model training,True
Tianxiaomo/pytorch-YOLOv4,129,https://github.com/Tianxiaomo/pytorch-YOLOv4/issues/129,,chinese,False
nicodv/kmodes,105,https://github.com/nicodv/kmodes/issues/105,,data collection,False
deepfakes/faceswap,273,https://github.com/deepfakes/faceswap/issues/273,,data cleaning,False
tianzhi0549/FCOS,287,https://github.com/tianzhi0549/FCOS/issues/287,,model evaluation,True
Tianxiaomo/pytorch-YOLOv4,162,https://github.com/Tianxiaomo/pytorch-YOLOv4/issues/162,,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,2,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/2,,no pipeline,True
davidsandberg/facenet,997,https://github.com/davidsandberg/facenet/issues/997,,no pipeline,False
hanxiao/bert-as-service,350,https://github.com/hanxiao/bert-as-service/issues/350,,model deployment,False
hanxiao/bert-as-service,157,https://github.com/hanxiao/bert-as-service/pull/157,,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,761,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/761,,model training;data cleaning,False
dpinney/omf,218,https://github.com/dpinney/omf/issues/218,,no pipeline,False
CamDavidsonPilon/lifelines,177,https://github.com/CamDavidsonPilon/lifelines/pull/177,,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,641,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/641,,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,360,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/360,,no pipeline,True
SeanNaren/deepspeech.pytorch,5,https://github.com/SeanNaren/deepspeech.pytorch/pull/5,,no pipeline,False
regel/loudml,82,https://github.com/regel/loudml/pull/82,,no pipeline,False
gboeing/osmnx,156,https://github.com/gboeing/osmnx/issues/156,,no pipeline,True
SeanNaren/deepspeech.pytorch,275,https://github.com/SeanNaren/deepspeech.pytorch/issues/275,stale,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,949,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/949,,model evaluation,False
davidsandberg/facenet,206,https://github.com/davidsandberg/facenet/issues/206,,model training;data cleaning,True
davidsandberg/facenet,683,https://github.com/davidsandberg/facenet/issues/683,,no pipeline,False
thtrieu/darkflow,938,https://github.com/thtrieu/darkflow/issues/938,,no pipeline,False
CamDavidsonPilon/lifelines,764,https://github.com/CamDavidsonPilon/lifelines/issues/764,next minor release 🤞,no pipeline,False
Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB,47,https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/issues/47,,chinese,True
emedvedev/attention-ocr,171,https://github.com/emedvedev/attention-ocr/issues/171,,no pipeline,True
deepfakes/faceswap,818,https://github.com/deepfakes/faceswap/pull/818,,no pipeline,False
deepfakes/faceswap,123,https://github.com/deepfakes/faceswap/issues/123,code to integrate,model requirements;data cleaning,True
SeanNaren/deepspeech.pytorch,420,https://github.com/SeanNaren/deepspeech.pytorch/issues/420,,data cleaning,True
deeppomf/DeepCreamPy,16,https://github.com/deeppomf/DeepCreamPy/issues/16,,page not found,False
thtrieu/darkflow,431,https://github.com/thtrieu/darkflow/issues/431,,feature engineering;model training,True
ljvmiranda921/pyswarms,384,https://github.com/ljvmiranda921/pyswarms/pull/384,,no pipeline,True
thtrieu/darkflow,234,https://github.com/thtrieu/darkflow/issues/234,,no pipeline,True
CamDavidsonPilon/lifelines,320,https://github.com/CamDavidsonPilon/lifelines/pull/320,,no pipeline,False
jantic/DeOldify,237,https://github.com/jantic/DeOldify/issues/237,,no pipeline,False
thtrieu/darkflow,424,https://github.com/thtrieu/darkflow/issues/424,,no pipeline,False
1adrianb/face-alignment,78,https://github.com/1adrianb/face-alignment/issues/78,invalid,no pipeline,False
jantic/DeOldify,265,https://github.com/jantic/DeOldify/issues/265,,no pipeline,True
junyanz/pytorch-CycleGAN-and-pix2pix,265,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/pull/265,,no pipeline,False
robertmartin8/PyPortfolioOpt,18,https://github.com/robertmartin8/PyPortfolioOpt/issues/18,,no pipeline,False
ZQPei/deep_sort_pytorch,124,https://github.com/ZQPei/deep_sort_pytorch/issues/124,,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,956,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/956,,no pipeline,False
nextgenusfs/funannotate,237,https://github.com/nextgenusfs/funannotate/issues/237,,no pipeline,True
hanxiao/bert-as-service,242,https://github.com/hanxiao/bert-as-service/issues/242,,no pipeline,False
CamDavidsonPilon/lifelines,867,https://github.com/CamDavidsonPilon/lifelines/issues/867,enhancement,no pipeline,False
afeinstein20/eleanor,27,https://github.com/afeinstein20/eleanor/pull/27,,no pipeline,False
davidsandberg/facenet,891,https://github.com/davidsandberg/facenet/issues/891,,feature engineering;model training,True
jdb78/pytorch-forecasting,327,https://github.com/jdb78/pytorch-forecasting/pull/327,documentation,no pipeline,False
tianzhi0549/FCOS,64,https://github.com/tianzhi0549/FCOS/pull/64,,no pipeline,False
CamDavidsonPilon/lifelines,944,https://github.com/CamDavidsonPilon/lifelines/pull/944,,no pipeline,False
thtrieu/darkflow,889,https://github.com/thtrieu/darkflow/issues/889,,feature engineering;model training,True
SeanNaren/deepspeech.pytorch,345,https://github.com/SeanNaren/deepspeech.pytorch/pull/345,,no pipeline,False
namisan/mt-dnn,105,https://github.com/namisan/mt-dnn/pull/105,,no pipeline,False
BrikerMan/Kashgari,308,https://github.com/BrikerMan/Kashgari/pull/308,,no pipeline,False
mittagessen/kraken,95,https://github.com/mittagessen/kraken/issues/95,,no pipeline,False
deepfakes/faceswap,221,https://github.com/deepfakes/faceswap/issues/221,,model requirements,True
gboeing/osmnx,169,https://github.com/gboeing/osmnx/issues/169,question,no pipeline,True
ljvmiranda921/pyswarms,431,https://github.com/ljvmiranda921/pyswarms/pull/431,,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,425,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/425,,no pipeline,False
mravanelli/pytorch-kaldi,120,https://github.com/mravanelli/pytorch-kaldi/issues/120,,model requirements;data cleaning,True
CamDavidsonPilon/lifelines,1059,https://github.com/CamDavidsonPilon/lifelines/issues/1059,docs,no pipeline,False
nextgenusfs/funannotate,158,https://github.com/nextgenusfs/funannotate/issues/158,,no pipeline,False
BrikerMan/Kashgari,342,https://github.com/BrikerMan/Kashgari/issues/342,wontfix,no pipeline,False
davidsandberg/facenet,440,https://github.com/davidsandberg/facenet/issues/440,,no pipeline,False
namisan/mt-dnn,91,https://github.com/namisan/mt-dnn/issues/91,,no pipeline,False
CamDavidsonPilon/lifelines,515,https://github.com/CamDavidsonPilon/lifelines/issues/515,docs,no pipeline,False
deeppomf/DeepCreamPy,226,https://github.com/deeppomf/DeepCreamPy/issues/226,,page not found,False
CamDavidsonPilon/lifelines,391,https://github.com/CamDavidsonPilon/lifelines/issues/391,enhancement;next minor release 🤞,no pipeline,False
davidsandberg/facenet,813,https://github.com/davidsandberg/facenet/issues/813,,model requirements,True
nicodv/kmodes,23,https://github.com/nicodv/kmodes/issues/23,bug,no pipeline,False
ljvmiranda921/pyswarms,427,https://github.com/ljvmiranda921/pyswarms/issues/427,,model training,True
jdb78/pytorch-forecasting,163,https://github.com/jdb78/pytorch-forecasting/issues/163,question,model deployment,True
junyanz/pytorch-CycleGAN-and-pix2pix,206,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/206,,data collection;model training,True
junyanz/pytorch-CycleGAN-and-pix2pix,601,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/601,,no pipeline,True
Tianxiaomo/pytorch-YOLOv4,119,https://github.com/Tianxiaomo/pytorch-YOLOv4/issues/119,,no pipeline,True
hanxiao/bert-as-service,513,https://github.com/hanxiao/bert-as-service/issues/513,,no pipeline,False
Tianxiaomo/pytorch-YOLOv4,275,https://github.com/Tianxiaomo/pytorch-YOLOv4/issues/275,,model training,True
regel/loudml,37,https://github.com/regel/loudml/issues/37,,no pipeline,True
SeanNaren/deepspeech.pytorch,522,https://github.com/SeanNaren/deepspeech.pytorch/pull/522,,model training,True
BrikerMan/Kashgari,254,https://github.com/BrikerMan/Kashgari/pull/254,,no pipeline,False
deepfakes/faceswap,491,https://github.com/deepfakes/faceswap/issues/491,feature;feedback wanted,data cleaning,False
junyanz/pytorch-CycleGAN-and-pix2pix,1156,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1156,,data cleaning;model training,True
CamDavidsonPilon/lifelines,804,https://github.com/CamDavidsonPilon/lifelines/issues/804,docs,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,798,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/798,,no pipeline,False
suragnair/alpha-zero-general,8,https://github.com/suragnair/alpha-zero-general/pull/8,,no pipeline,True
regel/loudml,163,https://github.com/regel/loudml/issues/163,,no pipeline,False
hanxiao/bert-as-service,337,https://github.com/hanxiao/bert-as-service/issues/337,,no pipeline,False
robertmartin8/PyPortfolioOpt,169,https://github.com/robertmartin8/PyPortfolioOpt/pull/169,,no pipeline,False
jdb78/pytorch-forecasting,394,https://github.com/jdb78/pytorch-forecasting/pull/394,,no pipeline,False
davidsandberg/facenet,1166,https://github.com/davidsandberg/facenet/issues/1166,,model deployment,False
CamDavidsonPilon/lifelines,318,https://github.com/CamDavidsonPilon/lifelines/pull/318,,no pipeline,False
jantic/DeOldify,278,https://github.com/jantic/DeOldify/issues/278,,no pipeline,False
deepfakes/faceswap,457,https://github.com/deepfakes/faceswap/pull/457,,no pipeline,False
CamDavidsonPilon/lifelines,594,https://github.com/CamDavidsonPilon/lifelines/pull/594,,no pipeline,False
jrkerns/pylinac,89,https://github.com/jrkerns/pylinac/issues/89,,no pipeline,False
ljvmiranda921/pyswarms,292,https://github.com/ljvmiranda921/pyswarms/issues/292,,no pipeline,False
CamDavidsonPilon/lifelines,919,https://github.com/CamDavidsonPilon/lifelines/pull/919,,no pipeline,False
Tianxiaomo/pytorch-YOLOv4,178,https://github.com/Tianxiaomo/pytorch-YOLOv4/issues/178,,no pipeline,True
robertmartin8/PyPortfolioOpt,294,https://github.com/robertmartin8/PyPortfolioOpt/issues/294,bug,no pipeline,False
BrikerMan/Kashgari,350,https://github.com/BrikerMan/Kashgari/pull/350,,model requirements,True
junyanz/pytorch-CycleGAN-and-pix2pix,27,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/27,,feature engineering,True
junyanz/pytorch-CycleGAN-and-pix2pix,1046,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1046,,feature engineering;model training,False
jrkerns/pylinac,267,https://github.com/jrkerns/pylinac/issues/267,,no pipeline,False
SeanNaren/deepspeech.pytorch,80,https://github.com/SeanNaren/deepspeech.pytorch/pull/80,,model deployment,True
dpinney/omf,235,https://github.com/dpinney/omf/issues/235,PNNL,no pipeline,True
thtrieu/darkflow,538,https://github.com/thtrieu/darkflow/issues/538,,model training,True
ljvmiranda921/pyswarms,12,https://github.com/ljvmiranda921/pyswarms/pull/12,,no pipeline,False
regel/loudml,36,https://github.com/regel/loudml/issues/36,,no pipeline,True
deepfakes/faceswap,639,https://github.com/deepfakes/faceswap/issues/639,,model training,True
junyanz/pytorch-CycleGAN-and-pix2pix,305,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/305,,model training,True
junyanz/pytorch-CycleGAN-and-pix2pix,1234,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1234,,model training;data collection,False
ZQPei/deep_sort_pytorch,67,https://github.com/ZQPei/deep_sort_pytorch/issues/67,,no pipeline,False
thtrieu/darkflow,1189,https://github.com/thtrieu/darkflow/issues/1189,,model training,True
thtrieu/darkflow,771,https://github.com/thtrieu/darkflow/issues/771,,model training,True
CamDavidsonPilon/lifelines,619,https://github.com/CamDavidsonPilon/lifelines/issues/619,docs;enhancement,no pipeline,False
jantic/DeOldify,298,https://github.com/jantic/DeOldify/issues/298,,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,915,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/915,,no pipeline,False
BrikerMan/Kashgari,339,https://github.com/BrikerMan/Kashgari/issues/339,question,chinese,False
Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB,18,https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/issues/18,,no pipeline,False
deepfakes/faceswap,169,https://github.com/deepfakes/faceswap/issues/169,,no pipeline,True
jhpyle/docassemble,23,https://github.com/jhpyle/docassemble/issues/23,,no pipeline,False
tianzhi0549/FCOS,107,https://github.com/tianzhi0549/FCOS/issues/107,,no pipeline,False
jantic/DeOldify,250,https://github.com/jantic/DeOldify/issues/250,,no pipeline,False
Tianxiaomo/pytorch-YOLOv4,223,https://github.com/Tianxiaomo/pytorch-YOLOv4/issues/223,,?,True
junyanz/pytorch-CycleGAN-and-pix2pix,73,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/73,,no pipeline,False
BrikerMan/Kashgari,62,https://github.com/BrikerMan/Kashgari/issues/62,question,chinese,False
robertmartin8/PyPortfolioOpt,158,https://github.com/robertmartin8/PyPortfolioOpt/issues/158,enhancement,no pipeline,False
CamDavidsonPilon/lifelines,357,https://github.com/CamDavidsonPilon/lifelines/pull/357,,no pipeline,False
Tianxiaomo/pytorch-YOLOv4,249,https://github.com/Tianxiaomo/pytorch-YOLOv4/issues/249,,no pipeline,True
SeanNaren/deepspeech.pytorch,197,https://github.com/SeanNaren/deepspeech.pytorch/issues/197,,no pipeline,True
deepfakes/faceswap,90,https://github.com/deepfakes/faceswap/pull/90,,no pipeline,False
thtrieu/darkflow,466,https://github.com/thtrieu/darkflow/issues/466,,model training,True
junyanz/pytorch-CycleGAN-and-pix2pix,675,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/675,,no pipeline,False
davidsandberg/facenet,417,https://github.com/davidsandberg/facenet/issues/417,,model evaluation,True
gboeing/osmnx,601,https://github.com/gboeing/osmnx/issues/601,question,no pipeline,False
regel/loudml,95,https://github.com/regel/loudml/issues/95,help wanted,no pipeline,False
davidsandberg/facenet,480,https://github.com/davidsandberg/facenet/issues/480,,model deployment,False
davidsandberg/facenet,175,https://github.com/davidsandberg/facenet/issues/175,,model training;data cleaning,True
robertmartin8/PyPortfolioOpt,58,https://github.com/robertmartin8/PyPortfolioOpt/pull/58,,no pipeline,False
nextgenusfs/funannotate,119,https://github.com/nextgenusfs/funannotate/pull/119,,no pipeline,False
CamDavidsonPilon/lifelines,1186,https://github.com/CamDavidsonPilon/lifelines/issues/1186,,no pipeline,True
deeppomf/DeepCreamPy,118,https://github.com/deeppomf/DeepCreamPy/issues/118,,page not found,True
hanxiao/bert-as-service,203,https://github.com/hanxiao/bert-as-service/pull/203,,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,839,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/839,,no pipeline,False
hanxiao/bert-as-service,236,https://github.com/hanxiao/bert-as-service/pull/236,,no pipeline,False
CamDavidsonPilon/lifelines,1025,https://github.com/CamDavidsonPilon/lifelines/issues/1025,enhancement,no pipeline,False
deeppomf/DeepCreamPy,32,https://github.com/deeppomf/DeepCreamPy/issues/32,,page not found,False
jrkerns/pylinac,104,https://github.com/jrkerns/pylinac/issues/104,,no pipeline,False
davidsandberg/facenet,1086,https://github.com/davidsandberg/facenet/issues/1086,,no pipeline,True
deepfakes/faceswap,580,https://github.com/deepfakes/faceswap/issues/580,,no pipeline,False
nextgenusfs/funannotate,326,https://github.com/nextgenusfs/funannotate/issues/326,,model training,True
nextgenusfs/funannotate,215,https://github.com/nextgenusfs/funannotate/issues/215,,no pipeline,True
regel/loudml,388,https://github.com/regel/loudml/issues/388,,no pipeline,True
regel/loudml,137,https://github.com/regel/loudml/issues/137,CentOS,no pipeline,False
davidsandberg/facenet,1087,https://github.com/davidsandberg/facenet/issues/1087,,model evaluation,True
CamDavidsonPilon/lifelines,1197,https://github.com/CamDavidsonPilon/lifelines/pull/1197,,no pipeline,False
hanxiao/bert-as-service,257,https://github.com/hanxiao/bert-as-service/issues/257,,no pipeline,False
SeanNaren/deepspeech.pytorch,282,https://github.com/SeanNaren/deepspeech.pytorch/issues/282,stale,model evaluation,True
davidsandberg/facenet,171,https://github.com/davidsandberg/facenet/issues/171,,no pipeline,False
SeanNaren/deepspeech.pytorch,391,https://github.com/SeanNaren/deepspeech.pytorch/pull/391,stale,no pipeline,False
emedvedev/attention-ocr,85,https://github.com/emedvedev/attention-ocr/issues/85,,feature engineering,True
jrkerns/pylinac,47,https://github.com/jrkerns/pylinac/issues/47,,no pipeline,True
Tianxiaomo/pytorch-YOLOv4,74,https://github.com/Tianxiaomo/pytorch-YOLOv4/issues/74,,chinese,True
davidsandberg/facenet,902,https://github.com/davidsandberg/facenet/issues/902,,model deployment,True
afeinstein20/eleanor,130,https://github.com/afeinstein20/eleanor/issues/130,,no pipeline,False
davidsandberg/facenet,848,https://github.com/davidsandberg/facenet/pull/848,,no pipeline,False
mittagessen/kraken,239,https://github.com/mittagessen/kraken/issues/239,,feature engineering,True
afeinstein20/eleanor,57,https://github.com/afeinstein20/eleanor/issues/57,,no pipeline,False
gboeing/osmnx,201,https://github.com/gboeing/osmnx/issues/201,installation,no pipeline,False
suragnair/alpha-zero-general,132,https://github.com/suragnair/alpha-zero-general/pull/132,,no pipeline,False
CamDavidsonPilon/lifelines,630,https://github.com/CamDavidsonPilon/lifelines/issues/630,,feature engineering,False
regel/loudml,301,https://github.com/regel/loudml/pull/301,dependencies,no pipeline,False
jantic/DeOldify,99,https://github.com/jantic/DeOldify/pull/99,,no pipeline,False
thtrieu/darkflow,950,https://github.com/thtrieu/darkflow/issues/950,,model training,True
deepfakes/faceswap,756,https://github.com/deepfakes/faceswap/pull/756,,no pipeline,True
davidsandberg/facenet,890,https://github.com/davidsandberg/facenet/issues/890,,model training,True
mittagessen/kraken,156,https://github.com/mittagessen/kraken/issues/156,,no pipeline,False
ljvmiranda921/pyswarms,378,https://github.com/ljvmiranda921/pyswarms/pull/378,,no pipeline,False
davidsandberg/facenet,105,https://github.com/davidsandberg/facenet/issues/105,,model training,True
davidsandberg/facenet,612,https://github.com/davidsandberg/facenet/issues/612,,no pipeline,True
CamDavidsonPilon/lifelines,881,https://github.com/CamDavidsonPilon/lifelines/issues/881,,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,158,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/158,,no pipeline,True
gboeing/osmnx,623,https://github.com/gboeing/osmnx/issues/623,bug,no pipeline,False
suragnair/alpha-zero-general,37,https://github.com/suragnair/alpha-zero-general/issues/37,,no pipeline,True
ljvmiranda921/pyswarms,440,https://github.com/ljvmiranda921/pyswarms/pull/440,,no pipeline,True
deepfakes/faceswap,38,https://github.com/deepfakes/faceswap/issues/38,bug;performance,no pipeline,False
suragnair/alpha-zero-general,217,https://github.com/suragnair/alpha-zero-general/issues/217,,no pipeline,False
jrkerns/pylinac,281,https://github.com/jrkerns/pylinac/pull/281,,no pipeline,False
1adrianb/face-alignment,230,https://github.com/1adrianb/face-alignment/pull/230,,no pipeline,False
mittagessen/kraken,30,https://github.com/mittagessen/kraken/issues/30,,no pipeline,True
davidsandberg/facenet,398,https://github.com/davidsandberg/facenet/issues/398,,model evaluation,False
SeanNaren/deepspeech.pytorch,152,https://github.com/SeanNaren/deepspeech.pytorch/pull/152,,no pipeline,False
tianzhi0549/FCOS,49,https://github.com/tianzhi0549/FCOS/issues/49,,no pipeline,True
BrikerMan/Kashgari,218,https://github.com/BrikerMan/Kashgari/issues/218,question,chinese,True
mravanelli/pytorch-kaldi,54,https://github.com/mravanelli/pytorch-kaldi/issues/54,,model training;model deployment,True
jdb78/pytorch-forecasting,227,https://github.com/jdb78/pytorch-forecasting/pull/227,dependencies,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,598,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/pull/598,,no pipeline,True
gboeing/osmnx,592,https://github.com/gboeing/osmnx/pull/592,,no pipeline,False
deepfakes/faceswap,567,https://github.com/deepfakes/faceswap/issues/567,,no pipeline,False
mravanelli/pytorch-kaldi,223,https://github.com/mravanelli/pytorch-kaldi/issues/223,stalled,feature engineering,True
nextgenusfs/funannotate,327,https://github.com/nextgenusfs/funannotate/issues/327,,no pipeline,True
SeanNaren/deepspeech.pytorch,561,https://github.com/SeanNaren/deepspeech.pytorch/issues/561,stale,no pipeline,False
thtrieu/darkflow,512,https://github.com/thtrieu/darkflow/issues/512,,model evaluation,True
nextgenusfs/funannotate,409,https://github.com/nextgenusfs/funannotate/issues/409,,model training,True
tianzhi0549/FCOS,285,https://github.com/tianzhi0549/FCOS/issues/285,,model evaluation,True
junyanz/pytorch-CycleGAN-and-pix2pix,482,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/pull/482,,no pipeline,False
Tianxiaomo/pytorch-YOLOv4,15,https://github.com/Tianxiaomo/pytorch-YOLOv4/issues/15,,chinese,False
thtrieu/darkflow,1047,https://github.com/thtrieu/darkflow/issues/1047,,no pipeline,False
gboeing/osmnx,206,https://github.com/gboeing/osmnx/pull/206,,no pipeline,False
jrkerns/pylinac,185,https://github.com/jrkerns/pylinac/issues/185,,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,1187,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1187,,no pipeline,True
davidsandberg/facenet,1078,https://github.com/davidsandberg/facenet/pull/1078,,no pipeline,False
davidsandberg/facenet,483,https://github.com/davidsandberg/facenet/issues/483,,no pipeline,True
jhpyle/docassemble,283,https://github.com/jhpyle/docassemble/issues/283,,no pipeline,False
CamDavidsonPilon/lifelines,282,https://github.com/CamDavidsonPilon/lifelines/issues/282,,model deployment,False
deepfakes/faceswap,80,https://github.com/deepfakes/faceswap/issues/80,,no pipeline,True
1adrianb/face-alignment,45,https://github.com/1adrianb/face-alignment/issues/45,,model training;model evaluation,True
thtrieu/darkflow,969,https://github.com/thtrieu/darkflow/issues/969,,feature engineering,True
hanxiao/bert-as-service,373,https://github.com/hanxiao/bert-as-service/issues/373,,no pipeline,True
hanxiao/bert-as-service,310,https://github.com/hanxiao/bert-as-service/issues/310,,data cleaning,False
dpinney/omf,57,https://github.com/dpinney/omf/issues/57,,no pipeline,False
jantic/DeOldify,30,https://github.com/jantic/DeOldify/issues/30,,no pipeline,False
ljvmiranda921/pyswarms,197,https://github.com/ljvmiranda921/pyswarms/pull/197,,no pipeline,False
namisan/mt-dnn,156,https://github.com/namisan/mt-dnn/issues/156,,no pipeline,True
BrikerMan/Kashgari,413,https://github.com/BrikerMan/Kashgari/pull/413,,no pipeline,False
Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB,109,https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/issues/109,,no pipeline,False
deepfakes/faceswap,130,https://github.com/deepfakes/faceswap/pull/130,,no pipeline,False
gboeing/osmnx,273,https://github.com/gboeing/osmnx/issues/273,enhancement,no pipeline,False
jhpyle/docassemble,320,https://github.com/jhpyle/docassemble/pull/320,,no pipeline,False
jhpyle/docassemble,158,https://github.com/jhpyle/docassemble/issues/158,,no pipeline,False
jhpyle/docassemble,38,https://github.com/jhpyle/docassemble/issues/38,,no pipeline,False
davidsandberg/facenet,407,https://github.com/davidsandberg/facenet/issues/407,,no pipeline,False
dpinney/omf,338,https://github.com/dpinney/omf/issues/338,,no pipeline,False
CamDavidsonPilon/lifelines,350,https://github.com/CamDavidsonPilon/lifelines/issues/350,convergence issue,data cleaning,True
jdb78/pytorch-forecasting,352,https://github.com/jdb78/pytorch-forecasting/issues/352,question,no pipeline,True
gboeing/osmnx,431,https://github.com/gboeing/osmnx/pull/431,,no pipeline,False
jhpyle/docassemble,24,https://github.com/jhpyle/docassemble/issues/24,,no pipeline,False
nextgenusfs/funannotate,257,https://github.com/nextgenusfs/funannotate/issues/257,,no pipeline,False
SeanNaren/deepspeech.pytorch,517,https://github.com/SeanNaren/deepspeech.pytorch/issues/517,,no pipeline,True
thtrieu/darkflow,78,https://github.com/thtrieu/darkflow/pull/78,,no pipeline,False
ljvmiranda921/pyswarms,272,https://github.com/ljvmiranda921/pyswarms/pull/272,,no pipeline,False
deepfakes/faceswap,356,https://github.com/deepfakes/faceswap/issues/356,,data cleaning;feature engineering,False
jdb78/pytorch-forecasting,354,https://github.com/jdb78/pytorch-forecasting/issues/354,dependencies,no pipeline,True
gboeing/osmnx,584,https://github.com/gboeing/osmnx/pull/584,,no pipeline,True
ZQPei/deep_sort_pytorch,15,https://github.com/ZQPei/deep_sort_pytorch/issues/15,,chinese,False
dpinney/omf,81,https://github.com/dpinney/omf/issues/81,,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,1006,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1006,,no pipeline,False
deepfakes/faceswap,861,https://github.com/deepfakes/faceswap/issues/861,,no pipeline,True
afeinstein20/eleanor,198,https://github.com/afeinstein20/eleanor/pull/198,,model evaluation,True
nextgenusfs/funannotate,537,https://github.com/nextgenusfs/funannotate/issues/537,,no pipeline,True
gboeing/osmnx,123,https://github.com/gboeing/osmnx/issues/123,question,no pipeline,True
thtrieu/darkflow,428,https://github.com/thtrieu/darkflow/issues/428,,data collection,True
gboeing/osmnx,495,https://github.com/gboeing/osmnx/pull/495,,no pipeline,False
nextgenusfs/funannotate,439,https://github.com/nextgenusfs/funannotate/issues/439,,no pipeline,True
deepfakes/faceswap,375,https://github.com/deepfakes/faceswap/issues/375,,no pipeline,True
emedvedev/attention-ocr,141,https://github.com/emedvedev/attention-ocr/issues/141,,data collection,True
gboeing/osmnx,58,https://github.com/gboeing/osmnx/issues/58,bug,no pipeline,False
davidsandberg/facenet,99,https://github.com/davidsandberg/facenet/issues/99,,no pipeline,True
deepfakes/faceswap,502,https://github.com/deepfakes/faceswap/pull/502,,no pipeline,False
namisan/mt-dnn,88,https://github.com/namisan/mt-dnn/pull/88,,no pipeline,False
1adrianb/face-alignment,37,https://github.com/1adrianb/face-alignment/issues/37,,no pipeline,True
thtrieu/darkflow,959,https://github.com/thtrieu/darkflow/pull/959,,no pipeline,False
hanxiao/bert-as-service,160,https://github.com/hanxiao/bert-as-service/issues/160,,no pipeline,True
hanxiao/bert-as-service,213,https://github.com/hanxiao/bert-as-service/issues/213,discussion;feel free to contribute;help wanted,model requirements,True
tianzhi0549/FCOS,165,https://github.com/tianzhi0549/FCOS/pull/165,,no pipeline,False
deepfakes/faceswap,820,https://github.com/deepfakes/faceswap/issues/820,,no pipeline,True
jdb78/pytorch-forecasting,43,https://github.com/jdb78/pytorch-forecasting/pull/43,,no pipeline,True
tianzhi0549/FCOS,46,https://github.com/tianzhi0549/FCOS/issues/46,,no pipeline,True
junyanz/pytorch-CycleGAN-and-pix2pix,128,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/128,,data cleaning,False
deeppomf/DeepCreamPy,119,https://github.com/deeppomf/DeepCreamPy/issues/119,,page not found,True
CamDavidsonPilon/lifelines,913,https://github.com/CamDavidsonPilon/lifelines/issues/913,,no pipeline,False
regel/loudml,60,https://github.com/regel/loudml/issues/60,bug,no pipeline,False
gboeing/osmnx,369,https://github.com/gboeing/osmnx/pull/369,,no pipeline,True
afeinstein20/eleanor,223,https://github.com/afeinstein20/eleanor/issues/223,,no pipeline,False
CamDavidsonPilon/lifelines,248,https://github.com/CamDavidsonPilon/lifelines/issues/248,,no pipeline,False
dpinney/omf,321,https://github.com/dpinney/omf/issues/321,,no pipeline,False
ljvmiranda921/pyswarms,394,https://github.com/ljvmiranda921/pyswarms/issues/394,,no pipeline,True
deepfakes/faceswap,183,https://github.com/deepfakes/faceswap/pull/183,,no pipeline,False
davidsandberg/facenet,49,https://github.com/davidsandberg/facenet/issues/49,,feature engineering,False
ZQPei/deep_sort_pytorch,153,https://github.com/ZQPei/deep_sort_pytorch/issues/153,,no pipeline,True
jdb78/pytorch-forecasting,147,https://github.com/jdb78/pytorch-forecasting/pull/147,dependencies,no pipeline,False
SeanNaren/deepspeech.pytorch,188,https://github.com/SeanNaren/deepspeech.pytorch/issues/188,,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,209,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/209,,model evaluation,True
mravanelli/pytorch-kaldi,245,https://github.com/mravanelli/pytorch-kaldi/issues/245,,no pipeline,True
nextgenusfs/funannotate,44,https://github.com/nextgenusfs/funannotate/pull/44,,no pipeline,False
Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB,131,https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/issues/131,,no pipeline,True
jdb78/pytorch-forecasting,72,https://github.com/jdb78/pytorch-forecasting/pull/72,,no pipeline,False
CamDavidsonPilon/lifelines,117,https://github.com/CamDavidsonPilon/lifelines/issues/117,,no pipeline,False
nicodv/kmodes,12,https://github.com/nicodv/kmodes/pull/12,,no pipeline,False
deepfakes/faceswap,806,https://github.com/deepfakes/faceswap/issues/806,,no pipeline,True
afeinstein20/eleanor,112,https://github.com/afeinstein20/eleanor/issues/112,,no pipeline,False
hanxiao/bert-as-service,412,https://github.com/hanxiao/bert-as-service/issues/412,,no pipeline,False
gboeing/osmnx,522,https://github.com/gboeing/osmnx/pull/522,enhancement,no pipeline,True
jhpyle/docassemble,258,https://github.com/jhpyle/docassemble/pull/258,,no pipeline,False
CamDavidsonPilon/lifelines,447,https://github.com/CamDavidsonPilon/lifelines/issues/447,,model evaluation,True
nextgenusfs/funannotate,188,https://github.com/nextgenusfs/funannotate/issues/188,,no pipeline,False
robertmartin8/PyPortfolioOpt,62,https://github.com/robertmartin8/PyPortfolioOpt/issues/62,packaging,no pipeline,False
dpinney/omf,292,https://github.com/dpinney/omf/issues/292,NotAnIssue,no pipeline,False
thtrieu/darkflow,105,https://github.com/thtrieu/darkflow/issues/105,,no pipeline,True
junyanz/pytorch-CycleGAN-and-pix2pix,132,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/pull/132,,no pipeline,False
thtrieu/darkflow,484,https://github.com/thtrieu/darkflow/issues/484,bug,no pipeline,False
hanxiao/bert-as-service,587,https://github.com/hanxiao/bert-as-service/issues/587,,no pipeline,True
namisan/mt-dnn,98,https://github.com/namisan/mt-dnn/issues/98,,model deployment,True
thtrieu/darkflow,501,https://github.com/thtrieu/darkflow/issues/501,,data collection;model training,False
thtrieu/darkflow,337,https://github.com/thtrieu/darkflow/issues/337,,model requirements,True
mravanelli/pytorch-kaldi,86,https://github.com/mravanelli/pytorch-kaldi/issues/86,,no pipeline,True
emedvedev/attention-ocr,177,https://github.com/emedvedev/attention-ocr/pull/177,,no pipeline,False
nicodv/kmodes,36,https://github.com/nicodv/kmodes/issues/36,easy;enhancement,no pipeline,False
jhpyle/docassemble,66,https://github.com/jhpyle/docassemble/issues/66,,no pipeline,False
regel/loudml,336,https://github.com/regel/loudml/pull/336,dependencies,no pipeline,False
mravanelli/pytorch-kaldi,108,https://github.com/mravanelli/pytorch-kaldi/issues/108,,no pipeline,False
gboeing/osmnx,530,https://github.com/gboeing/osmnx/issues/530,,no pipeline,True
jdb78/pytorch-forecasting,32,https://github.com/jdb78/pytorch-forecasting/pull/32,,no pipeline,False
deeppomf/DeepCreamPy,156,https://github.com/deeppomf/DeepCreamPy/issues/156,,page not found,False
thtrieu/darkflow,973,https://github.com/thtrieu/darkflow/issues/973,,no pipeline,True
CamDavidsonPilon/lifelines,7,https://github.com/CamDavidsonPilon/lifelines/issues/7,bug,no pipeline,False
thtrieu/darkflow,109,https://github.com/thtrieu/darkflow/issues/109,,no pipeline,False
robertmartin8/PyPortfolioOpt,247,https://github.com/robertmartin8/PyPortfolioOpt/pull/247,,no pipeline,False
SeanNaren/deepspeech.pytorch,111,https://github.com/SeanNaren/deepspeech.pytorch/issues/111,,model evaluation,True
robertmartin8/PyPortfolioOpt,258,https://github.com/robertmartin8/PyPortfolioOpt/issues/258,question,no pipeline,True
thtrieu/darkflow,278,https://github.com/thtrieu/darkflow/issues/278,,model requirements,False
ljvmiranda921/pyswarms,145,https://github.com/ljvmiranda921/pyswarms/pull/145,,no pipeline,False
dpinney/omf,217,https://github.com/dpinney/omf/issues/217,,no pipeline,False
mittagessen/kraken,161,https://github.com/mittagessen/kraken/issues/161,,model training,True
junyanz/pytorch-CycleGAN-and-pix2pix,233,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/233,,no pipeline,True
ZQPei/deep_sort_pytorch,1,https://github.com/ZQPei/deep_sort_pytorch/issues/1,,chinese,True
SeanNaren/deepspeech.pytorch,412,https://github.com/SeanNaren/deepspeech.pytorch/issues/412,,no pipeline,True
regel/loudml,275,https://github.com/regel/loudml/pull/275,dependencies,no pipeline,False
CamDavidsonPilon/lifelines,173,https://github.com/CamDavidsonPilon/lifelines/issues/173,,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,334,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/334,,data cleaning,True
dpinney/omf,216,https://github.com/dpinney/omf/issues/216,,no pipeline,True
davidsandberg/facenet,127,https://github.com/davidsandberg/facenet/issues/127,,model training,True
jantic/DeOldify,273,https://github.com/jantic/DeOldify/issues/273,,no pipeline,True
deepfakes/faceswap,414,https://github.com/deepfakes/faceswap/issues/414,,model evaluation,False
thtrieu/darkflow,161,https://github.com/thtrieu/darkflow/issues/161,,no pipeline,True
Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB,227,https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/issues/227,,chinese,False
CamDavidsonPilon/lifelines,406,https://github.com/CamDavidsonPilon/lifelines/issues/406,,no pipeline,False
CamDavidsonPilon/lifelines,843,https://github.com/CamDavidsonPilon/lifelines/pull/843,,no pipeline,True
BrikerMan/Kashgari,167,https://github.com/BrikerMan/Kashgari/issues/167,bug;wontfix,model deployment,True
davidsandberg/facenet,34,https://github.com/davidsandberg/facenet/issues/34,,model deployment,True
CamDavidsonPilon/lifelines,901,https://github.com/CamDavidsonPilon/lifelines/pull/901,,no pipeline,False
Project,Issue,Url,Labels,Classification,Is ML
davidsandberg/facenet,951,https://github.com/davidsandberg/facenet/issues/951,,no pipeline,False
deepfakes/faceswap,964,https://github.com/deepfakes/faceswap/issues/964,,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,968,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/968,,no pipeline,True
Tianxiaomo/pytorch-YOLOv4,136,https://github.com/Tianxiaomo/pytorch-YOLOv4/pull/136,,model evaluation,True
mittagessen/kraken,146,https://github.com/mittagessen/kraken/issues/146,,no pipeline,False
1adrianb/face-alignment,148,https://github.com/1adrianb/face-alignment/issues/148,,no pipeline,False
Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB,82,https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/issues/82,,model requirements,False
suragnair/alpha-zero-general,175,https://github.com/suragnair/alpha-zero-general/issues/175,,feature engineering;model training,True
deepfakes/faceswap,176,https://github.com/deepfakes/faceswap/issues/176,,no pipeline,True
BrikerMan/Kashgari,88,https://github.com/BrikerMan/Kashgari/issues/88,question,model requirements,True
BrikerMan/Kashgari,374,https://github.com/BrikerMan/Kashgari/pull/374,,model deployment,False
deepfakes/faceswap,443,https://github.com/deepfakes/faceswap/pull/443,,no pipeline,False
hanxiao/bert-as-service,561,https://github.com/hanxiao/bert-as-service/issues/561,,no pipeline,True
jhpyle/docassemble,325,https://github.com/jhpyle/docassemble/issues/325,,no pipeline,False
1adrianb/face-alignment,111,https://github.com/1adrianb/face-alignment/issues/111,,no pipeline,True
deepfakes/faceswap,7,https://github.com/deepfakes/faceswap/issues/7,dev;opencv,data cleaning,True
jrkerns/pylinac,67,https://github.com/jrkerns/pylinac/pull/67,,no pipeline,True
nextgenusfs/funannotate,180,https://github.com/nextgenusfs/funannotate/issues/180,,data cleaning,False
gboeing/osmnx,515,https://github.com/gboeing/osmnx/issues/515,,no pipeline,False
thtrieu/darkflow,876,https://github.com/thtrieu/darkflow/issues/876,,model training,True
regel/loudml,544,https://github.com/regel/loudml/issues/544,,no pipeline,False
davidsandberg/facenet,786,https://github.com/davidsandberg/facenet/issues/786,,no pipeline,False
davidsandberg/facenet,772,https://github.com/davidsandberg/facenet/issues/772,,feature engineering;model training;data collection,True
tianzhi0549/FCOS,230,https://github.com/tianzhi0549/FCOS/issues/230,,feature engineering;model training,True
regel/loudml,370,https://github.com/regel/loudml/issues/370,,model deployment,False
deepfakes/faceswap,431,https://github.com/deepfakes/faceswap/pull/431,,no pipeline,True
regel/loudml,334,https://github.com/regel/loudml/pull/334,dependencies,no pipeline,False
emedvedev/attention-ocr,143,https://github.com/emedvedev/attention-ocr/issues/143,,no pipeline,True
nextgenusfs/funannotate,290,https://github.com/nextgenusfs/funannotate/issues/290,,data cleaning,True
thtrieu/darkflow,1193,https://github.com/thtrieu/darkflow/issues/1193,,no pipeline,False
thtrieu/darkflow,332,https://github.com/thtrieu/darkflow/pull/332,,model requirements;model training,True
suragnair/alpha-zero-general,177,https://github.com/suragnair/alpha-zero-general/pull/177,,no pipeline,True
dpinney/omf,345,https://github.com/dpinney/omf/pull/345,,no pipeline,False
thtrieu/darkflow,1081,https://github.com/thtrieu/darkflow/issues/1081,,no pipeline,False
thtrieu/darkflow,330,https://github.com/thtrieu/darkflow/issues/330,,model training,True
Tianxiaomo/pytorch-YOLOv4,129,https://github.com/Tianxiaomo/pytorch-YOLOv4/issues/129,,chinese,False
nicodv/kmodes,105,https://github.com/nicodv/kmodes/issues/105,,data collection,False
deepfakes/faceswap,273,https://github.com/deepfakes/faceswap/issues/273,,data cleaning,False
tianzhi0549/FCOS,287,https://github.com/tianzhi0549/FCOS/issues/287,,model evaluation,True
Tianxiaomo/pytorch-YOLOv4,162,https://github.com/Tianxiaomo/pytorch-YOLOv4/issues/162,,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,2,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/2,,no pipeline,True
davidsandberg/facenet,997,https://github.com/davidsandberg/facenet/issues/997,,no pipeline,False
hanxiao/bert-as-service,350,https://github.com/hanxiao/bert-as-service/issues/350,,model deployment,False
hanxiao/bert-as-service,157,https://github.com/hanxiao/bert-as-service/pull/157,,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,761,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/761,,model training;data cleaning,False
dpinney/omf,218,https://github.com/dpinney/omf/issues/218,,no pipeline,False
CamDavidsonPilon/lifelines,177,https://github.com/CamDavidsonPilon/lifelines/pull/177,,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,641,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/641,,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,360,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/360,,no pipeline,True
SeanNaren/deepspeech.pytorch,5,https://github.com/SeanNaren/deepspeech.pytorch/pull/5,,no pipeline,False
regel/loudml,82,https://github.com/regel/loudml/pull/82,,no pipeline,False
gboeing/osmnx,156,https://github.com/gboeing/osmnx/issues/156,,no pipeline,True
SeanNaren/deepspeech.pytorch,275,https://github.com/SeanNaren/deepspeech.pytorch/issues/275,stale,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,949,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/949,,model evaluation,False
davidsandberg/facenet,206,https://github.com/davidsandberg/facenet/issues/206,,model training;data cleaning,True
davidsandberg/facenet,683,https://github.com/davidsandberg/facenet/issues/683,,no pipeline,False
thtrieu/darkflow,938,https://github.com/thtrieu/darkflow/issues/938,,no pipeline,False
CamDavidsonPilon/lifelines,764,https://github.com/CamDavidsonPilon/lifelines/issues/764,next minor release 🤞,no pipeline,False
Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB,47,https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/issues/47,,chinese,True
emedvedev/attention-ocr,171,https://github.com/emedvedev/attention-ocr/issues/171,,no pipeline,True
deepfakes/faceswap,818,https://github.com/deepfakes/faceswap/pull/818,,no pipeline,False
deepfakes/faceswap,123,https://github.com/deepfakes/faceswap/issues/123,code to integrate,model requirements;data cleaning,True
SeanNaren/deepspeech.pytorch,420,https://github.com/SeanNaren/deepspeech.pytorch/issues/420,,data cleaning,True
deeppomf/DeepCreamPy,16,https://github.com/deeppomf/DeepCreamPy/issues/16,,page not found,False
thtrieu/darkflow,431,https://github.com/thtrieu/darkflow/issues/431,,feature engineering;model training,True
ljvmiranda921/pyswarms,384,https://github.com/ljvmiranda921/pyswarms/pull/384,,no pipeline,True
thtrieu/darkflow,234,https://github.com/thtrieu/darkflow/issues/234,,no pipeline,True
CamDavidsonPilon/lifelines,320,https://github.com/CamDavidsonPilon/lifelines/pull/320,,no pipeline,False
jantic/DeOldify,237,https://github.com/jantic/DeOldify/issues/237,,no pipeline,False
thtrieu/darkflow,424,https://github.com/thtrieu/darkflow/issues/424,,no pipeline,False
1adrianb/face-alignment,78,https://github.com/1adrianb/face-alignment/issues/78,invalid,no pipeline,False
jantic/DeOldify,265,https://github.com/jantic/DeOldify/issues/265,,no pipeline,True
junyanz/pytorch-CycleGAN-and-pix2pix,265,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/pull/265,,no pipeline,False
robertmartin8/PyPortfolioOpt,18,https://github.com/robertmartin8/PyPortfolioOpt/issues/18,,no pipeline,False
ZQPei/deep_sort_pytorch,124,https://github.com/ZQPei/deep_sort_pytorch/issues/124,,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,956,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/956,,no pipeline,False
nextgenusfs/funannotate,237,https://github.com/nextgenusfs/funannotate/issues/237,,no pipeline,True
hanxiao/bert-as-service,242,https://github.com/hanxiao/bert-as-service/issues/242,,no pipeline,False
CamDavidsonPilon/lifelines,867,https://github.com/CamDavidsonPilon/lifelines/issues/867,enhancement,no pipeline,False
afeinstein20/eleanor,27,https://github.com/afeinstein20/eleanor/pull/27,,no pipeline,False
davidsandberg/facenet,891,https://github.com/davidsandberg/facenet/issues/891,,feature engineering;model training,True
jdb78/pytorch-forecasting,327,https://github.com/jdb78/pytorch-forecasting/pull/327,documentation,no pipeline,False
tianzhi0549/FCOS,64,https://github.com/tianzhi0549/FCOS/pull/64,,no pipeline,False
CamDavidsonPilon/lifelines,944,https://github.com/CamDavidsonPilon/lifelines/pull/944,,no pipeline,False
thtrieu/darkflow,889,https://github.com/thtrieu/darkflow/issues/889,,feature engineering;model training,True
SeanNaren/deepspeech.pytorch,345,https://github.com/SeanNaren/deepspeech.pytorch/pull/345,,no pipeline,False
namisan/mt-dnn,105,https://github.com/namisan/mt-dnn/pull/105,,no pipeline,False
BrikerMan/Kashgari,308,https://github.com/BrikerMan/Kashgari/pull/308,,no pipeline,False
mittagessen/kraken,95,https://github.com/mittagessen/kraken/issues/95,,no pipeline,False
deepfakes/faceswap,221,https://github.com/deepfakes/faceswap/issues/221,,model requirements,True
gboeing/osmnx,169,https://github.com/gboeing/osmnx/issues/169,question,no pipeline,True
ljvmiranda921/pyswarms,431,https://github.com/ljvmiranda921/pyswarms/pull/431,,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,425,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/425,,no pipeline,False
mravanelli/pytorch-kaldi,120,https://github.com/mravanelli/pytorch-kaldi/issues/120,,model requirements;data cleaning,True
CamDavidsonPilon/lifelines,1059,https://github.com/CamDavidsonPilon/lifelines/issues/1059,docs,no pipeline,False
nextgenusfs/funannotate,158,https://github.com/nextgenusfs/funannotate/issues/158,,no pipeline,False
BrikerMan/Kashgari,342,https://github.com/BrikerMan/Kashgari/issues/342,wontfix,no pipeline,False
davidsandberg/facenet,440,https://github.com/davidsandberg/facenet/issues/440,,no pipeline,False
namisan/mt-dnn,91,https://github.com/namisan/mt-dnn/issues/91,,no pipeline,False
CamDavidsonPilon/lifelines,515,https://github.com/CamDavidsonPilon/lifelines/issues/515,docs,no pipeline,False
deeppomf/DeepCreamPy,226,https://github.com/deeppomf/DeepCreamPy/issues/226,,page not found,False
CamDavidsonPilon/lifelines,391,https://github.com/CamDavidsonPilon/lifelines/issues/391,enhancement;next minor release 🤞,no pipeline,False
davidsandberg/facenet,813,https://github.com/davidsandberg/facenet/issues/813,,model requirements,True
nicodv/kmodes,23,https://github.com/nicodv/kmodes/issues/23,bug,no pipeline,False
ljvmiranda921/pyswarms,427,https://github.com/ljvmiranda921/pyswarms/issues/427,,model training,True
jdb78/pytorch-forecasting,163,https://github.com/jdb78/pytorch-forecasting/issues/163,question,model deployment,True
junyanz/pytorch-CycleGAN-and-pix2pix,206,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/206,,data collection;model training,True
junyanz/pytorch-CycleGAN-and-pix2pix,601,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/601,,no pipeline,True
Tianxiaomo/pytorch-YOLOv4,119,https://github.com/Tianxiaomo/pytorch-YOLOv4/issues/119,,no pipeline,True
hanxiao/bert-as-service,513,https://github.com/hanxiao/bert-as-service/issues/513,,no pipeline,False
Tianxiaomo/pytorch-YOLOv4,275,https://github.com/Tianxiaomo/pytorch-YOLOv4/issues/275,,model training,True
regel/loudml,37,https://github.com/regel/loudml/issues/37,,no pipeline,True
SeanNaren/deepspeech.pytorch,522,https://github.com/SeanNaren/deepspeech.pytorch/pull/522,,model training,True
BrikerMan/Kashgari,254,https://github.com/BrikerMan/Kashgari/pull/254,,no pipeline,False
deepfakes/faceswap,491,https://github.com/deepfakes/faceswap/issues/491,feature;feedback wanted,data cleaning,False
junyanz/pytorch-CycleGAN-and-pix2pix,1156,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1156,,data cleaning;model training,True
CamDavidsonPilon/lifelines,804,https://github.com/CamDavidsonPilon/lifelines/issues/804,docs,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,798,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/798,,no pipeline,False
suragnair/alpha-zero-general,8,https://github.com/suragnair/alpha-zero-general/pull/8,,no pipeline,True
regel/loudml,163,https://github.com/regel/loudml/issues/163,,no pipeline,False
hanxiao/bert-as-service,337,https://github.com/hanxiao/bert-as-service/issues/337,,no pipeline,False
robertmartin8/PyPortfolioOpt,169,https://github.com/robertmartin8/PyPortfolioOpt/pull/169,,no pipeline,False
jdb78/pytorch-forecasting,394,https://github.com/jdb78/pytorch-forecasting/pull/394,,no pipeline,False
davidsandberg/facenet,1166,https://github.com/davidsandberg/facenet/issues/1166,,model deployment,False
CamDavidsonPilon/lifelines,318,https://github.com/CamDavidsonPilon/lifelines/pull/318,,no pipeline,False
jantic/DeOldify,278,https://github.com/jantic/DeOldify/issues/278,,no pipeline,False
deepfakes/faceswap,457,https://github.com/deepfakes/faceswap/pull/457,,no pipeline,False
CamDavidsonPilon/lifelines,594,https://github.com/CamDavidsonPilon/lifelines/pull/594,,no pipeline,False
jrkerns/pylinac,89,https://github.com/jrkerns/pylinac/issues/89,,no pipeline,False
ljvmiranda921/pyswarms,292,https://github.com/ljvmiranda921/pyswarms/issues/292,,no pipeline,False
CamDavidsonPilon/lifelines,919,https://github.com/CamDavidsonPilon/lifelines/pull/919,,no pipeline,False
Tianxiaomo/pytorch-YOLOv4,178,https://github.com/Tianxiaomo/pytorch-YOLOv4/issues/178,,no pipeline,True
robertmartin8/PyPortfolioOpt,294,https://github.com/robertmartin8/PyPortfolioOpt/issues/294,bug,no pipeline,False
BrikerMan/Kashgari,350,https://github.com/BrikerMan/Kashgari/pull/350,,model requirements,True
junyanz/pytorch-CycleGAN-and-pix2pix,27,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/27,,feature engineering,True
junyanz/pytorch-CycleGAN-and-pix2pix,1046,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1046,,feature engineering;model training,False
jrkerns/pylinac,267,https://github.com/jrkerns/pylinac/issues/267,,no pipeline,False
SeanNaren/deepspeech.pytorch,80,https://github.com/SeanNaren/deepspeech.pytorch/pull/80,,model deployment,True
dpinney/omf,235,https://github.com/dpinney/omf/issues/235,PNNL,no pipeline,True
thtrieu/darkflow,538,https://github.com/thtrieu/darkflow/issues/538,,model training,True
ljvmiranda921/pyswarms,12,https://github.com/ljvmiranda921/pyswarms/pull/12,,no pipeline,False
regel/loudml,36,https://github.com/regel/loudml/issues/36,,no pipeline,True
deepfakes/faceswap,639,https://github.com/deepfakes/faceswap/issues/639,,model training,True
junyanz/pytorch-CycleGAN-and-pix2pix,305,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/305,,model training,True
junyanz/pytorch-CycleGAN-and-pix2pix,1234,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1234,,model training;data collection,False
ZQPei/deep_sort_pytorch,67,https://github.com/ZQPei/deep_sort_pytorch/issues/67,,no pipeline,False
thtrieu/darkflow,1189,https://github.com/thtrieu/darkflow/issues/1189,,model training,True
thtrieu/darkflow,771,https://github.com/thtrieu/darkflow/issues/771,,model training,True
CamDavidsonPilon/lifelines,619,https://github.com/CamDavidsonPilon/lifelines/issues/619,docs;enhancement,no pipeline,False
jantic/DeOldify,298,https://github.com/jantic/DeOldify/issues/298,,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,915,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/915,,no pipeline,False
BrikerMan/Kashgari,339,https://github.com/BrikerMan/Kashgari/issues/339,question,chinese,False
Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB,18,https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/issues/18,,no pipeline,False
deepfakes/faceswap,169,https://github.com/deepfakes/faceswap/issues/169,,no pipeline,True
jhpyle/docassemble,23,https://github.com/jhpyle/docassemble/issues/23,,no pipeline,False
tianzhi0549/FCOS,107,https://github.com/tianzhi0549/FCOS/issues/107,,no pipeline,False
jantic/DeOldify,250,https://github.com/jantic/DeOldify/issues/250,,no pipeline,False
Tianxiaomo/pytorch-YOLOv4,223,https://github.com/Tianxiaomo/pytorch-YOLOv4/issues/223,,?,True
junyanz/pytorch-CycleGAN-and-pix2pix,73,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/73,,no pipeline,False
BrikerMan/Kashgari,62,https://github.com/BrikerMan/Kashgari/issues/62,question,chinese,False
robertmartin8/PyPortfolioOpt,158,https://github.com/robertmartin8/PyPortfolioOpt/issues/158,enhancement,no pipeline,False
CamDavidsonPilon/lifelines,357,https://github.com/CamDavidsonPilon/lifelines/pull/357,,no pipeline,False
Tianxiaomo/pytorch-YOLOv4,249,https://github.com/Tianxiaomo/pytorch-YOLOv4/issues/249,,no pipeline,True
SeanNaren/deepspeech.pytorch,197,https://github.com/SeanNaren/deepspeech.pytorch/issues/197,,no pipeline,True
deepfakes/faceswap,90,https://github.com/deepfakes/faceswap/pull/90,,no pipeline,False
thtrieu/darkflow,466,https://github.com/thtrieu/darkflow/issues/466,,model training,True
junyanz/pytorch-CycleGAN-and-pix2pix,675,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/675,,no pipeline,False
davidsandberg/facenet,417,https://github.com/davidsandberg/facenet/issues/417,,model evaluation,True
gboeing/osmnx,601,https://github.com/gboeing/osmnx/issues/601,question,no pipeline,False
regel/loudml,95,https://github.com/regel/loudml/issues/95,help wanted,no pipeline,False
davidsandberg/facenet,480,https://github.com/davidsandberg/facenet/issues/480,,model deployment,False
davidsandberg/facenet,175,https://github.com/davidsandberg/facenet/issues/175,,model training;data cleaning,True
robertmartin8/PyPortfolioOpt,58,https://github.com/robertmartin8/PyPortfolioOpt/pull/58,,no pipeline,False
nextgenusfs/funannotate,119,https://github.com/nextgenusfs/funannotate/pull/119,,no pipeline,False
CamDavidsonPilon/lifelines,1186,https://github.com/CamDavidsonPilon/lifelines/issues/1186,,no pipeline,True
deeppomf/DeepCreamPy,118,https://github.com/deeppomf/DeepCreamPy/issues/118,,page not found,True
hanxiao/bert-as-service,203,https://github.com/hanxiao/bert-as-service/pull/203,,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,839,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/839,,no pipeline,False
hanxiao/bert-as-service,236,https://github.com/hanxiao/bert-as-service/pull/236,,no pipeline,False
CamDavidsonPilon/lifelines,1025,https://github.com/CamDavidsonPilon/lifelines/issues/1025,enhancement,no pipeline,False
deeppomf/DeepCreamPy,32,https://github.com/deeppomf/DeepCreamPy/issues/32,,page not found,False
jrkerns/pylinac,104,https://github.com/jrkerns/pylinac/issues/104,,no pipeline,False
davidsandberg/facenet,1086,https://github.com/davidsandberg/facenet/issues/1086,,no pipeline,True
deepfakes/faceswap,580,https://github.com/deepfakes/faceswap/issues/580,,no pipeline,False
nextgenusfs/funannotate,326,https://github.com/nextgenusfs/funannotate/issues/326,,model training,True
nextgenusfs/funannotate,215,https://github.com/nextgenusfs/funannotate/issues/215,,no pipeline,True
regel/loudml,388,https://github.com/regel/loudml/issues/388,,no pipeline,True
regel/loudml,137,https://github.com/regel/loudml/issues/137,CentOS,no pipeline,False
davidsandberg/facenet,1087,https://github.com/davidsandberg/facenet/issues/1087,,model evaluation,True
CamDavidsonPilon/lifelines,1197,https://github.com/CamDavidsonPilon/lifelines/pull/1197,,no pipeline,False
hanxiao/bert-as-service,257,https://github.com/hanxiao/bert-as-service/issues/257,,no pipeline,False
SeanNaren/deepspeech.pytorch,282,https://github.com/SeanNaren/deepspeech.pytorch/issues/282,stale,model evaluation,True
davidsandberg/facenet,171,https://github.com/davidsandberg/facenet/issues/171,,no pipeline,False
SeanNaren/deepspeech.pytorch,391,https://github.com/SeanNaren/deepspeech.pytorch/pull/391,stale,no pipeline,False
emedvedev/attention-ocr,85,https://github.com/emedvedev/attention-ocr/issues/85,,feature engineering,True
jrkerns/pylinac,47,https://github.com/jrkerns/pylinac/issues/47,,no pipeline,True
Tianxiaomo/pytorch-YOLOv4,74,https://github.com/Tianxiaomo/pytorch-YOLOv4/issues/74,,chinese,True
davidsandberg/facenet,902,https://github.com/davidsandberg/facenet/issues/902,,model deployment,True
afeinstein20/eleanor,130,https://github.com/afeinstein20/eleanor/issues/130,,no pipeline,False
davidsandberg/facenet,848,https://github.com/davidsandberg/facenet/pull/848,,no pipeline,False
mittagessen/kraken,239,https://github.com/mittagessen/kraken/issues/239,,feature engineering,True
afeinstein20/eleanor,57,https://github.com/afeinstein20/eleanor/issues/57,,no pipeline,False
gboeing/osmnx,201,https://github.com/gboeing/osmnx/issues/201,installation,no pipeline,False
suragnair/alpha-zero-general,132,https://github.com/suragnair/alpha-zero-general/pull/132,,no pipeline,False
CamDavidsonPilon/lifelines,630,https://github.com/CamDavidsonPilon/lifelines/issues/630,,feature engineering,False
regel/loudml,301,https://github.com/regel/loudml/pull/301,dependencies,no pipeline,False
jantic/DeOldify,99,https://github.com/jantic/DeOldify/pull/99,,no pipeline,False
thtrieu/darkflow,950,https://github.com/thtrieu/darkflow/issues/950,,model training,True
deepfakes/faceswap,756,https://github.com/deepfakes/faceswap/pull/756,,no pipeline,True
davidsandberg/facenet,890,https://github.com/davidsandberg/facenet/issues/890,,model training,True
mittagessen/kraken,156,https://github.com/mittagessen/kraken/issues/156,,no pipeline,False
ljvmiranda921/pyswarms,378,https://github.com/ljvmiranda921/pyswarms/pull/378,,no pipeline,False
davidsandberg/facenet,105,https://github.com/davidsandberg/facenet/issues/105,,model training,True
davidsandberg/facenet,612,https://github.com/davidsandberg/facenet/issues/612,,no pipeline,True
CamDavidsonPilon/lifelines,881,https://github.com/CamDavidsonPilon/lifelines/issues/881,,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,158,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/158,,no pipeline,True
gboeing/osmnx,623,https://github.com/gboeing/osmnx/issues/623,bug,no pipeline,False
suragnair/alpha-zero-general,37,https://github.com/suragnair/alpha-zero-general/issues/37,,no pipeline,True
ljvmiranda921/pyswarms,440,https://github.com/ljvmiranda921/pyswarms/pull/440,,no pipeline,True
deepfakes/faceswap,38,https://github.com/deepfakes/faceswap/issues/38,bug;performance,no pipeline,False
suragnair/alpha-zero-general,217,https://github.com/suragnair/alpha-zero-general/issues/217,,no pipeline,False
jrkerns/pylinac,281,https://github.com/jrkerns/pylinac/pull/281,,no pipeline,False
1adrianb/face-alignment,230,https://github.com/1adrianb/face-alignment/pull/230,,no pipeline,False
mittagessen/kraken,30,https://github.com/mittagessen/kraken/issues/30,,no pipeline,True
davidsandberg/facenet,398,https://github.com/davidsandberg/facenet/issues/398,,model evaluation,False
SeanNaren/deepspeech.pytorch,152,https://github.com/SeanNaren/deepspeech.pytorch/pull/152,,no pipeline,False
tianzhi0549/FCOS,49,https://github.com/tianzhi0549/FCOS/issues/49,,no pipeline,True
BrikerMan/Kashgari,218,https://github.com/BrikerMan/Kashgari/issues/218,question,chinese,True
mravanelli/pytorch-kaldi,54,https://github.com/mravanelli/pytorch-kaldi/issues/54,,model training;model deployment,True
jdb78/pytorch-forecasting,227,https://github.com/jdb78/pytorch-forecasting/pull/227,dependencies,no pipeline,False
junyanz/pytorch-CycleGAN-and-pix2pix,598,https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/pull/598,,no pipeline,True
gboeing/osmnx,592,https://github.com/gboeing/osmnx/pull/592,,no pipeline,False
deepfakes/faceswap,567,https://github.com/deepfakes/faceswap/issues/567,,no pipeline,False
mravanelli/pytorch-kaldi,223,https://github.com/mravanelli/pytorch-kaldi/issues/223,stalled,feature engineering,True
236 nextgenusfs/funannotate 327 https://github.com/nextgenusfs/funannotate/issues/327 no pipeline True
237 SeanNaren/deepspeech.pytorch 561 https://github.com/SeanNaren/deepspeech.pytorch/issues/561 stale no pipeline False
238 thtrieu/darkflow 512 https://github.com/thtrieu/darkflow/issues/512 model evaluation True
239 nextgenusfs/funannotate 409 https://github.com/nextgenusfs/funannotate/issues/409 model training True
240 tianzhi0549/FCOS 285 https://github.com/tianzhi0549/FCOS/issues/285 model evaluation True
241 junyanz/pytorch-CycleGAN-and-pix2pix 482 https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/pull/482 no pipeline False
242 Tianxiaomo/pytorch-YOLOv4 15 https://github.com/Tianxiaomo/pytorch-YOLOv4/issues/15 chinese False
243 thtrieu/darkflow 1047 https://github.com/thtrieu/darkflow/issues/1047 no pipeline False
244 gboeing/osmnx 206 https://github.com/gboeing/osmnx/pull/206 no pipeline False
245 jrkerns/pylinac 185 https://github.com/jrkerns/pylinac/issues/185 no pipeline False
246 junyanz/pytorch-CycleGAN-and-pix2pix 1187 https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1187 no pipeline True
247 davidsandberg/facenet 1078 https://github.com/davidsandberg/facenet/pull/1078 no pipeline False
248 davidsandberg/facenet 483 https://github.com/davidsandberg/facenet/issues/483 no pipeline True
249 jhpyle/docassemble 283 https://github.com/jhpyle/docassemble/issues/283 no pipeline False
250 CamDavidsonPilon/lifelines 282 https://github.com/CamDavidsonPilon/lifelines/issues/282 model deployment False
251 deepfakes/faceswap 80 https://github.com/deepfakes/faceswap/issues/80 no pipeline True
252 1adrianb/face-alignment 45 https://github.com/1adrianb/face-alignment/issues/45 model training;model evaluation True
253 thtrieu/darkflow 969 https://github.com/thtrieu/darkflow/issues/969 feature engineering True
254 hanxiao/bert-as-service 373 https://github.com/hanxiao/bert-as-service/issues/373 no pipeline True
255 hanxiao/bert-as-service 310 https://github.com/hanxiao/bert-as-service/issues/310 data cleaning False
256 dpinney/omf 57 https://github.com/dpinney/omf/issues/57 no pipeline False
257 jantic/DeOldify 30 https://github.com/jantic/DeOldify/issues/30 no pipeline False
258 ljvmiranda921/pyswarms 197 https://github.com/ljvmiranda921/pyswarms/pull/197 no pipeline False
259 namisan/mt-dnn 156 https://github.com/namisan/mt-dnn/issues/156 no pipeline True
260 BrikerMan/Kashgari 413 https://github.com/BrikerMan/Kashgari/pull/413 no pipeline False
261 Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB 109 https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/issues/109 no pipeline False
262 deepfakes/faceswap 130 https://github.com/deepfakes/faceswap/pull/130 no pipeline False
263 gboeing/osmnx 273 https://github.com/gboeing/osmnx/issues/273 enhancement no pipeline False
264 jhpyle/docassemble 320 https://github.com/jhpyle/docassemble/pull/320 no pipeline False
265 jhpyle/docassemble 158 https://github.com/jhpyle/docassemble/issues/158 no pipeline False
266 jhpyle/docassemble 38 https://github.com/jhpyle/docassemble/issues/38 no pipeline False
267 davidsandberg/facenet 407 https://github.com/davidsandberg/facenet/issues/407 no pipeline False
268 dpinney/omf 338 https://github.com/dpinney/omf/issues/338 no pipeline False
269 CamDavidsonPilon/lifelines 350 https://github.com/CamDavidsonPilon/lifelines/issues/350 convergence issue data cleaning True
270 jdb78/pytorch-forecasting 352 https://github.com/jdb78/pytorch-forecasting/issues/352 question no pipeline True
271 gboeing/osmnx 431 https://github.com/gboeing/osmnx/pull/431 no pipeline False
272 jhpyle/docassemble 24 https://github.com/jhpyle/docassemble/issues/24 no pipeline False
273 nextgenusfs/funannotate 257 https://github.com/nextgenusfs/funannotate/issues/257 no pipeline False
274 SeanNaren/deepspeech.pytorch 517 https://github.com/SeanNaren/deepspeech.pytorch/issues/517 no pipeline True
275 thtrieu/darkflow 78 https://github.com/thtrieu/darkflow/pull/78 no pipeline False
276 ljvmiranda921/pyswarms 272 https://github.com/ljvmiranda921/pyswarms/pull/272 no pipeline False
277 deepfakes/faceswap 356 https://github.com/deepfakes/faceswap/issues/356 data cleaning;feature engineering False
278 jdb78/pytorch-forecasting 354 https://github.com/jdb78/pytorch-forecasting/issues/354 dependencies no pipeline True
279 gboeing/osmnx 584 https://github.com/gboeing/osmnx/pull/584 no pipeline True
280 ZQPei/deep_sort_pytorch 15 https://github.com/ZQPei/deep_sort_pytorch/issues/15 chinese False
281 dpinney/omf 81 https://github.com/dpinney/omf/issues/81 no pipeline False
282 junyanz/pytorch-CycleGAN-and-pix2pix 1006 https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1006 no pipeline False
283 deepfakes/faceswap 861 https://github.com/deepfakes/faceswap/issues/861 no pipeline True
284 afeinstein20/eleanor 198 https://github.com/afeinstein20/eleanor/pull/198 model evaluation True
285 nextgenusfs/funannotate 537 https://github.com/nextgenusfs/funannotate/issues/537 no pipeline True
286 gboeing/osmnx 123 https://github.com/gboeing/osmnx/issues/123 question no pipeline True
287 thtrieu/darkflow 428 https://github.com/thtrieu/darkflow/issues/428 data collection True
288 gboeing/osmnx 495 https://github.com/gboeing/osmnx/pull/495 no pipeline False
289 nextgenusfs/funannotate 439 https://github.com/nextgenusfs/funannotate/issues/439 no pipeline True
290 deepfakes/faceswap 375 https://github.com/deepfakes/faceswap/issues/375 no pipeline True
291 emedvedev/attention-ocr 141 https://github.com/emedvedev/attention-ocr/issues/141 data collection True
292 gboeing/osmnx 58 https://github.com/gboeing/osmnx/issues/58 bug no pipeline False
293 davidsandberg/facenet 99 https://github.com/davidsandberg/facenet/issues/99 no pipeline True
294 deepfakes/faceswap 502 https://github.com/deepfakes/faceswap/pull/502 no pipeline False
295 namisan/mt-dnn 88 https://github.com/namisan/mt-dnn/pull/88 no pipeline False
296 1adrianb/face-alignment 37 https://github.com/1adrianb/face-alignment/issues/37 no pipeline True
297 thtrieu/darkflow 959 https://github.com/thtrieu/darkflow/pull/959 no pipeline False
298 hanxiao/bert-as-service 160 https://github.com/hanxiao/bert-as-service/issues/160 no pipeline True
299 hanxiao/bert-as-service 213 https://github.com/hanxiao/bert-as-service/issues/213 discussion;feel free to contribute;help wanted model requirements True
300 tianzhi0549/FCOS 165 https://github.com/tianzhi0549/FCOS/pull/165 no pipeline False
301 deepfakes/faceswap 820 https://github.com/deepfakes/faceswap/issues/820 no pipeline True
302 jdb78/pytorch-forecasting 43 https://github.com/jdb78/pytorch-forecasting/pull/43 no pipeline True
303 tianzhi0549/FCOS 46 https://github.com/tianzhi0549/FCOS/issues/46 no pipeline True
304 junyanz/pytorch-CycleGAN-and-pix2pix 128 https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/128 data cleaning False
305 deeppomf/DeepCreamPy 119 https://github.com/deeppomf/DeepCreamPy/issues/119 page not found True
306 CamDavidsonPilon/lifelines 913 https://github.com/CamDavidsonPilon/lifelines/issues/913 no pipeline False
307 regel/loudml 60 https://github.com/regel/loudml/issues/60 bug no pipeline False
308 gboeing/osmnx 369 https://github.com/gboeing/osmnx/pull/369 no pipeline True
309 afeinstein20/eleanor 223 https://github.com/afeinstein20/eleanor/issues/223 no pipeline False
310 CamDavidsonPilon/lifelines 248 https://github.com/CamDavidsonPilon/lifelines/issues/248 no pipeline False
311 dpinney/omf 321 https://github.com/dpinney/omf/issues/321 no pipeline False
312 ljvmiranda921/pyswarms 394 https://github.com/ljvmiranda921/pyswarms/issues/394 no pipeline True
313 deepfakes/faceswap 183 https://github.com/deepfakes/faceswap/pull/183 no pipeline False
314 davidsandberg/facenet 49 https://github.com/davidsandberg/facenet/issues/49 feature engineering False
315 ZQPei/deep_sort_pytorch 153 https://github.com/ZQPei/deep_sort_pytorch/issues/153 no pipeline True
316 jdb78/pytorch-forecasting 147 https://github.com/jdb78/pytorch-forecasting/pull/147 dependencies no pipeline False
317 SeanNaren/deepspeech.pytorch 188 https://github.com/SeanNaren/deepspeech.pytorch/issues/188 no pipeline False
318 junyanz/pytorch-CycleGAN-and-pix2pix 209 https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/209 model evaluation True
319 mravanelli/pytorch-kaldi 245 https://github.com/mravanelli/pytorch-kaldi/issues/245 no pipeline True
320 nextgenusfs/funannotate 44 https://github.com/nextgenusfs/funannotate/pull/44 no pipeline False
321 Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB 131 https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/issues/131 no pipeline True
322 jdb78/pytorch-forecasting 72 https://github.com/jdb78/pytorch-forecasting/pull/72 no pipeline False
323 CamDavidsonPilon/lifelines 117 https://github.com/CamDavidsonPilon/lifelines/issues/117 no pipeline False
324 nicodv/kmodes 12 https://github.com/nicodv/kmodes/pull/12 no pipeline False
325 deepfakes/faceswap 806 https://github.com/deepfakes/faceswap/issues/806 no pipeline True
326 afeinstein20/eleanor 112 https://github.com/afeinstein20/eleanor/issues/112 no pipeline False
327 hanxiao/bert-as-service 412 https://github.com/hanxiao/bert-as-service/issues/412 no pipeline False
328 gboeing/osmnx 522 https://github.com/gboeing/osmnx/pull/522 enhancement no pipeline True
329 jhpyle/docassemble 258 https://github.com/jhpyle/docassemble/pull/258 no pipeline False
330 CamDavidsonPilon/lifelines 447 https://github.com/CamDavidsonPilon/lifelines/issues/447 model evaluation True
331 nextgenusfs/funannotate 188 https://github.com/nextgenusfs/funannotate/issues/188 no pipeline False
332 robertmartin8/PyPortfolioOpt 62 https://github.com/robertmartin8/PyPortfolioOpt/issues/62 packaging no pipeline False
333 dpinney/omf 292 https://github.com/dpinney/omf/issues/292 NotAnIssue no pipeline False
334 thtrieu/darkflow 105 https://github.com/thtrieu/darkflow/issues/105 no pipeline True
335 junyanz/pytorch-CycleGAN-and-pix2pix 132 https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/pull/132 no pipeline False
336 thtrieu/darkflow 484 https://github.com/thtrieu/darkflow/issues/484 bug no pipeline False
337 hanxiao/bert-as-service 587 https://github.com/hanxiao/bert-as-service/issues/587 no pipeline True
338 namisan/mt-dnn 98 https://github.com/namisan/mt-dnn/issues/98 model deployment True
339 thtrieu/darkflow 501 https://github.com/thtrieu/darkflow/issues/501 data collection;model training False
340 thtrieu/darkflow 337 https://github.com/thtrieu/darkflow/issues/337 model requirements True
341 mravanelli/pytorch-kaldi 86 https://github.com/mravanelli/pytorch-kaldi/issues/86 no pipeline True
342 emedvedev/attention-ocr 177 https://github.com/emedvedev/attention-ocr/pull/177 no pipeline False
343 nicodv/kmodes 36 https://github.com/nicodv/kmodes/issues/36 easy;enhancement no pipeline False
344 jhpyle/docassemble 66 https://github.com/jhpyle/docassemble/issues/66 no pipeline False
345 regel/loudml 336 https://github.com/regel/loudml/pull/336 dependencies no pipeline False
346 mravanelli/pytorch-kaldi 108 https://github.com/mravanelli/pytorch-kaldi/issues/108 no pipeline False
347 gboeing/osmnx 530 https://github.com/gboeing/osmnx/issues/530 no pipeline True
348 jdb78/pytorch-forecasting 32 https://github.com/jdb78/pytorch-forecasting/pull/32 no pipeline False
349 deeppomf/DeepCreamPy 156 https://github.com/deeppomf/DeepCreamPy/issues/156 page not found False
350 thtrieu/darkflow 973 https://github.com/thtrieu/darkflow/issues/973 no pipeline True
351 CamDavidsonPilon/lifelines 7 https://github.com/CamDavidsonPilon/lifelines/issues/7 bug no pipeline False
352 thtrieu/darkflow 109 https://github.com/thtrieu/darkflow/issues/109 no pipeline False
353 robertmartin8/PyPortfolioOpt 247 https://github.com/robertmartin8/PyPortfolioOpt/pull/247 no pipeline False
354 SeanNaren/deepspeech.pytorch 111 https://github.com/SeanNaren/deepspeech.pytorch/issues/111 model evaluation True
355 robertmartin8/PyPortfolioOpt 258 https://github.com/robertmartin8/PyPortfolioOpt/issues/258 question no pipeline True
356 thtrieu/darkflow 278 https://github.com/thtrieu/darkflow/issues/278 model requirements False
357 ljvmiranda921/pyswarms 145 https://github.com/ljvmiranda921/pyswarms/pull/145 no pipeline False
358 dpinney/omf 217 https://github.com/dpinney/omf/issues/217 no pipeline False
359 mittagessen/kraken 161 https://github.com/mittagessen/kraken/issues/161 model training True
360 junyanz/pytorch-CycleGAN-and-pix2pix 233 https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/233 no pipeline True
361 ZQPei/deep_sort_pytorch 1 https://github.com/ZQPei/deep_sort_pytorch/issues/1 chinese True
362 SeanNaren/deepspeech.pytorch 412 https://github.com/SeanNaren/deepspeech.pytorch/issues/412 no pipeline True
363 regel/loudml 275 https://github.com/regel/loudml/pull/275 dependencies no pipeline False
364 CamDavidsonPilon/lifelines 173 https://github.com/CamDavidsonPilon/lifelines/issues/173 no pipeline False
365 junyanz/pytorch-CycleGAN-and-pix2pix 334 https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/334 data cleaning True
366 dpinney/omf 216 https://github.com/dpinney/omf/issues/216 no pipeline True
367 davidsandberg/facenet 127 https://github.com/davidsandberg/facenet/issues/127 model training True
368 jantic/DeOldify 273 https://github.com/jantic/DeOldify/issues/273 no pipeline True
369 deepfakes/faceswap 414 https://github.com/deepfakes/faceswap/issues/414 model evaluation False
370 thtrieu/darkflow 161 https://github.com/thtrieu/darkflow/issues/161 no pipeline True
371 Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB 227 https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB/issues/227 chinese False
372 CamDavidsonPilon/lifelines 406 https://github.com/CamDavidsonPilon/lifelines/issues/406 no pipeline False
373 CamDavidsonPilon/lifelines 843 https://github.com/CamDavidsonPilon/lifelines/pull/843 no pipeline True
374 BrikerMan/Kashgari 167 https://github.com/BrikerMan/Kashgari/issues/167 bug;wontfix model deployment True
375 davidsandberg/facenet 34 https://github.com/davidsandberg/facenet/issues/34 model deployment True
376 CamDavidsonPilon/lifelines 901 https://github.com/CamDavidsonPilon/lifelines/pull/901 no pipeline False

1141
util/sampling_nb - sampling_nb.csv Executable file

File diff suppressed because it is too large

20
util/tests.py Executable file

@@ -0,0 +1,20 @@
import pandas as pd
from scipy.stats import ranksums
from cliffsDelta import cliffsDelta


def evaluate(feature: str):
    """Compare a commit feature between ML and non-ML groups using the
    Wilcoxon rank-sum test and Cliff's delta effect size."""
    print(f'====={feature}=====')
    print(ranksums(ml_data[feature], no_ml_data[feature]))
    print(cliffsDelta(ml_data[feature], no_ml_data[feature]))


if __name__ == '__main__':
    data = pd.read_csv('commit_analysis.csv')
    # Split the commits into ML-related and non-ML-related groups
    ml_data = data[data['is_ml']]
    no_ml_data = data[~data['is_ml']]
    evaluate('file_entropy')
    evaluate('line_entropy')
    evaluate('n_comments')
    evaluate('words_mean')
    evaluate('day_to_fix')
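The `cliffsDelta` import above is a local helper module not shown in this diff. As a reference for what it computes, here is a minimal pure-Python sketch of Cliff's delta under its standard definition, (#{x > y} − #{x < y}) / (n_x · n_y); the function name `cliffs_delta` is illustrative and not the module's actual API:

```python
def cliffs_delta(xs, ys):
    """Cliff's delta effect size in [-1, 1]: the difference between the
    probability that a value from xs exceeds one from ys and vice versa."""
    gt = sum(1 for x in xs for y in ys if x > y)  # pairs where x dominates
    lt = sum(1 for x in xs for y in ys if x < y)  # pairs where y dominates
    return (gt - lt) / (len(xs) * len(ys))


print(cliffs_delta([2, 3, 4], [0, 1]))  # → 1.0 (complete separation)
print(cliffs_delta([1, 2, 3], [1, 2, 3]))  # → 0.0 (identical groups)
```

A delta near 0 means the two groups' distributions overlap heavily; |delta| near 1 means one group almost always dominates the other.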

16
util/time-to-fix.py Executable file

@@ -0,0 +1,16 @@
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt


if __name__ == '__main__':
    data = pd.read_csv('commit_analysis.csv')
    data['type'] = data['is_ml'].apply(lambda x: 'ML' if x else 'No ML')
    # Cap the y-axis at the 95th percentile so outliers do not flatten the boxes
    ylim = data['day_to_fix'].quantile(0.95)
    sns.catplot(x='type', y='day_to_fix', kind='box', data=data) \
        .set(title='Days needed for a fix') \
        .set(xlabel='type') \
        .set(ylabel='days') \
        .set(ylim=(0, ylim))
    plt.tight_layout()
    plt.savefig('../src/figures/day-to-fix.pdf')
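The `day_to_fix` column plotted above is assumed to be precomputed upstream in `commit_analysis.csv`. Purely as an illustration of the kind of derivation involved, here is a hypothetical helper (not part of this repository) that turns an issue's open and close timestamps into fractional days:

```python
from datetime import datetime


def days_to_fix(opened: str, closed: str) -> float:
    """Days elapsed between issue opening and the fixing commit.
    Hypothetical helper; the real column is precomputed upstream."""
    fmt = '%Y-%m-%d %H:%M:%S'
    delta = datetime.strptime(closed, fmt) - datetime.strptime(opened, fmt)
    return delta.total_seconds() / 86400  # 86400 seconds per day


print(days_to_fix('2021-06-14 12:00:00', '2021-06-16 00:00:00'))  # → 1.5
```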