Fix state of the art
This commit is contained in:
parent 63b72b6d2e
commit bd36b6a5b7
@@ -7,7 +7,7 @@
urldate = {2021-06-03},
abstract = {In statistics, naive Bayes classifiers are a family of simple "probabilistic classifiers" based on applying Bayes' theorem with strong (na\"ive) independence assumptions between the features (see Bayes classifier). They are among the simplest Bayesian network models, but coupled with kernel density estimation, they can achieve higher accuracy levels.Na\"ive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. Maximum-likelihood training can be done by evaluating a closed-form expression, which takes linear time, rather than by expensive iterative approximation as used for many other types of classifiers. In the statistics and computer science literature, naive Bayes models are known under a variety of names, including simple Bayes and independence Bayes. All these names reference the use of Bayes' theorem in the classifier's decision rule, but na\"ive Bayes is not (necessarily) a Bayesian method.},
annotation = {Page Version ID: 1024247473},
- file = {/home/norangebit/Documenti/10-personal/12-organizzation/06-zotero/storage/5T4T73X4/index.html},
+ file = {/home/norangebit/Documenti/10-personal/12-organization/06-zotero/storage/5T4T73X4/index.html},
langid = {english}
}

@@ -23,7 +23,7 @@
doi = {10.1109/ESEM.2019.8870187},
abstract = {Method: We conduct an empirical study of ML-related developer posts on Stack Overflow. We perform in-depth quantitative and qualitative analyses focusing on a series of research questions related to the challenges of developing ML applications and the directions to address them. Results: Our findings include: (1) ML questions suffer from a much higher percentage of unanswered questions on Stack Overflow than other domains; (2) there is a lack of ML experts in the Stack Overflow QA community; (3) the data preprocessing and model deployment phases are where most of the challenges lay; and (4) addressing most of these challenges require more ML implementation knowledge than ML conceptual knowledge. Conclusions: Our findings suggest that most challenges are under the data preparation and model deployment phases, i.e., early and late stages. Also, the implementation aspect of ML shows much higher difficulty level among developers than the conceptual aspect.},
eventtitle = {2019 {{ACM}}/{{IEEE International Symposium}} on {{Empirical Software Engineering}} and {{Measurement}} ({{ESEM}})},
- file = {/home/norangebit/Documenti/10-personal/12-organizzation/07-zotero-attachments/IEEE/Alshangiti_2019_Why is Developing Machine Learning Applications Challenging.pdf},
+ file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/IEEE/Alshangiti_2019_Why is Developing Machine Learning Applications Challenging.pdf},
isbn = {978-1-72812-968-6},
langid = {english}
}
@@ -40,7 +40,7 @@
doi = {10.1109/ICSE-SEIP.2019.00042},
abstract = {Recent advances in machine learning have stimulated widespread interest within the Information Technology sector on integrating AI capabilities into software and services. This goal has forced organizations to evolve their development processes. We report on a study that we conducted on observing software teams at Microsoft as they develop AI-based applications. We consider a nine-stage workflow process informed by prior experiences developing AI applications (e.g., search and NLP) and data science tools (e.g. application diagnostics and bug reporting). We found that various Microsoft teams have united this workflow into preexisting, well-evolved, Agile-like software engineering processes, providing insights about several essential engineering challenges that organizations may face in creating large-scale AI solutions for the marketplace. We collected some best practices from Microsoft teams to address these challenges. In addition, we have identified three aspects of the AI domain that make it fundamentally different from prior software application domains: 1) discovering, managing, and versioning the data needed for machine learning applications is much more complex and difficult than other types of software engineering, 2) model customization and model reuse require very different skills than are typically found in software teams, and 3) AI components are more difficult to handle as distinct modules than traditional software components \textemdash{} models may be ``entangled'' in complex ways and experience non-monotonic error behavior. We believe that the lessons learned by Microsoft teams will be valuable to other organizations.},
eventtitle = {2019 {{IEEE}}/{{ACM}} 41st {{International Conference}} on {{Software Engineering}}: {{Software Engineering}} in {{Practice}} ({{ICSE}}-{{SEIP}})},
- file = {/home/norangebit/Documenti/10-personal/12-organizzation/07-zotero-attachments/IEEE/Amershi_2019_Software Engineering for Machine Learning.pdf},
+ file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/IEEE/Amershi_2019_Software Engineering for Machine Learning.pdf},
isbn = {978-1-72811-760-7},
langid = {english}
}
@@ -57,7 +57,7 @@
doi = {10.1109/MSR.2019.00052},
abstract = {Machine learning, a branch of Artificial Intelligence, is now popular in software engineering community and is successfully used for problems like bug prediction, and software development effort estimation. Developers' understanding of machine learning, however, is not clear, and we require investigation to understand what educators should focus on, and how different online programming discussion communities can be more helpful. We conduct a study on Stack Overflow (SO) machine learning related posts using the SOTorrent dataset. We found that some machine learning topics are significantly more discussed than others, and others need more attention. We also found that topic generation with Latent Dirichlet Allocation (LDA) can suggest more appropriate tags that can make a machine learning post more visible and thus can help in receiving immediate feedback from sites like SO.},
eventtitle = {2019 {{IEEE}}/{{ACM}} 16th {{International Conference}} on {{Mining Software Repositories}} ({{MSR}})},
- file = {/home/norangebit/Documenti/10-personal/12-organizzation/07-zotero-attachments/IEEE/Bangash_2019_What do Developers Know About Machine Learning.pdf},
+ file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/IEEE/Bangash_2019_What do Developers Know About Machine Learning.pdf},
isbn = {978-1-72813-412-3},
langid = {english}
}
@@ -73,7 +73,7 @@
archiveprefix = {arXiv},
eprint = {1606.04984},
eprinttype = {arxiv},
- file = {/home/norangebit/Documenti/10-personal/12-organizzation/07-zotero-attachments/undefined/Borges_2016_Understanding the Factors that Impact the Popularity of GitHub Repositories.pdf},
+ file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/undefined/Borges_2016_Understanding the Factors that Impact the Popularity of GitHub Repositories.pdf},
keywords = {Computer Science - Social and Information Networks,Computer Science - Software Engineering},
langid = {english}
}
@@ -85,7 +85,7 @@
url = {https://medium.com/analytics-vidhya/text-classification-using-word-embeddings-and-deep-learning-in-python-classifying-tweets-from-6fe644fcfc81},
urldate = {2021-05-21},
abstract = {The purpose of this article is to help a reader understand how to leverage word embeddings and deep learning when creating a text\ldots},
- file = {/home/norangebit/Documenti/10-personal/12-organizzation/06-zotero/storage/BDS956UP/text-classification-using-word-embeddings-and-deep-learning-in-python-classifying-tweets-from-6.html},
+ file = {/home/norangebit/Documenti/10-personal/12-organization/06-zotero/storage/BDS956UP/text-classification-using-word-embeddings-and-deep-learning-in-python-classifying-tweets-from-6.html},
langid = {english},
organization = {{Medium}}
}
@@ -101,7 +101,7 @@
doi = {10.1145/3106237.3106285},
abstract = {Bug reports document unexpected software behaviors experienced by users. To be effective, they should allow bug triagers to easily understand and reproduce the potential reported bugs, by clearly describing the Observed Behavior (OB), the Steps to Reproduce (S2R), and the Expected Behavior (EB). Unfortunately, while considered extremely useful, reporters often miss such pieces of information in bug reports and, to date, there is no effective way to automatically check and enforce their presence. We manually analyzed nearly 3k bug reports to understand to what extent OB, EB, and S2R are reported in bug reports and what discourse patterns reporters use to describe such information. We found that (i) while most reports contain OB (i.e., 93.5\%), only 35.2\% and 51.4\% explicitly describe EB and S2R, respectively; and (ii) reporters recurrently use 154 discourse patterns to describe such content. Based on these findings, we designed and evaluated an automated approach to detect the absence (or presence) of EB and S2R in bug descriptions. With its best setting, our approach is able to detect missing EB (S2R) with 85.9\% (69.2\%) average precision and 93.2\% (83\%) average recall. Our approach intends to improve bug descriptions quality by alerting reporters about missing EB and S2R at reporting time.},
eventtitle = {{{ESEC}}/{{FSE}}'17: {{Joint Meeting}} of the {{European Software Engineering Conference}} and the {{ACM SIGSOFT Symposium}} on the {{Foundations}} of {{Software Engineering}}},
- file = {/home/norangebit/Documenti/10-personal/12-organizzation/07-zotero-attachments/ACM/Chaparro_2017_Detecting missing information in bug descriptions.pdf},
+ file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/ACM/Chaparro_2017_Detecting missing information in bug descriptions.pdf},
isbn = {978-1-4503-5105-8},
langid = {english}
}
@@ -114,8 +114,7 @@
pages = {2722--2730},
url = {https://openaccess.thecvf.com/content_iccv_2015/html/Chen_DeepDriving_Learning_Affordance_ICCV_2015_paper.html},
urldate = {2021-06-09},
- eventtitle = {Proceedings of the {{IEEE International Conference}} on {{Computer Vision}}},
- file = {/home/norangebit/Documenti/10-personal/12-organizzation/06-zotero/storage/4ZE2KTJC/Chen_DeepDriving_Learning_Affordance_ICCV_2015_paper.html}
+ eventtitle = {Proceedings of the {{IEEE International Conference}} on {{Computer Vision}}}
}

@article{deboom2016representationlearningvery,
@@ -132,7 +131,7 @@
archiveprefix = {arXiv},
eprint = {1607.00570},
eprinttype = {arxiv},
- file = {/home/norangebit/Documenti/10-personal/12-organizzation/07-zotero-attachments/undefined/De Boom_2016_Representation learning for very short texts using weighted word embedding.pdf},
+ file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/undefined/De Boom_2016_Representation learning for very short texts using weighted word embedding.pdf},
keywords = {Computer Science - Computation and Language,Computer Science - Information Retrieval},
langid = {english}
}
@@ -164,7 +163,7 @@
issn = {1382-3256, 1573-7616},
doi = {10.1007/s10664-020-09916-6},
abstract = {Many AI researchers are publishing code, data and other resources that accompany their papers in GitHub repositories. In this paper, we refer to these repositories as academic AI repositories. Our preliminary study shows that highly cited papers are more likely to have popular academic AI repositories (and vice versa). Hence, in this study, we perform an empirical study on academic AI repositories to highlight good software engineering practices of popular academic AI repositories for AI researchers. We collect 1,149 academic AI repositories, in which we label the top 20\% repositories that have the most number of stars as popular, and we label the bottom 70\% repositories as unpopular. The remaining 10\% repositories are set as a gap between popular and unpopular academic AI repositories. We propose 21 features to characterize the software engineering practices of academic AI repositories. Our experimental results show that popular and unpopular academic AI repositories are statistically significantly different in 11 of the studied features\textemdash indicating that the two groups of repositories have significantly different software engineering practices. Furthermore, we find that the number of links to other GitHub repositories in the README file, the number of images in the README file and the inclusion of a license are the most important features for differentiating the two groups of academic AI repositories. Our dataset and code are made publicly available to share with the community.},
- file = {/home/norangebit/Documenti/10-personal/12-organizzation/07-zotero-attachments/undefined/Fan_2021_What makes a popular academic AI repository.pdf},
+ file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/undefined/Fan_2021_What makes a popular academic AI repository.pdf},
langid = {english},
number = {1}
}
@@ -176,7 +175,7 @@
date = {2019-09-05},
publisher = {{"O'Reilly Media, Inc."}},
abstract = {Through a series of recent breakthroughs, deep learning has boosted the entire field of machine learning. Now, even programmers who know close to nothing about this technology can use simple, efficient tools to implement programs capable of learning from data. This practical book shows you how.By using concrete examples, minimal theory, and two production-ready Python frameworks\textemdash Scikit-Learn and TensorFlow\textemdash author Aur\'elien G\'eron helps you gain an intuitive understanding of the concepts and tools for building intelligent systems. You'll learn a range of techniques, starting with simple linear regression and progressing to deep neural networks. With exercises in each chapter to help you apply what you've learned, all you need is programming experience to get started.Explore the machine learning landscape, particularly neural netsUse Scikit-Learn to track an example machine-learning project end-to-endExplore several training models, including support vector machines, decision trees, random forests, and ensemble methodsUse the TensorFlow library to build and train neural netsDive into neural net architectures, including convolutional nets, recurrent nets, and deep reinforcement learningLearn techniques for training and scaling deep neural nets},
- file = {/home/norangebit/Documenti/10-personal/12-organizzation/07-zotero-attachments/O'Reilly Media, Inc./Geron_2019_Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow.pdf},
+ file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/O'Reilly Media, Inc./Geron_2019_Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow.pdf},
isbn = {978-1-4920-3259-5},
keywords = {Computers / Computer Vision & Pattern Recognition,Computers / Data Processing,Computers / Intelligence (AI) & Semantics,Computers / Natural Language Processing,Computers / Neural Networks,Computers / Programming Languages / Python},
langid = {english},
@@ -195,7 +194,7 @@
doi = {10.1145/3379597.3387473},
abstract = {In the last few years, artificial intelligence (AI) and machine learning (ML) have become ubiquitous terms. These powerful techniques have escaped obscurity in academic communities with the recent onslaught of AI \& ML tools, frameworks, and libraries that make these techniques accessible to a wider audience of developers. As a result, applying AI \& ML to solve existing and emergent problems is an increasingly popular practice. However, little is known about this domain from the software engineering perspective. Many AI \& ML tools and applications are open source, hosted on platforms such as GitHub that provide rich tools for large-scale distributed software development. Despite widespread use and popularity, these repositories have never been examined as a community to identify unique properties, development patterns, and trends.},
eventtitle = {{{MSR}} '20: 17th {{International Conference}} on {{Mining Software Repositories}}},
- file = {/home/norangebit/Documenti/10-personal/12-organizzation/07-zotero-attachments/ACM/Gonzalez_2020_The State of the ML-universe.pdf},
+ file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/ACM/Gonzalez_2020_The State of the ML-universe.pdf},
isbn = {978-1-4503-7517-7},
langid = {english}
}
@@ -211,7 +210,7 @@
doi = {10.1109/ICSME46990.2020.00058},
abstract = {The role of machine learning frameworks in soft\- ware applications has exploded in recent years. Similar to non-machine learning frameworks, those frameworks need to evolve to incorporate new features, optimizations, etc., yet their evolution is impacted by the interdisciplinary development teams needed to develop them: scientists and developers. One concrete way in which this shows is through the use of multiple pro\- gramming languages in their code base, enabling the scientists to write optimized low-level code while developers can integrate the latter into a robust framework. Since multi-language code bases have been shown to impact the development process, this paper empirically compares ten large open-source multi-language machine learning frameworks and ten large open-source multi\- language traditional systems in terms of the volume of pull requests, their acceptance ratio i.e., the percentage of accepted pull requests among all the received pull requests, review process duration i.e., period taken to accept or reject a pull request, and bug-proneness. We find that multi-language pull request contributions present a challenge for both machine learning and traditional systems. Our main findings show that in both machine learning and traditional systems, multi-language pull requests are likely to be less accepted than mono-language pull requests; it also takes longer for both multi- and mono-language pull requests to be rejected than accepted. Machine learning frameworks take longer to accept/reject a multi-language pull request than traditional systems. Finally, we find that mono\- language pull requests in machine learning frameworks are more bug-prone than traditional systems.},
eventtitle = {2020 {{IEEE International Conference}} on {{Software Maintenance}} and {{Evolution}} ({{ICSME}})},
|
eventtitle = {2020 {{IEEE International Conference}} on {{Software Maintenance}} and {{Evolution}} ({{ICSME}})},
|
||||||
file = {/home/norangebit/Documenti/10-personal/12-organizzation/07-zotero-attachments/IEEE/Grichi_2020_On the Impact of Multi-language Development in Machine Learning Frameworks.pdf},
|
file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/IEEE/Grichi_2020_On the Impact of Multi-language Development in Machine Learning Frameworks.pdf},
|
||||||
isbn = {978-1-72815-619-4},
|
isbn = {978-1-72815-619-4},
|
||||||
langid = {english}
|
langid = {english}
|
||||||
}
|
}
|
||||||
@@ -227,7 +226,7 @@
doi = {10.1109/ICSME46990.2020.00116},
abstract = {Deep Learning techniques have been prevalent in various domains, and more and more open source projects in GitHub rely on deep learning libraries to implement their algorithms. To that end, they should always keep pace with the latest versions of deep learning libraries to make the best use of deep learning libraries. Aptly managing the versions of deep learning libraries can help projects avoid crashes or security issues caused by deep learning libraries. Unfortunately, very few studies have been done on the dependency networks of deep learning libraries. In this paper, we take the first step to perform an exploratory study on the dependency networks of deep learning libraries, namely, Tensorflow, PyTorch, and Theano. We study the project purposes, application domains, dependency degrees, update behaviors and reasons as well as version distributions of deep learning projects that depend on Tensorflow, PyTorch, and Theano. Our study unveils some commonalities in various aspects (e.g., purposes, application domains, dependency degrees) of deep learning libraries and reveals some discrepancies as for the update behaviors, update reasons, and the version distributions. Our findings highlight some directions for researchers and also provide suggestions for deep learning developers and users.},
eventtitle = {2020 {{IEEE International Conference}} on {{Software Maintenance}} and {{Evolution}} ({{ICSME}})},
-file = {/home/norangebit/Documenti/10-personal/12-organizzation/07-zotero-attachments/IEEE/Han_2020_An Empirical Study of the Dependency Networks of Deep Learning Libraries2.pdf},
+file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/IEEE/Han_2020_An Empirical Study of the Dependency Networks of Deep Learning Libraries2.pdf},
isbn = {978-1-72815-619-4},
langid = {english}
}
@@ -242,7 +241,7 @@
pages = {2694--2747},
issn = {1382-3256, 1573-7616},
doi = {10.1007/s10664-020-09819-6},
-file = {/home/norangebit/Documenti/10-personal/12-organizzation/07-zotero-attachments/undefined/Han_2020_What do Programmers Discuss about Deep Learning Frameworks.pdf},
+file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/undefined/Han_2020_What do Programmers Discuss about Deep Learning Frameworks.pdf},
langid = {english},
number = {4}
}
@@ -258,7 +257,6 @@
archiveprefix = {arXiv},
eprint = {1412.5567},
eprinttype = {arxiv},
-file = {/home/norangebit/Documenti/10-personal/12-organizzation/07-zotero-attachments/undefined/Hannun_2014_Deep Speech.pdf;/home/norangebit/Documenti/10-personal/12-organizzation/06-zotero/storage/JH3YQS9Z/1412.html},
keywords = {Computer Science - Computation and Language,Computer Science - Machine Learning,Computer Science - Neural and Evolutionary Computing},
primaryclass = {cs}
}
@@ -269,7 +267,7 @@
date = {2012-04-19},
publisher = {{Manning Publications}},
abstract = {Summary: Machine Learning in Action is a unique book that blends the foundational theories of machine learning with the practical realities of building tools for everyday data analysis. You'll use the flexible Python programming language to build programs that implement algorithms for data classification, forecasting, recommendations, and higher-level features like summarization and simplification. About the Book: A machine is said to learn when its performance improves with experience. Learning requires algorithms and programs that capture data and ferret out the interesting or useful patterns. Once the specialized domain of analysts and mathematicians, machine learning is becoming a skill needed by many. Machine Learning in Action is a clearly written tutorial for developers. It avoids academic language and takes you straight to the techniques you'll use in your day-to-day work. Many (Python) examples present the core algorithms of statistical data processing, data analysis, and data visualization in code you can reuse. You'll understand the concepts and how they fit in with tactical tasks like classification, forecasting, recommendations, and higher-level features like summarization and simplification. Readers need no prior experience with machine learning or statistical processing. Familiarity with Python is helpful. Purchase of the print book comes with an offer of a free PDF, ePub, and Kindle eBook from Manning. Also available is all code from the book. What's Inside: a no-nonsense introduction; examples showing common ML tasks; everyday data analysis; implementing classic algorithms like Apriori and AdaBoost. Table of Contents: PART 1 CLASSIFICATION: Machine learning basics; Classifying with k-Nearest Neighbors; Splitting datasets one feature at a time: decision trees; Classifying with probability theory: na\"ive Bayes; Logistic regression; Support vector machines; Improving classification with the AdaBoost meta algorithm. PART 2 FORECASTING NUMERIC VALUES WITH REGRESSION: Predicting numeric values: regression; Tree-based regression. PART 3 UNSUPERVISED LEARNING: Grouping unlabeled items using k-means clustering; Association analysis with the Apriori algorithm; Efficiently finding frequent itemsets with FP-growth. PART 4 ADDITIONAL TOOLS: Using principal component analysis to simplify data; Simplifying data with the singular value decomposition; Big data and MapReduce.},
-file = {/home/norangebit/Documenti/10-personal/12-organizzation/07-zotero-attachments/Manning Publications/Harrington_2012_Machine Learning in Action.pdf},
+file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/Manning Publications/Harrington_2012_Machine Learning in Action.pdf},
isbn = {978-1-61729-018-3},
keywords = {Computers / Computer Science,Computers / Data Processing,Computers / Databases / Data Mining,Computers / Intelligence (AI) & Semantics,Computers / Mathematical & Statistical Software,Computers / Programming / Algorithms,Computers / Programming / Open Source,Computers / Programming Languages / Python},
langid = {english},
@@ -287,7 +285,7 @@
doi = {10.1109/ICSE.2009.5070510},
abstract = {Predicting the incidence of faults in code has been commonly associated with measuring complexity. In this paper, we propose complexity metrics that are based on the code change process instead of on the code. We conjecture that a complex code change process negatively affects its product, i.e., the software system. We validate our hypothesis empirically through a case study using data derived from the change history for six large open source projects. Our case study shows that our change complexity metrics are better predictors of fault potential in comparison to other well-known historical predictors of faults, i.e., prior modifications and prior faults.},
eventtitle = {2009 {{IEEE}} 31st {{International Conference}} on {{Software Engineering}}},
-file = {/home/norangebit/Documenti/10-personal/12-organizzation/07-zotero-attachments/IEEE/Hassan_2009_Predicting faults using the complexity of code changes.pdf},
+file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/IEEE/Hassan_2009_Predicting faults using the complexity of code changes.pdf},
isbn = {978-1-4244-3453-4},
langid = {english}
}
@@ -299,8 +297,7 @@
pages = {770--778},
url = {https://openaccess.thecvf.com/content_cvpr_2016/html/He_Deep_Residual_Learning_CVPR_2016_paper.html},
urldate = {2021-06-09},
-eventtitle = {Proceedings of the {{IEEE Conference}} on {{Computer Vision}} and {{Pattern Recognition}}},
-file = {/home/norangebit/Documenti/10-personal/12-organizzation/06-zotero/storage/B6WAY3GX/He_Deep_Residual_Learning_CVPR_2016_paper.html}
+eventtitle = {Proceedings of the {{IEEE Conference}} on {{Computer Vision}} and {{Pattern Recognition}}}
}

@article{hirschberg2015advancesnaturallanguage,
@@ -316,7 +313,6 @@
abstract = {Natural language processing employs computational techniques for the purpose of learning, understanding, and producing human language content. Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and developing basic technologies such as machine translation, speech recognition, and speech synthesis. Today's researchers refine and make use of such tools in real-world applications, creating spoken dialogue systems and speech-to-speech translation engines, mining social media for information about health or finance, and identifying sentiment and emotion toward products and services. We describe successes and challenges in this rapidly advancing area.},
eprint = {26185244},
eprinttype = {pmid},
-file = {/home/norangebit/Documenti/10-personal/12-organizzation/06-zotero/storage/Z36CCHGE/261.html},
langid = {english},
number = {6245}
}
@@ -331,7 +327,7 @@
archiveprefix = {arXiv},
eprint = {1910.11015},
eprinttype = {arxiv},
-file = {/home/norangebit/Documenti/10-personal/12-organizzation/07-zotero-attachments/undefined/Humbatova_2019_Taxonomy of Real Faults in Deep Learning Systems.pdf},
+file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/undefined/Humbatova_2019_Taxonomy of Real Faults in Deep Learning Systems.pdf},
keywords = {Computer Science - Artificial Intelligence,Computer Science - Machine Learning,Computer Science - Software Engineering},
langid = {english},
primaryclass = {cs}
@@ -347,7 +343,6 @@
archiveprefix = {arXiv},
eprint = {1504.01716},
eprinttype = {arxiv},
-file = {/home/norangebit/Documenti/10-personal/12-organizzation/06-zotero/storage/F6WDYMDF/1504.html},
keywords = {Computer Science - Computer Vision and Pattern Recognition,Computer Science - Robotics},
primaryclass = {cs}
}
@@ -364,7 +359,6 @@
issn = {1546-170X},
doi = {10.1038/s41591-020-0842-3},
abstract = {Skin conditions affect 1.9 billion people. Because of a shortage of dermatologists, most cases are seen instead by general practitioners with lower diagnostic accuracy. We present a deep learning system (DLS) to provide a differential diagnosis of skin conditions using 16,114 de-identified cases (photographs and clinical data) from a teledermatology practice serving 17 sites. The DLS distinguishes between 26 common skin conditions, representing 80\% of cases seen in primary care, while also providing a secondary prediction covering 419 skin conditions. On 963 validation cases, where a rotating panel of three board-certified dermatologists defined the reference standard, the DLS was non-inferior to six other dermatologists and superior to six primary care physicians (PCPs) and six nurse practitioners (NPs) (top-1 accuracy: 0.66 DLS, 0.63 dermatologists, 0.44 PCPs and 0.40 NPs). These results highlight the potential of the DLS to assist general practitioners in diagnosing skin conditions.},
-file = {/home/norangebit/Documenti/10-personal/12-organizzation/06-zotero/storage/CGRLE4YH/s41591 020 0842 3.html},
issue = {6},
langid = {english},
number = {6},
@@ -385,7 +379,7 @@
archiveprefix = {arXiv},
eprint = {2101.03730},
eprinttype = {arxiv},
-file = {/home/norangebit/Documenti/10-personal/12-organizzation/07-zotero-attachments/undefined/Liu_2021_An Exploratory Study on the Introduction and Removal of Different Types of.pdf},
+file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/undefined/Liu_2021_An Exploratory Study on the Introduction and Removal of Different Types of.pdf},
keywords = {Computer Science - Software Engineering},
langid = {english},
number = {2}
@@ -396,7 +390,7 @@
url = {https://ieeexplore.ieee.org/abstract/document/6248110/},
urldate = {2021-06-09},
abstract = {Traditional methods of computer vision and machine learning cannot match human performance on tasks such as the recognition of handwritten digits or traffic signs. Our biologically plausible, wide and deep artificial neural network architectures can. Small (often minimal) receptive fields of convolutional winner-take-all neurons yield large network depth, resulting in roughly as many sparsely connected neural layers as found in mammals between retina and visual cortex. Only winner neurons are trained. Several deep neural columns become experts on inputs preprocessed in different ways; their predictions are averaged. Graphics cards allow for fast training. On the very competitive MNIST handwriting benchmark, our method is the first to achieve near-human performance. On a traffic sign recognition benchmark it outperforms humans by a factor of two. We also improve the state-of-the-art on a plethora of common image classification benchmarks.},
-file = {/home/norangebit/Documenti/10-personal/12-organizzation/06-zotero/storage/2R4ZFR6C/6248110.html},
+file = {/home/norangebit/Documenti/10-personal/12-organization/06-zotero/storage/2R4ZFR6C/6248110.html},
langid = {american}
}

@@ -404,7 +398,7 @@
title = {Natural {{Language Toolkit}} \textemdash{} {{NLTK}} 3.5 Documentation},
url = {https://www.nltk.org/},
urldate = {2021-03-30},
-file = {/home/norangebit/Documenti/10-personal/12-organizzation/06-zotero/storage/VKI2452L/www.nltk.org.html}
+file = {/home/norangebit/Documenti/10-personal/12-organization/06-zotero/storage/VKI2452L/www.nltk.org.html}
}

@online{navlanilatentsemanticindexing,
@@ -413,7 +407,7 @@
url = {https://machinelearninggeek.com/latent-semantic-indexing-using-scikit-learn/},
urldate = {2021-05-17},
abstract = {In this tutorial, we will focus on Latent Semantic Indexing or Latent Semantic Analysis and perform topic modeling using Scikit-learn.},
-file = {/home/norangebit/Documenti/10-personal/12-organizzation/06-zotero/storage/MB9PJVXP/latent-semantic-indexing-using-scikit-learn.html},
+file = {/home/norangebit/Documenti/10-personal/12-organization/06-zotero/storage/MB9PJVXP/latent-semantic-indexing-using-scikit-learn.html},
langid = {american}
}

@@ -428,7 +422,6 @@
archiveprefix = {arXiv},
eprint = {1804.03999},
eprinttype = {arxiv},
-file = {/home/norangebit/Documenti/10-personal/12-organizzation/06-zotero/storage/LBH6UYV8/1804.html},
keywords = {Computer Science - Computer Vision and Pattern Recognition},
primaryclass = {cs}
}
@@ -444,7 +437,7 @@
doi = {10.1109/TSE.1975.6312866},
abstract = {The Source Code Control System (SCCS) is a software tool designed to help programming projects control changes to source code. It provides facilities for storing, updating, and retrieving all versions of modules, for controlling updating privileges for identifying load modules by version number, and for recording who made each software change, when and where it was made, and why. This paper discusses the SCCS approach to source code control, shows how it is used and explains how it is implemented.},
eventtitle = {{{IEEE Transactions}} on {{Software Engineering}}},
-file = {/home/norangebit/Documenti/10-personal/12-organizzation/06-zotero/storage/8KN2BXLY/6312866.html},
+file = {/home/norangebit/Documenti/10-personal/12-organization/06-zotero/storage/8KN2BXLY/6312866.html},
keywords = {Configuration management,Control systems,Documentation,Laboratories,Libraries,Process control,program maintenance,Software,software control,software project management},
number = {4}
}
@@ -457,7 +450,7 @@
volume = {45},
pages = {19},
abstract = {The market for mobile apps is getting bigger and bigger, and it is expected to be worth over 100 Billion dollars in 2020. To have a chance to succeed in such a competitive environment, developers need to build and maintain high-quality apps, continuously astonishing their users with the coolest new features. Mobile app marketplaces allow users to release reviews. Despite reviews are aimed at recommending apps among users, they also contain precious information for developers, reporting bugs and suggesting new features. To exploit such a source of information, developers are supposed to manually read user reviews, something not doable when hundreds of them are collected per day. To help developers dealing with such a task, we developed CLAP (Crowd Listener for releAse Planning), a web application able to (i) categorize user reviews based on the information they carry out, (ii) cluster together related reviews, and (iii) prioritize the clusters of reviews to be implemented when planning the subsequent app release. We evaluated all the steps behind CLAP, showing its high accuracy in categorizing and clustering reviews and the meaningfulness of the recommended prioritizations. Also, given the availability of CLAP as a working tool, we assessed its applicability in industrial environments.},
-file = {/home/norangebit/Documenti/10-personal/12-organizzation/07-zotero-attachments/undefined/Scalabrino_2019_Listening to the Crowd for the Release Planning of Mobile Apps.pdf},
+file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/undefined/Scalabrino_2019_Listening to the Crowd for the Release Planning of Mobile Apps.pdf},
langid = {english},
number = {1}
}
@ -473,7 +466,7 @@
doi = {10.1002/j.1538-7305.1948.tb01338.x},
abstract = {The recent development of various methods of modulation such as PCM and PPM which exchange bandwidth for signal-to-noise ratio has intensified the interest in a general theory of communication. A basis for such a theory is contained in the important papers of Nyquist1 and Hartley2 on this subject. In the present paper we will extend the theory to include a number of new factors, in particular the effect of noise in the channel, and the savings possible due to the statistical structure of the original message and due to the nature of the final destination of the information.},
eventtitle = {The {{Bell System Technical Journal}}},
file = {/home/norangebit/Documenti/10-personal/12-organization/06-zotero/storage/ZLGCL7V5/6773024.html},
number = {3}
}

@ -488,7 +481,7 @@
doi = {10.1145/3213846.3213866},
abstract = {Deep learning applications become increasingly popular in important domains such as self-driving systems and facial identity systems. Defective deep learning applications may lead to catastrophic consequences. Although recent research efforts were made on testing and debugging deep learning applications, the characteristics of deep learning defects have never been studied. To fill this gap, we studied deep learning applications built on top of TensorFlow and collected program bugs related to TensorFlow from StackOverflow QA pages and GitHub projects. We extracted information from QA pages, commit messages, pull request messages, and issue discussions to examine the root causes and symptoms of these bugs. We also studied the strategies deployed by TensorFlow users for bug detection and localization. These findings help researchers and TensorFlow users to gain a better understanding of coding defects in TensorFlow programs and point out a new direction for future research.},
eventtitle = {{{ISSTA}} '18: {{International Symposium}} on {{Software Testing}} and {{Analysis}}},
file = {/home/norangebit/Documenti/10-personal/12-organization/07-zotero-attachments/ACM/Zhang_2018_An empirical study on TensorFlow program bugs.pdf},
isbn = {978-1-4503-5699-2},
langid = {english}
}

@ -9,11 +9,11 @@ Among these we can certainly mention: image recognition, diagnosis

The growing production of \acl{ML}-based software has also given a strong boost to research.
Attention has not been focused solely on the study of new models and architectures, but also on the development process of these products, in order to evaluate the various problems from an engineering point of view.
There is no shortage of studies in the literature aimed at highlighting the differences between \ac{ML} projects and traditional projects [@gonzalez2020statemluniverse10].
Nor of comparisons between projects with respect to the dependencies and libraries they use [@han2020empiricalstudydependency].

Many studies, instead, focus on the problems related to the development of \acl{ML} applications.
In some cases the analysis was carried out for specific libraries, in other cases the focus was placed on \ac{SO} discussions.
In still other cases, attention was directed at specific issues such as \ac{SATD} [@liu2021exploratorystudyintroduction].

This work, too, focuses on the defects found in \acl{ML} applications.
@ -29,7 +29,7 @@ Finally, we want to understand whether the *issues* are all treated in the same way with

## Thesis structure

Section [-@sec:related-works] provides an overview of the state of the art.
Section [-@sec:methodology] presents the \ac{RQ}s, describes the procedure used to collect commits and issues, and explains how they were classified.
It also illustrates the analysis methodology employed for the study of each *\ac{RQ}*.
The results of the analyses and a qualitative discussion of some *extreme cases* are reported in section [-@sec:results].
Finally, section [-@sec:conclusions] closes this thesis.

@ -18,8 +18,8 @@ In fact, this was the first year in which more projects related

A second analysis made it possible to understand how participation varies across the various projects.
To carry out this analysis, contributors were divided into:

- *external*: their contributions are limited to opening *issues* and commenting on discussions.
- *internal*: in addition to performing the tasks listed above, they must also have closed issues or made commits to the project.

Based on this division, it emerged that \acl{ML} tools have a higher number of internal contributors than generic projects.
The latter, however, have greater external participation.
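The internal/external split can be sketched as a small classification routine; the record fields and names below are illustrative assumptions, not the study's actual code.

```python
from dataclasses import dataclass

@dataclass
class Activity:
    """Aggregated activity of one contributor on one project (assumed fields)."""
    issues_opened: int = 0
    comments: int = 0
    issues_closed: int = 0
    commits: int = 0

def classify(activity: Activity) -> str:
    # Internal contributors have closed at least one issue or made commits;
    # contributors who only open issues and comment remain external.
    if activity.issues_closed > 0 or activity.commits > 0:
        return "internal"
    return "external"

contributors = {
    "alice": Activity(issues_opened=3, comments=10),          # only opens and comments
    "bob": Activity(comments=2, issues_closed=1, commits=5),  # closes and commits too
}
labels = {name: classify(act) for name, act in contributors.items()}
```

Counting the two labels per project then gives the internal/external participation figures compared between ML tools and generic projects.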
@ -55,7 +55,7 @@ Also with regard to the classification by application domain, the

In fact, regardless of the library used, the most frequent projects are those dealing with video and images and with \ac{NLP}.

A further \ac{RQ} evaluated the type of dependency, distinguishing between direct and indirect dependencies.
For all three libraries, a direct dependency turned out to be more likely than an indirect one.
`PyTorch` is the library most frequently imported directly, while `Theano` is almost as likely to be imported directly as indirectly.
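The direct/indirect distinction can be sketched as a reachability check on a dependency graph; the adjacency mapping and package names are invented for illustration.

```python
from collections import deque

def dependency_kind(graph, project, library):
    """Return 'direct', 'indirect', or None for the project -> library relation.

    graph maps each package to the packages it declares as dependencies.
    """
    if library in graph.get(project, ()):
        return "direct"
    # breadth-first search over the transitive dependencies
    seen, queue = {project}, deque(graph.get(project, ()))
    while queue:
        pkg = queue.popleft()
        if pkg == library:
            return "indirect"
        if pkg not in seen:
            seen.add(pkg)
            queue.extend(graph.get(pkg, ()))
    return None

graph = {
    "app": ["vision-lib"],    # app only declares vision-lib
    "vision-lib": ["torch"],  # which in turn pulls in torch
}
```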

A further analysis was conducted to determine how frequently projects update their dependencies or perform downgrades.
@ -65,7 +65,7 @@ In the case of projects that depend on `TensorFlow`, most of the downgrades

Still looking at the library version used, it emerged that `Theano`-based projects are the ones that most frequently use the latest available version of the library.

In another work by Han *et al.* [@han2020whatprogrammersdiscuss] the focus shifts to discussion topics and to how they vary depending on the framework used.
In this case the datasets included not only data retrieved from GitHub but also the discussions on \ac{SO}.

This study made it possible to highlight differences and similarities in the discussions arising around the three frameworks of interest.
In particular, it emerges that the most discussed phases are *model training* and *preliminary preparation*.
@ -82,7 +82,7 @@ While `Theano` shows many differences both in terms of its uses

The study by Grichi *et al.* [@grichi2020impactmultilanguagedevelopment] focuses on *multi-language* systems.
In this case the goal is to understand whether \ac{ML} systems are more prone to being built with several different languages.
Moreover, by analyzing the pull requests written in several languages, the authors want to understand whether these are accepted as frequently as *single-language* ones and whether the presence of defects is equivalent.

The analysis was carried out on 27 open source projects hosted on GitHub.
The projects were then classified into three categories:
@ -91,7 +91,7 @@ The projects were then classified into three categories:

- Cat II: includes 10 generic *multi-language* systems.
- Cat III: includes 7 *single-language* \acl{ML} systems.

Subsequently, the \ac{PR}s of every project considered were downloaded.
The \ac{PR}s were categorized to distinguish the accepted ones from the rejected ones.
The \acl{PR} were also categorized according to the number of languages used.
In this way it was possible to identify *single-language* and *multi-language* \ac{PR}s.
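The split between *single-language* and *multi-language* \ac{PR}s can be approximated from the extensions of the files each \ac{PR} touches; the extension-to-language map below is a deliberately tiny assumption made for this sketch.

```python
import os

# hypothetical, incomplete mapping used only for this example
EXT_TO_LANG = {".py": "Python", ".cpp": "C++", ".cu": "CUDA", ".java": "Java"}

def pr_language_kind(changed_files):
    """Classify a pull request by the number of languages of its changed files."""
    langs = {EXT_TO_LANG[ext]
             for ext in (os.path.splitext(f)[1] for f in changed_files)
             if ext in EXT_TO_LANG}
    return "multi-language" if len(langs) > 1 else "single-language"
```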
@ -105,17 +105,17 @@ Category I and II projects are also comparable with respect to the number of

The study highlighted that, within \acl{ML} projects, *single-language* \acl{PR} are accepted far more easily than *multi-language* ones.
Moreover, even when the latter are accepted, the time required for their acceptance is longer.
Finally, it was also observed that, for *multi-language* \ac{PR}s, there are no differences in *bug* introduction between category I and category II projects.
*Single-language* \acl{PR}, on the other hand, seem to be more affected by *bugs* in the case of \acl{ML} systems.

## Characteristic problems of ML

The literature also includes works that focus on the analysis of the problems and *bugs* found in \acl{ML} applications.
In the study by Zhang *et al.* [@zhang2018empiricalstudytensorflow] attention is directed solely at the problems related to `TensorFlow`.
For the study, `TensorFlow` *bugs* were retrieved both from projects on GitHub (88 items) and from questions on \acl{SO} (87 items).

In order to identify the causes of the *bugs* and their symptoms, the study's authors had to manually analyze the items in the dataset.
In the case of *bugs* discussed on \ac{SO}, the information was retrieved from the discussion.
In the case of *bugs* retrieved from GitHub, instead, the information was obtained by studying the *fix* intervention and the message associated with it.

In this way it was possible to identify four symptoms:
@ -138,7 +138,7 @@ The study by Humbatova *et al.* [@humbatova-2019-taxonomyrealfaults] also

In this case, however, the view is broader and not limited to a single library.
Moreover, in this case the ultimate goal of the work is the construction of a taxonomy of \ac{ML} problems.

Here too, the data was retrieved both from \acl{SO} and from GitHub.
In addition, for this study an interview was also conducted with 20 researchers and developers in the \acl{ML} field.
Starting from this data, a taxonomy was built through a *bottom-up* approach.
The taxonomy consists of 5 *top-level* categories, 3 of which were divided into subcategories.
@ -152,24 +152,70 @@ Among the first-level categories are:

- *GPU Usage*: this category covers all problems in the use of the \ac{GPU}.
- *API*: this category includes all problems caused by incorrect use of the \acl{ML} framework's \ac{API}.

As can be seen, leaving aside the specificity of the first work, there is a strong similarity between the categories of problems identified by the two studies.

## Analysis of Stack Overflow discussions about ML

In the study by Bangash *et al.* [@bangash2019whatdevelopersknow] an analysis is carried out of the \acl{ML} topics most frequently discussed by developers.
In this case, unlike the study by Han *et al.* [@han2020whatprogrammersdiscuss] discussed earlier, no distinction is made on the basis of the library used.
Moreover, this study uses only information retrieved from \acl{SO}, whereas the other work combined \ac{SO} questions with the discussion generated within GitHub repositories.

In this case the most frequently discussed topic concerns the presence of errors in the code.
These are followed by discussions about learning algorithms and data training.
The study also highlighted that many discussions concern \acl{ML} libraries and frameworks such as `numpy`, `pandas`, `keras`, `Scikit-Learn`, etc.
All these discussions were grouped under the *framework* topic.

In the work of Alshangiti *et al.* [@alshangiti2019whydevelopingmachine], too, the questions on the \acl{SO} platform are analyzed.
In this case, however, in addition to a qualitative analysis of the content of these discussions, a comparative analysis between \acl{ML}-related discussions and the others was also performed.

To carry out this analysis the authors started from the \ac{SO} database dump and identified three samples:

- *Quantitative Study Sample*: consists of 86983 \ac{ML}-related questions, together with their answers.
  The posts were identified by defining a list of 50 tags used on \ac{SO} for \acl{ML} questions.
- *Qualitative Study Sample*: contains 684 posts written by 50 users.
  This sample was obtained by performing a further sampling on the sample discussed in the previous point.
- *Baseline Sample*: consists of posts that do not concern \acl{ML}.
  This sample is used to compare \ac{ML} questions with generic ones.
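The tag-based selection of the quantitative sample can be sketched like this; the tag set is a tiny illustrative subset, not the study's full list of 50 tags.

```python
# a few representative tags (hypothetical subset of the full list)
ML_TAGS = {"machine-learning", "deep-learning", "tensorflow", "keras", "scikit-learn"}

def is_ml_post(post_tags):
    # a post enters the ML sample if it carries at least one ML-related tag
    return not ML_TAGS.isdisjoint(post_tags)

posts = [
    {"id": 1, "tags": ["python", "tensorflow"]},
    {"id": 2, "tags": ["html", "css"]},
]
ml_sample = [p for p in posts if is_ml_post(p["tags"])]
```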

The study's first *\ac{RQ}* aims to verify whether answering a \acl{ML}-related question is more complicated.
To assess the difficulty of answering, they counted the questions with no answer at all, the questions with no accepted answer, and the median time needed for a question to receive an accepted answer.
The comparison between the first and the third sample on these metrics showed that \ac{ML}-related posts are more likely to have no answers/accepted answers.
It was also observed that, on average, \acl{ML} questions need ten times as long to receive an accepted answer.
An explanation for this phenomenon is provided by the second *\ac{RQ}*, which highlights a shortage of \acl{ML} experts within the \acl{SO} community[^expertise-rank].
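The three answering-difficulty metrics can be computed as in the following sketch; the field names of the question records are assumptions made for the example.

```python
from statistics import median

def answer_metrics(questions):
    """Share of unanswered questions, share without an accepted answer,
    and median hours until an answer is accepted."""
    n = len(questions)
    unanswered = sum(1 for q in questions if q["answers"] == 0) / n
    no_accepted = sum(1 for q in questions if q["accepted_after_h"] is None) / n
    accept_times = [q["accepted_after_h"] for q in questions
                    if q["accepted_after_h"] is not None]
    return unanswered, no_accepted, median(accept_times) if accept_times else None

sample = [
    {"answers": 0, "accepted_after_h": None},
    {"answers": 2, "accepted_after_h": 4.0},
    {"answers": 1, "accepted_after_h": 10.0},
    {"answers": 3, "accepted_after_h": None},
]
metrics = answer_metrics(sample)
```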

[^expertise-rank]: Experts were identified using the *ExpertiseRank* approach.
    This approach builds a directed graph, in which users are represented by nodes and edges represent a helping relationship, through which users' expertise can be determined.
    For example, in a case where user B helped user A, B's expertise will be higher than A's.
    If user C then answers a question from B, C will have higher expertise than both A and B, having been able to help a user (B) who had in turn proved to be an expert (by answering A).

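A toy version of the help graph can make the footnote concrete; the PageRank-style propagation below is a simplified stand-in, not the exact *ExpertiseRank* formulation.

```python
def expertise_rank(edges, damping=0.85, iters=50):
    """Score users on a directed help graph.

    edges: (asker, answerer) pairs; each edge transfers 'expertise
    evidence' from the asker to the user who answered them.
    """
    users = {u for pair in edges for u in pair}
    out = {u: [] for u in users}
    for asker, answerer in edges:
        out[asker].append(answerer)
    n = len(users)
    score = {u: 1.0 / n for u in users}
    for _ in range(iters):
        new = {u: (1 - damping) / n for u in users}
        for u in users:
            if out[u]:
                share = damping * score[u] / len(out[u])
                for v in out[u]:
                    new[v] += share
            else:
                # users with no outgoing edges (they never asked)
                # spread their weight uniformly
                for v in users:
                    new[v] += damping * score[u] / n
        score = new
    return score

# B answered A, then C answered B: C outranks B, who outranks A
scores = expertise_rank([("A", "B"), ("B", "C")])
```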
The study was also able to identify the phases in which developers encounter the most problems.
In general, the greatest difficulties were found in *data preprocessing*, in the configuration of the development environment and in the deployment of the model.
As for \acl{DL}-specific tasks, the main problems concern \ac{NLP} applications and object recognition.
Finally, the study showed that, despite its wide adoption, many users run into problems when using the `TensorFlow` \ac{API}.

## Entropy of a change {#sec:entropy}

In the study by Hassan [@hassan2009predictingfaultsusing] the goal is to understand how the complexity of the software change process impacts the introduction of defects into the codebase.
To evaluate the complexity of the change process, the concept of entropy [@shannon1948mathematicaltheorycommunication] used in communication theory was *borrowed*.

The study was conducted on six large open source projects.
Through the *version control* systems and the lexical analysis of change messages, three types of change were identified:

- *Fault Repairing modification*: includes the changes made to fix a defect in the software product.
  This category of changes was not used to compute the entropy, but to validate the study.
- *General Maintenance modification*: includes maintenance changes that do not affect the behavior of the code.
  This category covers code re-indentation, changes to the copyright notice, etc.
  These changes were excluded from the study.
- *Feature Introduction modification*: includes all the changes that alter the behavior of the code.
  These changes were identified by exclusion and were used to compute the entropy.

Three models that allow the complexity of the software change process to be computed are defined within the study.

- *Basic Code Change model*: the first model presented; it assumes a constant period for the entropy computation and considers the number of files in the project to be constant.
- *Extended Code Change model*: an evolution of the basic model that makes it more flexible.
- *File Code Change model*: the models illustrated above provide an overall entropy value for the whole project.
  This model makes it possible to evaluate the entropy separately for each file.
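The entropy at the heart of these models can be sketched as follows: the share of changes landing on each file feeds Shannon's formula, normalized here to the range $[0, 1]$ by the maximum attainable entropy. This is a didactic sketch under simplified assumptions, not the paper's implementation.

```python
from math import log2

def change_entropy(file_changes, normalize=True):
    """Shannon entropy of how changes are spread across files.

    file_changes: mapping file -> number of modifications in the period.
    """
    total = sum(file_changes.values())
    probs = [c / total for c in file_changes.values() if c > 0]
    h = -sum(p * log2(p) for p in probs)
    if normalize and len(probs) > 1:
        h /= log2(len(probs))  # divide by the maximum possible entropy
    return h

# changes spread evenly over the files -> maximum entropy
even = change_entropy({"a.py": 5, "b.py": 5, "c.py": 5})
# changes concentrated in one file -> low entropy
skewed = change_entropy({"a.py": 9, "b.py": 1})
```

A period of scattered modifications thus scores close to 1, while focused work on a single file scores close to 0.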

The study showed that, in the case of large systems, the complexity of the change process is able to predict the occurrence of faults.
It is also shown that prediction based on process complexity is more accurate than prediction based on code complexity.

@ -208,7 +208,8 @@ The quantitative analysis was carried out through a barplot reporting

### RQ3: is there a difference in entropy between ML bugs and other bugs?

The next analysis was aimed at verifying the existence of a difference in the entropy of the *fixes* with respect to their nature.
The work of this analysis is based on the *BCC* model discussed in @sec:entropy.
The analysis was carried out both at the file level and at the line level, so for every commit in the dataset it was necessary to determine both the number of files that underwent changes and the number of altered lines, thus considering both additions and removals.
The figure for the modified lines is already present in the starting dataset (see @sec:classificazione-commit), while the number of modified files can be derived from the list of files modified in the commit.

@ -49,6 +49,8 @@ acronym:
    long: Research Question
  - short: SATD
    long: Self-Admitted Technical Debt
  - short: SO
    long: Stack Overflow
  - short: VPS
    long: Virtual Private Server
##### crossref #####