Template talk:Machine learning


"Models"


This section title and its contents seem pretty much random to me. How are the contents chosen? One regression algorithm, one random clustering algorithm, four standard classifiers, but no decision tree, which is probably the grandfather of all classifiers. --Chire (talk) 12:41, 22 October 2013 (UTC)

In general, one may argue that k-means is NOT machine learning, but plain old statistics. And clustering is at most a stepchild of the machine learning world; it's a data mining / knowledge discovery domain, just like outlier detection and frequent itemset mining. If you look at the communities, I would not call data mining part of machine learning either; it lives in parallel (unfortunately). Machine learners don't get or like unsupervised methods, actually. The "theory" section in this template is also pretty random, isn't it? --Chire (talk) 12:45, 22 October 2013 (UTC)

This template is brand new and very incomplete. You're welcome to add to it. k-means clustering is a very widely employed method in the machine learning community, e.g. by computer vision folks who use it as a feature learning method (see the sketch after this comment), by neural nets folks for bootstrapping their RBF networks, and by text mining people. New papers employing or improving k-means appear regularly in the ML literature. I can dig up some references if you like.
AFAIC, the sidebar can be renamed something like "Data mining/machine learning/pattern recognition" -- the three overlap to such a degree that they're impossible to demarcate. QVVERTYVS (hm?) 14:59, 22 October 2013 (UTC)
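A minimal sketch of the "k-means as feature learning" usage mentioned above, assuming scikit-learn is available; the dataset, cluster count, and downstream classifier are arbitrary illustrative choices, not anything prescribed in this discussion:

from sklearn.datasets import load_digits
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)

# KMeans acts as a transformer here: its transform() step maps each sample to
# its distances from the 32 learned centroids, which serve as new features
# for the downstream classifier.
pipe = make_pipeline(
    KMeans(n_clusters=32, n_init=10, random_state=0),
    LogisticRegression(max_iter=2000),
)
print(pipe.fit(X, y).score(X, y))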
Re: "one regression algorithm": wrong. Logistic regression is in fact a classification algorithm. It is very popular in esp. the natural language processing community and form the basis for much recent neural nets and structured prediction work. Neural nets, k-NN and SVMs are all used for regression, though, even if this is not reflected in their Wikipedia articles. QVVERTYVS (hm?) 15:04, 22 October 2013 (UTC)[reply]__DTELLIPSISBUTTON__{"threadItem":{"timestamp":"2013-10-22T15:04:00.000Z","author":"Qwertyus","type":"comment","level":2,"id":"c-Qwertyus-2013-10-22T15:04:00.000Z-Chire-2013-10-22T12:45:00.000Z","replies":["c-Chire-2013-10-23T09:05:00.000Z-Qwertyus-2013-10-22T15:04:00.000Z"],"displayName":"QVVERTYVS"}}-->
I agree that they are hard to separate and it thus may be a good idea to merge them into one template. I know that k-means is used a lot in machine learning, as it is a statistical optimization problem, not so much a structure-discovery method. Maybe instead of the "Models" block, make one for each "Problem" above then? I.e. regression, classification, clustering, anomaly detection, etc.? --Chire (talk) 09:05, 23 October 2013 (UTC)

Maybe we need to add Markovian models?


Hidden Markov models (HMMs) have been used successfully for NLP amongst other machine learning tasks (there are dozens of articles; just do a Google Scholar search). I believe they should be added as one of the models. — Preceding unsigned comment added by 150.135.223.128 (talk) 19:12, 28 January 2014 (UTC)

I've added CRFs, HMMs and a link to the more general article graphical model. QVVERTYVS (hm?) 22:00, 28 January 2014 (UTC)

Two problems are the same: "classification" and "clustering"


There are simply two general approaches to solving the same problem, supervised and unsupervised -- but the problem is one and the same. Fgnievinski (talk) 23:55, 3 May 2014 (UTC)

Applications are also overlapping if not coincident. Fgnievinski (talk) 12:32, 5 May 2014 (UTC)
As discussed in Talk:Statistical classification#Terminology: "classification" is supervised, "clustering" is unsupervised -- Really?, I disagree that they are the same thing.
The objectives are different in the sense that classification tries to minimize the prediction error. Clustering, however, tries to discover some meaningful structure, without knowing what to look out for (which is also why clustering more often than not returns crap results -- too little guidance on what you are looking for). They are related, but clearly not the same thing. IMHO, the applications as well as the methods differ fundamentally, too. You can't easily take one method and transfer it to the other problem; not even naive Bayes or kNN classification. There are some cases where you have similar ideas - k-means also minimizes squared errors - but these occur in many other areas, too. And there are many clustering approaches not based on minimizing some statistical quantity. The big problem with clustering is evaluation: usually you evaluate by some statistical quantity (internal) or by class labels (external), both of which look a lot like classification.
Either way, we are not truth finders. There is plenty of literature that distinguishes these approaches, so we should not merge them. The rule of thumb in the literature is that classification and regression are supervised, and this is well reflected by the ML bar template. --Chire (talk) 13:36, 5 May 2014 (UTC)
I'm sorry, this is not a restatement of the previous talk. The methods are outside the scope of the present discussion. What is inside the scope is that both methodological approaches aim to cluster, group, segment, partition, and classify input variates. All the problems addressed by unsupervised methods could be tackled by supervised ones if additional information is given. Fgnievinski (talk) 13:48, 5 May 2014 (UTC)
I also disagree on that. If you added labels to a data set, it would become a different problem: how to predict the labels of new instances, given the training data set, i.e. it becomes class prediction, whereas it was structure discovery before. That is IMHO a quite different task. --Chire (talk) 13:57, 5 May 2014 (UTC)
The algorithms and performance metrics for clustering are radically different from those for classification, which is enough reason not to conflate them. I also challenge the statement that the applications are "coincident". QVVERTYVS (hm?) 14:50, 5 May 2014 (UTC)
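A minimal sketch of the difference in evaluation, assuming scikit-learn: a classifier is scored against known labels (accuracy), while a clustering is scored by internal structure (silhouette) or compared to external labels (adjusted Rand index). The dataset and models are arbitrary illustrative choices:

from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, silhouette_score, adjusted_rand_score

X, y = load_iris(return_X_y=True)

# Classification: evaluated against the known labels.
pred = KNeighborsClassifier().fit(X, y).predict(X)
print("accuracy:", accuracy_score(y, pred))

# Clustering: evaluated internally (silhouette) or externally against labels (ARI).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("silhouette:", silhouette_score(X, labels))
print("ARI:", adjusted_rand_score(y, labels))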



Reinforcement learning & Terminology


Seems confusing for an outsider to call it 'supervised learning' but then not talk about unsupervised or reinforcement learning? Not sure what the best approach here would be, since clustering is (in some way) unsupervised learning -- would a rename be warranted? Perhaps worth renaming to "Unsupervised Learning / Clustering"? Dm1911 (talk) 17:50, 27 May 2015 (UTC)

Reinforcement learning is currently missing, we should add it. Unsupervised learning is much broader than clustering: it also encompasses dimensionality reduction and feature learning. QVVERTYVS (hm?) 18:46, 27 May 2015 (UTC)
Added Reinforcement learning as per this discussion. Situphobos (talk) 07:34, 4 July 2016 (UTC)

Add List of datasets for machine learning research


How about adding list of datasets for machine learning research to the bar? Any thoughts on this? --Datakeeper (talk) 19:04, 25 February 2016 (UTC)


Collapsible version


Is there a way to make this template have collapsible sections? It's pretty large and a little unwieldy to put on pages. --Datakeeper (talk) 21:15, 22 February 2016 (UTC)

Good point. Made it collapsible. QVVERTYVS (hm?) 20:15, 25 February 2016 (UTC)
@Qwertyus: Excellent - looks great! Thank you! --Datakeeper (talk) 21:17, 25 February 2016 (UTC)

spam


I see this all over the place as the intro diagram and it is not helpful. Diagrams about the topic at hand, instead of the machine learning template on every subtopic, would be better. Daniel.Cardenas (talk) 23:03, 1 May 2016 (UTC)

Suggest putting it after the intro, in other words as the first topic. This will encourage others to create a more specific and more helpful diagram for the intro. Daniel.Cardenas (talk) 00:08, 2 May 2016 (UTC)

Per WP:NAVBOX: "The collection of articles in a sidebar template should be fairly tightly related... If the articles are not tightly related, a footer template (a navbox, located at the bottom of the article) may be more appropriate." This definitely applies here. I cannot see this template being much use to anybody for navigation, and it is certainly not one of the first things a reader will look for on arriving at an article. I think conversion to a footer would be ideal. Bigbluefish (talk) 10:18, 13 February 2018 (UTC)


Existing template image is not illustrative


@Snus-kin: Hi, the template image is currently File:Multi-Layer_Neural_Network-Vector-Blank.svg. I think this image does not convey any helpful information about the concept of "machine learning". I proposed using File:Machine Learning Icon.jpg instead, which has now been reverted. That image is part of the cover of this book: https://books.google.com/books?id=Ex8_tAEACAAJ&source=gbs_book_other_versions ; it shows a robot that has graduated from some knowledge course. However, the existing template image, which is a graph (a neural network), is not suitable for this purpose, and a regular person does not understand "machine learning" or "data mining" from it. @Qwertyus: Any idea? Thanks, Hooman Mallahzadeh (talk) 08:15, 28 September 2021 (UTC)

Hi Hooman, I think if we were to change the image we should aim for something of high quality, perhaps drawn in SVG; the current proposal is rough and pixelated. I'd also be careful about recreating images from books as they are typically copyrighted material, but I am not an expert on this. Snus-kin (talk) 09:15, 28 September 2021 (UTC)
I feel the current image, File:Kernel Machine.svg, is both distracting and unhelpful. I would support an improvement. The grey neural network one above, File:Multi-Layer_Neural_Network-Vector-Blank.svg, was better. The owl with mortarboard isn't to my taste. Chumpih t 08:28, 5 August 2023 (UTC), tweaked 2023-08-08.
Aaand I see this has now (2023-08-08) been changed to File:Neural network with dark background.png (captioned "I tried to illustrate the large amount of connections in the current illustration on dark background." - DancingPhilosopher), which is better than the Kernel Machine classification one in my eyes, but I would still prefer File:Multi-Layer_Neural_Network-Vector-Blank.svg. Then again, de gustibus. Chumpih t 00:37, 8 August 2023 (UTC)
There are pros and cons for each. The one that you prefer (the one with the neurons emphasized as big dark grey circles and arrows illustrating the inputs to each layer) illustrates the basic blueprint introduced forty years ago (since then known under the misnomer "multilayer perceptron"), but it does not show any progress made since (i.e. stochastic gradient descent introduced in 1969; the backpropagation method used in 1970; different activation functions; transformer architectures used since 2018, etc.). I'm not a fan of "File:Kernel Machine.svg" either, but at least it is about the (somewhat modern) 1990s kernel trick. I was thinking about all that, but the only low-hanging fruit I can see is sheer size, i.e. the largeness itself that characterizes large language models, for example. DancingPhilosopher (talk) 09:57, 8 August 2023 (UTC)
The new one is much better. I was always distracted by that old thumbnail, as my brain would briefly try to understand the ML article as if the image was a visual representation of the concepts of the article. 100DashSix (talk) 17:00, 17 August 2023 (UTC)
