Wednesday, July 17, 2019
Research Papers in Computer Science Essay
Since we recently announced our $10,001 Binary Battle to promote applications built on the Mendeley API (now including PLoS as well), I decided to take a look at the data to see what people have to work with. My analysis focused on our second largest discipline, Computer Science. Biological Sciences (my discipline) is the largest, but I started with this one so that I could look at the data with fresh eyes, and also because it's got some really cool papers to talk about. Here's what I found: a fascinating list of topics, with many of the expected fundamental papers like Shannon's Theory of Information and the Google paper, a strong showing from MapReduce and machine learning, but also some interesting hints that augmented reality may be becoming more of an actual reality soon.

The top graph summarizes the overall results of the analysis. It shows the top 10 papers among those who have listed computer science as their discipline and chosen a subdiscipline. The bars are colored according to subdiscipline and the number of readers is shown on the x-axis. The bar graphs for each paper show the distribution of readership levels among subdisciplines. 17 of the 21 CS subdisciplines are represented, and the axis scales and color schemes remain constant throughout. Click on any graph to explore it in more detail or to grab the raw data. (NB: A minority of computer scientists have listed a subdiscipline. I would encourage everyone to do so.)

1. Latent Dirichlet Allocation (available full-text)

LDA is a means of classifying objects, such as documents, based on their underlying topics. I was surprised to see this paper as number one instead of Shannon's information theory paper (#7) or the paper describing the concept that became Google (#3). It turns out that interest in this paper is very strong among those who list artificial intelligence as their subdiscipline. In fact, AI researchers contributed the majority of readership to 6 out of the top 10 papers. Presumably, those interested in popular topics such as machine learning list themselves under AI, which explains the strength of this subdiscipline, whereas papers like the MapReduce one or the Google paper appeal to a broad range of subdisciplines, giving those papers smaller numbers spread out across more subdisciplines. Professor Blei is also a bit of a superstar, so that didn't hurt. (The irony of a manually categorized list with an LDA paper at the top has not escaped us.)

2. MapReduce: Simplified Data Processing on Large Clusters (available full-text)

It's no surprise to see this in the top 10 either, given the huge appeal of this parallelization technique for breaking down huge computations into easily executable and recombinable chunks. The importance of the monolithic "Big Iron" supercomputer has been on the wane for decades. The interesting thing about this paper is that it had some of the lowest readership scores of the top papers within any single subdiscipline, but folks from across the entire spectrum of computer science are reading it. This is perhaps expected for such a general-purpose technique, but given the above, it's strange that there are no AI readers of this paper at all.
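To make the split-then-merge idea concrete, here is a minimal word-count sketch in the spirit of the paper, written as plain Python with a local process pool standing in for a cluster; the corpus, chunk size, and helper names are illustrative assumptions, not the paper's own code.

```python
from collections import Counter
from multiprocessing import Pool

def map_chunk(lines):
    # Map step: emit a partial word count for one chunk of the input.
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

def reduce_counts(partial_counts):
    # Reduce step: merge the partial counts into one final result.
    total = Counter()
    for partial in partial_counts:
        total.update(partial)
    return total

if __name__ == "__main__":
    corpus = [
        "the quick brown fox",
        "the lazy dog",
        "the quick dog",
        "a brown dog",
    ]
    # Split the input into chunks, map them in parallel, then reduce.
    chunks = [corpus[i:i + 2] for i in range(0, len(corpus), 2)]
    with Pool(2) as pool:
        partials = pool.map(map_chunk, chunks)
    print(reduce_counts(partials))  # e.g. Counter({'the': 3, 'dog': 3, ...})
```

The same pattern scales out because each map call only needs its own chunk of data, and the reduce step only needs the partial results, which is what makes the technique so easy to parallelize across machines.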
3. The Anatomy of a Large-Scale Hypertextual Web Search Engine (available full-text)

In this paper, Google founders Sergey Brin and Larry Page describe how Google was created and how it initially worked. This is another paper with high readership across a broad swath of disciplines, including AI, but it wasn't dominated by any one discipline. I would expect that the largest share of readers have it in their library mostly out of curiosity rather than direct relevance to their research. It's a fascinating piece of history related to something that has now become part of our everyday lives.

4. Distinctive Image Features from Scale-Invariant Keypoints

This paper was new to me, although I'm sure it's not new to many of you. It describes how to identify objects in a video stream without regard to how near or far away they are or how they're oriented with respect to the camera. AI again drove the popularity of this paper in large part, and to understand why, think augmented reality. AR is the futuristic idea most familiar to the average sci-fi enthusiast as Terminator vision. Given the strong interest in the topic, AR could be closer than we think, but we'll probably use it to layer Groupon deals over the shops we pass by instead of building unstoppable fighting machines.

5. Reinforcement Learning: An Introduction (available full-text)

This is another machine learning paper, and its presence in the top 10 is primarily due to AI, with a small contribution from folks listing neural networks as their discipline, most likely because the paper was published in IEEE Transactions on Neural Networks. Reinforcement learning is essentially a technique that borrows from biology: the behavior of an intelligent agent is controlled by the amount of positive stimuli, or reinforcement, it receives in an environment where there are many different interacting positive and negative stimuli. This is how we'll teach the robots behaviors in a human fashion, before they rise up and destroy us.

6. Toward the Next Generation of Recommender Systems: A Survey of the State-of-the-Art and Possible Extensions (available full-text)

Popular among AI and information retrieval researchers, this paper discusses recommendation algorithms and classifies them into collaborative, content-based, or hybrid. While I wouldn't call this paper a groundbreaking work of the caliber of the Shannon paper above, I can certainly understand why it makes such a strong showing here: if you're using Mendeley, you're using both collaborative and content-based discovery methods!

7. A Mathematical Theory of Communication (available full-text)

Now we're back to more fundamental papers. I would really have expected this to be at least number 3 or 4, but the strong showing by the AI discipline for the machine learning papers in spots 1, 4, and 5 pushed it down. This paper discusses the theory of sending communications down a noisy channel and demonstrates a few key engineering parameters, such as entropy, which is the range of states of a given communication. It's one of the more fundamental papers of computer science, founding the field of information theory and enabling the development of the very tubes through which you received this web page you're reading now. It's also the first place the word "bit", short for binary digit, appears in the published literature.
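As a rough illustration of the entropy parameter (my own toy example, not one from the paper), the sketch below computes H = -Σ p·log2(p) for a few simple distributions; a fair coin toss works out to exactly one bit per toss, which is where the unit gets its meaning.

```python
import math

def entropy(probabilities):
    # Shannon entropy in bits: H = -sum(p * log2(p)) over outcomes with p > 0.
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))   # fair coin: 1.0 bit per toss
print(entropy([0.9, 0.1]))   # biased coin: ~0.469 bits per toss
print(entropy([0.25] * 4))   # fair four-sided die: 2.0 bits per roll
```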
8. The Semantic Web (available full-text)

In The Semantic Web, Tim Berners-Lee, Sir Tim, the inventor of the World Wide Web, describes his vision for the web of the future. Now, 10 years later, it's fascinating to look back through it and see on which points the web has delivered on its promise and how far away we still remain in so many others. This differs from the other papers above in that it's a descriptive piece, not primary research, but it still deserves its place in the list, and readership will only grow as we approach ever closer to his vision.

9. Convex Optimization (available full-text)

This is a very popular book on a widely used optimization technique in signal processing. Convex optimization tries to find the provably optimal solution to an optimization problem, as opposed to a nearby maximum or minimum. While this seems like a highly specialized niche area, it's of importance to machine learning and AI researchers, so it was able to pull in a nice readership on Mendeley. Professor Boyd has a very popular set of video classes at Stanford on the subject, which probably gave this a little boost as well. The point here is that print publications aren't the only way of communicating your ideas. Videos of techniques at SciVee or JoVE or recorded lectures (previously) can really help spread awareness of your research.

10. Object Recognition from Local Scale-Invariant Features (available full-text)

This is another paper on the same topic as paper #4, and it's by the same author. Looking across subdisciplines as we did here, it's not surprising to see two related papers, both of interest to the main driving discipline, appear twice. Adding the readers of this paper to those of paper #4 would be enough to put it in the #2 spot, just below the LDA paper.

Conclusions

So what's the moral of the story? Well, there are a few things to note. First of all, it shows that Mendeley readership data is good enough to reveal both papers of long-standing importance and interesting upcoming trends. Fun stuff can be done with this. How about a Mendeley leaderboard? You could grab the number of readers for each paper published by members of your group and have some friendly competition to see who can get the most readers, month over month (a rough sketch of this idea appears at the end of the post). Comparing yourself against others in terms of readers per paper could put a big smile on your face, or it could be a gentle nudge to get out to more conferences or maybe record a video of your technique for JoVE or Khan Academy or just YouTube.

Another thing to note is that these results don't necessarily mean that AI researchers are the most influential researchers or the most numerous, just the best at being counted. To make sure you're counted properly, be sure to list your subdiscipline on your profile, or if you can't find your exact one, pick the closest one, like the machine learning folks did with the AI subdiscipline. We recognize that almost everyone does interdisciplinary work these days. We're working on a more flexible discipline assignment system, but for now, just pick your favorite one.

These stats were derived from the entire readership history, so they do reflect a founder effect to some degree. Limiting the analysis to the previous 3 months would probably reveal different trends, and comparing month-to-month changes could reveal rising stars.
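For the curious, here is one way the leaderboard idea from the conclusions might look. This is only a sketch under stated assumptions: the endpoint path, query parameters, and response fields are guesses for illustration, the DOIs are placeholders, and the real API requires OAuth, so check the current Mendeley API documentation before building on any of it.

```python
import requests

# Hypothetical papers for a hypothetical research group, keyed by DOI (placeholders).
GROUP_PAPERS = {
    "Alice": ["10.1000/example.doi.1", "10.1000/example.doi.2"],
    "Bob":   ["10.1000/example.doi.3"],
}

API_URL = "https://api.mendeley.com/catalog"   # assumed endpoint; verify in the API docs
ACCESS_TOKEN = "YOUR_OAUTH_TOKEN"               # the real API requires OAuth

def reader_count(doi):
    # Assumed request shape and response field; verify against the API docs.
    resp = requests.get(
        API_URL,
        params={"doi": doi, "view": "stats"},
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json()
    return results[0].get("reader_count", 0) if results else 0

def leaderboard(group_papers):
    # Sum readers per group member and sort, most-read first.
    totals = {name: sum(reader_count(doi) for doi in dois)
              for name, dois in group_papers.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for name, readers in leaderboard(GROUP_PAPERS):
        print(f"{name}: {readers} readers")
```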