
A Collection of Computer Science English Paper Abstracts [1]


A word up front: I am the treasure dispenser behind 程序员宝藏 (Programmer's Treasure), dedicated to creating original, practical content. I love technology, open source, and sharing; my series on computer science interview questions and on core computer science fundamentals have been well received, and more high-quality original series are on the way. If you are interested in computer science fundamentals, programming, and related topics, follow me and let's grow together!

A personal recommendation: if you find the CSDN layout plain, you are welcome to visit my personal official account 程序员宝藏 (the name says it all!) to read the full series with key points highlighted in red and nicer formatting (and if it isn't, come ask me for a red packet). Reference post: TCP Three-Way Handshake and Four-Way Wave.

Many readers have asked me for a PDF version, so I have compiled all of my original articles into a print-ready PDF; reply with the keyword 宝藏 in the official account to get it for free and read it at your leisure.

Reference article for this series:

Computer Science Professional English (a must for improving your technical English)

Contents

I. Artificial Intelligence
1. Security and Privacy Risks in Artificial Intelligence Systems
2. Research Status and Development Trends of Smart Education
3. A Review and Outlook of Artificial Intelligence Chips

II. Machine Learning
1. A Survey of Machine-Learning-Based Intelligent Routing Algorithms
2. A Survey of Coding Techniques for Improving the Performance of Large-Scale Distributed Machine Learning
3. A Survey of Recent Advances in Bayesian Machine Learning

I. Artificial Intelligence

1. Security and Privacy Risks in Artificial Intelligence Systems

Security and Privacy Risks in Artificial Intelligence Systems

Abstract (Chinese version, translated): Humanity is experiencing a wave of artificial intelligence driven by deep learning, which has brought enormous technological innovation to production and daily life. In certain specific domains, AI has already demonstrated capabilities that match or even exceed those of humans. However, most prior machine learning theory did not consider open or even adversarial operating environments, and the security and privacy problems of AI systems are gradually being exposed. By reviewing related work on AI system security, this paper reveals the security and privacy risks latent in AI systems. It first introduces a security threat model covering attack surfaces, attack capabilities and attack goals. It then analyzes the corresponding security and privacy risks and countermeasures for four key stages of an AI system: data input (sensors), data preprocessing, the machine learning model and the output. Finally, it discusses future trends in research on AI system security.

Keywords (Chinese version, translated): intelligent system security, system security, data processing, artificial intelligence, deep learning

Abstract: Human society is witnessing a wave of artificial intelligence (AI) driven by deep learning techniques, which is bringing a technological revolution to human production and life. In some specific fields, AI has achieved or even surpassed human-level performance. However, most previous machine learning theories have not considered open or even adversarial environments, and security and privacy issues are gradually emerging. Besides insecure code implementations, biased models, adversarial examples and sensor spoofing can also lead to security risks that are hard to discover with traditional security analysis tools. This paper reviews previous work on AI system security and privacy, revealing the potential security and privacy risks. Firstly, we introduce a threat model of AI systems, including attack surfaces, attack capabilities and attack goals. Secondly, we analyze security risks and countermeasures in terms of four critical components of AI systems: data input (sensors), data preprocessing, the machine learning model and output. Finally, we discuss future research trends on the security of AI systems. The aim of this paper is to raise the attention of the computer security community and the AI community to the security and privacy of AI systems, so that they can work together to unlock AI's potential to build a bright future.

Key words: intelligent system security, system security, data processing, artificial intelligence (AI), deep learning
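
The adversarial-example risk mentioned in the abstract can be made concrete with a small sketch. The following is my own minimal illustration (not code from the paper) of a fast-gradient-sign-style perturbation against a toy logistic-regression classifier; the weights, input and attack budget are all made up for demonstration:

```python
import numpy as np

# Toy logistic-regression "model"; weights are arbitrary for illustration.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict_proba(x):
    """Probability that x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A benign input confidently classified as class 1.
x = np.array([1.0, -0.5, 0.5])
print("clean prediction:", predict_proba(x))        # about 0.94 -> class 1

# FGSM-style attack: for logistic regression with true label y=1, the
# gradient of the cross-entropy loss w.r.t. the input is -(1 - p) * w,
# so stepping along sign(-w) pushes the score toward class 0.
epsilon = 0.8                       # attack budget (L_inf norm)
x_adv = x + epsilon * np.sign(-w)   # perturb each feature by +/- epsilon
print("adversarial prediction:", predict_proba(x_adv))  # drops below 0.5
```

Even this toy case shows why such risks are hard to catch with traditional security analysis: the perturbed input stays numerically close to the original, yet the model's decision flips.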

2. Research Status and Development Trends of Smart Education

The State of the Art and Future Tendency of Smart Education

Abstract (Chinese version, translated): Smart education, supported by information technologies such as big data analytics and artificial intelligence, has become the trend in the development of educational informatization and a popular research direction in academia. This paper first surveys and analyzes mining techniques for two kinds of educational big data: teaching behavior and massive knowledge resources. It then focuses on four key technologies in the teaching stages of guidance, recommendation, Q&A and evaluation, namely learning-path generation and navigation, learner profiling and personalized recommendation, intelligent online Q&A and fine-grained assessment, and compares the mainstream smart education platforms at home and abroad. Finally, it discusses the limitations of current smart education research and identifies research directions such as online intelligent learning assistants, intelligent learner assessment, networked group cognition and causal discovery.

Keywords (Chinese version, translated): smart education, educational big data, big data analytics, artificial intelligence, knowledge graph

Abstract: At present, the smart education pattern supported by information technologies such as big data analytics and artificial intelligence has become the trend in the development of education informatization, and a popular research direction in academia. Firstly, we investigate and analyze the data mining technologies for two kinds of educational big data: teaching behavior and massive knowledge resources. Secondly, we focus on four vital technologies in the teaching stages of learning guidance, recommendation, Q&A and evaluation, namely learning path generation and navigation, learner profiling and personalized recommendation, online smart Q&A and fine-grained evaluation. Then we compare and analyze the mainstream smart education platforms at home and abroad. Finally, we discuss the limitations of current smart education research and summarize research and development directions such as online smart learning assistants, smart learner assessment, networked group cognition and causality discovery.

Key words: smart education, educational big data, big data analytics, artificial intelligence, knowledge graph
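
As a rough illustration of the "learner profiling and personalized recommendation" technology surveyed above, here is a minimal content-based sketch of my own (the topic dimensions, resource names and profile values are all invented): learning resources are ranked by cosine similarity between a learner's interest profile and each resource's topic vector.

```python
import numpy as np

# Topic dimensions: [algebra, geometry, programming, statistics]
resources = {
    "linear_equations_video": np.array([1.0, 0.0, 0.0, 0.2]),
    "python_intro_course":    np.array([0.1, 0.0, 1.0, 0.3]),
    "probability_quiz":       np.array([0.2, 0.0, 0.1, 1.0]),
}

# Learner profile built from past behaviour (hypothetical values).
learner = np.array([0.3, 0.1, 0.9, 0.4])

def cosine(a, b):
    """Cosine similarity between two non-zero vectors."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Rank resources by similarity to the learner profile.
ranked = sorted(resources.items(),
                key=lambda kv: cosine(learner, kv[1]),
                reverse=True)
for name, vec in ranked:
    print(f"{name}: {cosine(learner, vec):.3f}")
```

Real systems layer collaborative filtering, knowledge-graph constraints and sequence models on top of this, but the profile-versus-resource matching shown here is the basic step.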

3. A Review and Outlook of Artificial Intelligence Chips

A Survey of Artificial Intelligence Chip

Abstract (Chinese version, translated): In recent years, artificial intelligence technologies have been widely applied in many commercial fields, and with the attention and investment of researchers and companies around the world, they have proved their irreplaceable value in traditional speech recognition, image recognition, search/recommendation engines and other areas. At the same time, however, the computational demands of AI have grown dramatically, posing a huge challenge to the computing power of hardware. Starting from the basic algorithms of artificial intelligence and their application algorithms, this paper describes their computation patterns and characteristics. It then introduces recent development directions of AI chips and analyzes the main architectures of current intelligent chips. It further highlights the research results of the DianNao series of processors, the latest and most advanced work in the field of intelligent chips, whose architectures and designs target different technical features, including deep learning algorithms, large-scale deep learning algorithms, machine learning algorithms, deep learning algorithms for two-dimensional images, and sparse deep learning algorithms. In addition, a complete and efficient instruction set architecture, Cambricon, has been proposed and designed. Finally, the development directions of artificial neural network technology are analyzed from multiple angles, including network structure, computation characteristics and hardware devices, and possible directions for future work are predicted on that basis.

Keywords (Chinese version, translated): artificial intelligence, accelerators, FPGA, ASIC, weight quantization, sparse pruning

Abstract: In recent years, artificial intelligence (AI) technologies have been widely used in many commercial fields. With the attention and investment of researchers and companies around the world, AI technologies have proved their irreplaceable value in traditional speech recognition, image recognition, search/recommendation engines and other fields. At the same time, however, the amount of computation required by AI technologies has increased dramatically, which poses a huge challenge to the computing power of hardware. Starting from the basic algorithms of AI and their application algorithms, we first describe their operation modes and operation characteristics. Then, we introduce the recent development directions of AI chips and analyze the main architectures of current AI chips. Furthermore, we highlight the research on the DianNao series of processors, the latest and most advanced work in the field of AI chips. Their architectures and designs are proposed for different technical features, including deep learning algorithms, large-scale deep learning algorithms, machine learning algorithms, deep learning algorithms for processing two-dimensional images, and sparse deep learning algorithms. In addition, a complete and efficient instruction set architecture (ISA) for deep learning, Cambricon, has been proposed. Finally, we analyze the development directions of artificial neural network technologies from various angles, including network structures, operation characteristics and hardware devices, and on that basis we predict the possible directions of future work.

Key words: artificial intelligence, accelerators, FPGA, ASIC, weight quantization, sparse pruning
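
The keywords "weight quantization" and "sparse pruning" name two standard ways of shrinking a network so it fits cheaper, lower-power hardware. The sketch below is my own toy example (not the DianNao or Cambricon design): weights below a magnitude threshold are zeroed out, and the survivors are linearly quantized to signed 8-bit integers.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.5, size=8).astype(np.float32)

# Sparse pruning: zero out weights whose magnitude is below a threshold.
threshold = 0.2
pruned = np.where(np.abs(weights) < threshold, 0.0, weights)

# Symmetric linear quantization to signed 8-bit integers.
scale = np.abs(pruned).max() / 127.0
q = np.round(pruned / scale).astype(np.int8)      # stored on-chip as int8
dequantized = q.astype(np.float32) * scale        # used at compute time

print("original   :", np.round(weights, 3))
print("pruned     :", np.round(pruned, 3))
print("int8 codes :", q)
print("recovered  :", np.round(dequantized, 3))
```

Accelerators exploit exactly these two properties: int8 multipliers are far smaller than float32 ones, and zeroed weights can be skipped entirely by a sparse datapath.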

II. Machine Learning

1. A Survey of Machine-Learning-Based Intelligent Routing Algorithms

A Survey on Machine Learning Based Routing Algorithms

Abstract (Chinese version, translated): The rapid development of the Internet has spawned many new network applications, including real-time multimedia streaming and remote cloud services. Existing best-effort routing and forwarding algorithms struggle to meet the diverse quality-of-service requirements these applications bring. With the great success of machine learning in games, computer vision and natural language processing in recent years, many researchers have tried to design intelligent routing algorithms based on machine learning. Compared with traditional distributed routing algorithms driven by mathematical models, machine-learning-based routing algorithms are usually data-driven, which allows them to adapt to dynamically changing network environments and to diverse performance-metric optimization requirements. Data-driven intelligent routing algorithms have already shown great potential and are promising candidates for an important part of the next-generation Internet. However, research on intelligent routing is still at an early stage. This paper first reviews existing work on data-driven intelligent routing algorithms, presenting their core ideas and application scenarios and analyzing their strengths and weaknesses. The analysis shows that existing research mainly targets algorithmic principles, and these routing algorithms are still far from deployment in real environments. The paper then analyzes training and deployment schemes for intelligent routing in different real-world scenarios and proposes two reasonable training and deployment frameworks that allow intelligent routing algorithms to be deployed in real settings at low cost and with high reliability. Finally, it analyzes the opportunities and challenges facing machine-learning-based intelligent routing and gives future research directions.

Keywords (Chinese version, translated): machine learning, data-driven routing algorithms, deep learning, reinforcement learning, quality of service

Abstract: The rapid development of the Internet has given rise to many new applications, including real-time multimedia services, remote cloud services, etc. These applications require various types of service quality, which is a significant challenge for current best-effort routing algorithms. Following the recent huge success of applying machine learning to games, computer vision and natural language processing, many researchers have tried to design "smart" routing algorithms based on machine learning methods. In contrast to traditional model-based, decentralized routing algorithms (e.g. OSPF), machine learning based routing algorithms are usually data-driven, which allows them to adapt to dynamically changing network environments and accommodate different service quality requirements. Data-driven routing algorithms based on machine learning have shown great potential to become an important part of the next-generation network. However, research on intelligent routing is still at a very early stage. In this paper, we first introduce current research on machine learning based data-driven routing algorithms, presenting the main ideas, application scenarios, and pros and cons of these works. Our analysis shows that current research mainly addresses algorithmic principles and is still far from deployment in real scenarios. We then analyze different training and deployment methods for machine learning based routing algorithms in real scenarios and propose two reasonable approaches to train and deploy such routing algorithms with low overhead and high reliability. Finally, we discuss the opportunities and challenges and point out several potential research directions for machine learning based routing algorithms in the future.

Key words: machine learning, data driven routing algorithm, deep learning, reinforcement learning, quality of service (QoS)
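
To make the "data-driven routing" idea concrete, here is a minimal tabular Q-learning sketch of my own (a toy, not an algorithm from the survey) that learns next-hop choices on a four-node topology with fixed link delays; reinforcement learning is one of the keyword techniques listed above.

```python
import random

# Toy topology: neighbors and link delays (ms); the destination is node 3.
links = {0: {1: 5.0, 2: 1.0}, 1: {3: 1.0}, 2: {3: 10.0}, 3: {}}
DEST, ALPHA, GAMMA, EPS = 3, 0.1, 0.9, 0.2

# Q[node][next_hop] estimates the (negative) delay to the destination.
Q = {n: {nh: 0.0 for nh in nbrs} for n, nbrs in links.items()}

def choose(node):
    if random.random() < EPS:                    # explore
        return random.choice(list(links[node]))
    return max(Q[node], key=Q[node].get)         # exploit best known hop

for _ in range(2000):
    node = 0
    while node != DEST:
        nh = choose(node)
        reward = -links[node][nh]                # penalize link delay
        future = max(Q[nh].values()) if Q[nh] else 0.0
        Q[node][nh] += ALPHA * (reward + GAMMA * future - Q[node][nh])
        node = nh

print(Q[0])  # next hop 1 (6 ms end-to-end) should beat next hop 2 (11 ms)
```

On this topology the agent learns to prefer next hop 1 even though the direct link to node 2 looks cheaper, which is exactly the kind of end-to-end objective that best-effort, hop-by-hop heuristics miss.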

2. A Survey of Coding Techniques for Improving the Performance of Large-Scale Distributed Machine Learning

Coding-Based Performance Improvement of Distributed Machine Learning in Large-Scale Clusters

Abstract (Chinese version, translated): Distributed computing systems have attracted wide attention in recent years because they can provide large-scale computing power for big data analytics. In a distributed computing system, some computing nodes slow down in a random fashion due to various factors, which increases the running time of machine learning algorithms on the cluster; such nodes are called stragglers. This paper reviews the progress of research on using coding techniques to solve these problems and improve the performance of large-scale machine learning clusters. It first introduces the background of coding techniques and large-scale machine learning clusters; it then groups the related work by application scenario into matrix multiplication, gradient computation, data shuffling and other applications, and analyzes each in turn; finally, it summarizes the difficulties of the relevant coding techniques and looks ahead to future research trends.

Keywords (Chinese version, translated): coding techniques, machine learning, distributed computing, straggler tolerance, performance optimization

Abstract: With the growth of models and data sets, running large-scale machine learning algorithms in distributed clusters has become common practice. This method divides the whole machine learning algorithm and its training data into several tasks, each of which runs on a different worker node. The results of all tasks are then combined by the master node to obtain the result of the whole algorithm. When there are a large number of nodes in a distributed cluster, some worker nodes, called stragglers, will inevitably run slower than others due to resource contention and other reasons, which makes the completion time of tasks on these nodes significantly higher than on other nodes. Compared with running replicated tasks on multiple nodes, coded computing makes efficient use of computation and storage redundancy to alleviate the effect of stragglers and communication bottlenecks in large-scale machine learning clusters. This paper introduces the research progress on solving the straggler issue and improving the performance of large-scale machine learning clusters based on coding technology. Firstly, we introduce the background of coding technology and large-scale machine learning clusters. Secondly, we divide the related research into several categories according to application scenario: matrix multiplication, gradient computing, data shuffling and some other applications. Finally, we summarize the difficulties of applying coding technology in large-scale machine learning clusters and discuss future research trends.
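
A tiny sketch of the coded-computation idea behind straggler tolerance (my own toy example with made-up sizes, not a scheme from any particular paper): a matrix-vector product A·x is split across three workers with a simple (3, 2) MDS-style code, so the full result can be recovered from any two of them.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(0, 5, size=(4, 3)).astype(float)
x = rng.integers(0, 5, size=3).astype(float)

# Split A into two halves and add a parity block: any 2 of 3 tasks suffice.
A1, A2 = A[:2], A[2:]
tasks = {"w1": A1, "w2": A2, "w3": A1 + A2}
results = {w: blk @ x for w, blk in tasks.items()}   # done by 3 workers

# Suppose worker w2 straggles; decode its share from the other two results.
y1, y3 = results["w1"], results["w3"]
y2 = y3 - y1                                         # (A1+A2)x - A1x = A2x
recovered = np.concatenate([y1, y2])

assert np.allclose(recovered, A @ x)
print("recovered A @ x without waiting for w2:", recovered)
```

The same principle, with more sophisticated codes, is what the surveyed work applies to matrix multiplication, gradient computation and data shuffling.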

3. A Survey of Recent Advances in Bayesian Machine Learning

Recent Advances in Bayesian Machine Learning

Abstract (Chinese version, translated): With the rapid growth of big data, machine learning grounded in probability and statistics has attracted great attention from industry and academia in recent years and has achieved many important successes in vision, speech, natural language, biology and other fields. Bayesian methods in particular have developed rapidly over the past twenty-odd years and have become a very important class of machine learning methods. This paper summarizes recent advances in Bayesian methods for machine learning, covering the basic theory and methods of Bayesian machine learning, nonparametric Bayesian methods and commonly used inference methods, and regularized Bayesian methods. Finally, it briefly introduces large-scale Bayesian learning and summarizes and looks ahead to its development trends.

Keywords (Chinese version, translated): Bayesian machine learning, nonparametric methods, regularized methods, learning with big data, Bayesian learning for big data

Abstract: With the fast growth of big data, statistical machine learning has attracted tremendous attention from both industry and academia, with many successful applications in vision, speech, natural language processing, and biology. In particular, the last decades have seen the fast development of Bayesian machine learning, which now represents a very important class of techniques. In this article, we provide an overview of the recent advances in Bayesian machine learning, including the basics of Bayesian machine learning theory and methods, nonparametric Bayesian methods and inference algorithms, and regularized Bayesian inference. Finally, we also highlight the challenges and recent progress of large-scale Bayesian learning for big data, and discuss some future directions.

Key words: Bayesian machine learning, nonparametric methods, regularized methods, learning with big data, big Bayesian learning
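
To ground the "basics of Bayesian machine learning" mentioned above, here is a tiny conjugate-inference sketch (a standard textbook example, not taken from the survey): a Beta prior over a coin's bias is updated in closed form after observing Bernoulli trials, the simplest case of the exact inference that nonparametric and regularized Bayesian methods generalize.

```python
import numpy as np

# Prior belief about the coin's bias theta: Beta(alpha, beta).
alpha, beta = 2.0, 2.0

# Observed flips (1 = heads); the data are made up for illustration.
flips = np.array([1, 1, 0, 1, 1, 0, 1, 1])
heads = int(flips.sum())
tails = int(len(flips) - heads)

# Conjugacy: Beta prior + Bernoulli likelihood -> Beta posterior.
alpha_post = alpha + heads
beta_post = beta + tails

posterior_mean = alpha_post / (alpha_post + beta_post)
print(f"posterior: Beta({alpha_post}, {beta_post}), mean = {posterior_mean:.3f}")
```

Most models of practical interest lack such closed-form posteriors, which is why the survey devotes so much space to approximate inference algorithms and to scaling them to big data.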


