How can big data technology deliver its best results? When big data is truly better

Take advantage of scale when past experience indicates that greater analytic value will result. But big data is not a hammer, and not every problem is a nail waiting to be struck.

Many people assume that with big data solutions, bigger is always better. In practice, people approach the "bigger is better" question from several different philosophical perspectives, which I would characterize as follows:

  • Faith: This is the notion that greater volume, velocity, and/or variety of data will somehow always deliver fresher insights, and that this is the core value of big data analytics. If we fail to find those insights in practice, this mindset holds, it is only because we are not trying hard enough, are not smart enough, or are not using the right tools and methods.
  • Fetish: This is the notion that the sheer bigness of data is a value in its own right, regardless of whether any specific insights can be derived from it. By this way of thinking, if we evaluate big data solely by the specific business applications it supports, we are out of touch with the modern need for data scientists to store data indiscriminately in data lakes to support future exploration.
  • Burden: This is the notion that the bigness of data is neither inherently good nor bad; it is simply a fact of life whose unfortunate consequence is to strain the storage and processing capacity of existing databases, thereby necessitating new platforms such as Hadoop. If we cannot keep pace with this relentless growth in data, this perspective holds, the core business imperative is to move to a new type of database.
  • Opportunity: This is, in my view, the right way to look at big data. Its focus is on extracting unprecedented insights more effectively and efficiently as data scales to new heights, streams in faster, and originates from an ever-growing range of sources and formats. It does not treat big data as a matter of faith or fetish, because it acknowledges that many insights can still be discovered at smaller scales. Nor does it treat data's scale as a burden, but simply as another technical challenge to be addressed through new database platforms, tools, and practices.

Last year, I blogged about the hardcore use cases for big data, in a discussion framed entirely from the "opportunity" perspective. Later that year, I observed that big data's core "bigness" value derives from the ability of incremental content to reveal incremental context. More context is better than less when you are analyzing data to ascertain its full significance. Likewise, more content is better than less when you are trying to account for all of the variables, relationships, and patterns in your problem domain at a finer degree of granularity. The bottom line: more context plus more content usually means more data.

Big data's value also lies in its ability to correct errors that are more likely to crop up at smaller scales. In that same post, I cited a third-party data scientist who observed that the smaller the training set, the more likely several common modeling risks become. For starters, at smaller scales you are more likely to overlook key predictive variables. You are also more likely to skew the model toward nonrepresentative samples. In addition, you are more likely to find spurious correlations that would disappear if you had a more complete data set revealing the underlying relationships at work.
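To make the last of those risks concrete, here is a minimal simulation of my own (not from the post): screen a random target against a pile of unrelated noise features, and the strongest correlation found purely by chance shrinks as the sample grows.

```python
import numpy as np

rng = np.random.default_rng(42)

def max_spurious_correlation(n_rows, n_noise_features=200):
    """Correlate a random target against many unrelated noise features
    and report the strongest correlation found purely by chance."""
    target = rng.normal(size=n_rows)
    noise = rng.normal(size=(n_rows, n_noise_features))
    return max(abs(np.corrcoef(noise[:, j], target)[0, 1])
               for j in range(n_noise_features))

for n in (30, 300, 30_000):
    print(f"rows={n:>6,}  strongest chance correlation ~ "
          f"{max_spurious_correlation(n):.2f}")

# The "best" spurious correlation shrinks as rows grow: roughly 0.5 at
# 30 rows, but only a few hundredths at 30,000 rows.
```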

Scale can be beautiful

Everyone recognizes that some types of data and some use cases are more conducive than others to yielding fresh insights at scale.

In that vein, I recently came across an excellent article that examines one particular category of data, sparse, fine-grained behavioral data, for which predictive performance often improves with scale. The authors, Junqué de Fortuny, Martens, and Provost, note that "a key aspect of such datasets is that they are sparse: for any given instance, the vast majority of the features have a value of zero or 'not present.'"
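As a rough, hypothetical illustration of what that sparsity looks like (the numbers below are invented, not taken from the paper), a user-by-behavior matrix built from a few dozen observed actions per user is almost entirely zeros:

```python
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)

n_users, n_behaviors = 10_000, 50_000   # e.g. users x (pages, ads, merchants)
visits_per_user = 20                    # each user exhibits only a few behaviors

rows, cols = [], []
for user in range(n_users):
    seen = rng.choice(n_behaviors, size=visits_per_user, replace=False)
    rows.extend([user] * visits_per_user)
    cols.extend(seen.tolist())

# One row per user, one column per possible behavior; a 1 marks "observed".
X = csr_matrix((np.ones(len(rows)), (rows, cols)),
               shape=(n_users, n_behaviors))

density = X.nnz / (n_users * n_behaviors)
print(f"non-zero entries: {X.nnz:,}  density: {density:.4%}")
# -> density is about 0.04%: the vast majority of features are "not present"
```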

What is most noteworthy here (and the authors back up their discussion with ample research citations) is that this type of data lies at the heart of many big data applications with a customer-analytics focus. Social media behavioral data fits this description, as do web browsing behavioral data, mobile behavioral data, advertising response behavioral data, natural language behavioral data, and so on.

"Indeed," the authors state, "for many of the most common business applications of predictive analytics, such as targeted marketing in banking and telecommunications, credit scoring, and attrition management, the data used for predictive analytics are very similar ... the features tend to be demographic, geographic, and psychographic characteristics of individuals, as well as statistics summarizing particular behaviors, such as their prior purchase behavior with the firm."

As for the core reason why bigger behavioral data sets usually analyze better than smaller ones, the authors put it simply: "Certain telling behaviors may not be observed in sufficient numbers without massive data." That is because, in a sparse data set, no individual whose behavior is being recorded is likely to exhibit more than a limited range of behaviors. But when you look across an entire population, you are likely to observe every specific type of behavior expressed at least once, and perhaps many times within particular niches. At smaller scales, with fewer subjects and fewer observed behavioral features, you are likely to miss much of this richness.
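A back-of-the-envelope calculation (my own, not the authors') shows why those niche behaviors need scale: if a telling behavior occurs with probability p per observed individual, the chance of seeing it at least once in a sample of N individuals is 1 - (1 - p)^N, which only approaches certainty at very large N.

```python
# Chance of observing a behavior with per-person rate p at least once among N people.
def p_seen_at_least_once(p, n):
    return 1 - (1 - p) ** n

for n in (1_000, 100_000, 10_000_000):
    print(f"N={n:>10,}  P(seen at least once) = {p_seen_at_least_once(1e-5, n):.3f}")
# N=     1,000  P(seen at least once) = 0.010
# N=   100,000  P(seen at least once) = 0.632
# N=10,000,000  P(seen at least once) = 1.000
```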

Predictive models thrive on the richness of the source behavioral data sets, which lets them drive more accurate predictions across a wider range of future scenarios. Hence, bigger usually is better.

When bigger equals fuzzier

Nonetheless, the authors also note scenarios where this assumption falls apart, and it all comes down to the predictive value of specific behavioral features. Essentially, a trade-off underlies predictive behavioral modeling.

Each incremental behavioral feature added to a predictive model should be relevant enough to the prediction being made to boost the model's learning yield and predictive power by more than it widens the variance, and hence the overfitting and prediction error, that tends to come with ever-larger feature sets. As the authors put it: "The large number of irrelevant features simply increases variance and the opportunity to over-fit, without the balancing opportunity of learning better models (presuming that one can actually select the right subset)."
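One quick way to see that trade-off is to pad a small training set with pure-noise features and watch cross-validated accuracy fall. The sketch below is my own illustration using scikit-learn's synthetic-data helpers, not an experiment from the paper:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)

# Small training set with a handful of genuinely predictive features.
X, y = make_classification(n_samples=200, n_features=10, n_informative=10,
                           n_redundant=0, random_state=7)

# The same data padded with 2,000 pure-noise features.
X_padded = np.hstack([X, rng.normal(size=(X.shape[0], 2000))])

model = LogisticRegression(max_iter=2000)
for name, data in (("informative only", X), ("with 2,000 noise features", X_padded)):
    score = cross_val_score(model, data, y, cv=5).mean()
    print(f"{name:<28} mean CV accuracy = {score:.2f}")

# The padded model typically scores lower: the extra features add variance
# and overfitting risk without adding signal.
```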

Clearly, bigger is not better when bigness gets in the way of deriving predictive insights. Nobody wants a big data analytics effort to become a victim of its own scale. In such cases, your data scientists have to be smart enough to know when to scale a model back to the core set of features best suited to the analytic task at hand.
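What "scaling back" looks like in practice varies by team; one simple, common approach (my illustration, not something the article prescribes) is univariate feature selection ahead of the model, refit inside each cross-validation fold so the pruning itself does not overfit:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)

# Same hypothetical setup as above: a few real features drowned in noise.
X, y = make_classification(n_samples=200, n_features=10, n_informative=10,
                           n_redundant=0, random_state=7)
X_padded = np.hstack([X, rng.normal(size=(X.shape[0], 2000))])

# Keep only the 10 features most associated with the target, then fit.
slim_model = make_pipeline(
    SelectKBest(score_func=f_classif, k=10),
    LogisticRegression(max_iter=2000),
)
print(f"pruned model, mean CV accuracy = "
      f"{cross_val_score(slim_model, X_padded, y, cv=5).mean():.2f}")
```

Putting the selector inside the pipeline matters: the feature subset is re-learned on each training fold, so the reported score is not inflated by peeking at the validation data.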

Original article: http://www.infoworld.com/d/big-data/when-big-data-truly-better-249737 | Translation via 51CTO


