168大数据

Title: How can big data technology perform at its best? When big data is truly better

Author: 乔帮主    Time: 2014-9-13 22:19
Title: How can big data technology perform at its best? When big data is truly better

Take advantage of scale when past experience indicates that greater analytic value will result. But big data is not a hammer, and not every problem is a nail.

Many people assume that with big data, bigger is always better. In practice, people tend to approach the "bigger is better" question from different philosophical perspectives, which I would characterize as follows:

Faith: This is the notion that, somehow, greater volumes, velocities, and/or varieties of data will always deliver fresher insights, which amounts to the core value of big data analytics. If we are unable to find those insights, according to this perspective, it is only because we are not trying hard enough, are not smart enough, or are not using the right tools and approaches.

Fetish: This is the notion that the sheer bigness of data is a value in its own right, regardless of whether we derive any specific insights from it. If we evaluate the utility of big data solely on the specific business applications it supports, according to this outlook, we are not in tune with the modern need of data scientists to store data indiscriminately in data lakes to support future explorations.

Burden: This is the notion that the bigness of data is neither better nor worse in itself; it is simply a fact of life that strains the storage and processing capacity of existing databases and thereby necessitates new platforms (such as Hadoop). On this view, the core business imperative is to change over to a new type of database that can keep up with all this burdensome new data.

Opportunity: This is, in my opinion, the right approach to big data. It focuses on extracting unprecedented insights more effectively and efficiently as the data scales to new heights, streams in faster, and originates from an ever-growing range of sources and formats. It does not treat big data as a faith or a fetish, because it acknowledges that many differentiated insights can still be discovered at smaller scales. Nor does it treat data's scale as a burden, but simply as a challenge to be addressed through new database platforms, tooling, and practices.

Last year, I blogged about the hardcore use cases for big data in a discussion that focused exclusively on the "opportunity" side of the equation. Later in the year, I observed that big data's core "bigness" value derives from the ability of incremental content to reveal incremental context. More context is better than less when you are analyzing data to ascertain its full significance. Likewise, more content is better than less when you are trying to identify all of the variables, relationships, and patterns in your problem domain at a finer degree of granularity. The bottom line: more context plus more content usually equals more data.

Big data's value also lies in its ability to correct errors that are more likely to crop up at smaller scales. In that same post, I cited a third-party data scientist who observed that having less data in a training set exposes you to several modeling risks. For starters, at smaller scales you are more likely to overlook key predictive variables. You are also more likely to skew the model toward nonrepresentative samples. In addition, you are more likely to find spurious correlations that would disappear if you had a more complete data set revealing the underlying relationships at work.
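
To make the spurious-correlation risk concrete, here is a minimal sketch, not part of the original article, using numpy and made-up numbers: a target is generated independently of a few hundred random "predictors," yet at a small sample size the strongest chance correlation looks like real signal, and it all but vanishes once the sample grows.

# Illustration only: none of these features actually predicts y, so any observed
# correlation is spurious and should shrink toward zero as the sample size grows.
import numpy as np

rng = np.random.default_rng(0)
n_features = 200

def max_spurious_corr(n_rows):
    X = rng.normal(size=(n_rows, n_features))   # random, uninformative "predictors"
    y = rng.normal(size=n_rows)                  # target generated independently of X
    return max(abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features))

for n in (30, 300, 30_000):
    print(f"n={n:>6}: strongest chance correlation = {max_spurious_corr(n):.3f}")
# At n=30 the best-looking feature typically shows a correlation near 0.5;
# at n=30,000 the same search turns up nothing stronger than a few hundredths.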

Scale can be beautiful

Everybody recognizes that some types of data and some use cases are more conducive than others to yielding fresh insights at scale.

In that vein, I recently came across a great article that spells out one specific category of data, namely sparse, fine-grained behavioral data, on which predictive performance often improves with scale. The authors, Junqué de Fortuny, Martens, and Provost, state that "a key aspect of such datasets is that they are sparse: For any given instance, the vast majority of the features have a value of zero or 'not present.'"

What is most noteworthy about this (and the authors support their discussion by citing ample research) is that this type of data is at the heart of many big data applications with a customer-analytics focus. Social media behavioral data fits this description, as do Web browsing behavioral data, mobile behavioral data, advertising response behavioral data, natural language behavioral data, and so on.
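
As a hypothetical illustration of what such sparse, fine-grained behavioral data looks like (not from the original article; the user count, behavior count, and events-per-user figure are invented), the sketch below uses scipy.sparse to build a user-by-behavior matrix in which well over 99.9 percent of the entries are zero:

# Rows are users, columns are individual behaviors (a page visited, an item bought).
# Each user exhibits only a handful of the tens of thousands of possible behaviors,
# so the resulting matrix is overwhelmingly zeros, i.e. sparse.
import numpy as np
from scipy import sparse

rng = np.random.default_rng(1)
n_users, n_behaviors = 100_000, 50_000
events_per_user = 20

rows = np.repeat(np.arange(n_users), events_per_user)
cols = rng.integers(0, n_behaviors, size=rows.size)
vals = np.ones(rows.size, dtype=np.int8)

X = sparse.csr_matrix((vals, (rows, cols)), shape=(n_users, n_behaviors))
print(f"density: {X.nnz / (n_users * n_behaviors):.6f}")   # roughly 0.0004, i.e. ~99.96% zeros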

"Indeed," the authors state, "for many of the most common business applications of predictive analytics, such as targeted marketing in banking and telecommunications, credit scoring, and attrition management, the data used for predictive analytics are very similar … [T]he features tend to be demographic, geographic, and psychographic characteristics of individuals, as well as statistics summarizing particular behaviors, such as their prior purchase behavior with the firm."

The core reason why bigger behavioral data sets are usually better is simple, the authors state: "Certain telling behaviors may not be observed in sufficient numbers without massive data." That is because, in a sparse data set, no individual whose behavior is being recorded is likely to exhibit more than a limited range of behaviors. But when you look across an entire population, you are likely to observe every specific type of behavior expressed at least once, and perhaps numerous times within specific niches. At smaller data scales, looking at fewer subjects and observing fewer behavioral features, you are likely to miss much of this richness.
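
A toy simulation, not from the original article and with purely illustrative numbers, makes the point about scale: a niche behavior exhibited by roughly 1 in 10,000 people is essentially invisible in a small sample, but it is observed thousands of times across a very large population, which is what lets it carry predictive weight.

# Count how often a rare behavior is actually observed at different sample sizes.
import numpy as np

rng = np.random.default_rng(2)
p_niche = 1e-4                      # 0.01% of people exhibit this telling behavior

for n_people in (1_000, 100_000, 10_000_000):
    observed = rng.binomial(n_people, p_niche)
    print(f"population sampled: {n_people:>10,}  behavior observed: {observed:,} times")
# Expected counts are 0.1, 10, and 1,000: with a thousand people the behavior usually
# never shows up at all, so no model trained on that sample can learn from it.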

Predictive models thrive on the richness of the source behavioral data sets, which is what lets them drive more accurate predictions across a wider range of future scenarios. Hence, bigger usually is better.

When bigger equals fuzzier

Nonetheless, the authors also note scenarios where this assumption falls apart, and it all has to do with the predictive value of specific behavioral features. Essentially, a trade-off underlies predictive behavioral modeling.

Each incremental behavioral feature added to a predictive model should be relevant enough to the prediction being made that it boosts the model's learning yield and predictive power by more than enough to overcome the ever-wider variance, and hence the over-fitting and predictive error, that tends to come with ever-larger feature sets. As the authors state: "The large number of irrelevant features simply increases variance and the opportunity to over-fit, without the balancing opportunity of learning better models (presuming that one can actually select the right subset)."
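
That trade-off can be sketched with a small, made-up experiment (it uses scikit-learn, which the authors do not reference, and an invented data-generating process): on a fixed-size training set, piling on thousands of irrelevant features drags down out-of-sample accuracy for a nearly unregularized model, while an L1-penalized model that can zero out most coefficients, in effect selecting the right subset, holds up much better.

# Five informative features drive the label; every added feature beyond them is pure noise.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_rows, n_relevant = 300, 5

def make_data(n_irrelevant):
    X_rel = rng.normal(size=(n_rows, n_relevant))
    y = (X_rel.sum(axis=1) + rng.normal(scale=0.5, size=n_rows) > 0).astype(int)
    X = np.hstack([X_rel, rng.normal(size=(n_rows, n_irrelevant))])
    return X, y

for n_irrelevant in (0, 500, 5_000):
    X, y = make_data(n_irrelevant)
    plain = cross_val_score(LogisticRegression(C=1e6, max_iter=5_000), X, y, cv=5).mean()
    l1 = cross_val_score(LogisticRegression(penalty="l1", solver="liblinear", C=0.5), X, y, cv=5).mean()
    print(f"{n_irrelevant:>5} irrelevant features: weak regularization {plain:.2f}, L1 selection {l1:.2f}")
# Cross-validated accuracy of the weakly regularized model tends to fall as irrelevant
# features pile up, while the L1 model stays close to its small-feature-set accuracy.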

Clearly, bigger is not better when bigness gets in the way of deriving predictive insights. You don't want your big data analytics effort to become a victim of its own scale. Your data scientists have to be smart enough to know when to scale their models back to the core set of features best suited to the analytic task at hand.


Original article: http://www.infoworld.com/d/big-data/when-big-data-truly-better-249737 Translated by & via: 51CTO







