How to fix Dummynation not loading
1. First, make sure your device has a stable internet connection. Try opening other websites or apps to verify that the network works; if other sites fail as well, restart your router or contact your internet service provider.
2. Browser cache and cookies can sometimes interfere with access to a site. Clear them and try logging in again.
3. If you suspect a compatibility problem, try a different browser or device (computer, phone, tablet) and see whether the site loads.
4. If none of the above helps, contact Dummynation's site administrators or technical support team.
How to output N different types of value from a Hadoop map/reduce job
Quite often, especially when processing large data sets, we would like a single MapReduce pass to solve several problems at once, so that the data does not have to be read a second time. For example, when doing text clustering/classification, the mapper reads the corpus, segments it into words, and then has to compute both the term frequency (TF) and the document frequency (DF) of every term at the same time. For a given term the former is really a vector, recording the term's frequency in each of the N documents, while the latter is a single non-negative integer. This is where a special Writable class comes in handy: GenericWritable.
The usage is: subclass GenericWritable and register every Writable type you want to emit as a value in its static CLASSES field. In the TermMapper and TermReducer below, my values use three types: ArrayWritable, IntWritable, and my own TFWritable, so all three have to be added to TermWritable's CLASSES.
package redpoll.examples;
import org.apache.hadoop.io.GenericWritable;
import org.apache.hadoop.io.Writable;
/**
* Generic Writable class for terms.
* @author Jeremy Chow(coderplay@gmail.com)
*/
public class TermWritable extends GenericWritable {

  private static Class<? extends Writable>[] CLASSES = null;

  static {
    CLASSES = (Class<? extends Writable>[]) new Class[] {
        org.apache.hadoop.io.ArrayWritable.class,
        org.apache.hadoop.io.IntWritable.class,
        redpoll.examples.TFWritable.class
    };
  }

  public TermWritable() {
  }

  public TermWritable(Writable instance) {
    set(instance);
  }

  @Override
  protected Class<? extends Writable>[] getTypes() {
    return CLASSES;
  }
}
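The TFWritable used above is a custom class whose source is not included in the original post. As a rough idea of what it needs to look like, here is a minimal sketch, assuming it simply pairs a document id with a term frequency as the mapper's constructor call suggests; the field names are guesses:

package redpoll.examples;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Writable;

/**
 * A <documentId, tf> pair. Hypothetical sketch; the real TFWritable is not
 * shown in the post. A no-arg constructor is required so that GenericWritable
 * and ArrayWritable can instantiate it via reflection.
 */
public class TFWritable implements Writable {

  private LongWritable documentId = new LongWritable();
  private IntWritable tf = new IntWritable();

  public TFWritable() {
  }

  public TFWritable(LongWritable documentId, IntWritable tf) {
    this.documentId = documentId;
    this.tf = tf;
  }

  public void write(DataOutput out) throws IOException {
    documentId.write(out);
    tf.write(out);
  }

  public void readFields(DataInput in) throws IOException {
    documentId.readFields(in);
    tf.readFields(in);
  }
}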
When the Mapper collects its output, it uses the TermWritable just defined to wrap whichever Writable it actually wants to emit.
package redpoll.examples;
import java.io.IOException;
import java.io.StringReader;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
/**
 * A class that does word segmentation and counts term TFs and DFs.<p>
 * in: key is document id, value is a document instance. <br>
 * output:
 * <li>key is term, value is a <documentId, tf> pair</li>
 * <li>key is term, value is the document frequency corresponding to the key</li>
 * @author Jeremy Chow(coderplay@gmail.com)
 */
public class TermMapper extends MapReduceBase implements
    Mapper<LongWritable, Document, Text, TermWritable> {

  private static final Log log = LogFactory.getLog(TermMapper.class.getName());

  /* analyzer for word segmentation */
  private Analyzer analyzer = null;
  /* frequency weight for the document title */
  private IntWritable titleWeight = new IntWritable(2);
  /* frequency weight for the document content */
  private IntWritable contentWeight = new IntWritable(1);

  public void map(LongWritable key, Document value,
      OutputCollector<Text, TermWritable> output, Reporter reporter)
      throws IOException {
    doMap(key, value.getTitle(), titleWeight, output, reporter);
    doMap(key, value.getContent(), contentWeight, output, reporter);
  }

  private void doMap(LongWritable key, String value, IntWritable weight,
      OutputCollector<Text, TermWritable> output, Reporter reporter)
      throws IOException {
    // do word segmentation
    TokenStream ts = analyzer.tokenStream("dummy", new StringReader(value));
    Token token = new Token();
    while ((token = ts.next(token)) != null) {
      String termString = new String(token.termBuffer(), 0, token.termLength());
      Text term = new Text(termString);
      // <term, <documentId, tf>>
      TFWritable tf = new TFWritable(key, weight);
      output.collect(term, new TermWritable(tf)); // wrap, then collect
      // <term, weight>
      output.collect(term, new TermWritable(weight)); // wrap, then collect
    }
  }

  @Override
  public void configure(JobConf job) {
    String analyzerName = job.get("redpoll.text.analyzer");
    try {
      if (analyzerName != null)
        analyzer = (Analyzer) Class.forName(analyzerName).newInstance();
    } catch (Exception excp) {
      excp.printStackTrace();
    }
    if (analyzer == null)
      analyzer = new StandardAnalyzer();
  }
}
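The original post does not show the job driver. Just to make the setup concrete, a job using the classes above and the TermReducer shown below might be wired up roughly like this with the old mapred API. This is only a sketch: the class name TermDriver and the paths are made up, and the custom InputFormat that produces Document records is not shown in the post.

package redpoll.examples;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

/**
 * Hypothetical driver; not part of the original post.
 */
public class TermDriver {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(TermDriver.class);
    conf.setJobName("term-tf-df");

    conf.setMapperClass(TermMapper.class);
    conf.setReducerClass(TermReducer.class);

    // The wrapper type must be declared as the output value class,
    // otherwise the framework cannot serialize the wrapped values.
    conf.setMapOutputKeyClass(Text.class);
    conf.setMapOutputValueClass(TermWritable.class);
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(TermWritable.class);

    // Analyzer used by TermMapper.configure(); it falls back to StandardAnalyzer.
    conf.set("redpoll.text.analyzer",
        "org.apache.lucene.analysis.standard.StandardAnalyzer");

    // The custom InputFormat producing <LongWritable, Document> records is not
    // shown in the post; it would be registered here with conf.setInputFormat(...).

    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    JobClient.runJob(conf);
  }
}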
If the Reducer wants to get at the data, it can unwrap it:
package redpoll.examples;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.io.ArrayWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
/**
 * Forms a tf vector and calculates the df for each term.
 * @author Jeremy Chow(coderplay@gmail.com)
 */
public class TermReducer extends MapReduceBase implements
    Reducer<Text, TermWritable, Text, Writable> {

  private static final Log log = LogFactory.getLog(TermReducer.class.getName());

  public void reduce(Text key, Iterator<TermWritable> values,
      OutputCollector<Text, Writable> output, Reporter reporter)
      throws IOException {
    ArrayList<TFWritable> tfs = new ArrayList<TFWritable>();
    int sum = 0;
    // log.info("term:" + key.toString());
    while (values.hasNext()) {
      Writable value = values.next().get(); // unwrap
      if (value instanceof TFWritable) {
        tfs.add((TFWritable) value);
      } else {
        sum += ((IntWritable) value).get();
      }
    }
    TFWritable[] writables = new TFWritable[tfs.size()];
    ArrayWritable aw = new ArrayWritable(TFWritable.class, tfs.toArray(writables));
    // wrap again
    output.collect(key, new TermWritable(aw));
    output.collect(key, new TermWritable(new IntWritable(sum)));
  }
}
At this point the collect calls would not strictly need TermWritable any more; it is only that I have redefined the OutputFormat so that the output goes to two different files, and the output types of the two files differ as well.
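That custom OutputFormat is not included in the post either. One simple way to split the reducer output into two files with the old mapred API would be something like the sketch below, which routes records by the type of the unwrapped value using MultipleTextOutputFormat. This is only an illustration of the idea, not the author's actual class; with a text output format the value types of the two files do not matter, whereas the author's version apparently also writes different output types.

package redpoll.examples;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;

/**
 * Hypothetical OutputFormat, not the author's: routes document-frequency
 * records and tf-vector records to two different leaf files.
 */
public class TermOutputFormat extends MultipleTextOutputFormat<Text, Writable> {

  @Override
  protected String generateFileNameForKeyValue(Text key, Writable value, String name) {
    // Unwrap the GenericWritable to see which kind of record this is.
    Writable inner = (value instanceof TermWritable) ? ((TermWritable) value).get() : value;
    if (inner instanceof IntWritable) {
      return "df-" + name; // document-frequency counts
    }
    return "tf-" + name;   // tf vectors
  }
}

It would then be enabled in the driver with conf.setOutputFormat(TermOutputFormat.class).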
Reposted; for reference only.