private SparkContext addListener(SparkContext sc, SparkUIApi sparkUIManager) {
  sc.addSparkListener(new SparkListener() {
sc.sc().addSparkListener(new ClientListener());
synchronized (jcLock) {
  jc = new JobContextImpl(sc, localTmpDir);
private LocalHiveSparkClient(SparkConf sparkConf, HiveConf hiveConf)
    throws FileNotFoundException, MalformedURLException {
  String regJar = null;
  // the registrator jar should already be in the classpath when not in test mode
  if (HiveConf.getBoolVar(hiveConf, HiveConf.ConfVars.HIVE_IN_TEST)) {
    String kryoReg = sparkConf.get("spark.kryo.registrator", "");
    if (SparkClientUtilities.HIVE_KRYO_REG_NAME.equals(kryoReg)) {
      regJar = SparkClientUtilities.findKryoRegistratorJar(hiveConf);
      SparkClientUtilities.addJarToContextLoader(new File(regJar));
    }
  }
  sc = new JavaSparkContext(sparkConf);
  if (regJar != null) {
    sc.addJar(regJar);
  }
  jobMetricsListener = new JobMetricsListener();
  sc.sc().addSparkListener(jobMetricsListener);
}
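For context, a minimal sketch of what a metrics-collecting listener such as the JobMetricsListener registered above could look like; the class name and the run-time accumulation are illustrative assumptions, not Hive's actual implementation:

import java.util.concurrent.atomic.AtomicLong;
import org.apache.spark.scheduler.SparkListener;
import org.apache.spark.scheduler.SparkListenerTaskEnd;

// Hypothetical listener that sums executor run time across finished tasks.
public class SimpleJobMetricsListener extends SparkListener {
  private final AtomicLong totalExecutorRunTimeMs = new AtomicLong();

  @Override
  public void onTaskEnd(SparkListenerTaskEnd taskEnd) {
    // Metrics can be absent for failed tasks, so guard the access.
    if (taskEnd.taskMetrics() != null) {
      totalExecutorRunTimeMs.addAndGet(taskEnd.taskMetrics().executorRunTime());
    }
  }

  public long getTotalExecutorRunTimeMs() {
    return totalExecutorRunTimeMs.get();
  }
}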
sc.sc().addSparkListener(jobListener);
final FileSystem fs = partitionFilePath.getFileSystem(sc.hadoopConfiguration());
if (!fs.exists(partitionFilePath)) {
sc.sc().addSparkListener(jobListener);
HadoopUtil.deletePath(sc.hadoopConfiguration(), new Path(outputPath));
sc.sc().addSparkListener(jobListener);
private void updateSparkContext(@NonNull final SparkArgs sparkArgs, @NonNull final SparkContext sc) {
  for (SparkListener sparkListener : getSparkEventListeners()) {
    sc.addSparkListener(sparkListener);
  }
  sc.hadoopConfiguration().addResource(sparkArgs.getHadoopConfiguration());
}
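For illustration, one possible shape of the getSparkEventListeners() helper used in the loop above; the concrete listener set is an assumption of this sketch:

import java.util.Arrays;
import java.util.List;
import org.apache.spark.scheduler.SparkListener;
import org.apache.spark.scheduler.StatsReportListener;

// Hypothetical helper returning the listeners to attach to the context.
private List<SparkListener> getSparkEventListeners() {
  return Arrays.<SparkListener>asList(new StatsReportListener());
}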
/**
 * Creates the JavaSparkContext if it hasn't been created yet, or returns the existing instance.
 * {@link #addSchema(Schema)} and {@link #addSchemas(Collection)} must not be called once the
 * JavaSparkContext has been created.
 * @return the JavaSparkContext that will be used to execute the JobDags
 */
public JavaSparkContext getOrCreateSparkContext() {
  if (!this.sparkContext.isPresent()) {
    this.sparkContext = Optional.of(new JavaSparkContext(
        SparkUtil.getSparkConf(this.appName, Optional.of(this.schemas),
            this.serializationClasses, this.conf)));
    this.sparkContext.get().sc().addSparkListener(new SparkEventListener());
    // Add the Hadoop configuration to the default configuration
    this.sparkContext.get().sc().hadoopConfiguration().addResource(
        new HadoopConfiguration(conf).getHadoopConf());
    this.appId = this.sparkContext.get().sc().applicationId();
  }
  return this.sparkContext.get();
}
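A stripped-down sketch of the same lazy get-or-create pattern, with the project-specific SparkUtil and Hadoop plumbing replaced by plain SparkConf defaults (an assumption for the example):

import java.util.Optional;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.scheduler.StatsReportListener;

public class SparkContextHolder {
  private Optional<JavaSparkContext> sparkContext = Optional.empty();

  public synchronized JavaSparkContext getOrCreateSparkContext() {
    if (!sparkContext.isPresent()) {
      JavaSparkContext jsc = new JavaSparkContext(
          new SparkConf().setAppName("demo").setMaster("local[2]"));
      // Attach listeners before any job runs so no events are missed.
      jsc.sc().addSparkListener(new StatsReportListener());
      sparkContext = Optional.of(jsc);
    }
    return sparkContext.get();
  }
}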
sparkContext.sc().addSparkListener(new StatsReportListener());
sparkContext.sc().addSparkListener(new JobLogger());
sparkContext.sc().addSparkListener(jobMetricsListener);
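Listeners with a no-argument constructor can also be attached declaratively through the spark.extraListeners configuration key instead of calling addSparkListener; a minimal sketch:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

// Spark instantiates the named classes reflectively at context startup;
// each must be on the classpath.
SparkConf conf = new SparkConf()
    .setAppName("listener-demo")
    .setMaster("local[2]")
    .set("spark.extraListeners", "org.apache.spark.scheduler.StatsReportListener");
JavaSparkContext jsc = new JavaSparkContext(conf);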
/**
 * Create Snappy's SQL listener (SnappySQLListener) instead of the default SQLListener.
 */
private static void createListenerAndUI(SparkContext sc) {
  SQLListener initListener = ExternalStoreUtils.getSQLListener().get();
  if (initListener == null) {
    SnappySQLListener listener = new SnappySQLListener(sc.conf());
    if (ExternalStoreUtils.getSQLListener().compareAndSet(null, listener)) {
      sc.addSparkListener(listener);
      scala.Option<SparkUI> ui = sc.ui();
      // Embedded mode attaches the SQLTab later via ToolsCallbackImpl, which also
      // takes care of injecting any authentication module if configured.
      if (ui.isDefined() && !(SnappyContext.getClusterMode(sc) instanceof SnappyEmbeddedMode)) {
        new SQLTab(listener, ui.get());
      }
    }
  }
}
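The compareAndSet guard above is a general way to ensure a listener is registered at most once under concurrency; a standalone sketch, where the AtomicReference holder and the listener choice are assumptions of the example:

import java.util.concurrent.atomic.AtomicReference;
import org.apache.spark.SparkContext;
import org.apache.spark.scheduler.SparkListener;
import org.apache.spark.scheduler.StatsReportListener;

// Only the thread that wins the compareAndSet race attaches the listener,
// so it can never be added twice.
final class ListenerInstaller {
  private static final AtomicReference<SparkListener> INSTALLED = new AtomicReference<>();

  static void installOnce(SparkContext sc) {
    SparkListener listener = new StatsReportListener();
    if (INSTALLED.compareAndSet(null, listener)) {
      sc.addSparkListener(listener);
    }
  }
}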
registerUDFs(this.sqlCtx);
registerUDAFs(this.sqlCtx);
this.sqlCtx.sparkContext().addSparkListener(new SparkListener() {
  @Override
  public void onStageCompleted(SparkListenerStageCompleted sparkListenerStageCompleted) {
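For reference, a self-contained variant of an anonymous listener like the one above; the stage-logging body is an illustrative assumption:

import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.scheduler.SparkListener;
import org.apache.spark.scheduler.SparkListenerStageCompleted;

public class AnonymousListenerExample {
  public static void main(String[] args) {
    JavaSparkContext jsc = new JavaSparkContext(
        new SparkConf().setAppName("anon-listener").setMaster("local[2]"));
    jsc.sc().addSparkListener(new SparkListener() {
      @Override
      public void onStageCompleted(SparkListenerStageCompleted stageCompleted) {
        // Illustrative body: report each finished stage.
        System.out.println("Stage " + stageCompleted.stageInfo().stageId()
            + " completed: " + stageCompleted.stageInfo().name());
      }
    });
    jsc.parallelize(Arrays.asList(1, 2, 3)).count();
    jsc.stop();
  }
}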
jsc.sc().addSparkListener(new SparkListener() {
    iterableAsScalaIterable(Arrays.asList("treeAggregate")));
sc.addSparkListener(progressBar);