status | repo_name | repo_url | issue_id | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime | updated_file | file_content
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,737 | [Bug] [MasterServer] Event handle twice in two thread pool | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
branch: 2.0
```
[INFO] 2021-11-08 15:55:48.889 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[291] - handleEvent current Thread:Master-Exec-Thread, process id: 741, task id: 865, event type: RUNNING_EXECUTION
[INFO] 2021-11-08 15:55:48.911 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[291] - handleEvent current Thread:Master-Exec-Thread, process id: 741, task id: 865, event type: SUCCESS
[INFO] 2021-11-08 15:55:48.920 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[291] - handleEvent current Thread:Master-Exec-Thread, process id: 741, task id: 866, event type: RUNNING_EXECUTION
[INFO] 2021-11-08 15:55:48.929 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[291] - handleEvent current Thread:Master-Exec-Thread, process id: 741, task id: 866, event type: SUCCESS
[INFO] 2021-11-08 15:55:48.936 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[291] - handleEvent current Thread:Master-Exec-Thread, process id: 741, task id: 867, event type: RUNNING_EXECUTION
[INFO] 2021-11-08 15:55:48.942 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[291] - handleEvent current Thread:Master-Exec-Thread, process id: 741, task id: 867, event type: SUCCESS
[INFO] 2021-11-08 15:55:48.948 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[291] - handleEvent current Thread:Master-Exec-Thread, process id: 741, task id: 868, event type: RUNNING_EXECUTION
[INFO] 2021-11-08 15:55:48.956 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[291] - handleEvent current Thread:Master-Exec-Thread, process id: 741, task id: 868, event type: SUCCESS
[INFO] 2021-11-08 15:55:48.963 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[291] - handleEvent current Thread:MasterEventExecution, process id: 741, task id: 868, event type: SUCCESS
[INFO] 2021-11-08 15:55:48.966 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[291] - handleEvent current Thread:Master-Exec-Thread, process id: 741, task id: 869, event type: RUNNING_EXECUTION
[INFO] 2021-11-08 15:55:48.969 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[291] - handleEvent current Thread:MasterEventExecution, process id: 741, task id: 869, event type: RUNNING_EXECUTION
[INFO] 2021-11-08 15:55:48.975 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[291] - handleEvent current Thread:Master-Exec-Thread, process id: 741, task id: 869, event type: SUCCESS
[INFO] 2021-11-08 15:55:48.975 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[291] - handleEvent current Thread:MasterEventExecution, process id: 741, task id: 869, event type: SUCCESS
[INFO] 2021-11-08 15:55:48.982 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[291] - handleEvent current Thread:MasterEventExecution, process id: 741, task id: 870, event type: RUNNING_EXECUTION
[INFO] 2021-11-08 15:55:48.986 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[291] - handleEvent current Thread:Master-Exec-Thread, process id: 741, task id: 870, event type: RUNNING_EXECUTION
[INFO] 2021-11-08 15:55:48.994 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[291] - handleEvent current Thread:MasterEventExecution, process id: 741, task id: 870, event type: SUCCESS
[INFO] 2021-11-08 15:55:49.007 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[291] - handleEvent current Thread:Master-Exec-Thread, process id: 741, task id: 870, event type: SUCCESS
[INFO] 2021-11-08 15:55:49.009 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[291] - handleEvent current Thread:MasterEventExecution, process id: 741, task id: 871, event type: RUNNING_EXECUTION
```
From the log we can see that task id 870 has 4 events: 2 RUNNING_EXECUTION and 2 SUCCESS.
Grepping 870 from the log:
```
[INFO] 2021-11-08 15:55:48.844 org.apache.dolphinscheduler.server.master.runner.task.CommonTaskProcessor:[120] - task ready to submit: TaskInstance{id=870, name='parallel_01', taskType='SHELL', processInstanceId=741, processInstanceName='null', state=SUBMITTED_SUCCESS, firstSubmitTime=Mon Nov 08 15:55:48 CST 2021, submitTime=Mon Nov 08 15:55:48 CST 2021, startTime=null, endTime=null, host='null', executePath='null', logPath='null', retryTimes=0, alertFlag=NO, processInstance=null, processDefine=null, pid=0, appLink='null', flag=YES, dependency='null', duration=null, maxRetryTimes=0, retryInterval=1, taskInstancePriority=MEDIUM, processInstancePriority=MEDIUM, dependentResult='null', workerGroup='default', environmentCode=-1, environmentConfig='null', executorId=1, executorName='null', delayTime=0, dryRun=0}
[INFO] 2021-11-08 15:55:48.856 org.apache.dolphinscheduler.server.master.processor.TaskAckProcessor:[79] - taskAckCommand : TaskExecuteAckCommand{taskInstanceId=870, startTime=Mon Nov 08 15:55:48 CST 2021, host='192.168.31.239:1234', status=1, logPath='/home/csf/javapath/dolphinscheduler/logs/882302276001792_1/741/870.log', executePath='/tmp/dolphinscheduler/exec/process/854201280159744/882302276001792_1/741/870', processInstanceId='741'}
[INFO] 2021-11-08 15:55:48.858 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[1163] - remove task from stand by list, id: 870 name:parallel_01
[INFO] 2021-11-08 15:55:48.874 org.apache.dolphinscheduler.server.master.processor.TaskResponseProcessor:[80] - received command : TaskExecuteResponseCommand{taskInstanceId=870, status=7, endTime=Mon Nov 08 15:55:48 CST 2021, processId=6637, appIds=''}
[INFO] 2021-11-08 15:55:48.982 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[291] - handleEvent current Thread:MasterEventExecution, process id: 741, task id: 870, event type: RUNNING_EXECUTION
[INFO] 2021-11-08 15:55:48.986 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[291] - handleEvent current Thread:Master-Exec-Thread, process id: 741, task id: 870, event type: RUNNING_EXECUTION
[INFO] 2021-11-08 15:55:48.986 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[370] - work flow 741 task 870 state:SUCCESS
[INFO] 2021-11-08 15:55:48.988 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[370] - work flow 741 task 870 state:SUCCESS
[INFO] 2021-11-08 15:55:48.994 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[291] - handleEvent current Thread:MasterEventExecution, process id: 741, task id: 870, event type: SUCCESS
[INFO] 2021-11-08 15:55:48.995 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[370] - work flow 741 task 870 state:SUCCESS
[INFO] 2021-11-08 15:55:49.007 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[291] - handleEvent current Thread:Master-Exec-Thread, process id: 741, task id: 870, event type: SUCCESS
[INFO] 2021-11-08 15:55:49.008 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[370] - work flow 741 task 870 state:SUCCESS
```
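For context, the `handleEvents` loop in `WorkflowExecuteThread` peeks an event, handles it, and only then removes it from the shared queue. The following minimal sketch (illustrative code only, not DolphinScheduler's — the class and event names are made up) shows how two thread pools draining one shared queue with that peek-then-remove pattern can handle the same event twice:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CountDownLatch;

public class DuplicateEventDemo {

    private static final ConcurrentLinkedQueue<String> events = new ConcurrentLinkedQueue<>();

    public static void main(String[] args) throws InterruptedException {
        events.add("task-870:RUNNING_EXECUTION");
        events.add("task-870:SUCCESS");
        CountDownLatch start = new CountDownLatch(1);
        Runnable drain = () -> {
            try {
                start.await();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            while (!events.isEmpty()) {
                String event = events.peek(); // both threads may observe the same head element here
                if (event != null) {
                    // "handle" the event; with an unlucky interleaving both threads handle it
                    System.out.println(Thread.currentThread().getName() + " handled " + event);
                    events.remove(event); // remove-by-value is a no-op if the other thread already removed it
                }
            }
        };
        Thread t1 = new Thread(drain, "Master-Exec-Thread");
        Thread t2 = new Thread(drain, "MasterEventExecution");
        t1.start();
        t2.start();
        start.countDown();
        t1.join();
        t2.join();
    }
}
```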
### What you expected to happen
Each event is processed only once
### How to reproduce
Create parallel tasks and reduce the master threads.
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6737 | https://github.com/apache/dolphinscheduler/pull/6738 | bdea8d6ae4e35d64686d97f606b1e235f9664dbe | b6824b47419a5abc7ab4279d648417249a74f122 | "2021-11-08T08:11:35Z" | java | "2021-11-08T08:54:58Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteThread.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.master.runner;
import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_END_DATE;
import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_START_DATE;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_RECOVERY_START_NODE_STRING;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_START_NODES;
import static org.apache.dolphinscheduler.common.Constants.DEFAULT_WORKER_GROUP;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.CommandType;
import org.apache.dolphinscheduler.common.enums.DependResult;
import org.apache.dolphinscheduler.common.enums.Direct;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.enums.FailureStrategy;
import org.apache.dolphinscheduler.common.enums.Flag;
import org.apache.dolphinscheduler.common.enums.Priority;
import org.apache.dolphinscheduler.common.enums.StateEvent;
import org.apache.dolphinscheduler.common.enums.StateEventType;
import org.apache.dolphinscheduler.common.enums.TaskDependType;
import org.apache.dolphinscheduler.common.enums.TaskTimeoutStrategy;
import org.apache.dolphinscheduler.common.enums.TimeoutFlag;
import org.apache.dolphinscheduler.common.graph.DAG;
import org.apache.dolphinscheduler.common.model.TaskNode;
import org.apache.dolphinscheduler.common.model.TaskNodeRelation;
import org.apache.dolphinscheduler.common.process.ProcessDag;
import org.apache.dolphinscheduler.common.process.Property;
import org.apache.dolphinscheduler.common.thread.ThreadUtils;
import org.apache.dolphinscheduler.common.utils.CollectionUtils;
import org.apache.dolphinscheduler.common.utils.DateUtils;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.common.utils.NetUtils;
import org.apache.dolphinscheduler.common.utils.ParameterUtils;
import org.apache.dolphinscheduler.dao.entity.Command;
import org.apache.dolphinscheduler.dao.entity.Environment;
import org.apache.dolphinscheduler.dao.entity.ProcessDefinition;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.dao.entity.ProjectUser;
import org.apache.dolphinscheduler.dao.entity.Schedule;
import org.apache.dolphinscheduler.dao.entity.TaskDefinition;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.dao.utils.DagHelper;
import org.apache.dolphinscheduler.remote.command.HostUpdateCommand;
import org.apache.dolphinscheduler.remote.utils.Host;
import org.apache.dolphinscheduler.server.master.config.MasterConfig;
import org.apache.dolphinscheduler.server.master.dispatch.executor.NettyExecutorManager;
import org.apache.dolphinscheduler.server.master.runner.task.ITaskProcessor;
import org.apache.dolphinscheduler.server.master.runner.task.TaskAction;
import org.apache.dolphinscheduler.server.master.runner.task.TaskProcessorFactory;
import org.apache.dolphinscheduler.service.alert.ProcessAlertManager;
import org.apache.dolphinscheduler.service.process.ProcessService;
import org.apache.dolphinscheduler.service.quartz.cron.CronUtils;
import org.apache.dolphinscheduler.service.queue.PeerTaskInstancePriorityQueue;
import org.apache.commons.lang.StringUtils;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Date;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.google.common.collect.HashBasedTable;
import com.google.common.collect.Lists;
import com.google.common.collect.Table;
/**
* master exec thread, splits the process DAG and schedules task submission
*/
public class WorkflowExecuteThread implements Runnable {
/**
* logger of WorkflowExecuteThread
*/
private static final Logger logger = LoggerFactory.getLogger(WorkflowExecuteThread.class);
/**
* running TaskNode
*/
private final Map<Integer, ITaskProcessor> activeTaskProcessorMaps = new ConcurrentHashMap<>();
/**
* task exec service
*/
private final ExecutorService taskExecService;
/**
* process instance
*/
private ProcessInstance processInstance;
/**
* whether any task failed to submit
*/
private boolean taskFailedSubmit = false;
/**
* recover node id list
*/
private List<TaskInstance> recoverNodeIdList = new ArrayList<>();
/**
* error task list
*/
private Map<String, TaskInstance> errorTaskList = new ConcurrentHashMap<>();
/**
* complete task list
*/
private Map<String, TaskInstance> completeTaskList = new ConcurrentHashMap<>();
/**
* ready to submit task queue
*/
private PeerTaskInstancePriorityQueue readyToSubmitTaskQueue = new PeerTaskInstancePriorityQueue();
/**
* depend failed task map
*/
private Map<String, TaskInstance> dependFailedTask = new ConcurrentHashMap<>();
/**
* forbidden task map
*/
private Map<String, TaskNode> forbiddenTaskList = new ConcurrentHashMap<>();
/**
* skip task map
*/
private Map<String, TaskNode> skipTaskNodeList = new ConcurrentHashMap<>();
/**
* recover tolerance fault task list
*/
private List<TaskInstance> recoverToleranceFaultTaskList = new ArrayList<>();
/**
* alert manager
*/
private ProcessAlertManager processAlertManager;
/**
* the object of DAG
*/
private DAG<String, TaskNode, TaskNodeRelation> dag;
/**
* process service
*/
private ProcessService processService;
/**
* master config
*/
private MasterConfig masterConfig;
/**
* netty executor manager
*/
private NettyExecutorManager nettyExecutorManager;
private ConcurrentLinkedQueue<StateEvent> stateEvents = new ConcurrentLinkedQueue<>();
private List<Date> complementListDate = Lists.newLinkedList();
private Table<Integer, Long, TaskInstance> taskInstanceHashMap = HashBasedTable.create();
private ProcessDefinition processDefinition;
private String key;
private ConcurrentHashMap<Integer, TaskInstance> taskTimeoutCheckList;
/**
* start flag, true: start nodes submit completely
*
*/
private boolean isStart = false;
/**
* constructor of WorkflowExecuteThread
*
* @param processInstance processInstance
* @param processService processService
* @param nettyExecutorManager nettyExecutorManager
* @param taskTimeoutCheckList
*/
public WorkflowExecuteThread(ProcessInstance processInstance
, ProcessService processService
, NettyExecutorManager nettyExecutorManager
, ProcessAlertManager processAlertManager
, MasterConfig masterConfig
, ConcurrentHashMap<Integer, TaskInstance> taskTimeoutCheckList) {
this.processService = processService;
this.processInstance = processInstance;
this.masterConfig = masterConfig;
int masterTaskExecNum = masterConfig.getMasterExecTaskNum();
this.taskExecService = ThreadUtils.newDaemonFixedThreadExecutor("Master-Task-Exec-Thread",
masterTaskExecNum);
this.nettyExecutorManager = nettyExecutorManager;
this.processAlertManager = processAlertManager;
this.taskTimeoutCheckList = taskTimeoutCheckList;
}
@Override
public void run() {
try {
startProcess();
handleEvents();
} catch (Exception e) {
logger.error("handler error:", e);
}
}
/**
* the process start nodes are submitted completely.
* @return true if the start nodes have been submitted completely
*/
public boolean isStart() {
return this.isStart;
}
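/**
* drain the state event queue: peek the head event, handle it, and remove it after a successful handle.
* note that stateEventHandler also removes the event on success, so the remove here is a harmless no-op in that case.
*/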
private void handleEvents() {
while (this.stateEvents.size() > 0) {
try {
StateEvent stateEvent = this.stateEvents.peek();
if (stateEventHandler(stateEvent)) {
this.stateEvents.remove(stateEvent);
}
} catch (Exception e) {
logger.error("state handle error:", e);
}
}
}
public String getKey() {
if (StringUtils.isNotEmpty(key)
|| this.processDefinition == null) {
return key;
}
key = String.format("%d_%d_%d",
this.processDefinition.getCode(),
this.processDefinition.getVersion(),
this.processInstance.getId());
return key;
}
public boolean addStateEvent(StateEvent stateEvent) {
if (processInstance.getId() != stateEvent.getProcessInstanceId()) {
logger.info("state event would be abandoned :{}", stateEvent.toString());
return false;
}
this.stateEvents.add(stateEvent);
return true;
}
public int eventSize() {
return this.stateEvents.size();
}
public ProcessInstance getProcessInstance() {
return this.processInstance;
}
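/**
* dispatch one state event to the matching handler.
*
* @return true when the event has been handled (on success it is also removed from the queue here)
*/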
private boolean stateEventHandler(StateEvent stateEvent) {
logger.info("process event: {}", stateEvent.toString());
if (!checkStateEvent(stateEvent)) {
return false;
}
boolean result = false;
switch (stateEvent.getType()) {
case PROCESS_STATE_CHANGE:
result = processStateChangeHandler(stateEvent);
break;
case TASK_STATE_CHANGE:
result = taskStateChangeHandler(stateEvent);
break;
case PROCESS_TIMEOUT:
result = processTimeout();
break;
case TASK_TIMEOUT:
result = taskTimeout(stateEvent);
break;
default:
break;
}
if (result) {
this.stateEvents.remove(stateEvent);
}
return result;
}
private boolean taskTimeout(StateEvent stateEvent) {
if (taskInstanceHashMap.containsRow(stateEvent.getTaskInstanceId())) {
return true;
}
TaskInstance taskInstance = taskInstanceHashMap
.row(stateEvent.getTaskInstanceId())
.values()
.iterator().next();
if (TimeoutFlag.CLOSE == taskInstance.getTaskDefine().getTimeoutFlag()) {
return true;
}
TaskTimeoutStrategy taskTimeoutStrategy = taskInstance.getTaskDefine().getTimeoutNotifyStrategy();
if (TaskTimeoutStrategy.FAILED == taskTimeoutStrategy) {
ITaskProcessor taskProcessor = activeTaskProcessorMaps.get(stateEvent.getTaskInstanceId());
taskProcessor.action(TaskAction.TIMEOUT);
return false;
} else {
processAlertManager.sendTaskTimeoutAlert(processInstance, taskInstance, taskInstance.getTaskDefine());
return true;
}
}
private boolean processTimeout() {
this.processAlertManager.sendProcessTimeoutAlert(this.processInstance, this.processDefinition);
return true;
}
private boolean taskStateChangeHandler(StateEvent stateEvent) {
TaskInstance task = processService.findTaskInstanceById(stateEvent.getTaskInstanceId());
if (task.getState().typeIsFinished()) {
taskFinished(task);
} else if (activeTaskProcessorMaps.containsKey(stateEvent.getTaskInstanceId())) {
ITaskProcessor iTaskProcessor = activeTaskProcessorMaps.get(stateEvent.getTaskInstanceId());
iTaskProcessor.run();
if (iTaskProcessor.taskState().typeIsFinished()) {
task = processService.findTaskInstanceById(stateEvent.getTaskInstanceId());
taskFinished(task);
}
} else {
logger.error("state handler error: {}", stateEvent.toString());
}
return true;
}
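/**
* handle a finished task: re-queue it when it can still retry, otherwise record the result,
* submit its post nodes and refresh the process instance state.
*/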
private void taskFinished(TaskInstance task) {
logger.info("work flow {} task {} state:{} ",
processInstance.getId(),
task.getId(),
task.getState());
if (task.taskCanRetry()) {
addTaskToStandByList(task);
if (!task.retryTaskIntervalOverTime()) {
logger.info("failure task will be submitted: process id: {}, task instance id: {} state:{} retry times:{} / {}, interval:{}",
processInstance.getId(),
task.getId(),
task.getState(),
task.getRetryTimes(),
task.getMaxRetryTimes(),
task.getRetryInterval());
this.addTimeoutCheck(task);
} else {
submitStandByTask();
}
return;
}
ProcessInstance processInstance = processService.findProcessInstanceById(this.processInstance.getId());
completeTaskList.put(Long.toString(task.getTaskCode()), task);
activeTaskProcessorMaps.remove(task.getId());
taskTimeoutCheckList.remove(task.getId());
if (task.getState().typeIsSuccess()) {
processInstance.setVarPool(task.getVarPool());
processService.saveProcessInstance(processInstance);
submitPostNode(Long.toString(task.getTaskCode()));
} else if (task.getState().typeIsFailure()) {
if (task.isConditionsTask()
|| DagHelper.haveConditionsAfterNode(Long.toString(task.getTaskCode()), dag)) {
submitPostNode(Long.toString(task.getTaskCode()));
} else {
errorTaskList.put(Long.toString(task.getTaskCode()), task);
if (processInstance.getFailureStrategy() == FailureStrategy.END) {
killAllTasks();
}
}
}
this.updateProcessInstanceState();
}
private boolean checkStateEvent(StateEvent stateEvent) {
if (this.processInstance.getId() != stateEvent.getProcessInstanceId()) {
logger.error("mismatch process instance id: {}, state event:{}",
this.processInstance.getId(),
stateEvent.toString());
return false;
}
return true;
}
private boolean processStateChangeHandler(StateEvent stateEvent) {
try {
logger.info("process:{} state {} change to {}", processInstance.getId(), processInstance.getState(), stateEvent.getExecutionStatus());
processInstance = processService.findProcessInstanceById(this.processInstance.getId());
if (processComplementData()) {
return true;
}
if (stateEvent.getExecutionStatus().typeIsFinished()) {
endProcess();
}
if (processInstance.getState() == ExecutionStatus.READY_STOP) {
killAllTasks();
}
return true;
} catch (Exception e) {
logger.error("process state change error:", e);
}
return true;
}
private boolean processComplementData() throws Exception {
if (!needComplementProcess()) {
return false;
}
Date scheduleDate = processInstance.getScheduleTime();
if (scheduleDate == null) {
scheduleDate = complementListDate.get(0);
} else if (processInstance.getState().typeIsFinished()) {
endProcess();
if (complementListDate.size() <= 0) {
logger.info("process complement end. process id:{}", processInstance.getId());
return true;
}
int index = complementListDate.indexOf(scheduleDate);
if (index >= complementListDate.size() - 1 || !processInstance.getState().typeIsSuccess()) {
logger.info("process complement end. process id:{}", processInstance.getId());
// complement data ends || no success
return true;
}
logger.info("process complement continue. process id:{}, schedule time:{} complementListDate:{}",
processInstance.getId(),
processInstance.getScheduleTime(),
complementListDate.toString());
scheduleDate = complementListDate.get(index + 1);
//the next process complement
processInstance.setId(0);
}
processInstance.setScheduleTime(scheduleDate);
Map<String, String> cmdParam = JSONUtils.toMap(processInstance.getCommandParam());
if (cmdParam.containsKey(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING)) {
cmdParam.remove(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING);
processInstance.setCommandParam(JSONUtils.toJsonString(cmdParam));
}
processInstance.setState(ExecutionStatus.RUNNING_EXECUTION);
processInstance.setGlobalParams(ParameterUtils.curingGlobalParams(
processDefinition.getGlobalParamMap(),
processDefinition.getGlobalParamList(),
CommandType.COMPLEMENT_DATA, processInstance.getScheduleTime()));
processInstance.setStartTime(new Date());
processInstance.setEndTime(null);
processService.saveProcessInstance(processInstance);
this.taskInstanceHashMap.clear();
startProcess();
return true;
}
private boolean needComplementProcess() {
if (processInstance.isComplementData()
&& Flag.NO == processInstance.getIsSubProcess()) {
return true;
}
return false;
}
private void startProcess() throws Exception {
if (this.taskInstanceHashMap.size() == 0) {
isStart = false;
buildFlowDag();
initTaskQueue();
submitPostNode(null);
isStart = true;
}
}
/**
* process end handle
*/
private void endProcess() {
this.stateEvents.clear();
processInstance.setEndTime(new Date());
ProcessDefinition processDefinition = this.processService.findProcessDefinition(processInstance.getProcessDefinitionCode(),processInstance.getProcessDefinitionVersion());
if (processDefinition.getExecutionType().typeIsSerialWait()) {
checkSerialProcess(processDefinition);
}
processService.updateProcessInstance(processInstance);
if (processInstance.getState().typeIsWaitingThread()) {
processService.createRecoveryWaitingThreadCommand(null, processInstance);
}
List<TaskInstance> taskInstances = processService.findValidTaskListByProcessId(processInstance.getId());
ProjectUser projectUser = processService.queryProjectWithUserByProcessInstanceId(processInstance.getId());
processAlertManager.sendAlertProcessInstance(processInstance, taskInstances, projectUser);
}
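/**
* for serial-wait execution: when this instance ends, recover the next process instance
* waiting in SERIAL_WAIT state by creating a RECOVER_SERIAL_WAIT command.
*/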
public void checkSerialProcess(ProcessDefinition processDefinition) {
this.processInstance = processService.findProcessInstanceById(processInstance.getId());
int nextInstanceId = processInstance.getNextProcessInstanceId();
if (nextInstanceId == 0) {
ProcessInstance nextProcessInstance = this.processService.loadNextProcess4Serial(processInstance.getProcessDefinition().getCode(),ExecutionStatus.SERIAL_WAIT.getCode());
if (nextProcessInstance == null) {
return;
}
nextInstanceId = nextProcessInstance.getId();
}
ProcessInstance nextProcessInstance = this.processService.findProcessInstanceById(nextInstanceId);
if (nextProcessInstance.getState().typeIsFinished() || nextProcessInstance.getState().typeIsRunning()) {
return;
}
Map<String, Object> cmdParam = new HashMap<>();
cmdParam.put(CMD_PARAM_RECOVER_PROCESS_ID_STRING, nextInstanceId);
Command command = new Command();
command.setCommandType(CommandType.RECOVER_SERIAL_WAIT);
command.setProcessDefinitionCode(processDefinition.getCode());
command.setCommandParam(JSONUtils.toJsonString(cmdParam));
processService.createCommand(command);
}
/**
* generate process dag
*
* @throws Exception exception
*/
private void buildFlowDag() throws Exception {
if (this.dag != null) {
return;
}
processDefinition = processService.findProcessDefinition(processInstance.getProcessDefinitionCode(),
processInstance.getProcessDefinitionVersion());
recoverNodeIdList = getStartTaskInstanceList(processInstance.getCommandParam());
List<TaskNode> taskNodeList =
processService.transformTask(processService.findRelationByCode(processDefinition.getProjectCode(), processDefinition.getCode()), Lists.newArrayList());
forbiddenTaskList.clear();
taskNodeList.forEach(taskNode -> {
if (taskNode.isForbidden()) {
forbiddenTaskList.put(Long.toString(taskNode.getCode()), taskNode);
}
});
// generate process to get DAG info
List<String> recoveryNodeCodeList = getRecoveryNodeCodeList();
List<String> startNodeNameList = parseStartNodeName(processInstance.getCommandParam());
ProcessDag processDag = generateFlowDag(taskNodeList,
startNodeNameList, recoveryNodeCodeList, processInstance.getTaskDependType());
if (processDag == null) {
logger.error("processDag is null");
return;
}
// generate process dag
dag = DagHelper.buildDagGraph(processDag);
}
/**
* init task queue
*/
private void initTaskQueue() {
taskFailedSubmit = false;
activeTaskProcessorMaps.clear();
dependFailedTask.clear();
completeTaskList.clear();
errorTaskList.clear();
List<TaskInstance> taskInstanceList = processService.findValidTaskListByProcessId(processInstance.getId());
for (TaskInstance task : taskInstanceList) {
if (task.isTaskComplete()) {
completeTaskList.put(Long.toString(task.getTaskCode()), task);
}
if (task.isConditionsTask() || DagHelper.haveConditionsAfterNode(Long.toString(task.getTaskCode()), dag)) {
continue;
}
if (task.getState().typeIsFailure() && !task.taskCanRetry()) {
errorTaskList.put(Long.toString(task.getTaskCode()), task);
}
}
if (processInstance.isComplementData() && complementListDate.size() == 0) {
Map<String, String> cmdParam = JSONUtils.toMap(processInstance.getCommandParam());
if (cmdParam != null && cmdParam.containsKey(CMDPARAM_COMPLEMENT_DATA_START_DATE)) {
Date start = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_START_DATE));
Date end = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_END_DATE));
List<Schedule> schedules = processService.queryReleaseSchedulerListByProcessDefinitionCode(processInstance.getProcessDefinitionCode());
if (complementListDate.size() == 0 && needComplementProcess()) {
complementListDate = CronUtils.getSelfFireDateList(start, end, schedules);
logger.info(" process definition code:{} complement data: {}",
processInstance.getProcessDefinitionCode(), complementListDate.toString());
if (complementListDate.size() > 0 && Flag.NO == processInstance.getIsSubProcess()) {
processInstance.setScheduleTime(complementListDate.get(0));
processInstance.setGlobalParams(ParameterUtils.curingGlobalParams(
processDefinition.getGlobalParamMap(),
processDefinition.getGlobalParamList(),
CommandType.COMPLEMENT_DATA, processInstance.getScheduleTime()));
processService.updateProcessInstance(processInstance);
}
}
}
}
}
/**
* submit task to execute
*
* @param taskInstance task instance
* @return TaskInstance
*/
private TaskInstance submitTaskExec(TaskInstance taskInstance) {
try {
ITaskProcessor taskProcessor = TaskProcessorFactory.getTaskProcessor(taskInstance.getTaskType());
if (taskInstance.getState() == ExecutionStatus.RUNNING_EXECUTION
&& taskProcessor.getType().equalsIgnoreCase(Constants.COMMON_TASK_TYPE)) {
notifyProcessHostUpdate(taskInstance);
}
boolean submit = taskProcessor.submit(taskInstance, processInstance, masterConfig.getMasterTaskCommitRetryTimes(), masterConfig.getMasterTaskCommitInterval());
if (submit) {
this.taskInstanceHashMap.put(taskInstance.getId(), taskInstance.getTaskCode(), taskInstance);
activeTaskProcessorMaps.put(taskInstance.getId(), taskProcessor);
taskProcessor.run();
addTimeoutCheck(taskInstance);
TaskDefinition taskDefinition = processService.findTaskDefinition(
taskInstance.getTaskCode(),
taskInstance.getTaskDefinitionVersion());
taskInstance.setTaskDefine(taskDefinition);
if (taskProcessor.taskState().typeIsFinished()) {
StateEvent stateEvent = new StateEvent();
stateEvent.setProcessInstanceId(this.processInstance.getId());
stateEvent.setTaskInstanceId(taskInstance.getId());
stateEvent.setExecutionStatus(taskProcessor.taskState());
stateEvent.setType(StateEventType.TASK_STATE_CHANGE);
this.stateEvents.add(stateEvent);
}
return taskInstance;
} else {
logger.error("process id:{} name:{} submit standby task id:{} name:{} failed!",
processInstance.getId(), processInstance.getName(),
taskInstance.getId(), taskInstance.getName());
return null;
}
} catch (Exception e) {
logger.error("submit standby task error", e);
return null;
}
}
private void notifyProcessHostUpdate(TaskInstance taskInstance) {
if (StringUtils.isEmpty(taskInstance.getHost())) {
return;
}
try {
HostUpdateCommand hostUpdateCommand = new HostUpdateCommand();
hostUpdateCommand.setProcessHost(NetUtils.getAddr(masterConfig.getListenPort()));
hostUpdateCommand.setTaskInstanceId(taskInstance.getId());
Host host = new Host(taskInstance.getHost());
nettyExecutorManager.doExecute(host, hostUpdateCommand.convert2Command());
} catch (Exception e) {
logger.error("notify process host update", e);
}
}
private void addTimeoutCheck(TaskInstance taskInstance) {
if (taskTimeoutCheckList.containsKey(taskInstance.getId())) {
return;
}
TaskDefinition taskDefinition = processService.findTaskDefinition(
taskInstance.getTaskCode(),
taskInstance.getTaskDefinitionVersion()
);
taskInstance.setTaskDefine(taskDefinition);
if (TimeoutFlag.OPEN == taskDefinition.getTimeoutFlag() || taskInstance.taskCanRetry()) {
this.taskTimeoutCheckList.put(taskInstance.getId(), taskInstance);
} else {
if (taskInstance.isDependTask() || taskInstance.isSubProcess()) {
this.taskTimeoutCheckList.put(taskInstance.getId(), taskInstance);
}
}
}
/**
* find task instance in db.
* in case more than one task with the same name is submitted at the same time.
*
* @param taskCode task code
* @param taskVersion task version
* @return TaskInstance
*/
private TaskInstance findTaskIfExists(Long taskCode, int taskVersion) {
List<TaskInstance> taskInstanceList = processService.findValidTaskListByProcessId(this.processInstance.getId());
for (TaskInstance taskInstance : taskInstanceList) {
if (taskInstance.getTaskCode() == taskCode && taskInstance.getTaskDefinitionVersion() == taskVersion) {
return taskInstance;
}
}
return null;
}
/**
* encapsulate a task instance
*
* @param processInstance process instance
* @param taskNode taskNode
* @return TaskInstance
*/
private TaskInstance createTaskInstance(ProcessInstance processInstance, TaskNode taskNode) {
TaskInstance taskInstance = findTaskIfExists(taskNode.getCode(), taskNode.getVersion());
if (taskInstance == null) {
taskInstance = new TaskInstance();
taskInstance.setTaskCode(taskNode.getCode());
taskInstance.setTaskDefinitionVersion(taskNode.getVersion());
// task name
taskInstance.setName(taskNode.getName());
// task instance state
taskInstance.setState(ExecutionStatus.SUBMITTED_SUCCESS);
// process instance id
taskInstance.setProcessInstanceId(processInstance.getId());
// task instance type
taskInstance.setTaskType(taskNode.getType().toUpperCase());
// task instance whether alert
taskInstance.setAlertFlag(Flag.NO);
// task instance start time
taskInstance.setStartTime(null);
// task instance flag
taskInstance.setFlag(Flag.YES);
// task dry run flag
taskInstance.setDryRun(processInstance.getDryRun());
// task instance retry times
taskInstance.setRetryTimes(0);
// max task instance retry times
taskInstance.setMaxRetryTimes(taskNode.getMaxRetryTimes());
// retry task instance interval
taskInstance.setRetryInterval(taskNode.getRetryInterval());
//set task param
taskInstance.setTaskParams(taskNode.getTaskParams());
// task instance priority
if (taskNode.getTaskInstancePriority() == null) {
taskInstance.setTaskInstancePriority(Priority.MEDIUM);
} else {
taskInstance.setTaskInstancePriority(taskNode.getTaskInstancePriority());
}
String processWorkerGroup = processInstance.getWorkerGroup();
processWorkerGroup = StringUtils.isBlank(processWorkerGroup) ? DEFAULT_WORKER_GROUP : processWorkerGroup;
String taskWorkerGroup = StringUtils.isBlank(taskNode.getWorkerGroup()) ? processWorkerGroup : taskNode.getWorkerGroup();
Long processEnvironmentCode = Objects.isNull(processInstance.getEnvironmentCode()) ? -1 : processInstance.getEnvironmentCode();
Long taskEnvironmentCode = Objects.isNull(taskNode.getEnvironmentCode()) ? processEnvironmentCode : taskNode.getEnvironmentCode();
if (!processWorkerGroup.equals(DEFAULT_WORKER_GROUP) && taskWorkerGroup.equals(DEFAULT_WORKER_GROUP)) {
taskInstance.setWorkerGroup(processWorkerGroup);
taskInstance.setEnvironmentCode(processEnvironmentCode);
} else {
taskInstance.setWorkerGroup(taskWorkerGroup);
taskInstance.setEnvironmentCode(taskEnvironmentCode);
}
if (!taskInstance.getEnvironmentCode().equals(-1L)) {
Environment environment = processService.findEnvironmentByCode(taskInstance.getEnvironmentCode());
if (Objects.nonNull(environment) && StringUtils.isNotEmpty(environment.getConfig())) {
taskInstance.setEnvironmentConfig(environment.getConfig());
}
}
// delay execution time
taskInstance.setDelayTime(taskNode.getDelayTime());
}
return taskInstance;
}
public void getPreVarPool(TaskInstance taskInstance, Set<String> preTask) {
Map<String, Property> allProperty = new HashMap<>();
Map<String, TaskInstance> allTaskInstance = new HashMap<>();
if (CollectionUtils.isNotEmpty(preTask)) {
for (String preTaskCode : preTask) {
TaskInstance preTaskInstance = completeTaskList.get(preTaskCode);
if (preTaskInstance == null) {
continue;
}
String preVarPool = preTaskInstance.getVarPool();
if (StringUtils.isNotEmpty(preVarPool)) {
List<Property> properties = JSONUtils.toList(preVarPool, Property.class);
for (Property info : properties) {
setVarPoolValue(allProperty, allTaskInstance, preTaskInstance, info);
}
}
}
if (allProperty.size() > 0) {
taskInstance.setVarPool(JSONUtils.toJsonString(allProperty.values()));
}
}
}
private void setVarPoolValue(Map<String, Property> allProperty, Map<String, TaskInstance> allTaskInstance, TaskInstance preTaskInstance, Property thisProperty) {
//for this taskInstance all the param in this part is IN.
thisProperty.setDirect(Direct.IN);
//get the pre taskInstance Property's name
String proName = thisProperty.getProp();
//if a previous node has a property of the same name
if (allProperty.containsKey(proName)) {
//compare the values of the two properties
Property otherPro = allProperty.get(proName);
//if this property's value is empty, use the other one, whether or not the other's value is empty
if (StringUtils.isEmpty(thisProperty.getValue())) {
allProperty.put(proName, otherPro);
//if both values are not empty, use the value from the earlier-finished task
} else if (StringUtils.isNotEmpty(otherPro.getValue())) {
TaskInstance otherTask = allTaskInstance.get(proName);
if (otherTask.getEndTime().getTime() > preTaskInstance.getEndTime().getTime()) {
allProperty.put(proName, thisProperty);
allTaskInstance.put(proName, preTaskInstance);
} else {
allProperty.put(proName, otherPro);
}
} else {
allProperty.put(proName, thisProperty);
allTaskInstance.put(proName, preTaskInstance);
}
} else {
allProperty.put(proName, thisProperty);
allTaskInstance.put(proName, preTaskInstance);
}
}
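/**
* submit the nodes that become ready once the given parent node finishes;
* a null parentNodeCode means the DAG start nodes are submitted.
*/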
private void submitPostNode(String parentNodeCode) {
Set<String> submitTaskNodeList = DagHelper.parsePostNodes(parentNodeCode, skipTaskNodeList, dag, completeTaskList);
List<TaskInstance> taskInstances = new ArrayList<>();
for (String taskNode : submitTaskNodeList) {
TaskNode taskNodeObject = dag.getNode(taskNode);
if (taskInstanceHashMap.containsColumn(taskNodeObject.getCode())) {
continue;
}
TaskInstance task = createTaskInstance(processInstance, taskNodeObject);
taskInstances.add(task);
}
// if previous node success , post node submit
for (TaskInstance task : taskInstances) {
if (readyToSubmitTaskQueue.contains(task)) {
continue;
}
if (completeTaskList.containsKey(Long.toString(task.getTaskCode()))) {
logger.info("task {} has already run success", task.getName());
continue;
}
if (task.getState().typeIsPause() || task.getState().typeIsCancel()) {
logger.info("task {} stopped, the state is {}", task.getName(), task.getState());
} else {
addTaskToStandByList(task);
}
}
submitStandByTask();
updateProcessInstanceState();
}
/**
* determine whether the dependencies of the task node are complete
*
* @return DependResult
*/
private DependResult isTaskDepsComplete(String taskCode) {
Collection<String> startNodes = dag.getBeginNode();
// if the task is a begin vertex, return success directly
if (startNodes.contains(taskCode)) {
return DependResult.SUCCESS;
}
TaskNode taskNode = dag.getNode(taskCode);
List<String> depCodeList = taskNode.getDepList();
for (String depsNode : depCodeList) {
if (!dag.containsNode(depsNode)
|| forbiddenTaskList.containsKey(depsNode)
|| skipTaskNodeList.containsKey(depsNode)) {
continue;
}
// dependencies must be fully completed
if (!completeTaskList.containsKey(depsNode)) {
return DependResult.WAITING;
}
ExecutionStatus depTaskState = completeTaskList.get(depsNode).getState();
if (depTaskState.typeIsPause() || depTaskState.typeIsCancel()) {
return DependResult.NON_EXEC;
}
// ignore task state if current task is condition
if (taskNode.isConditionsTask()) {
continue;
}
if (!dependTaskSuccess(depsNode, taskCode)) {
return DependResult.FAILED;
}
}
logger.info("taskCode: {} completeDependTaskList: {}", taskCode, Arrays.toString(completeTaskList.keySet().toArray()));
return DependResult.SUCCESS;
}
/**
* the dependent node is completed, but we still need to check whether the condition task branch leads to the next node
*/
private boolean dependTaskSuccess(String dependNodeName, String nextNodeName) {
if (dag.getNode(dependNodeName).isConditionsTask()) {
//condition task need check the branch to run
List<String> nextTaskList = DagHelper.parseConditionTask(dependNodeName, skipTaskNodeList, dag, completeTaskList);
if (!nextTaskList.contains(nextNodeName)) {
return false;
}
} else {
ExecutionStatus depTaskState = completeTaskList.get(dependNodeName).getState();
if (depTaskState.typeIsFailure()) {
return false;
}
}
return true;
}
/**
* query task instance by complete state
*
* @param state state
* @return task instance list
*/
private List<TaskInstance> getCompleteTaskByState(ExecutionStatus state) {
List<TaskInstance> resultList = new ArrayList<>();
for (Map.Entry<String, TaskInstance> entry : completeTaskList.entrySet()) {
if (entry.getValue().getState() == state) {
resultList.add(entry.getValue());
}
}
return resultList;
}
/**
* whether there are ongoing tasks
*
* @param state state
* @return ExecutionStatus
*/
private ExecutionStatus runningState(ExecutionStatus state) {
if (state == ExecutionStatus.READY_STOP
|| state == ExecutionStatus.READY_PAUSE
|| state == ExecutionStatus.WAITING_THREAD
|| state == ExecutionStatus.DELAY_EXECUTION) {
// if the running task is not completed, the state remains unchanged
return state;
} else {
return ExecutionStatus.RUNNING_EXECUTION;
}
}
/**
* whether a failed task exists: submit failure, dependency failure, or execution failure (pending retry)
*
* @return Boolean whether has failed task
*/
private boolean hasFailedTask() {
if (this.taskFailedSubmit) {
return true;
}
if (this.errorTaskList.size() > 0) {
return true;
}
return this.dependFailedTask.size() > 0;
}
/**
* process instance failure
*
* @return Boolean whether process instance failed
*/
private boolean processFailed() {
if (hasFailedTask()) {
if (processInstance.getFailureStrategy() == FailureStrategy.END) {
return true;
}
if (processInstance.getFailureStrategy() == FailureStrategy.CONTINUE) {
return readyToSubmitTaskQueue.size() == 0 || activeTaskProcessorMaps.size() == 0;
}
}
return false;
}
/**
* whether task for waiting thread
*
* @return Boolean whether has waiting thread task
*/
private boolean hasWaitingThreadTask() {
List<TaskInstance> waitingList = getCompleteTaskByState(ExecutionStatus.WAITING_THREAD);
return CollectionUtils.isNotEmpty(waitingList);
}
/**
* prepare for pause
* 1. a failed retry task in the preparation queue returns failure directly
* 2. if a paused task exists, complement is not completed, or tasks are pending submission, return pause
* 3. otherwise success
*
* @return ExecutionStatus
*/
private ExecutionStatus processReadyPause() {
if (hasRetryTaskInStandBy()) {
return ExecutionStatus.FAILURE;
}
List<TaskInstance> pauseList = getCompleteTaskByState(ExecutionStatus.PAUSE);
if (CollectionUtils.isNotEmpty(pauseList)
|| !isComplementEnd()
|| readyToSubmitTaskQueue.size() > 0) {
return ExecutionStatus.PAUSE;
} else {
return ExecutionStatus.SUCCESS;
}
}
/**
* generate the latest process instance status by the tasks state
*
* @param instance
* @return process instance execution status
*/
private ExecutionStatus getProcessInstanceState(ProcessInstance instance) {
ExecutionStatus state = instance.getState();
if (activeTaskProcessorMaps.size() > 0 || hasRetryTaskInStandBy()) {
// active task and retry task exists
return runningState(state);
}
// process failure
if (processFailed()) {
return ExecutionStatus.FAILURE;
}
// waiting thread
if (hasWaitingThreadTask()) {
return ExecutionStatus.WAITING_THREAD;
}
// pause
if (state == ExecutionStatus.READY_PAUSE) {
return processReadyPause();
}
// stop
if (state == ExecutionStatus.READY_STOP) {
List<TaskInstance> stopList = getCompleteTaskByState(ExecutionStatus.STOP);
List<TaskInstance> killList = getCompleteTaskByState(ExecutionStatus.KILL);
if (CollectionUtils.isNotEmpty(stopList)
|| CollectionUtils.isNotEmpty(killList)
|| !isComplementEnd()) {
return ExecutionStatus.STOP;
} else {
return ExecutionStatus.SUCCESS;
}
}
// success
if (state == ExecutionStatus.RUNNING_EXECUTION) {
List<TaskInstance> killTasks = getCompleteTaskByState(ExecutionStatus.KILL);
if (readyToSubmitTaskQueue.size() > 0) {
//tasks currently pending submission, no retries, indicating that depend is waiting to complete
return ExecutionStatus.RUNNING_EXECUTION;
} else if (CollectionUtils.isNotEmpty(killTasks)) {
// tasks maybe killed manually
return ExecutionStatus.FAILURE;
} else {
// if the waiting queue is empty and the status is in progress, then success
return ExecutionStatus.SUCCESS;
}
}
return state;
}
/**
* whether complement end
*
* @return Boolean whether is complement end
*/
private boolean isComplementEnd() {
if (!processInstance.isComplementData()) {
return true;
}
try {
Map<String, String> cmdParam = JSONUtils.toMap(processInstance.getCommandParam());
Date endTime = DateUtils.getScheduleDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_END_DATE));
return processInstance.getScheduleTime().equals(endTime);
} catch (Exception e) {
logger.error("complement end failed ", e);
return false;
}
}
/**
* updateProcessInstance process instance state
* after each batch of tasks is executed, the status of the process instance is updated
*/
private void updateProcessInstanceState() {
ProcessInstance instance = processService.findProcessInstanceById(processInstance.getId());
ExecutionStatus state = getProcessInstanceState(instance);
if (processInstance.getState() != state) {
logger.info(
"work flow process instance [id: {}, name:{}], state change from {} to {}, cmd type: {}",
processInstance.getId(), processInstance.getName(),
processInstance.getState(), state,
processInstance.getCommandType());
instance.setState(state);
processService.updateProcessInstance(instance);
processInstance = instance;
StateEvent stateEvent = new StateEvent();
stateEvent.setExecutionStatus(processInstance.getState());
stateEvent.setProcessInstanceId(this.processInstance.getId());
stateEvent.setType(StateEventType.PROCESS_STATE_CHANGE);
this.processStateChangeHandler(stateEvent);
}
}
/**
* get task dependency result
*
* @param taskInstance task instance
* @return DependResult
*/
private DependResult getDependResultForTask(TaskInstance taskInstance) {
return isTaskDepsComplete(Long.toString(taskInstance.getTaskCode()));
}
/**
* add task to standby list
*
* @param taskInstance task instance
*/
private void addTaskToStandByList(TaskInstance taskInstance) {
logger.info("add task to stand by list: {}", taskInstance.getName());
try {
if (!readyToSubmitTaskQueue.contains(taskInstance)) {
readyToSubmitTaskQueue.put(taskInstance);
}
} catch (Exception e) {
logger.error("add task instance to readyToSubmitTaskQueue error, taskName: {}", taskInstance.getName(), e);
}
}
/**
* remove task from stand by list
*
* @param taskInstance task instance
*/
private void removeTaskFromStandbyList(TaskInstance taskInstance) {
logger.info("remove task from stand by list, id: {} name:{}",
taskInstance.getId(),
taskInstance.getName());
try {
readyToSubmitTaskQueue.remove(taskInstance);
} catch (Exception e) {
logger.error("remove task instance from readyToSubmitTaskQueue error, task id:{}, Name: {}",
taskInstance.getId(),
taskInstance.getName(), e);
}
}
/**
* has retry task in standby
*
* @return Boolean whether has retry task in standby
*/
private boolean hasRetryTaskInStandBy() {
for (Iterator<TaskInstance> iter = readyToSubmitTaskQueue.iterator(); iter.hasNext(); ) {
if (iter.next().getState().typeIsFailure()) {
return true;
}
}
return false;
}
/**
* close the on going tasks
*/
private void killAllTasks() {
logger.info("kill called on process instance id: {}, num: {}", processInstance.getId(),
activeTaskProcessorMaps.size());
for (int taskId : activeTaskProcessorMaps.keySet()) {
TaskInstance taskInstance = processService.findTaskInstanceById(taskId);
if (taskInstance == null || taskInstance.getState().typeIsFinished()) {
continue;
}
ITaskProcessor taskProcessor = activeTaskProcessorMaps.get(taskId);
taskProcessor.action(TaskAction.STOP);
if (taskProcessor.taskState().typeIsFinished()) {
StateEvent stateEvent = new StateEvent();
stateEvent.setType(StateEventType.TASK_STATE_CHANGE);
stateEvent.setProcessInstanceId(this.processInstance.getId());
stateEvent.setTaskInstanceId(taskInstance.getId());
stateEvent.setExecutionStatus(taskProcessor.taskState());
this.addStateEvent(stateEvent);
}
}
}
public boolean workFlowFinish() {
return this.processInstance.getState().typeIsFinished();
}
/**
* handling the list of tasks to be submitted
*/
private void submitStandByTask() {
try {
int length = readyToSubmitTaskQueue.size();
for (int i = 0; i < length; i++) {
TaskInstance task = readyToSubmitTaskQueue.peek();
if (task == null) {
continue;
}
// stop a retrying task if forced success happens
if (task.taskCanRetry()) {
TaskInstance retryTask = processService.findTaskInstanceById(task.getId());
if (retryTask != null && retryTask.getState().equals(ExecutionStatus.FORCED_SUCCESS)) {
task.setState(retryTask.getState());
logger.info("task: {} has been forced success, put it into complete task list and stop retrying", task.getName());
removeTaskFromStandbyList(task);
completeTaskList.put(Long.toString(task.getTaskCode()), task);
submitPostNode(Long.toString(task.getTaskCode()));
continue;
}
}
//init varPool only this task is the first time running
if (task.isFirstRun()) {
//get pre task ,get all the task varPool to this task
Set<String> preTask = dag.getPreviousNodes(Long.toString(task.getTaskCode()));
getPreVarPool(task, preTask);
}
DependResult dependResult = getDependResultForTask(task);
if (DependResult.SUCCESS == dependResult) {
if (task.retryTaskIntervalOverTime()) {
int originalId = task.getId();
TaskInstance taskInstance = submitTaskExec(task);
if (taskInstance == null) {
this.taskFailedSubmit = true;
} else {
removeTaskFromStandbyList(task);
if (taskInstance.getId() != originalId) {
activeTaskProcessorMaps.remove(originalId);
}
}
}
} else if (DependResult.FAILED == dependResult) {
// if the dependency fails, the current node is not submitted and the state changes to failure.
dependFailedTask.put(Long.toString(task.getTaskCode()), task);
removeTaskFromStandbyList(task);
logger.info("task {},id:{} depend result : {}", task.getName(), task.getId(), dependResult);
} else if (DependResult.NON_EXEC == dependResult) {
// for some reasons(depend task pause/stop) this task would not be submit
removeTaskFromStandbyList(task);
logger.info("remove task {},id:{} , because depend result : {}", task.getName(), task.getId(), dependResult);
}
}
} catch (Exception e) {
logger.error("submit standby task error", e);
}
}
/**
* get recovery task instance
*
* @param taskId task id
* @return recovery task instance
*/
private TaskInstance getRecoveryTaskInstance(String taskId) {
if (!StringUtils.isNotEmpty(taskId)) {
return null;
}
try {
Integer intId = Integer.valueOf(taskId);
TaskInstance task = processService.findTaskInstanceById(intId);
if (task == null) {
logger.error("start node id cannot be found: {}", taskId);
} else {
return task;
}
} catch (Exception e) {
logger.error("get recovery task instance failed ", e);
}
return null;
}
/**
* get start task instance list
*
* @param cmdParam command param
* @return task instance list
*/
private List<TaskInstance> getStartTaskInstanceList(String cmdParam) {
List<TaskInstance> instanceList = new ArrayList<>();
Map<String, String> paramMap = JSONUtils.toMap(cmdParam);
if (paramMap != null && paramMap.containsKey(CMD_PARAM_RECOVERY_START_NODE_STRING)) {
String[] idList = paramMap.get(CMD_PARAM_RECOVERY_START_NODE_STRING).split(Constants.COMMA);
for (String nodeId : idList) {
TaskInstance task = getRecoveryTaskInstance(nodeId);
if (task != null) {
instanceList.add(task);
}
}
}
return instanceList;
}
/**
* parse "StartNodeNameList" from cmd param
*
* @param cmdParam command param
* @return start node name list
*/
private List<String> parseStartNodeName(String cmdParam) {
List<String> startNodeNameList = new ArrayList<>();
Map<String, String> paramMap = JSONUtils.toMap(cmdParam);
if (paramMap == null) {
return startNodeNameList;
}
if (paramMap.containsKey(CMD_PARAM_START_NODES)) {
startNodeNameList = Arrays.asList(paramMap.get(CMD_PARAM_START_NODES).split(Constants.COMMA));
}
return startNodeNameList;
}
/**
* generate start node code list from parsing command param;
* if "StartNodeIdList" exists in command param, return StartNodeIdList
*
* @return recovery node code list
*/
private List<String> getRecoveryNodeCodeList() {
List<String> recoveryNodeCodeList = new ArrayList<>();
if (CollectionUtils.isNotEmpty(recoverNodeIdList)) {
for (TaskInstance task : recoverNodeIdList) {
recoveryNodeCodeList.add(Long.toString(task.getTaskCode()));
}
}
return recoveryNodeCodeList;
}
/**
* generate flow dag
*
* @param totalTaskNodeList total task node list
* @param startNodeNameList start node name list
* @param recoveryNodeCodeList recovery node code list
* @param depNodeType depend node type
* @return ProcessDag process dag
* @throws Exception exception
*/
public ProcessDag generateFlowDag(List<TaskNode> totalTaskNodeList,
List<String> startNodeNameList,
List<String> recoveryNodeCodeList,
TaskDependType depNodeType) throws Exception {
return DagHelper.generateFlowDag(totalTaskNodeList, startNodeNameList, recoveryNodeCodeList, depNodeType);
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,742 | [Bug] [Common] SnowFlakeUtils will generate duplicate id due to clock backward | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
The SnowFlakeUtils will generate duplicate ids due to clock backward.
This may be caused by our use of `System.currentTimeMillis()` to get the system time: this method follows the wall clock, which can jump forward or backward.
https://stackoverflow.com/questions/2978598/will-system-currenttimemillis-always-return-a-value-previous-calls
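For illustration, one common mitigation — a sketch only, under the assumption that per-process monotonicity is enough, and not necessarily the approach taken in the linked fix — is to anchor the wall clock once and advance it with the monotonic `System.nanoTime()`, so generated timestamps can never move backwards within a process:

```java
import java.util.concurrent.TimeUnit;

/** A sketch of a monotonic millisecond clock; the class and method names are illustrative. */
public final class MonotonicClock {

    private final long wallAnchorMillis = System.currentTimeMillis();
    private final long nanoAnchor = System.nanoTime();

    /** Milliseconds that only move forward, even if the wall clock is reset. */
    public long nowMillis() {
        long elapsedMillis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - nanoAnchor);
        return wallAnchorMillis + elapsedMillis;
    }
}
```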
### What you expected to happen
Generate duplicate Id
### How to reproduce
Hard to reproduce :)
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6742 | https://github.com/apache/dolphinscheduler/pull/6740 | b6824b47419a5abc7ab4279d648417249a74f122 | 0f94577d2613634b268b56b6dbd445bd5528542c | "2021-11-08T12:02:26Z" | java | "2021-11-08T14:26:17Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/SnowFlakeUtils.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.common.utils;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Objects;
public class SnowFlakeUtils {
// start timestamp
private static final long START_TIMESTAMP = 1609430400000L; //2021-01-01 00:00:00
// number of bits for each section of the id
private static final long SEQUENCE_BIT = 13;
private static final long MACHINE_BIT = 2;
private static final long MAX_SEQUENCE = ~(-1L << SEQUENCE_BIT);
// left-shift offsets for each section
private static final long MACHINE_LEFT = SEQUENCE_BIT;
private static final long TIMESTAMP_LEFT = SEQUENCE_BIT + MACHINE_BIT;
private final int machineId;
private long sequence = 0L;
private long lastTimestamp = -1L;
private SnowFlakeUtils() throws SnowFlakeException {
try {
this.machineId = Math.abs(Objects.hash(InetAddress.getLocalHost().getHostName())) % 32;
} catch (UnknownHostException e) {
throw new SnowFlakeException(e.getMessage());
}
}
private static SnowFlakeUtils instance = null;
public static synchronized SnowFlakeUtils getInstance() throws SnowFlakeException {
if (instance == null) {
instance = new SnowFlakeUtils();
}
return instance;
}
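/**
* generate the next id, composed as:
* (timestamp - START_TIMESTAMP) << TIMESTAMP_LEFT | machineId << MACHINE_LEFT | sequence.
* the timestamp comes from the wall clock, so a SnowFlakeException is thrown when the clock moves backwards.
* note that machineId is hashed modulo 32 while only MACHINE_BIT (2) bits are reserved for it.
*/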
public synchronized long nextId() throws SnowFlakeException {
long currStmp = nowTimestamp();
if (currStmp < lastTimestamp) {
throw new SnowFlakeException("Clock moved backwards. Refusing to generate id");
}
if (currStmp == lastTimestamp) {
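            // Same millisecond as the previous id: bump the 13-bit sequence;
            // on wrap-around (back to 0) spin until the next millisecond.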
sequence = (sequence + 1) & MAX_SEQUENCE;
if (sequence == 0L) {
currStmp = getNextMill();
}
} else {
sequence = 0L;
}
lastTimestamp = currStmp;
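        // Bit layout of the result: [ timestamp delta | 2-bit machine id | 13-bit sequence ].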
return (currStmp - START_TIMESTAMP) << TIMESTAMP_LEFT
| machineId << MACHINE_LEFT
| sequence;
}
private long getNextMill() {
long mill = nowTimestamp();
while (mill <= lastTimestamp) {
mill = nowTimestamp();
}
return mill;
}
private long nowTimestamp() {
return System.currentTimeMillis();
}
public static class SnowFlakeException extends Exception {
public SnowFlakeException(String message) {
super(message);
}
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,431 | [Feature][dev] Make sure to synchronize change for sql file | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
When developers change sql files in https://github.com/apache/dolphinscheduler/tree/dev/sql, they sometimes forget to change the `H2` sql file as well. The standalone server cannot start when this happens. Maybe we should
* (better) Add a bot to ensure those file changes stay synchronized
* Add a checklist item for contributors creating a PR in the [PR template][1]
[1]: https://github.com/apache/dolphinscheduler/blob/dev/.github/PULL_REQUEST_TEMPLATE.md
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6431 | https://github.com/apache/dolphinscheduler/pull/6465 | 28f871e58b10262788275d74ba1c8bdc31ad6708 | 98f466e09e2260e247ba1aed3c80a2fcc81fc817 | "2021-09-30T10:04:01Z" | java | "2021-11-10T06:50:05Z" | .github/mergeable.yml | |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,792 | [Bug] [alert] NettyRemotingServer bind 50052 fail | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
NettyRemotingServer fails to bind and the process exits:
io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use
--dev
### What you expected to happen
--
### How to reproduce
--
### Anything else
--
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6792 | https://github.com/apache/dolphinscheduler/pull/6815 | da2a04494723a02b03f05056eb5f164eae4ee705 | e36d18b58890eb18a64ecc2a5181b37125a070f8 | "2021-11-11T07:24:58Z" | java | "2021-11-12T02:51:11Z" | dolphinscheduler-alert/dolphinscheduler-alert-server/src/main/java/org/apache/dolphinscheduler/alert/AlertServer.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.alert;
import static org.apache.dolphinscheduler.common.Constants.ALERT_RPC_PORT;
import org.apache.dolphinscheduler.common.thread.Stopper;
import org.apache.dolphinscheduler.dao.AlertDao;
import org.apache.dolphinscheduler.dao.PluginDao;
import org.apache.dolphinscheduler.dao.entity.Alert;
import org.apache.dolphinscheduler.remote.NettyRemotingServer;
import org.apache.dolphinscheduler.remote.command.CommandType;
import org.apache.dolphinscheduler.remote.config.NettyServerConfig;
import java.io.Closeable;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ComponentScan;
@SpringBootApplication
@ComponentScan(value = {
"org.apache.dolphinscheduler.alert",
"org.apache.dolphinscheduler.dao"
})
public class AlertServer implements Closeable {
private static final Logger log = LoggerFactory.getLogger(AlertServer.class);
private final PluginDao pluginDao;
private final AlertDao alertDao;
private final AlertPluginManager alertPluginManager;
private final AlertSender alertSender;
private final AlertRequestProcessor alertRequestProcessor;
private NettyRemotingServer server;
public AlertServer(PluginDao pluginDao, AlertDao alertDao, AlertPluginManager alertPluginManager, AlertSender alertSender, AlertRequestProcessor alertRequestProcessor) {
this.pluginDao = pluginDao;
this.alertDao = alertDao;
this.alertPluginManager = alertPluginManager;
this.alertSender = alertSender;
this.alertRequestProcessor = alertRequestProcessor;
}
public static void main(String[] args) {
SpringApplication.run(AlertServer.class, args);
}
@PostConstruct
public void start() {
log.info("Starting Alert server");
checkTable();
startServer();
if (alertPluginManager.size() == 0) {
log.warn("No alert plugin, alert sender will exit.");
return;
}
Executors.newScheduledThreadPool(1)
.scheduleAtFixedRate(new Sender(), 5, 5, TimeUnit.SECONDS);
}
@Override
@PreDestroy
public void close() {
server.close();
}
private void checkTable() {
if (!pluginDao.checkPluginDefineTableExist()) {
log.error("Plugin Define Table t_ds_plugin_define Not Exist . Please Create it First !");
System.exit(1);
}
}
private void startServer() {
NettyServerConfig serverConfig = new NettyServerConfig();
serverConfig.setListenPort(ALERT_RPC_PORT);
server = new NettyRemotingServer(serverConfig);
server.registerProcessor(CommandType.ALERT_SEND_REQUEST, alertRequestProcessor);
server.start();
}
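    /**
     * Illustrative sketch only (not part of the original file and not the fix from the
     * linked PR): probing the port up front would turn the Netty bind failure reported
     * in this issue into an explicit, early error message. Note the probe is racy: the
     * port can still be taken between the check and the real bind.
     */
    @SuppressWarnings("unused")
    private static void checkPortAvailable(int port) {
        try (java.net.ServerSocket probe = new java.net.ServerSocket(port)) {
            // port is free; the probe socket closes immediately
        } catch (java.io.IOException e) {
            throw new IllegalStateException("Alert RPC port " + port + " is already in use", e);
        }
    }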
final class Sender implements Runnable {
@Override
public void run() {
if (!Stopper.isRunning()) {
return;
}
try {
final List<Alert> alerts = alertDao.listPendingAlerts();
alertSender.send(alerts);
} catch (Exception e) {
log.error("Failed to send alert", e);
}
}
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,792 | [Bug] [alert] NettyRemotingServer bind 50052 fail | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
NettyRemotingServer fails to bind and the process exits:
io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use
--dev
### What you expected to happen
--
### How to reproduce
--
### Anything else
--
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6792 | https://github.com/apache/dolphinscheduler/pull/6815 | da2a04494723a02b03f05056eb5f164eae4ee705 | e36d18b58890eb18a64ecc2a5181b37125a070f8 | "2021-11-11T07:24:58Z" | java | "2021-11-12T02:51:11Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/ApiApplicationServer.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.api;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.web.servlet.ServletComponentScan;
import org.springframework.boot.web.servlet.support.SpringBootServletInitializer;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.FilterType;
@SpringBootApplication
@ServletComponentScan
@ComponentScan(value = "org.apache.dolphinscheduler",
excludeFilters = @ComponentScan.Filter(type = FilterType.REGEX, pattern = "org.apache.dolphinscheduler.server.*"))
public class ApiApplicationServer extends SpringBootServletInitializer {
public static void main(String[] args) {
SpringApplication.run(ApiApplicationServer.class, args);
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,792 | [Bug] [alert] NettyRemotingServer bind 50052 fail | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
NettyRemotingServer fails to bind and the process exits:
io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use
--dev
### What you expected to happen
--
### How to reproduce
--
### Anything else
--
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6792 | https://github.com/apache/dolphinscheduler/pull/6815 | da2a04494723a02b03f05056eb5f164eae4ee705 | e36d18b58890eb18a64ecc2a5181b37125a070f8 | "2021-11-11T07:24:58Z" | java | "2021-11-12T02:51:11Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/log/LoggerServer.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.log;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.remote.NettyRemotingServer;
import org.apache.dolphinscheduler.remote.command.CommandType;
import org.apache.dolphinscheduler.remote.config.NettyServerConfig;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* logger server
*/
public class LoggerServer {
private static final Logger logger = LoggerFactory.getLogger(LoggerServer.class);
/**
* netty server
*/
private final NettyRemotingServer server;
/**
* netty server config
*/
private final NettyServerConfig serverConfig;
/**
* logger request processor
*/
private final LoggerRequestProcessor requestProcessor;
public LoggerServer(){
this.serverConfig = new NettyServerConfig();
this.serverConfig.setListenPort(Constants.RPC_PORT);
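        // The listen port is fixed; if another process already holds it, start() fails
        // with "Address already in use" (the same failure mode this issue reports for
        // the alert RPC port).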
this.server = new NettyRemotingServer(serverConfig);
this.requestProcessor = new LoggerRequestProcessor();
this.server.registerProcessor(CommandType.GET_LOG_BYTES_REQUEST, requestProcessor, requestProcessor.getExecutor());
this.server.registerProcessor(CommandType.ROLL_VIEW_LOG_REQUEST, requestProcessor, requestProcessor.getExecutor());
this.server.registerProcessor(CommandType.VIEW_WHOLE_LOG_REQUEST, requestProcessor, requestProcessor.getExecutor());
this.server.registerProcessor(CommandType.REMOVE_TAK_LOG_REQUEST, requestProcessor, requestProcessor.getExecutor());
}
/**
* main launches the server from the command line.
* @param args arguments
*/
public static void main(String[] args) {
final LoggerServer server = new LoggerServer();
server.start();
}
/**
* server start
*/
public void start() {
this.server.start();
logger.info("logger server started, listening on port : {}" , Constants.RPC_PORT);
Runtime.getRuntime().addShutdownHook(new Thread() {
@Override
public void run() {
LoggerServer.this.stop();
}
});
}
/**
* stop
*/
public void stop() {
this.server.close();
logger.info("logger server shut down");
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,792 | [Bug] [alert] NettyRemotingServer bind 50052 fail | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
NettyRemotingServer fails to bind and the process exits:
io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use
--dev
### What you expected to happen
--
### How to reproduce
--
### Anything else
--
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6792 | https://github.com/apache/dolphinscheduler/pull/6815 | da2a04494723a02b03f05056eb5f164eae4ee705 | e36d18b58890eb18a64ecc2a5181b37125a070f8 | "2021-11-11T07:24:58Z" | java | "2021-11-12T02:51:11Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/MasterServer.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.master;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.IStoppable;
import org.apache.dolphinscheduler.common.thread.Stopper;
import org.apache.dolphinscheduler.remote.NettyRemotingServer;
import org.apache.dolphinscheduler.remote.command.CommandType;
import org.apache.dolphinscheduler.remote.config.NettyServerConfig;
import org.apache.dolphinscheduler.server.master.config.MasterConfig;
import org.apache.dolphinscheduler.server.master.processor.StateEventProcessor;
import org.apache.dolphinscheduler.server.master.processor.TaskAckProcessor;
import org.apache.dolphinscheduler.server.master.processor.TaskKillResponseProcessor;
import org.apache.dolphinscheduler.server.master.processor.TaskResponseProcessor;
import org.apache.dolphinscheduler.server.master.registry.MasterRegistryClient;
import org.apache.dolphinscheduler.server.master.runner.EventExecuteService;
import org.apache.dolphinscheduler.server.master.runner.MasterSchedulerService;
import org.apache.dolphinscheduler.service.bean.SpringApplicationContext;
import org.apache.dolphinscheduler.service.quartz.QuartzExecutors;
import javax.annotation.PostConstruct;
import org.quartz.SchedulerException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.FilterType;
import org.springframework.transaction.annotation.EnableTransactionManagement;
/**
* master server
*/
@ComponentScan(value = "org.apache.dolphinscheduler", excludeFilters = {
@ComponentScan.Filter(type = FilterType.REGEX, pattern = {
"org.apache.dolphinscheduler.server.worker.*",
"org.apache.dolphinscheduler.server.monitor.*",
"org.apache.dolphinscheduler.server.log.*"
})
})
@EnableTransactionManagement
public class MasterServer implements IStoppable {
/**
* logger of MasterServer
*/
private static final Logger logger = LoggerFactory.getLogger(MasterServer.class);
/**
* master config
*/
@Autowired
private MasterConfig masterConfig;
/**
* spring application context
* only use it for initialization
*/
@Autowired
private SpringApplicationContext springApplicationContext;
/**
* netty remote server
*/
private NettyRemotingServer nettyRemotingServer;
/**
* zk master client
*/
@Autowired
private MasterRegistryClient masterRegistryClient;
/**
* scheduler service
*/
@Autowired
private MasterSchedulerService masterSchedulerService;
@Autowired
private EventExecuteService eventExecuteService;
/**
* master server startup, not use web service
*
* @param args arguments
*/
public static void main(String[] args) {
Thread.currentThread().setName(Constants.THREAD_NAME_MASTER_SERVER);
new SpringApplicationBuilder(MasterServer.class).web(WebApplicationType.NONE).run(args);
}
/**
* run master server
*/
@PostConstruct
public void run() {
// init remoting server
NettyServerConfig serverConfig = new NettyServerConfig();
serverConfig.setListenPort(masterConfig.getListenPort());
this.nettyRemotingServer = new NettyRemotingServer(serverConfig);
this.nettyRemotingServer.registerProcessor(CommandType.TASK_EXECUTE_RESPONSE, new TaskResponseProcessor());
this.nettyRemotingServer.registerProcessor(CommandType.TASK_EXECUTE_ACK, new TaskAckProcessor());
this.nettyRemotingServer.registerProcessor(CommandType.TASK_KILL_RESPONSE, new TaskKillResponseProcessor());
this.nettyRemotingServer.registerProcessor(CommandType.STATE_EVENT_REQUEST, new StateEventProcessor());
this.nettyRemotingServer.start();
// self tolerant
this.masterRegistryClient.init();
this.masterRegistryClient.start();
this.masterRegistryClient.setRegistryStoppable(this);
this.eventExecuteService.init();
this.eventExecuteService.start();
// scheduler start
this.masterSchedulerService.init();
this.masterSchedulerService.start();
// start QuartzExecutors
// TODO: decide what the system should do if an exception occurs here
try {
logger.info("start Quartz server...");
QuartzExecutors.getInstance().start();
} catch (Exception e) {
try {
QuartzExecutors.getInstance().shutdown();
} catch (SchedulerException e1) {
logger.error("QuartzExecutors shutdown failed : " + e1.getMessage(), e1);
}
logger.error("start Quartz failed", e);
}
/**
* register hooks, which are called before the process exits
*/
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
if (Stopper.isRunning()) {
close("shutdownHook");
}
}));
}
/**
* gracefully close
*
* @param cause close cause
*/
public void close(String cause) {
try {
// execute only once
if (Stopper.isStopped()) {
return;
}
logger.info("master server is stopping ..., cause : {}", cause);
// set stop signal is true
Stopper.stop();
try {
// thread sleep 3 seconds for thread quietly stop
Thread.sleep(3000L);
} catch (Exception e) {
logger.warn("thread sleep exception ", e);
}
// close
this.masterSchedulerService.close();
this.nettyRemotingServer.close();
this.masterRegistryClient.closeRegistry();
// close quartz
try {
QuartzExecutors.getInstance().shutdown();
logger.info("Quartz service stopped");
} catch (Exception e) {
logger.warn("Quartz service stopped exception:{}", e.getMessage());
}
// close the Spring context; this invokes methods annotated with @PreDestroy to destroy beans, e.g. ServerNodeManager, HostManager, TaskResponseService, CuratorZookeeperClient, etc.
springApplicationContext.close();
} catch (Exception e) {
logger.error("master server stop exception ", e);
}
}
@Override
public void stop(String cause) {
close(cause);
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,792 | [Bug] [alert] NettyRemotingServer bind 50052 fail | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
NettyRemotingServer fails to bind and the process exits:
io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use
--dev
### What you expected to happen
--
### How to reproduce
--
### Anything else
--
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6792 | https://github.com/apache/dolphinscheduler/pull/6815 | da2a04494723a02b03f05056eb5f164eae4ee705 | e36d18b58890eb18a64ecc2a5181b37125a070f8 | "2021-11-11T07:24:58Z" | java | "2021-11-12T02:51:11Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/worker/WorkerServer.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.worker;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.IStoppable;
import org.apache.dolphinscheduler.common.enums.NodeType;
import org.apache.dolphinscheduler.common.thread.Stopper;
import org.apache.dolphinscheduler.remote.NettyRemotingServer;
import org.apache.dolphinscheduler.remote.command.CommandType;
import org.apache.dolphinscheduler.remote.config.NettyServerConfig;
import org.apache.dolphinscheduler.server.worker.config.WorkerConfig;
import org.apache.dolphinscheduler.server.worker.plugin.TaskPluginManager;
import org.apache.dolphinscheduler.server.worker.processor.DBTaskAckProcessor;
import org.apache.dolphinscheduler.server.worker.processor.DBTaskResponseProcessor;
import org.apache.dolphinscheduler.server.worker.processor.HostUpdateProcessor;
import org.apache.dolphinscheduler.server.worker.processor.TaskExecuteProcessor;
import org.apache.dolphinscheduler.server.worker.processor.TaskKillProcessor;
import org.apache.dolphinscheduler.server.worker.registry.WorkerRegistryClient;
import org.apache.dolphinscheduler.server.worker.runner.RetryReportTaskStatusThread;
import org.apache.dolphinscheduler.server.worker.runner.WorkerManagerThread;
import org.apache.dolphinscheduler.service.alert.AlertClientService;
import org.apache.dolphinscheduler.service.bean.SpringApplicationContext;
import java.util.Set;
import javax.annotation.PostConstruct;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.WebApplicationType;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.FilterType;
import org.springframework.transaction.annotation.EnableTransactionManagement;
/**
* worker server
*/
@ComponentScan(value = "org.apache.dolphinscheduler", excludeFilters = {
@ComponentScan.Filter(type = FilterType.REGEX, pattern = {
"org.apache.dolphinscheduler.server.master.*",
"org.apache.dolphinscheduler.server.monitor.*",
"org.apache.dolphinscheduler.server.log.*"
})
})
@EnableTransactionManagement
public class WorkerServer implements IStoppable {
/**
* logger
*/
private static final Logger logger = LoggerFactory.getLogger(WorkerServer.class);
/**
* netty remote server
*/
private NettyRemotingServer nettyRemotingServer;
/**
* worker config
*/
@Autowired
private WorkerConfig workerConfig;
/**
* spring application context
* only use it for initialization
*/
@Autowired
private SpringApplicationContext springApplicationContext;
/**
* alert model netty remote server
*/
private AlertClientService alertClientService;
@Autowired
private RetryReportTaskStatusThread retryReportTaskStatusThread;
@Autowired
private WorkerManagerThread workerManagerThread;
/**
* worker registry
*/
@Autowired
private WorkerRegistryClient workerRegistryClient;
@Autowired
private TaskPluginManager taskPluginManager;
/**
* worker server startup, not use web service
*
* @param args arguments
*/
public static void main(String[] args) {
Thread.currentThread().setName(Constants.THREAD_NAME_WORKER_SERVER);
new SpringApplicationBuilder(WorkerServer.class).web(WebApplicationType.NONE).run(args);
}
/**
* worker server run
*/
@PostConstruct
public void run() {
// alert-server client registry
alertClientService = new AlertClientService(workerConfig.getAlertListenHost(), Constants.ALERT_RPC_PORT);
// init remoting server
NettyServerConfig serverConfig = new NettyServerConfig();
serverConfig.setListenPort(workerConfig.getListenPort());
this.nettyRemotingServer = new NettyRemotingServer(serverConfig);
this.nettyRemotingServer.registerProcessor(CommandType.TASK_EXECUTE_REQUEST, new TaskExecuteProcessor(alertClientService, taskPluginManager));
this.nettyRemotingServer.registerProcessor(CommandType.TASK_KILL_REQUEST, new TaskKillProcessor());
this.nettyRemotingServer.registerProcessor(CommandType.DB_TASK_ACK, new DBTaskAckProcessor());
this.nettyRemotingServer.registerProcessor(CommandType.DB_TASK_RESPONSE, new DBTaskResponseProcessor());
this.nettyRemotingServer.registerProcessor(CommandType.PROCESS_HOST_UPDATE_REQUEST, new HostUpdateProcessor());
this.nettyRemotingServer.start();
// worker registry
try {
this.workerRegistryClient.registry();
this.workerRegistryClient.setRegistryStoppable(this);
Set<String> workerZkPaths = this.workerRegistryClient.getWorkerZkPaths();
this.workerRegistryClient.handleDeadServer(workerZkPaths, NodeType.WORKER, Constants.DELETE_OP);
} catch (Exception e) {
logger.error(e.getMessage(), e);
throw new RuntimeException(e);
}
// task execute manager
this.workerManagerThread.start();
// retry report task status
this.retryReportTaskStatusThread.start();
/*
* register hooks, which are called before the process exits
*/
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
if (Stopper.isRunning()) {
close("shutdownHook");
}
}));
}
public void close(String cause) {
try {
// execute only once
if (Stopper.isStopped()) {
return;
}
logger.info("worker server is stopping ..., cause : {}", cause);
// set stop signal is true
Stopper.stop();
try {
// thread sleep 3 seconds for thread quietly stop
Thread.sleep(3000L);
} catch (Exception e) {
logger.warn("thread sleep exception", e);
}
// close
this.nettyRemotingServer.close();
this.workerRegistryClient.unRegistry();
this.alertClientService.close();
this.springApplicationContext.close();
} catch (Exception e) {
logger.error("worker server stop exception ", e);
}
}
@Override
public void stop(String cause) {
close(cause);
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,792 | [Bug] [alert] NettyRemotingServer bind 50052 fail | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
NettyRemotingServer fails to bind and the process exits:
io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use
--dev
### What you expected to happen
--
### How to reproduce
--
### Anything else
--
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6792 | https://github.com/apache/dolphinscheduler/pull/6815 | da2a04494723a02b03f05056eb5f164eae4ee705 | e36d18b58890eb18a64ecc2a5181b37125a070f8 | "2021-11-11T07:24:58Z" | java | "2021-11-12T02:51:11Z" | dolphinscheduler-standalone-server/src/main/java/org/apache/dolphinscheduler/server/StandaloneServer.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server;
import org.apache.dolphinscheduler.alert.AlertServer;
import org.apache.dolphinscheduler.api.ApiApplicationServer;
import org.apache.dolphinscheduler.server.master.MasterServer;
import org.apache.dolphinscheduler.server.worker.WorkerServer;
import org.apache.curator.test.TestingServer;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
@SpringBootApplication
public class StandaloneServer {
public static void main(String[] args) throws Exception {
final TestingServer server = new TestingServer(true);
System.setProperty("registry.servers", server.getConnectString());
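        // All five servers below share this one JVM, so every fixed listen port
        // (master, worker, API, alert RPC 50052) must be free on the host; otherwise
        // startup aborts with the "Address already in use" error reported in this issue.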
new SpringApplicationBuilder(
ApiApplicationServer.class,
MasterServer.class,
WorkerServer.class,
AlertServer.class,
PythonGatewayServer.class
).profiles("h2").run(args);
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,771 | [Bug] [MasterServer] failover worker interrupt | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
branch: 2.0
When a worker goes down and starts again, many process instances and task instances are not recovered by fault tolerance and stay in the running state.
And I found that when the Master runs failoverWorker, it is interrupted.
```
grep failover logs/dolphinscheduler-master.2021-11-10_14.*
[root@ds1 apache-dolphinscheduler-2.0.1-alpha-SNAPSHOT-bin]# grep failover logs/dolphinscheduler-master.2021-11-10_14.*
logs/dolphinscheduler-master.2021-11-10_14.0.log:[INFO] 2021-11-10 14:33:47.887 org.apache.dolphinscheduler.server.master.registry.MasterRegistryClient:[333] - start master failover ...
logs/dolphinscheduler-master.2021-11-10_14.0.log:[INFO] 2021-11-10 14:33:47.948 org.apache.dolphinscheduler.server.master.registry.MasterRegistryClient:[337] - failover process list size:0
logs/dolphinscheduler-master.2021-11-10_14.0.log:[INFO] 2021-11-10 14:33:47.949 org.apache.dolphinscheduler.server.master.registry.MasterRegistryClient:[281] - start worker[null] failover ...
logs/dolphinscheduler-master.2021-11-10_14.0.log:[INFO] 2021-11-10 14:33:47.955 org.apache.dolphinscheduler.server.master.registry.MasterRegistryClient:[324] - end worker[null] failover ...
logs/dolphinscheduler-master.2021-11-10_14.0.log:[INFO] 2021-11-10 14:33:47.955 org.apache.dolphinscheduler.server.master.registry.MasterRegistryClient:[348] - master failover end
logs/dolphinscheduler-master.2021-11-10_14.0.log:[INFO] 2021-11-10 14:33:47.958 org.apache.dolphinscheduler.server.master.registry.MasterRegistryClient:[281] - start worker[null] failover ...
logs/dolphinscheduler-master.2021-11-10_14.0.log:[INFO] 2021-11-10 14:33:47.960 org.apache.dolphinscheduler.server.master.registry.MasterRegistryClient:[324] - end worker[null] failover ...
logs/dolphinscheduler-master.2021-11-10_14.0.log:[INFO] 2021-11-10 14:49:06.004 org.apache.dolphinscheduler.server.master.registry.MasterRegistryClient:[281] - start worker[172.28.230.24:1234] failover ...
logs/dolphinscheduler-master.2021-11-10_14.0.log:	at org.apache.dolphinscheduler.server.master.registry.MasterRegistryClient.failoverWorker(MasterRegistryClient.java:307)
logs/dolphinscheduler-master.2021-11-10_14.0.log:	at org.apache.dolphinscheduler.server.master.registry.MasterRegistryClient.failoverServerWhenDown(MasterRegistryClient.java:196)
... (the same two stack frames repeat for each subsequent failover attempt; duplicates omitted)
```
I found that when processInstanceExecCacheManager does not contain `processInstance.getId()`, the code executes a `return`; it should `continue` to the next task instead.
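A minimal sketch of the suggested change (illustrative; the actual fix is in the linked PR):
```java
// fragment of the loop body in MasterRegistryClient#failoverWorker
for (TaskInstance taskInstance : needFailoverTaskInstanceList) {
    // ... existing failover handling ...
    if (!processInstanceExecCacheManager.contains(processInstance.getId())) {
        continue; // was: return, which aborted failover for all remaining tasks
    }
    // ... notify the workflow thread ...
}
```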
### What you expected to happen
failover normally.
### How to reproduce
run a lot of tasks and restart the worker.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6771 | https://github.com/apache/dolphinscheduler/pull/6801 | d168a3cdf7cec9689dc8ae439c4a02ab10922287 | 5741c758b3521885e4ea11fdcb3c66d275537739 | "2021-11-10T08:05:17Z" | java | "2021-11-12T06:19:57Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/registry/MasterRegistryClient.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.master.registry;
import static org.apache.dolphinscheduler.common.Constants.REGISTRY_DOLPHINSCHEDULER_MASTERS;
import static org.apache.dolphinscheduler.common.Constants.REGISTRY_DOLPHINSCHEDULER_NODE;
import static org.apache.dolphinscheduler.common.Constants.SLEEP_TIME_MILLIS;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.IStoppable;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.enums.NodeType;
import org.apache.dolphinscheduler.common.enums.StateEvent;
import org.apache.dolphinscheduler.common.enums.StateEventType;
import org.apache.dolphinscheduler.common.model.Server;
import org.apache.dolphinscheduler.common.thread.ThreadUtils;
import org.apache.dolphinscheduler.common.utils.NetUtils;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.registry.api.ConnectionState;
import org.apache.dolphinscheduler.remote.utils.NamedThreadFactory;
import org.apache.dolphinscheduler.server.builder.TaskExecutionContextBuilder;
import org.apache.dolphinscheduler.server.master.cache.ProcessInstanceExecCacheManager;
import org.apache.dolphinscheduler.server.master.config.MasterConfig;
import org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread;
import org.apache.dolphinscheduler.server.registry.HeartBeatTask;
import org.apache.dolphinscheduler.server.utils.ProcessUtils;
import org.apache.dolphinscheduler.service.process.ProcessService;
import org.apache.dolphinscheduler.service.queue.entity.TaskExecutionContext;
import org.apache.dolphinscheduler.service.registry.RegistryClient;
import org.apache.commons.lang.StringUtils;
import java.util.Collections;
import java.util.Date;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import com.google.common.collect.Sets;
/**
* zookeeper master client
* <p>
* single instance
*/
@Component
public class MasterRegistryClient {
/**
* logger
*/
private static final Logger logger = LoggerFactory.getLogger(MasterRegistryClient.class);
/**
* process service
*/
@Autowired
private ProcessService processService;
@Autowired
private RegistryClient registryClient;
/**
* master config
*/
@Autowired
private MasterConfig masterConfig;
/**
* heartbeat executor
*/
private ScheduledExecutorService heartBeatExecutor;
@Autowired
private ProcessInstanceExecCacheManager processInstanceExecCacheManager;
/**
* master startup time, ms
*/
private long startupTime;
private String localNodePath;
public void init() {
this.startupTime = System.currentTimeMillis();
this.heartBeatExecutor = Executors.newSingleThreadScheduledExecutor(new NamedThreadFactory("HeartBeatExecutor"));
}
public void start() {
String nodeLock = Constants.REGISTRY_DOLPHINSCHEDULER_LOCK_FAILOVER_STARTUP_MASTERS;
try {
// create distributed lock with the root node path of the lock space as /dolphinscheduler/lock/failover/startup-masters
registryClient.getLock(nodeLock);
// master registry
registry();
String registryPath = getMasterPath();
registryClient.handleDeadServer(Collections.singleton(registryPath), NodeType.MASTER, Constants.DELETE_OP);
// init system node
while (!registryClient.checkNodeExists(NetUtils.getHost(), NodeType.MASTER)) {
ThreadUtils.sleep(SLEEP_TIME_MILLIS);
}
// self tolerant
if (registryClient.getActiveMasterNum() == 1) {
removeNodePath(null, NodeType.MASTER, true);
removeNodePath(null, NodeType.WORKER, true);
}
registryClient.subscribe(REGISTRY_DOLPHINSCHEDULER_NODE, new MasterRegistryDataListener());
} catch (Exception e) {
logger.error("master start up exception", e);
} finally {
registryClient.releaseLock(nodeLock);
}
}
public void setRegistryStoppable(IStoppable stoppable) {
registryClient.setStoppable(stoppable);
}
public void closeRegistry() {
// TODO unsubscribe MasterRegistryDataListener
deregister();
}
/**
* remove zookeeper node path
*
* @param path zookeeper node path
* @param nodeType zookeeper node type
* @param failover is failover
*/
public void removeNodePath(String path, NodeType nodeType, boolean failover) {
logger.info("{} node deleted : {}", nodeType, path);
String failoverPath = getFailoverLockPath(nodeType);
try {
registryClient.getLock(failoverPath);
String serverHost = null;
if (!StringUtils.isEmpty(path)) {
serverHost = registryClient.getHostByEventDataPath(path);
if (StringUtils.isEmpty(serverHost)) {
logger.error("server down error: unknown path: {}", path);
return;
}
// handle dead server
registryClient.handleDeadServer(Collections.singleton(path), nodeType, Constants.ADD_OP);
}
//failover server
if (failover) {
failoverServerWhenDown(serverHost, nodeType);
}
} catch (Exception e) {
logger.error("{} server failover failed.", nodeType);
logger.error("failover exception ", e);
} finally {
registryClient.releaseLock(failoverPath);
}
}
/**
* failover server when server down
*
* @param serverHost server host
* @param nodeType zookeeper node type
*/
private void failoverServerWhenDown(String serverHost, NodeType nodeType) {
switch (nodeType) {
case MASTER:
failoverMaster(serverHost);
break;
case WORKER:
failoverWorker(serverHost, true, true);
break;
default:
break;
}
}
/**
* get failover lock path
*
* @param nodeType zookeeper node type
* @return fail over lock path
*/
private String getFailoverLockPath(NodeType nodeType) {
switch (nodeType) {
case MASTER:
return Constants.REGISTRY_DOLPHINSCHEDULER_LOCK_FAILOVER_MASTERS;
case WORKER:
return Constants.REGISTRY_DOLPHINSCHEDULER_LOCK_FAILOVER_WORKERS;
default:
return "";
}
}
/**
* task needs failover if task start before worker starts
*
* @param taskInstance task instance
* @return true if task instance need fail over
*/
private boolean checkTaskInstanceNeedFailover(TaskInstance taskInstance) {
boolean taskNeedFailover = true;
//now no host will execute this task instance,so no need to failover the task
if (taskInstance.getHost() == null) {
return false;
}
// if the worker node exists in zookeeper, we must check the task starts after the worker
if (registryClient.checkNodeExists(taskInstance.getHost(), NodeType.WORKER)) {
//if task start after worker starts, there is no need to failover the task.
if (checkTaskAfterWorkerStart(taskInstance)) {
taskNeedFailover = false;
}
}
return taskNeedFailover;
}
/**
* check task start after the worker server starts.
*
* @param taskInstance task instance
* @return true if task instance start time after worker server start date
*/
private boolean checkTaskAfterWorkerStart(TaskInstance taskInstance) {
if (StringUtils.isEmpty(taskInstance.getHost())) {
return false;
}
Date workerServerStartDate = null;
List<Server> workerServers = registryClient.getServerList(NodeType.WORKER);
for (Server workerServer : workerServers) {
if (taskInstance.getHost().equals(workerServer.getHost() + Constants.COLON + workerServer.getPort())) {
workerServerStartDate = workerServer.getCreateTime();
break;
}
}
if (workerServerStartDate != null) {
return taskInstance.getStartTime().after(workerServerStartDate);
}
return false;
}
/**
* failover worker tasks
* <p>
* 1. kill yarn job if there are yarn jobs in tasks.
* 2. change task state from running to need failover.
* 3. failover all tasks when workerHost is null
*
* @param workerHost worker host
* @param needCheckWorkerAlive need check worker alive
* @param checkOwner need check process instance owner
*/
private void failoverWorker(String workerHost, boolean needCheckWorkerAlive, boolean checkOwner) {
logger.info("start worker[{}] failover ...", workerHost);
List<TaskInstance> needFailoverTaskInstanceList = processService.queryNeedFailoverTaskInstances(workerHost);
for (TaskInstance taskInstance : needFailoverTaskInstanceList) {
if (needCheckWorkerAlive) {
if (!checkTaskInstanceNeedFailover(taskInstance)) {
continue;
}
}
ProcessInstance processInstance = processService.findProcessInstanceDetailById(taskInstance.getProcessInstanceId());
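            // Note: processInstance may be null here, yet the condition below calls
            // processInstance.getHost() before the null check inside the block; with a
            // non-null workerHost and checkOwner == true this can throw an NPE, matching
            // the interrupted failover reported in this issue.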
if (workerHost == null
|| !checkOwner
|| processInstance.getHost().equalsIgnoreCase(getLocalAddress())) {
// only failover the task owned myself if worker down.
if (processInstance == null) {
logger.error("failover error, the process {} of task {} do not exists.",
taskInstance.getProcessInstanceId(), taskInstance.getId());
continue;
}
taskInstance.setProcessInstance(processInstance);
TaskExecutionContext taskExecutionContext = TaskExecutionContextBuilder.get()
.buildTaskInstanceRelatedInfo(taskInstance)
.buildProcessInstanceRelatedInfo(processInstance)
.create();
// only kill yarn job if exists , the local thread has exited
ProcessUtils.killYarnJob(taskExecutionContext);
taskInstance.setState(ExecutionStatus.NEED_FAULT_TOLERANCE);
processService.saveTaskInstance(taskInstance);
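                // Note: the early return below aborts failover for all remaining task
                // instances as soon as one process instance is missing from the cache;
                // the issue suggests this should be a continue.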
if (!processInstanceExecCacheManager.contains(processInstance.getId())) {
return;
}
WorkflowExecuteThread workflowExecuteThreadNotify = processInstanceExecCacheManager.getByProcessInstanceId(processInstance.getId());
StateEvent stateEvent = new StateEvent();
stateEvent.setTaskInstanceId(taskInstance.getId());
stateEvent.setType(StateEventType.TASK_STATE_CHANGE);
stateEvent.setProcessInstanceId(processInstance.getId());
stateEvent.setExecutionStatus(taskInstance.getState());
workflowExecuteThreadNotify.addStateEvent(stateEvent);
}
}
logger.info("end worker[{}] failover ...", workerHost);
}
/**
* failover master tasks
*
* @param masterHost master host
*/
private void failoverMaster(String masterHost) {
logger.info("start master failover ...");
List<ProcessInstance> needFailoverProcessInstanceList = processService.queryNeedFailoverProcessInstances(masterHost);
logger.info("failover process list size:{} ", needFailoverProcessInstanceList.size());
//updateProcessInstance host is null and insert into command
for (ProcessInstance processInstance : needFailoverProcessInstanceList) {
logger.info("failover process instance id: {} host:{}", processInstance.getId(), processInstance.getHost());
if (Constants.NULL.equals(processInstance.getHost())) {
continue;
}
processService.processNeedFailoverProcessInstances(processInstance);
}
failoverWorker(masterHost, true, false);
logger.info("master failover end");
}
/**
* registry
*/
public void registry() {
String address = NetUtils.getAddr(masterConfig.getListenPort());
localNodePath = getMasterPath();
int masterHeartbeatInterval = masterConfig.getMasterHeartbeatInterval();
HeartBeatTask heartBeatTask = new HeartBeatTask(startupTime,
masterConfig.getMasterMaxCpuloadAvg(),
masterConfig.getMasterReservedMemory(),
Sets.newHashSet(getMasterPath()),
Constants.MASTER_TYPE,
registryClient);
registryClient.persistEphemeral(localNodePath, heartBeatTask.getHeartBeatInfo());
registryClient.addConnectionStateListener(newState -> {
if (newState == ConnectionState.RECONNECTED || newState == ConnectionState.SUSPENDED) {
registryClient.persistEphemeral(localNodePath, "");
}
});
this.heartBeatExecutor.scheduleAtFixedRate(heartBeatTask, masterHeartbeatInterval, masterHeartbeatInterval, TimeUnit.SECONDS);
logger.info("master node : {} registry to ZK successfully with heartBeatInterval : {}s", address, masterHeartbeatInterval);
}
public void deregister() {
try {
String address = getLocalAddress();
String localNodePath = getMasterPath();
registryClient.remove(localNodePath);
logger.info("master node : {} unRegistry to register center.", address);
heartBeatExecutor.shutdown();
logger.info("heartbeat executor shutdown");
registryClient.close();
} catch (Exception e) {
logger.error("remove registry path exception ", e);
}
}
/**
* get master path
*/
public String getMasterPath() {
String address = getLocalAddress();
return REGISTRY_DOLPHINSCHEDULER_MASTERS + "/" + address;
}
/**
* get local address
*/
private String getLocalAddress() {
return NetUtils.getAddr(masterConfig.getListenPort());
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,750 | [Bug] [python] Specific user args not pass to user create function | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
In pydolphinscheduler, when I create process definitions and assign a specific tenant to them (https://github.com/apache/dolphinscheduler/blob/484153ef761bd2f404a9cbf5a72e8605b5a376eb/dolphinscheduler-python/pydolphinscheduler/src/pydolphinscheduler/core/process_definition.py#L86-L90), it is not passed to the `create_user` function, and the default value is still used.
I inspected and found that the `User` object always uses default values in https://github.com/apache/dolphinscheduler/blob/484153ef761bd2f404a9cbf5a72e8605b5a376eb/dolphinscheduler-python/pydolphinscheduler/src/pydolphinscheduler/core/process_definition.py#L142-L150
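A minimal sketch of the fix (illustrative; the actual change is in the linked PR): build the `User` from the instance's own attributes instead of the module-level defaults:
```python
# inside ProcessDefinition (sketch; constants as in the linked file)
@property
def user(self) -> User:
    return User(
        self._user,
        ProcessDefinitionDefault.USER_PWD,
        ProcessDefinitionDefault.USER_EMAIL,
        ProcessDefinitionDefault.USER_PHONE,
        self._tenant,  # was: ProcessDefinitionDefault.TENANT
        self._queue,   # was: ProcessDefinitionDefault.QUEUE
        ProcessDefinitionDefault.USER_STATE,
    )
```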
### What you expected to happen
parameter I assigned should be user
### How to reproduce
run example
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6750 | https://github.com/apache/dolphinscheduler/pull/6753 | e2d516f27989618461b72aca899205a152a4cfe6 | cd8205217a08b54228f8530785f49515ff6ec7f0 | "2021-11-09T02:48:18Z" | java | "2021-11-13T08:47:08Z" | dolphinscheduler-python/pydolphinscheduler/src/pydolphinscheduler/core/process_definition.py | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Module process definition, core class for workflow define."""
import json
from typing import Optional, List, Dict, Set
from pydolphinscheduler.constants import (
ProcessDefinitionReleaseState,
ProcessDefinitionDefault,
)
from pydolphinscheduler.core.base import Base
from pydolphinscheduler.java_gateway import launch_gateway
from pydolphinscheduler.side import Tenant, Project, User
class ProcessDefinitionContext:
"""Class process definition context, use when task get process definition from context expression."""
_context_managed_process_definition: Optional["ProcessDefinition"] = None
@classmethod
def set(cls, pd: "ProcessDefinition") -> None:
"""Set attribute self._context_managed_process_definition."""
cls._context_managed_process_definition = pd
@classmethod
def get(cls) -> Optional["ProcessDefinition"]:
"""Get attribute self._context_managed_process_definition."""
return cls._context_managed_process_definition
@classmethod
def delete(cls) -> None:
"""Delete attribute self._context_managed_process_definition."""
cls._context_managed_process_definition = None
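# A minimal usage sketch (illustrative) of this context-manager pattern: tasks
# created inside a ``with ProcessDefinition(...)`` block can discover the active
# process definition via ``ProcessDefinitionContext.get()``, and the reference
# is cleared again on exit:
#
#     with ProcessDefinition("demo") as pd:
#         assert ProcessDefinitionContext.get() is pd
#     assert ProcessDefinitionContext.get() is None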
class ProcessDefinition(Base):
"""process definition object, will define process definition attribute, task, relation.
TODO: maybe we should rename this class, currently use DS object name.
"""
# key attribute for identify ProcessDefinition object
_KEY_ATTR = {
"name",
"project",
"tenant",
"release_state",
"param",
}
_TO_DICT_ATTR = {
"name",
"description",
"_project",
"_tenant",
"worker_group",
"timeout",
"release_state",
"param",
"tasks",
"task_definition_json",
"task_relation_json",
}
def __init__(
self,
name: str,
description: Optional[str] = None,
user: Optional[str] = ProcessDefinitionDefault.USER,
project: Optional[str] = ProcessDefinitionDefault.PROJECT,
tenant: Optional[str] = ProcessDefinitionDefault.TENANT,
queue: Optional[str] = ProcessDefinitionDefault.QUEUE,
worker_group: Optional[str] = ProcessDefinitionDefault.WORKER_GROUP,
timeout: Optional[int] = 0,
release_state: Optional[str] = ProcessDefinitionReleaseState.ONLINE,
param: Optional[List] = None,
):
super().__init__(name, description)
self._user = user
self._project = project
self._tenant = tenant
self._queue = queue
self.worker_group = worker_group
self.timeout = timeout
self.release_state = release_state
self.param = param
self.tasks: dict = {}
# TODO how to fix circular import
self._task_relations: set["TaskRelation"] = set() # noqa: F821
self._process_definition_code = None
def __enter__(self) -> "ProcessDefinition":
ProcessDefinitionContext.set(self)
return self
def __exit__(self, exc_type, exc_val, exc_tb) -> None:
ProcessDefinitionContext.delete()
@property
def tenant(self) -> Tenant:
"""Get attribute tenant."""
return Tenant(self._tenant)
@tenant.setter
def tenant(self, tenant: Tenant) -> None:
"""Set attribute tenant."""
self._tenant = tenant.name
@property
def project(self) -> Project:
"""Get attribute project."""
return Project(self._project)
@project.setter
def project(self, project: Project) -> None:
"""Set attribute project."""
self._project = project.name
@property
def user(self) -> User:
"""Get user object.
For now we just get from python side but not from java gateway side, so it may not be correct.
"""
return User(
self._user,
ProcessDefinitionDefault.USER_PWD,
ProcessDefinitionDefault.USER_EMAIL,
ProcessDefinitionDefault.USER_PHONE,
ProcessDefinitionDefault.TENANT,
ProcessDefinitionDefault.QUEUE,
ProcessDefinitionDefault.USER_STATE,
)
@property
def task_definition_json(self) -> List[Dict]:
"""Return all tasks definition in list of dict."""
if not self.tasks:
return [self.tasks]
else:
return [task.to_dict() for task in self.tasks.values()]
@property
def task_relation_json(self) -> List[Dict]:
"""Return all relation between tasks pair in list of dict."""
if not self.tasks:
return [self.tasks]
else:
self._handle_root_relation()
return [tr.to_dict() for tr in self._task_relations]
# TODO initially DAG's tasks are in the same location with default {x: 0, y: 0}
@property
def task_location(self) -> List[Dict]:
"""Return all tasks location for all process definition.
For now, we only set all location with same x and y valued equal to 0. Because we do not
find a good way to set task locations. This is requests from java gateway interface.
"""
if not self.tasks:
return [self.tasks]
else:
return [{"taskCode": task_code, "x": 0, "y": 0} for task_code in self.tasks]
@property
def task_list(self) -> List["Task"]: # noqa: F821
"""Return list of tasks objects."""
return list(self.tasks.values())
def _handle_root_relation(self):
"""Handle root task property :class:`pydolphinscheduler.core.task.TaskRelation`.
Root tasks in the DAG do not have a dominant upstream node, but we have to add an explicit default
upstream task with task_code equal to `0`. This is required by the java gateway interface.
"""
from pydolphinscheduler.core.task import TaskRelation
post_relation_code = set()
for relation in self._task_relations:
post_relation_code.add(relation.post_task_code)
for task in self.task_list:
if task.code not in post_relation_code:
root_relation = TaskRelation(pre_task_code=0, post_task_code=task.code)
self._task_relations.add(root_relation)
def add_task(self, task: "Task") -> None: # noqa: F821
"""Add a single task to process definition."""
self.tasks[task.code] = task
task._process_definition = self
def add_tasks(self, tasks: List["Task"]) -> None: # noqa: F821
"""Add task sequence to process definition, it a wrapper of :func:`add_task`."""
for task in tasks:
self.add_task(task)
def get_task(self, code: str) -> "Task": # noqa: F821
"""Get task object from process definition by given code."""
if code not in self.tasks:
raise ValueError(
"Task with code %s can not found in process definition %",
(code, self.name),
)
return self.tasks[code]
# TODO which typing should be returned in this case
def get_tasks_by_name(self, name: str) -> Set["Task"]: # noqa: F821
"""Get tasks object by given name, if will return all tasks with this name."""
find = set()
for task in self.tasks.values():
if task.name == name:
find.add(task)
return find
def get_one_task_by_name(self, name: str) -> "Task": # noqa: F821
"""Get exact one task from process definition by given name.
Function always returns one task even though this process definition has more than one task with
this name.
"""
tasks = self.get_tasks_by_name(name)
if not tasks:
raise ValueError(f"Can not find task with name {name}.")
return tasks.pop()
def run(self):
"""Submit and Start ProcessDefinition instance.
Shortcut for function :func:`submit` and function :func:`start`. Only supports manually starting
workflows for now; scheduled runs are coming soon.
:return:
"""
self.submit()
self.start()
def _ensure_side_model_exists(self):
"""Ensure process definition side model exists.
For now, side object including :class:`pydolphinscheduler.side.project.Project`,
:class:`pydolphinscheduler.side.tenant.Tenant`, :class:`pydolphinscheduler.side.user.User`.
If these models do not exist, they would be created with default values from
:class:`pydolphinscheduler.constants.ProcessDefinitionDefault`.
"""
# TODO use metaclass for a more pythonic implementation
self.tenant.create_if_not_exists(self._queue)
# model User has to be created after Tenant is created
self.user.create_if_not_exists()
# Project model needs the User object to exist
self.project.create_if_not_exists(self._user)
def submit(self) -> int:
"""Submit ProcessDefinition instance to java gateway."""
self._ensure_side_model_exists()
gateway = launch_gateway()
self._process_definition_code = gateway.entry_point.createOrUpdateProcessDefinition(
self._user,
self._project,
self.name,
str(self.description) if self.description else "",
str(self.param) if self.param else None,
json.dumps(self.task_location),
self.timeout,
self.worker_group,
self._tenant,
# TODO add serialization function
json.dumps(self.task_relation_json),
json.dumps(self.task_definition_json),
None,
)
return self._process_definition_code
def start(self) -> None:
"""Create and start ProcessDefinition instance.
which post to `start-process-instance` to java gateway
"""
gateway = launch_gateway()
gateway.entry_point.execProcessInstance(
self._user,
self._project,
self.name,
"",
self.worker_group,
24 * 3600,
)
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,750 | [Bug] [python] Specific user args not pass to user create function | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
In pydolphinscheduler, when I create a process definition and assign a specific tenant to it, https://github.com/apache/dolphinscheduler/blob/484153ef761bd2f404a9cbf5a72e8605b5a376eb/dolphinscheduler-python/pydolphinscheduler/src/pydolphinscheduler/core/process_definition.py?_pjax=%23js-repo-pjax-container#L86-L90 it would not be passed to the `create_user` function, and the default value is still used.
I inspected and found out the `User` object always uses the default values in https://github.com/apache/dolphinscheduler/blob/484153ef761bd2f404a9cbf5a72e8605b5a376eb/dolphinscheduler-python/pydolphinscheduler/src/pydolphinscheduler/core/process_definition.py?_pjax=%23js-repo-pjax-container#L142-L150
### What you expected to happen
The parameter I assigned should be used.
### How to reproduce
run example
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6750 | https://github.com/apache/dolphinscheduler/pull/6753 | e2d516f27989618461b72aca899205a152a4cfe6 | cd8205217a08b54228f8530785f49515ff6ec7f0 | "2021-11-09T02:48:18Z" | java | "2021-11-13T08:47:08Z" | dolphinscheduler-python/pydolphinscheduler/tests/core/test_process_definition.py | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Test process definition."""
import pytest
from pydolphinscheduler.constants import (
ProcessDefinitionDefault,
ProcessDefinitionReleaseState,
)
from pydolphinscheduler.core.process_definition import ProcessDefinition
from pydolphinscheduler.core.task import TaskParams
from pydolphinscheduler.side import Tenant, Project, User
from tests.testing.task import Task
TEST_PROCESS_DEFINITION_NAME = "simple-test-process-definition"
@pytest.mark.parametrize("func", ["run", "submit", "start"])
def test_process_definition_key_attr(func):
"""Test process definition have specific functions or attributes."""
with ProcessDefinition(TEST_PROCESS_DEFINITION_NAME) as pd:
assert hasattr(
pd, func
), f"ProcessDefinition instance don't have attribute `{func}`"
@pytest.mark.parametrize(
"name,value",
[
("project", Project(ProcessDefinitionDefault.PROJECT)),
("tenant", Tenant(ProcessDefinitionDefault.TENANT)),
(
"user",
User(
ProcessDefinitionDefault.USER,
ProcessDefinitionDefault.USER_PWD,
ProcessDefinitionDefault.USER_EMAIL,
ProcessDefinitionDefault.USER_PHONE,
ProcessDefinitionDefault.TENANT,
ProcessDefinitionDefault.QUEUE,
ProcessDefinitionDefault.USER_STATE,
),
),
("worker_group", ProcessDefinitionDefault.WORKER_GROUP),
("release_state", ProcessDefinitionReleaseState.ONLINE),
],
)
def test_process_definition_default_value(name, value):
"""Test process definition default attributes."""
with ProcessDefinition(TEST_PROCESS_DEFINITION_NAME) as pd:
assert getattr(pd, name) == value, (
f"ProcessDefinition instance attribute `{name}` not with "
f"except default value `{getattr(pd, name)}`"
)
@pytest.mark.parametrize(
"name,cls,expect",
[
("project", Project, "project"),
("tenant", Tenant, "tenant"),
("worker_group", str, "worker_group"),
],
)
def test_process_definition_set_attr(name, cls, expect):
"""Test process definition set specific attributes."""
with ProcessDefinition(TEST_PROCESS_DEFINITION_NAME) as pd:
setattr(pd, name, cls(expect))
assert getattr(pd, name) == cls(
expect
), f"ProcessDefinition set attribute `{name}` do not work expect"
def test_process_definition_to_dict_without_task():
"""Test process definition function to_dict without task."""
expect = {
"name": TEST_PROCESS_DEFINITION_NAME,
"description": None,
"project": ProcessDefinitionDefault.PROJECT,
"tenant": ProcessDefinitionDefault.TENANT,
"workerGroup": ProcessDefinitionDefault.WORKER_GROUP,
"timeout": 0,
"releaseState": ProcessDefinitionReleaseState.ONLINE,
"param": None,
"tasks": {},
"taskDefinitionJson": [{}],
"taskRelationJson": [{}],
}
with ProcessDefinition(TEST_PROCESS_DEFINITION_NAME) as pd:
assert pd.to_dict() == expect
def test_process_definition_simple():
"""Test process definition simple create workflow, including process definition, task, relation define."""
expect_tasks_num = 5
with ProcessDefinition(TEST_PROCESS_DEFINITION_NAME) as pd:
for i in range(expect_tasks_num):
task_params = TaskParams(raw_script=f"test-raw-script-{i}")
curr_task = Task(
name=f"task-{i}", task_type=f"type-{i}", task_params=task_params
)
# Set deps task i as i-1 parent
if i > 0:
pre_task = pd.get_one_task_by_name(f"task-{i - 1}")
curr_task.set_upstream(pre_task)
assert len(pd.tasks) == expect_tasks_num
# Test if task process_definition same as origin one
task: Task = pd.get_one_task_by_name("task-0")
assert pd is task.process_definition
# Test if all tasks with expect deps
for i in range(expect_tasks_num):
task: Task = pd.get_one_task_by_name(f"task-{i}")
if i == 0:
assert task._upstream_task_codes == set()
assert task._downstream_task_codes == {
pd.get_one_task_by_name("task-1").code
}
elif i == expect_tasks_num - 1:
assert task._upstream_task_codes == {
pd.get_one_task_by_name(f"task-{i - 1}").code
}
assert task._downstream_task_codes == set()
else:
assert task._upstream_task_codes == {
pd.get_one_task_by_name(f"task-{i - 1}").code
}
assert task._downstream_task_codes == {
pd.get_one_task_by_name(f"task-{i + 1}").code
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,620 | [Feature][community] Add version selection button for our bug template | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
I found out we do not have a version selection button in our bug report template. Having version selection will save maintainers time, otherwise they have to ask the reporter, such as in https://github.com/apache/dolphinscheduler/issues/6613#issuecomment-952944539
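A hedged sketch of such a field using GitHub issue forms (`type: dropdown` is a standard issue-form field; the option list below is illustrative only, not the final one):

```yaml
- type: dropdown
  attributes:
    label: Version
    description: Which version of DolphinScheduler are you running?
    options: # illustrative options, adjust to the actual release lines
      - dev
      - 2.0.0
      - 1.3.9
  validations:
    required: true
```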
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6620 | https://github.com/apache/dolphinscheduler/pull/6793 | 60457284286b06007c06a1efef7f9706ca848320 | 0f92a1f54118d91c7c01e6618e3c232faf59d476 | "2021-10-27T13:47:25Z" | java | "2021-11-13T08:47:42Z" | .github/ISSUE_TEMPLATE/bug-report.yml | #
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
name: Bug report
title: "[Bug] [Module Name] Bug title "
description: Problems and issues with code of Apache Dolphinscheduler
labels: [ "bug", "Waiting for reply" ]
body:
- type: markdown
attributes:
value: |
Please make sure what you are reporting is indeed a bug with reproducible steps, if you want to ask questions
or share ideas, please [subscribe to our mailing list](mailto:dev-subscribe@dolphinscheduler.apache.org) and sent
emails to [our mailing list](mailto:dev@dolphinscheduler.apache.org), you can also head to our
[Discussion](https://github.com/apache/dolphinscheduler/discussions) tab.
For better global communication, Please write in English.
If you feel the description in English is not clear, then you can append description in Chinese, thanks!
- type: checkboxes
attributes:
label: Search before asking
description: >
Please make sure to search in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue)
first to see whether the same issue was reported already.
options:
- label: >
I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found
no similar issues.
required: true
- type: textarea
attributes:
label: What happened
description: Describe what happened.
placeholder: >
Please provide the context in which the problem occurred and explain what happened
validations:
required: true
- type: textarea
attributes:
label: What you expected to happen
description: What do you think went wrong?
placeholder: >
Please explain why you think the behaviour is erroneous. It is extremely helpful if you copy and paste
the fragment of logs showing the exact error messages or wrong behaviour and screenshots for
UI problems. You can include files by dragging and dropping them here.
**NOTE**: please copy and paste texts instead of taking screenshots of them for easy future search.
validations:
required: true
- type: textarea
attributes:
label: How to reproduce
description: >
What should we do to reproduce the problem? If you are not able to provide a reproducible case,
please open a [Discussion](https://github.com/apache/dolphinscheduler/discussions) instead.
placeholder: >
Please make sure you provide a reproducible step-by-step case of how to reproduce the problem
as minimally and precisely as possible. Keep in mind we do not have access to your deployment.
Remember that non-reproducible issues will be closed! Opening a discussion is recommended as a
first step.
validations:
required: true
- type: textarea
attributes:
label: Anything else
description: Anything else we need to know?
placeholder: >
How often does this problem occur? (Once? Every time? Only when certain conditions are met?)
Any relevant logs to include? Put them here inside fenced
``` ``` blocks or inside a collapsable details tag if it's too long:
<details><summary>x.log</summary> lots of stuff </details>
- type: checkboxes
attributes:
label: Are you willing to submit PR?
description: >
This is absolutely not required, but we are happy to guide you in the contribution process
especially if you already have a good understanding of how to implement the fix.
Dolphinscheduler is a totally community-driven project and we love to bring new contributors in.
options:
- label: Yes I am willing to submit a PR!
- type: checkboxes
attributes:
label: Code of Conduct
description: |
The Code of Conduct helps create a safe space for everyone. We require that everyone agrees to it.
options:
- label: >
I agree to follow this project's
[Code of Conduct](https://www.apache.org/foundation/policies/conduct)
required: true
- type: markdown
attributes:
value: "Thanks for completing our form!"
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,838 | [Feature][python] Add parameter schedule to process definition | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Python API `ProcessDefinition` does not support schedule for now; we should add this to it so users can set the workflow schedule parameter. https://github.com/apache/dolphinscheduler/blob/dev/dolphinscheduler-python/pydolphinscheduler/examples/tutorial.py
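For illustration, a hedged sketch of how the interface could look once schedule support lands (the parameter names `schedule` and `start_time` below are assumptions, not the merged API):

```python
from pydolphinscheduler.core.process_definition import ProcessDefinition
from pydolphinscheduler.tasks.shell import Shell

# Hypothetical schedule arguments; names and value formats are assumptions.
with ProcessDefinition(
    name="tutorial",
    schedule="0 0 0 * * ? *",  # Quartz-style cron expression
    start_time="2021-01-01",   # date the schedule becomes effective
    tenant="tenant_exists",
) as pd:
    task = Shell(name="task", command="echo hello pydolphinscheduler")
    pd.submit()
```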
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6838 | https://github.com/apache/dolphinscheduler/pull/6664 | 089f73ebe4b6fcdb8c02b9098ebe86a8de183ace | e76cf77040ae558461c54730c27eb0fbe28bc54f | "2021-11-13T07:04:12Z" | java | "2021-11-13T09:21:40Z" | dolphinscheduler-python/pydolphinscheduler/examples/tutorial.py | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
r"""
A tutorial example takes you to experience pydolphinscheduler.
After the tutorial.py file is submitted to the Apache DolphinScheduler server, a DAG would be created,
and the workflow DAG graph is as below:
--> task_child_one
/ \
task_parent --> --> task_union
\ /
--> task_child_two
it will instantiate and run all the tasks it has.
"""
from pydolphinscheduler.core.process_definition import ProcessDefinition
from pydolphinscheduler.tasks.shell import Shell
with ProcessDefinition(name="tutorial", tenant="tenant_exists") as pd:
task_parent = Shell(name="task_parent", command="echo hello pydolphinscheduler")
task_child_one = Shell(name="task_child_one", command="echo 'child one'")
task_child_two = Shell(name="task_child_two", command="echo 'child two'")
task_union = Shell(name="task_union", command="echo union")
task_group = [task_child_one, task_child_two]
task_parent.set_downstream(task_group)
task_union << task_group
pd.run()
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,838 | [Feature][python] Add parameter schedule to process definition | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Python API `ProcessDefinition` does not support schedule for now; we should add this to it so users can set the workflow schedule parameter. https://github.com/apache/dolphinscheduler/blob/dev/dolphinscheduler-python/pydolphinscheduler/examples/tutorial.py
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6838 | https://github.com/apache/dolphinscheduler/pull/6664 | 089f73ebe4b6fcdb8c02b9098ebe86a8de183ace | e76cf77040ae558461c54730c27eb0fbe28bc54f | "2021-11-13T07:04:12Z" | java | "2021-11-13T09:21:40Z" | dolphinscheduler-python/pydolphinscheduler/requirements_dev.txt | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# testing
pytest~=6.2.5
# code linting and formatting
flake8
flake8-docstrings
flake8-black
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,838 | [Feature][python] Add parameter schedule to process definition | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Python API `ProcessDefinition` does not support schedule for now; we should add this to it so users can set the workflow schedule parameter. https://github.com/apache/dolphinscheduler/blob/dev/dolphinscheduler-python/pydolphinscheduler/examples/tutorial.py
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6838 | https://github.com/apache/dolphinscheduler/pull/6664 | 089f73ebe4b6fcdb8c02b9098ebe86a8de183ace | e76cf77040ae558461c54730c27eb0fbe28bc54f | "2021-11-13T07:04:12Z" | java | "2021-11-13T09:21:40Z" | dolphinscheduler-python/pydolphinscheduler/src/pydolphinscheduler/constants.py | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Constants for pydolphinscheduler."""
class ProcessDefinitionReleaseState:
"""Constants for :class:`pydolphinscheduler.core.process_definition.ProcessDefinition` release state."""
ONLINE: str = "ONLINE"
OFFLINE: str = "OFFLINE"
class ProcessDefinitionDefault:
"""Constants default value for :class:`pydolphinscheduler.core.process_definition.ProcessDefinition`."""
PROJECT: str = "project-pydolphin"
TENANT: str = "tenant_pydolphin"
USER: str = "userPythonGateway"
# TODO simple set password same as username
USER_PWD: str = "userPythonGateway"
USER_EMAIL: str = "userPythonGateway@dolphinscheduler.com"
USER_PHONE: str = "11111111111"
USER_STATE: int = 1
QUEUE: str = "queuePythonGateway"
WORKER_GROUP: str = "default"
class TaskPriority(str):
"""Constants for task priority."""
HIGHEST = "HIGHEST"
HIGH = "HIGH"
MEDIUM = "MEDIUM"
LOW = "LOW"
LOWEST = "LOWEST"
class TaskFlag(str):
"""Constants for task flag."""
YES = "YES"
NO = "NO"
class TaskTimeoutFlag(str):
"""Constants for task timeout flag."""
CLOSE = "CLOSE"
class TaskType(str):
"""Constants for task type, it will also show you which kind we support up to now."""
SHELL = "SHELL"
class DefaultTaskCodeNum(str):
"""Constants and default value for default task code number."""
DEFAULT = 1
class JavaGatewayDefault(str):
"""Constants and default value for java gateway."""
RESULT_MESSAGE_KEYWORD = "msg"
RESULT_MESSAGE_SUCCESS = "success"
RESULT_STATUS_KEYWORD = "status"
RESULT_STATUS_SUCCESS = "SUCCESS"
RESULT_DATA = "data"
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,838 | [Feature][python] Add parameter schedule to process definition | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Python API `ProcessDefinition` does not support schedule for now; we should add this to it so users can set the workflow schedule parameter. https://github.com/apache/dolphinscheduler/blob/dev/dolphinscheduler-python/pydolphinscheduler/examples/tutorial.py
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6838 | https://github.com/apache/dolphinscheduler/pull/6664 | 089f73ebe4b6fcdb8c02b9098ebe86a8de183ace | e76cf77040ae558461c54730c27eb0fbe28bc54f | "2021-11-13T07:04:12Z" | java | "2021-11-13T09:21:40Z" | dolphinscheduler-python/pydolphinscheduler/src/pydolphinscheduler/core/process_definition.py | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Module process definition, core class for workflow define."""
import json
from typing import Optional, List, Dict, Set
from pydolphinscheduler.constants import (
ProcessDefinitionReleaseState,
ProcessDefinitionDefault,
)
from pydolphinscheduler.core.base import Base
from pydolphinscheduler.java_gateway import launch_gateway
from pydolphinscheduler.side import Tenant, Project, User
class ProcessDefinitionContext:
"""Class process definition context, use when task get process definition from context expression."""
_context_managed_process_definition: Optional["ProcessDefinition"] = None
@classmethod
def set(cls, pd: "ProcessDefinition") -> None:
"""Set attribute self._context_managed_process_definition."""
cls._context_managed_process_definition = pd
@classmethod
def get(cls) -> Optional["ProcessDefinition"]:
"""Get attribute self._context_managed_process_definition."""
return cls._context_managed_process_definition
@classmethod
def delete(cls) -> None:
"""Delete attribute self._context_managed_process_definition."""
cls._context_managed_process_definition = None
class ProcessDefinition(Base):
"""process definition object, will define process definition attribute, task, relation.
TODO: maybe we should rename this class, currently use DS object name.
"""
# key attribute for identify ProcessDefinition object
_KEY_ATTR = {
"name",
"project",
"tenant",
"release_state",
"param",
}
_TO_DICT_ATTR = {
"name",
"description",
"_project",
"_tenant",
"worker_group",
"timeout",
"release_state",
"param",
"tasks",
"task_definition_json",
"task_relation_json",
}
def __init__(
self,
name: str,
description: Optional[str] = None,
user: Optional[str] = ProcessDefinitionDefault.USER,
project: Optional[str] = ProcessDefinitionDefault.PROJECT,
tenant: Optional[str] = ProcessDefinitionDefault.TENANT,
queue: Optional[str] = ProcessDefinitionDefault.QUEUE,
worker_group: Optional[str] = ProcessDefinitionDefault.WORKER_GROUP,
timeout: Optional[int] = 0,
release_state: Optional[str] = ProcessDefinitionReleaseState.ONLINE,
param: Optional[List] = None,
):
super().__init__(name, description)
self._user = user
self._project = project
self._tenant = tenant
self._queue = queue
self.worker_group = worker_group
self.timeout = timeout
self.release_state = release_state
self.param = param
self.tasks: dict = {}
# TODO how to fix circular import
self._task_relations: set["TaskRelation"] = set() # noqa: F821
self._process_definition_code = None
def __enter__(self) -> "ProcessDefinition":
ProcessDefinitionContext.set(self)
return self
def __exit__(self, exc_type, exc_val, exc_tb) -> None:
ProcessDefinitionContext.delete()
@property
def tenant(self) -> Tenant:
"""Get attribute tenant."""
return Tenant(self._tenant)
@tenant.setter
def tenant(self, tenant: Tenant) -> None:
"""Set attribute tenant."""
self._tenant = tenant.name
@property
def project(self) -> Project:
"""Get attribute project."""
return Project(self._project)
@project.setter
def project(self, project: Project) -> None:
"""Set attribute project."""
self._project = project.name
@property
def user(self) -> User:
"""Get user object.
For now we just get from python side but not from java gateway side, so it may not be correct.
"""
return User(
self._user,
ProcessDefinitionDefault.USER_PWD,
ProcessDefinitionDefault.USER_EMAIL,
ProcessDefinitionDefault.USER_PHONE,
self._tenant,
self._queue,
ProcessDefinitionDefault.USER_STATE,
)
@property
def task_definition_json(self) -> List[Dict]:
"""Return all tasks definition in list of dict."""
if not self.tasks:
return [self.tasks]
else:
return [task.to_dict() for task in self.tasks.values()]
@property
def task_relation_json(self) -> List[Dict]:
"""Return all relation between tasks pair in list of dict."""
if not self.tasks:
return [self.tasks]
else:
self._handle_root_relation()
return [tr.to_dict() for tr in self._task_relations]
# TODO initially DAG's tasks are in the same location with default {x: 0, y: 0}
@property
def task_location(self) -> List[Dict]:
"""Return all tasks location for all process definition.
For now, we only set all location with same x and y valued equal to 0. Because we do not
find a good way to set task locations. This is requests from java gateway interface.
"""
if not self.tasks:
return [self.tasks]
else:
return [{"taskCode": task_code, "x": 0, "y": 0} for task_code in self.tasks]
@property
def task_list(self) -> List["Task"]: # noqa: F821
"""Return list of tasks objects."""
return list(self.tasks.values())
def _handle_root_relation(self):
"""Handle root task property :class:`pydolphinscheduler.core.task.TaskRelation`.
Root tasks in the DAG do not have a dominant upstream node, but we have to add an explicit default
upstream task with task_code equal to `0`. This is required by the java gateway interface.
"""
from pydolphinscheduler.core.task import TaskRelation
post_relation_code = set()
for relation in self._task_relations:
post_relation_code.add(relation.post_task_code)
for task in self.task_list:
if task.code not in post_relation_code:
root_relation = TaskRelation(pre_task_code=0, post_task_code=task.code)
self._task_relations.add(root_relation)
def add_task(self, task: "Task") -> None: # noqa: F821
"""Add a single task to process definition."""
self.tasks[task.code] = task
task._process_definition = self
def add_tasks(self, tasks: List["Task"]) -> None: # noqa: F821
"""Add task sequence to process definition, it a wrapper of :func:`add_task`."""
for task in tasks:
self.add_task(task)
def get_task(self, code: str) -> "Task": # noqa: F821
"""Get task object from process definition by given code."""
if code not in self.tasks:
raise ValueError(
"Task with code %s can not found in process definition %",
(code, self.name),
)
return self.tasks[code]
# TODO which typing should be returned in this case
def get_tasks_by_name(self, name: str) -> Set["Task"]: # noqa: F821
"""Get tasks object by given name, if will return all tasks with this name."""
find = set()
for task in self.tasks.values():
if task.name == name:
find.add(task)
return find
def get_one_task_by_name(self, name: str) -> "Task": # noqa: F821
"""Get exact one task from process definition by given name.
Function always returns one task even though this process definition has more than one task with
this name.
"""
tasks = self.get_tasks_by_name(name)
if not tasks:
raise ValueError(f"Can not find task with name {name}.")
return tasks.pop()
def run(self):
"""Submit and Start ProcessDefinition instance.
Shortcut for function :func:`submit` and function :func:`start`. Only supports manually starting
workflows for now; scheduled runs are coming soon.
:return:
"""
self.submit()
self.start()
def _ensure_side_model_exists(self):
"""Ensure process definition side model exists.
For now, side object including :class:`pydolphinscheduler.side.project.Project`,
:class:`pydolphinscheduler.side.tenant.Tenant`, :class:`pydolphinscheduler.side.user.User`.
If these models do not exist, they would be created with default values from
:class:`pydolphinscheduler.constants.ProcessDefinitionDefault`.
"""
# TODO use metaclass for a more pythonic implementation
self.tenant.create_if_not_exists(self._queue)
# model User has to be created after Tenant is created
self.user.create_if_not_exists()
# Project model needs the User object to exist
self.project.create_if_not_exists(self._user)
def submit(self) -> int:
"""Submit ProcessDefinition instance to java gateway."""
self._ensure_side_model_exists()
gateway = launch_gateway()
self._process_definition_code = gateway.entry_point.createOrUpdateProcessDefinition(
self._user,
self._project,
self.name,
str(self.description) if self.description else "",
str(self.param) if self.param else None,
json.dumps(self.task_location),
self.timeout,
self.worker_group,
self._tenant,
# TODO add serialization function
json.dumps(self.task_relation_json),
json.dumps(self.task_definition_json),
None,
)
return self._process_definition_code
def start(self) -> None:
"""Create and start ProcessDefinition instance.
which post to `start-process-instance` to java gateway
"""
gateway = launch_gateway()
gateway.entry_point.execProcessInstance(
self._user,
self._project,
self.name,
"",
self.worker_group,
24 * 3600,
)
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,838 | [Feature][python] Add parameter schedule to process definition | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Python API `ProcessDefinition` does not support schedule for now; we should add this to it so users can set the workflow schedule parameter. https://github.com/apache/dolphinscheduler/blob/dev/dolphinscheduler-python/pydolphinscheduler/examples/tutorial.py
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6838 | https://github.com/apache/dolphinscheduler/pull/6664 | 089f73ebe4b6fcdb8c02b9098ebe86a8de183ace | e76cf77040ae558461c54730c27eb0fbe28bc54f | "2021-11-13T07:04:12Z" | java | "2021-11-13T09:21:40Z" | dolphinscheduler-python/pydolphinscheduler/src/pydolphinscheduler/utils/date.py | |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,838 | [Feature][python] Add parameter schedule to process definition | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Python API `ProcessDefinition` does not support schedule for now; we should add this to it so users can set the workflow schedule parameter. https://github.com/apache/dolphinscheduler/blob/dev/dolphinscheduler-python/pydolphinscheduler/examples/tutorial.py
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6838 | https://github.com/apache/dolphinscheduler/pull/6664 | 089f73ebe4b6fcdb8c02b9098ebe86a8de183ace | e76cf77040ae558461c54730c27eb0fbe28bc54f | "2021-11-13T07:04:12Z" | java | "2021-11-13T09:21:40Z" | dolphinscheduler-python/pydolphinscheduler/tests/core/test_process_definition.py | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Test process definition."""
import pytest
from pydolphinscheduler.constants import (
ProcessDefinitionDefault,
ProcessDefinitionReleaseState,
)
from pydolphinscheduler.core.process_definition import ProcessDefinition
from pydolphinscheduler.core.task import TaskParams
from pydolphinscheduler.side import Tenant, Project, User
from tests.testing.task import Task
TEST_PROCESS_DEFINITION_NAME = "simple-test-process-definition"
@pytest.mark.parametrize("func", ["run", "submit", "start"])
def test_process_definition_key_attr(func):
"""Test process definition have specific functions or attributes."""
with ProcessDefinition(TEST_PROCESS_DEFINITION_NAME) as pd:
assert hasattr(
pd, func
), f"ProcessDefinition instance don't have attribute `{func}`"
@pytest.mark.parametrize(
"name,value",
[
("project", Project(ProcessDefinitionDefault.PROJECT)),
("tenant", Tenant(ProcessDefinitionDefault.TENANT)),
(
"user",
User(
ProcessDefinitionDefault.USER,
ProcessDefinitionDefault.USER_PWD,
ProcessDefinitionDefault.USER_EMAIL,
ProcessDefinitionDefault.USER_PHONE,
ProcessDefinitionDefault.TENANT,
ProcessDefinitionDefault.QUEUE,
ProcessDefinitionDefault.USER_STATE,
),
),
("worker_group", ProcessDefinitionDefault.WORKER_GROUP),
("release_state", ProcessDefinitionReleaseState.ONLINE),
],
)
def test_process_definition_default_value(name, value):
"""Test process definition default attributes."""
with ProcessDefinition(TEST_PROCESS_DEFINITION_NAME) as pd:
assert getattr(pd, name) == value, (
f"ProcessDefinition instance attribute `{name}` not with "
f"except default value `{getattr(pd, name)}`"
)
@pytest.mark.parametrize(
"name,cls,expect",
[
("project", Project, "project"),
("tenant", Tenant, "tenant"),
("worker_group", str, "worker_group"),
],
)
def test_process_definition_set_attr(name, cls, expect):
"""Test process definition set specific attributes."""
with ProcessDefinition(TEST_PROCESS_DEFINITION_NAME) as pd:
setattr(pd, name, cls(expect))
assert getattr(pd, name) == cls(
expect
), f"ProcessDefinition set attribute `{name}` do not work expect"
def test_process_definition_to_dict_without_task():
"""Test process definition function to_dict without task."""
expect = {
"name": TEST_PROCESS_DEFINITION_NAME,
"description": None,
"project": ProcessDefinitionDefault.PROJECT,
"tenant": ProcessDefinitionDefault.TENANT,
"workerGroup": ProcessDefinitionDefault.WORKER_GROUP,
"timeout": 0,
"releaseState": ProcessDefinitionReleaseState.ONLINE,
"param": None,
"tasks": {},
"taskDefinitionJson": [{}],
"taskRelationJson": [{}],
}
with ProcessDefinition(TEST_PROCESS_DEFINITION_NAME) as pd:
assert pd.to_dict() == expect
def test_process_definition_simple():
"""Test process definition simple create workflow, including process definition, task, relation define."""
expect_tasks_num = 5
with ProcessDefinition(TEST_PROCESS_DEFINITION_NAME) as pd:
for i in range(expect_tasks_num):
task_params = TaskParams(raw_script=f"test-raw-script-{i}")
curr_task = Task(
name=f"task-{i}", task_type=f"type-{i}", task_params=task_params
)
# Set deps task i as i-1 parent
if i > 0:
pre_task = pd.get_one_task_by_name(f"task-{i - 1}")
curr_task.set_upstream(pre_task)
assert len(pd.tasks) == expect_tasks_num
# Test if task process_definition same as origin one
task: Task = pd.get_one_task_by_name("task-0")
assert pd is task.process_definition
# Test if all tasks with expect deps
for i in range(expect_tasks_num):
task: Task = pd.get_one_task_by_name(f"task-{i}")
if i == 0:
assert task._upstream_task_codes == set()
assert task._downstream_task_codes == {
pd.get_one_task_by_name("task-1").code
}
elif i == expect_tasks_num - 1:
assert task._upstream_task_codes == {
pd.get_one_task_by_name(f"task-{i - 1}").code
}
assert task._downstream_task_codes == set()
else:
assert task._upstream_task_codes == {
pd.get_one_task_by_name(f"task-{i - 1}").code
}
assert task._downstream_task_codes == {
pd.get_one_task_by_name(f"task-{i + 1}").code
}
@pytest.mark.parametrize(
"user_attrs",
[
{"tenant": "tenant_specific"},
{"queue": "queue_specific"},
{"tenant": "tenant_specific", "queue": "queue_specific"},
],
)
def test_set_process_definition_user_attr(user_attrs):
"""Test user with correct attributes if we specific assigned to process definition object."""
default_value = {
"tenant": ProcessDefinitionDefault.TENANT,
"queue": ProcessDefinitionDefault.QUEUE,
}
with ProcessDefinition(TEST_PROCESS_DEFINITION_NAME, **user_attrs) as pd:
user = pd.user
for attr in default_value:
# Get assigned attribute if we specific, else get default value
except_attr = (
user_attrs[attr] if attr in user_attrs else default_value[attr]
)
# Get actually attribute of user object
actual_attr = getattr(user, attr)
assert (
except_attr == actual_attr
), f"Except attribute is {except_attr} but get {actual_attr}"
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,838 | [Feature][python] Add parameter schedule to process definition | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Python API `ProcessDefinition` does not support schedule for now; we should add this to it so users can set the workflow schedule parameter. https://github.com/apache/dolphinscheduler/blob/dev/dolphinscheduler-python/pydolphinscheduler/examples/tutorial.py
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6838 | https://github.com/apache/dolphinscheduler/pull/6664 | 089f73ebe4b6fcdb8c02b9098ebe86a8de183ace | e76cf77040ae558461c54730c27eb0fbe28bc54f | "2021-11-13T07:04:12Z" | java | "2021-11-13T09:21:40Z" | dolphinscheduler-python/pydolphinscheduler/tests/utils/__init__.py | |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,838 | [Feature][python] Add parameter schedule to process definition | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Python API `ProcessDefinition` does not support schedule for now; we should add this to it so users can set the workflow schedule parameter. https://github.com/apache/dolphinscheduler/blob/dev/dolphinscheduler-python/pydolphinscheduler/examples/tutorial.py
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6838 | https://github.com/apache/dolphinscheduler/pull/6664 | 089f73ebe4b6fcdb8c02b9098ebe86a8de183ace | e76cf77040ae558461c54730c27eb0fbe28bc54f | "2021-11-13T07:04:12Z" | java | "2021-11-13T09:21:40Z" | dolphinscheduler-python/pydolphinscheduler/tests/utils/test_date.py | |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,838 | [Feature][python] Add parameter schedule to process definition | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Python API `ProcessDefinition` does not support schedule for now; we should add this to it so users can set the workflow schedule parameter. https://github.com/apache/dolphinscheduler/blob/dev/dolphinscheduler-python/pydolphinscheduler/examples/tutorial.py
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6838 | https://github.com/apache/dolphinscheduler/pull/6664 | 089f73ebe4b6fcdb8c02b9098ebe86a8de183ace | e76cf77040ae558461c54730c27eb0fbe28bc54f | "2021-11-13T07:04:12Z" | java | "2021-11-13T09:21:40Z" | dolphinscheduler-python/src/main/java/org/apache/dolphinscheduler/server/PythonGatewayServer.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server;
import org.apache.dolphinscheduler.api.enums.Status;
import org.apache.dolphinscheduler.api.service.ExecutorService;
import org.apache.dolphinscheduler.api.service.ProcessDefinitionService;
import org.apache.dolphinscheduler.api.service.ProjectService;
import org.apache.dolphinscheduler.api.service.QueueService;
import org.apache.dolphinscheduler.api.service.TaskDefinitionService;
import org.apache.dolphinscheduler.api.service.TenantService;
import org.apache.dolphinscheduler.api.service.UsersService;
import org.apache.dolphinscheduler.api.utils.Result;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.FailureStrategy;
import org.apache.dolphinscheduler.common.enums.Priority;
import org.apache.dolphinscheduler.common.enums.ProcessExecutionTypeEnum;
import org.apache.dolphinscheduler.common.enums.ReleaseState;
import org.apache.dolphinscheduler.common.enums.RunMode;
import org.apache.dolphinscheduler.common.enums.TaskDependType;
import org.apache.dolphinscheduler.common.enums.UserType;
import org.apache.dolphinscheduler.common.enums.WarningType;
import org.apache.dolphinscheduler.common.utils.SnowFlakeUtils;
import org.apache.dolphinscheduler.dao.entity.ProcessDefinition;
import org.apache.dolphinscheduler.dao.entity.Project;
import org.apache.dolphinscheduler.dao.entity.Queue;
import org.apache.dolphinscheduler.dao.entity.TaskDefinition;
import org.apache.dolphinscheduler.dao.entity.Tenant;
import org.apache.dolphinscheduler.dao.entity.User;
import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionMapper;
import org.apache.dolphinscheduler.dao.mapper.ProjectMapper;
import org.apache.dolphinscheduler.dao.mapper.TaskDefinitionMapper;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import javax.annotation.PostConstruct;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.web.servlet.support.SpringBootServletInitializer;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.FilterType;
import py4j.GatewayServer;
@ComponentScan(value = "org.apache.dolphinscheduler", excludeFilters = {
@ComponentScan.Filter(type = FilterType.REGEX, pattern = {
"org.apache.dolphinscheduler.server.master.*",
"org.apache.dolphinscheduler.server.worker.*",
"org.apache.dolphinscheduler.server.monitor.*",
"org.apache.dolphinscheduler.server.log.*"
})
})
public class PythonGatewayServer extends SpringBootServletInitializer {
private static final Logger LOGGER = LoggerFactory.getLogger(PythonGatewayServer.class);
@Autowired
private ProcessDefinitionMapper processDefinitionMapper;
@Autowired
private ProjectService projectService;
@Autowired
private TenantService tenantService;
@Autowired
private ExecutorService executorService;
@Autowired
private ProcessDefinitionService processDefinitionService;
@Autowired
private TaskDefinitionService taskDefinitionService;
@Autowired
private UsersService usersService;
@Autowired
private QueueService queueService;
@Autowired
private ProjectMapper projectMapper;
@Autowired
private TaskDefinitionMapper taskDefinitionMapper;
// TODO replace this user to build in admin user if we make sure build in one could not be change
private final User dummyAdminUser = new User() {
{
setId(Integer.MAX_VALUE);
setUserName("dummyUser");
setUserType(UserType.ADMIN_USER);
}
};
private final Queue queuePythonGateway = new Queue() {
{
setId(Integer.MAX_VALUE);
setQueueName("queuePythonGateway");
}
};
public String ping() {
return "PONG";
}
// TODO Should we import package in python client side? utils package can but service can not, why
// Core api
public Map<String, Object> genTaskCodeList(Integer genNum) {
return taskDefinitionService.genTaskCodeList(genNum);
}
public Map<String, Long> getCodeAndVersion(String projectName, String taskName) throws SnowFlakeUtils.SnowFlakeException {
Project project = projectMapper.queryByName(projectName);
Map<String, Long> result = new HashMap<>();
// project do not exists, mean task not exists too, so we should directly return init value
if (project == null) {
result.put("code", SnowFlakeUtils.getInstance().nextId());
result.put("version", 0L);
return result;
}
TaskDefinition taskDefinition = taskDefinitionMapper.queryByName(project.getCode(), taskName);
if (taskDefinition == null) {
result.put("code", SnowFlakeUtils.getInstance().nextId());
result.put("version", 0L);
} else {
result.put("code", taskDefinition.getCode());
result.put("version", (long) taskDefinition.getVersion());
}
return result;
}
/**
* create or update process definition.
* If process definition does not exist in Project=`projectCode`, a new one would be created
* If process definition already exists in Project=`projectCode`, it would be updated
* All requests
* <p>
*
* @param name process definition name
* @param description description
* @param globalParams global params
* @param locations locations for nodes
* @param timeout timeout
* @param tenantCode tenantCode
* @param taskRelationJson relation json for nodes
* @param taskDefinitionJson taskDefinitionJson
* @return create result code
*/
public Long createOrUpdateProcessDefinition(String userName,
String projectName,
String name,
String description,
String globalParams,
String locations,
int timeout,
String tenantCode,
String taskRelationJson,
String taskDefinitionJson,
ProcessExecutionTypeEnum executionType) {
User user = usersService.queryUser(userName);
Project project = (Project) projectService.queryByName(user, projectName).get(Constants.DATA_LIST);
long projectCode = project.getCode();
Map<String, Object> verifyProcessDefinitionExists = processDefinitionService.verifyProcessDefinitionName(user, projectCode, name);
Status verifyStatus = (Status) verifyProcessDefinitionExists.get(Constants.STATUS);
if (verifyStatus == Status.PROCESS_DEFINITION_NAME_EXIST) {
// update process definition
ProcessDefinition processDefinition = processDefinitionMapper.queryByDefineName(projectCode, name);
long processDefinitionCode = processDefinition.getCode();
// make sure process definition offline which could edit
processDefinitionService.releaseProcessDefinition(user, projectCode, processDefinitionCode, ReleaseState.OFFLINE);
Map<String, Object> result = processDefinitionService.updateProcessDefinition(user, projectCode, name, processDefinitionCode, description, globalParams,
locations, timeout, tenantCode, taskRelationJson, taskDefinitionJson, executionType);
return processDefinitionCode;
} else if (verifyStatus == Status.SUCCESS) {
// create process definition
Map<String, Object> result = processDefinitionService.createProcessDefinition(user, projectCode, name, description, globalParams,
locations, timeout, tenantCode, taskRelationJson, taskDefinitionJson, executionType);
ProcessDefinition processDefinition = (ProcessDefinition) result.get(Constants.DATA_LIST);
return processDefinition.getCode();
} else {
String msg = "Verify process definition exists status is invalid, neither SUCCESS or PROCESS_DEFINITION_NAME_EXIST.";
LOGGER.error(msg);
throw new RuntimeException(msg);
}
}
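// Illustrative only: from the Python client side this method is reached through
// the py4j gateway entry point registered in run() below, roughly:
//   gateway = JavaGateway()
//   code = gateway.entry_point.createOrUpdateProcessDefinition(
//       "userName", "projectName", "name", ...)
// The real Python wrapper lives in pydolphinscheduler; the call above is a
// sketch of the mechanism, not its verbatim API.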
public void execProcessInstance(String userName,
String projectName,
String processDefinitionName,
String cronTime,
String workerGroup,
Integer timeout
) {
User user = usersService.queryUser(userName);
Project project = projectMapper.queryByName(projectName);
ProcessDefinition processDefinition = processDefinitionMapper.queryByDefineName(project.getCode(), processDefinitionName);
// temp default value
FailureStrategy failureStrategy = FailureStrategy.CONTINUE;
TaskDependType taskDependType = TaskDependType.TASK_POST;
WarningType warningType = WarningType.NONE;
RunMode runMode = RunMode.RUN_MODE_SERIAL;
Priority priority = Priority.MEDIUM;
int warningGroupId = 0;
Long environmentCode = -1L;
Map<String, String> startParams = null;
Integer expectedParallelismNumber = null;
String startNodeList = null;
// make sure process definition online
processDefinitionService.releaseProcessDefinition(user, project.getCode(), processDefinition.getCode(), ReleaseState.ONLINE);
executorService.execProcessInstance(user,
project.getCode(),
processDefinition.getCode(),
cronTime,
null,
failureStrategy,
startNodeList,
taskDependType,
warningType,
warningGroupId,
runMode,
priority,
workerGroup,
environmentCode,
timeout,
startParams,
expectedParallelismNumber,
0
);
}
// side object
public Map<String, Object> createProject(String userName, String name, String desc) {
User user = usersService.queryUser(userName);
return projectService.createProject(user, name, desc);
}
public Map<String, Object> createQueue(String name, String queueName) {
Result<Object> verifyQueueExists = queueService.verifyQueue(name, queueName);
if (verifyQueueExists.getCode() == 0) {
return queueService.createQueue(dummyAdminUser, name, queueName);
} else {
Map<String, Object> result = new HashMap<>();
// TODO function putMsg does not work here
result.put(Constants.STATUS, Status.SUCCESS);
result.put(Constants.MSG, Status.SUCCESS.getMsg());
return result;
}
}
public Map<String, Object> createTenant(String tenantCode, String desc, String queueName) throws Exception {
if (tenantService.checkTenantExists(tenantCode)) {
Map<String, Object> result = new HashMap<>();
// TODO function putMsg does not work here
result.put(Constants.STATUS, Status.SUCCESS);
result.put(Constants.MSG, Status.SUCCESS.getMsg());
return result;
} else {
Result<Object> verifyQueueExists = queueService.verifyQueue(queueName, queueName);
if (verifyQueueExists.getCode() == 0) {
// TODO why does create not return the id?
queueService.createQueue(dummyAdminUser, queueName, queueName);
}
Map<String, Object> result = queueService.queryQueueName(queueName);
List<Queue> queueList = (List<Queue>) result.get(Constants.DATA_LIST);
Queue queue = queueList.get(0);
return tenantService.createTenant(dummyAdminUser, tenantCode, queue.getId(), desc);
}
}
public void createUser(String userName,
String userPassword,
String email,
String phone,
String tenantCode,
String queue,
int state) {
User user = usersService.queryUser(userName);
if (Objects.isNull(user)) {
Map<String, Object> tenantResult = tenantService.queryByTenantCode(tenantCode);
Tenant tenant = (Tenant) tenantResult.get(Constants.DATA_LIST);
usersService.createUser(userName, userPassword, email, tenant.getId(), phone, queue, state);
}
}
@PostConstruct
public void run() {
GatewayServer server = new GatewayServer(this);
GatewayServer.turnLoggingOn();
// Start server to accept python client RPC
server.start();
}
public static void main(String[] args) {
SpringApplication.run(PythonGatewayServer.class, args);
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,613 | [Bug] [Master] A bug on task retry mechanism | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
The process contains nodes at the same level: a "node1_fail_retry" node configured with error retry (in a real task scenario it may fail on the first run and succeed after the retry), and a "node2_fail" node that fails outright.
The current behavior is that after the "node2_fail" node fails, the process instance is put into the failure state; the "node1_fail_retry" node never retries (even though the retry might succeed) and the subsequent tasks never run.
Because the "node2_fail" node fails, the "errorTaskList" in "MasterExecThread" is not empty, so "hasFailedTask()" returns true inside the "processFailed()" method. When the FailureStrategy is CONTINUE and the number of active task nodes is 0, "processFailed()" returns true, causing the process instance state to change to failed.
![8](https://user-images.githubusercontent.com/39785282/139018718-881ecc96-010f-4e95-9844-2f75fa8cb4bb.png)
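For reference, the check described above corresponds to the following logic (mirroring the `processFailed()` method in the attached `WorkflowExecuteThread.java`; the inline comments are added here for illustration):

```java
private boolean processFailed() {
    if (hasFailedTask()) { // true here: errorTaskList contains "node2_fail"
        if (processInstance.getFailureStrategy() == FailureStrategy.END) {
            return true;
        }
        if (processInstance.getFailureStrategy() == FailureStrategy.CONTINUE) {
            // "node1_fail_retry" is still waiting for its retry interval, so no
            // task is active; activeTaskProcessorMaps.size() == 0 makes this
            // condition true and the instance is marked FAILURE before the
            // retry ever fires.
            return readyToSubmitTaskQueue.size() == 0
                || activeTaskProcessorMaps.size() == 0;
        }
    }
    return false;
}
```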
### What you expected to happen
I think it is wrong that a process node at the same level is not retried when it fails: in a real task scenario it may fail on the first run, succeed after retrying, and then the downstream nodes should continue to run.
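One possible direction (an illustration only, not necessarily the change made in the linked PR) is to keep the instance running while a retryable task is still pending:

```java
// Hypothetical sketch: with FailureStrategy.CONTINUE, only declare the whole
// instance failed once nothing is queued, nothing is active, and no retryable
// task is still standing by. hasRetryTaskInStandBy() already exists in the
// class; wiring it in here is an assumption, not the merged fix.
if (processInstance.getFailureStrategy() == FailureStrategy.CONTINUE) {
    return readyToSubmitTaskQueue.size() == 0
        && activeTaskProcessorMaps.size() == 0
        && !hasRetryTaskInStandBy();
}
```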
### How to reproduce
1. Create a new process
2. Create a process node relationship as shown in the figure. The node "node1_fail_retry" is configured with error retry. The node "node2_fail" fails directly. Nodes "node3", "node4" and "node5" can be created arbitrarily
3. Bring the process online and run the process
![11](https://user-images.githubusercontent.com/39785282/139022256-6b54c0c7-948c-4319-bf1a-c072754ce912.png)
<img width="878" alt="22" src="https://user-images.githubusercontent.com/39785282/139022278-1eae4ae2-d935-429a-b861-3232733f73f1.png">
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6613 | https://github.com/apache/dolphinscheduler/pull/6843 | 880c1d28ff7f334fd50b2376655f1a884fcb496a | 94f6c5c27ee88d5ec6f7f3856b2bc2aebab738e8 | "2021-10-27T07:45:37Z" | java | "2021-11-14T17:05:13Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteThread.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.master.runner;
import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_END_DATE;
import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_START_DATE;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_RECOVERY_START_NODE_STRING;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_START_NODES;
import static org.apache.dolphinscheduler.common.Constants.DEFAULT_WORKER_GROUP;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.CommandType;
import org.apache.dolphinscheduler.common.enums.DependResult;
import org.apache.dolphinscheduler.common.enums.Direct;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.enums.FailureStrategy;
import org.apache.dolphinscheduler.common.enums.Flag;
import org.apache.dolphinscheduler.common.enums.Priority;
import org.apache.dolphinscheduler.common.enums.StateEvent;
import org.apache.dolphinscheduler.common.enums.StateEventType;
import org.apache.dolphinscheduler.common.enums.TaskDependType;
import org.apache.dolphinscheduler.common.enums.TaskTimeoutStrategy;
import org.apache.dolphinscheduler.common.enums.TimeoutFlag;
import org.apache.dolphinscheduler.common.graph.DAG;
import org.apache.dolphinscheduler.common.model.TaskNode;
import org.apache.dolphinscheduler.common.model.TaskNodeRelation;
import org.apache.dolphinscheduler.common.process.ProcessDag;
import org.apache.dolphinscheduler.common.process.Property;
import org.apache.dolphinscheduler.common.utils.DateUtils;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.common.utils.NetUtils;
import org.apache.dolphinscheduler.common.utils.ParameterUtils;
import org.apache.dolphinscheduler.dao.entity.Command;
import org.apache.dolphinscheduler.dao.entity.Environment;
import org.apache.dolphinscheduler.dao.entity.ProcessDefinition;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.dao.entity.ProjectUser;
import org.apache.dolphinscheduler.dao.entity.Schedule;
import org.apache.dolphinscheduler.dao.entity.TaskDefinition;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.dao.utils.DagHelper;
import org.apache.dolphinscheduler.remote.command.HostUpdateCommand;
import org.apache.dolphinscheduler.remote.utils.Host;
import org.apache.dolphinscheduler.server.master.config.MasterConfig;
import org.apache.dolphinscheduler.server.master.dispatch.executor.NettyExecutorManager;
import org.apache.dolphinscheduler.server.master.runner.task.ITaskProcessor;
import org.apache.dolphinscheduler.server.master.runner.task.TaskAction;
import org.apache.dolphinscheduler.server.master.runner.task.TaskProcessorFactory;
import org.apache.dolphinscheduler.service.alert.ProcessAlertManager;
import org.apache.dolphinscheduler.service.process.ProcessService;
import org.apache.dolphinscheduler.service.quartz.cron.CronUtils;
import org.apache.dolphinscheduler.service.queue.PeerTaskInstancePriorityQueue;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang.StringUtils;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Date;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.google.common.collect.HashBasedTable;
import com.google.common.collect.Lists;
import com.google.common.collect.Table;
/**
* master exec thread, split dag
*/
public class WorkflowExecuteThread implements Runnable {
/**
* logger of WorkflowExecuteThread
*/
private static final Logger logger = LoggerFactory.getLogger(WorkflowExecuteThread.class);
/**
* running TaskNode
*/
private final Map<Integer, ITaskProcessor> activeTaskProcessorMaps = new ConcurrentHashMap<>();
/**
* process instance
*/
private ProcessInstance processInstance;
/**
* whether any task failed to submit
*/
private boolean taskFailedSubmit = false;
/**
* recover node id list
*/
private List<TaskInstance> recoverNodeIdList = new ArrayList<>();
/**
* error task list
*/
private Map<String, TaskInstance> errorTaskList = new ConcurrentHashMap<>();
/**
* complete task list
*/
private Map<String, TaskInstance> completeTaskList = new ConcurrentHashMap<>();
/**
* ready to submit task queue
*/
private PeerTaskInstancePriorityQueue readyToSubmitTaskQueue = new PeerTaskInstancePriorityQueue();
/**
* depend failed task map
*/
private Map<String, TaskInstance> dependFailedTask = new ConcurrentHashMap<>();
/**
* forbidden task map
*/
private Map<String, TaskNode> forbiddenTaskList = new ConcurrentHashMap<>();
/**
* skip task map
*/
private Map<String, TaskNode> skipTaskNodeList = new ConcurrentHashMap<>();
/**
* recover tolerance fault task list
*/
private List<TaskInstance> recoverToleranceFaultTaskList = new ArrayList<>();
/**
* alert manager
*/
private ProcessAlertManager processAlertManager;
/**
* the object of DAG
*/
private DAG<String, TaskNode, TaskNodeRelation> dag;
/**
* process service
*/
private ProcessService processService;
/**
* master config
*/
private MasterConfig masterConfig;
/**
* netty executor manager
*/
private NettyExecutorManager nettyExecutorManager;
private ConcurrentLinkedQueue<StateEvent> stateEvents = new ConcurrentLinkedQueue<>();
private List<Date> complementListDate = Lists.newLinkedList();
private Table<Integer, Long, TaskInstance> taskInstanceHashMap = HashBasedTable.create();
private ProcessDefinition processDefinition;
private String key;
private ConcurrentHashMap<Integer, TaskInstance> taskTimeoutCheckList;
/**
* start flag, true: start nodes submitted completely
*/
private boolean isStart = false;
/**
* constructor of WorkflowExecuteThread
*
* @param processInstance processInstance
* @param processService processService
* @param nettyExecutorManager nettyExecutorManager
* @param taskTimeoutCheckList
*/
public WorkflowExecuteThread(ProcessInstance processInstance
, ProcessService processService
, NettyExecutorManager nettyExecutorManager
, ProcessAlertManager processAlertManager
, MasterConfig masterConfig
, ConcurrentHashMap<Integer, TaskInstance> taskTimeoutCheckList) {
this.processService = processService;
this.processInstance = processInstance;
this.masterConfig = masterConfig;
this.nettyExecutorManager = nettyExecutorManager;
this.processAlertManager = processAlertManager;
this.taskTimeoutCheckList = taskTimeoutCheckList;
}
@Override
public void run() {
try {
if (!this.isStart()) {
startProcess();
} else {
handleEvents();
}
} catch (Exception e) {
logger.error("handler error:", e);
}
}
/**
* whether the process start nodes have been submitted completely.
* @return true if the start nodes have been submitted
*/
public boolean isStart() {
return this.isStart;
}
private void handleEvents() {
while (this.stateEvents.size() > 0) {
try {
StateEvent stateEvent = this.stateEvents.peek();
if (stateEventHandler(stateEvent)) {
this.stateEvents.remove(stateEvent);
}
} catch (Exception e) {
logger.error("state handle error:", e);
}
}
}
public String getKey() {
if (StringUtils.isNotEmpty(key)
|| this.processDefinition == null) {
return key;
}
key = String.format("%d_%d_%d",
this.processDefinition.getCode(),
this.processDefinition.getVersion(),
this.processInstance.getId());
return key;
}
public boolean addStateEvent(StateEvent stateEvent) {
if (processInstance.getId() != stateEvent.getProcessInstanceId()) {
logger.info("state event would be abounded :{}", stateEvent.toString());
return false;
}
this.stateEvents.add(stateEvent);
return true;
}
public int eventSize() {
return this.stateEvents.size();
}
public ProcessInstance getProcessInstance() {
return this.processInstance;
}
private boolean stateEventHandler(StateEvent stateEvent) {
logger.info("process event: {}", stateEvent.toString());
if (!checkStateEvent(stateEvent)) {
return false;
}
boolean result = false;
switch (stateEvent.getType()) {
case PROCESS_STATE_CHANGE:
result = processStateChangeHandler(stateEvent);
break;
case TASK_STATE_CHANGE:
result = taskStateChangeHandler(stateEvent);
break;
case PROCESS_TIMEOUT:
result = processTimeout();
break;
case TASK_TIMEOUT:
result = taskTimeout(stateEvent);
break;
default:
break;
}
if (result) {
this.stateEvents.remove(stateEvent);
}
return result;
}
private boolean taskTimeout(StateEvent stateEvent) {
if (taskInstanceHashMap.containsRow(stateEvent.getTaskInstanceId())) {
return true;
}
TaskInstance taskInstance = taskInstanceHashMap
.row(stateEvent.getTaskInstanceId())
.values()
.iterator().next();
if (TimeoutFlag.CLOSE == taskInstance.getTaskDefine().getTimeoutFlag()) {
return true;
}
TaskTimeoutStrategy taskTimeoutStrategy = taskInstance.getTaskDefine().getTimeoutNotifyStrategy();
if (TaskTimeoutStrategy.FAILED == taskTimeoutStrategy) {
ITaskProcessor taskProcessor = activeTaskProcessorMaps.get(stateEvent.getTaskInstanceId());
taskProcessor.action(TaskAction.TIMEOUT);
return false;
} else {
processAlertManager.sendTaskTimeoutAlert(processInstance, taskInstance, taskInstance.getTaskDefine());
return true;
}
}
private boolean processTimeout() {
this.processAlertManager.sendProcessTimeoutAlert(this.processInstance, this.processDefinition);
return true;
}
private boolean taskStateChangeHandler(StateEvent stateEvent) {
TaskInstance task = processService.findTaskInstanceById(stateEvent.getTaskInstanceId());
if (task.getState().typeIsFinished()) {
taskFinished(task);
} else if (activeTaskProcessorMaps.containsKey(stateEvent.getTaskInstanceId())) {
ITaskProcessor iTaskProcessor = activeTaskProcessorMaps.get(stateEvent.getTaskInstanceId());
iTaskProcessor.run();
if (iTaskProcessor.taskState().typeIsFinished()) {
task = processService.findTaskInstanceById(stateEvent.getTaskInstanceId());
taskFinished(task);
}
} else {
logger.error("state handler error: {}", stateEvent.toString());
}
return true;
}
private void taskFinished(TaskInstance task) {
logger.info("work flow {} task {} state:{} ",
processInstance.getId(),
task.getId(),
task.getState());
if (task.taskCanRetry()) {
addTaskToStandByList(task);
if (!task.retryTaskIntervalOverTime()) {
logger.info("failure task will be submitted: process id: {}, task instance id: {} state:{} retry times:{} / {}, interval:{}",
processInstance.getId(),
task.getId(),
task.getState(),
task.getRetryTimes(),
task.getMaxRetryTimes(),
task.getRetryInterval());
this.addTimeoutCheck(task);
} else {
submitStandByTask();
}
return;
}
ProcessInstance processInstance = processService.findProcessInstanceById(this.processInstance.getId());
completeTaskList.put(Long.toString(task.getTaskCode()), task);
activeTaskProcessorMaps.remove(task.getId());
taskTimeoutCheckList.remove(task.getId());
if (task.getState().typeIsSuccess()) {
processInstance.setVarPool(task.getVarPool());
processService.saveProcessInstance(processInstance);
submitPostNode(Long.toString(task.getTaskCode()));
} else if (task.getState().typeIsFailure()) {
if (task.isConditionsTask()
|| DagHelper.haveConditionsAfterNode(Long.toString(task.getTaskCode()), dag)) {
submitPostNode(Long.toString(task.getTaskCode()));
} else {
errorTaskList.put(Long.toString(task.getTaskCode()), task);
if (processInstance.getFailureStrategy() == FailureStrategy.END) {
killAllTasks();
}
}
}
this.updateProcessInstanceState();
}
private boolean checkStateEvent(StateEvent stateEvent) {
if (this.processInstance.getId() != stateEvent.getProcessInstanceId()) {
logger.error("mismatch process instance id: {}, state event:{}",
this.processInstance.getId(),
stateEvent.toString());
return false;
}
return true;
}
private boolean processStateChangeHandler(StateEvent stateEvent) {
try {
logger.info("process:{} state {} change to {}", processInstance.getId(), processInstance.getState(), stateEvent.getExecutionStatus());
processInstance = processService.findProcessInstanceById(this.processInstance.getId());
if (processComplementData()) {
return true;
}
if (stateEvent.getExecutionStatus().typeIsFinished()) {
endProcess();
}
if (processInstance.getState() == ExecutionStatus.READY_STOP) {
killAllTasks();
}
return true;
} catch (Exception e) {
logger.error("process state change error:", e);
}
return true;
}
private boolean processComplementData() throws Exception {
if (!needComplementProcess()) {
return false;
}
Date scheduleDate = processInstance.getScheduleTime();
if (scheduleDate == null) {
scheduleDate = complementListDate.get(0);
} else if (processInstance.getState().typeIsFinished()) {
endProcess();
if (complementListDate.size() <= 0) {
logger.info("process complement end. process id:{}", processInstance.getId());
return true;
}
int index = complementListDate.indexOf(scheduleDate);
if (index >= complementListDate.size() - 1 || !processInstance.getState().typeIsSuccess()) {
logger.info("process complement end. process id:{}", processInstance.getId());
// complement data ends || no success
return true;
}
logger.info("process complement continue. process id:{}, schedule time:{} complementListDate:{}",
processInstance.getId(),
processInstance.getScheduleTime(),
complementListDate.toString());
scheduleDate = complementListDate.get(index + 1);
//the next process complement
processInstance.setId(0);
}
processInstance.setScheduleTime(scheduleDate);
Map<String, String> cmdParam = JSONUtils.toMap(processInstance.getCommandParam());
if (cmdParam.containsKey(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING)) {
cmdParam.remove(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING);
processInstance.setCommandParam(JSONUtils.toJsonString(cmdParam));
}
processInstance.setState(ExecutionStatus.RUNNING_EXECUTION);
processInstance.setGlobalParams(ParameterUtils.curingGlobalParams(
processDefinition.getGlobalParamMap(),
processDefinition.getGlobalParamList(),
CommandType.COMPLEMENT_DATA, processInstance.getScheduleTime()));
processInstance.setStartTime(new Date());
processInstance.setEndTime(null);
processService.saveProcessInstance(processInstance);
this.taskInstanceHashMap.clear();
startProcess();
return true;
}
private boolean needComplementProcess() {
if (processInstance.isComplementData()
&& Flag.NO == processInstance.getIsSubProcess()) {
return true;
}
return false;
}
private void startProcess() throws Exception {
if (this.taskInstanceHashMap.size() == 0) {
isStart = false;
buildFlowDag();
initTaskQueue();
submitPostNode(null);
isStart = true;
}
}
/**
* process end handle
*/
private void endProcess() {
this.stateEvents.clear();
processInstance.setEndTime(new Date());
ProcessDefinition processDefinition = this.processService.findProcessDefinition(processInstance.getProcessDefinitionCode(),processInstance.getProcessDefinitionVersion());
if (processDefinition.getExecutionType().typeIsSerialWait()) {
checkSerialProcess(processDefinition);
}
processService.updateProcessInstance(processInstance);
if (processInstance.getState().typeIsWaitingThread()) {
processService.createRecoveryWaitingThreadCommand(null, processInstance);
}
List<TaskInstance> taskInstances = processService.findValidTaskListByProcessId(processInstance.getId());
ProjectUser projectUser = processService.queryProjectWithUserByProcessInstanceId(processInstance.getId());
processAlertManager.sendAlertProcessInstance(processInstance, taskInstances, projectUser);
}
public void checkSerialProcess(ProcessDefinition processDefinition) {
this.processInstance = processService.findProcessInstanceById(processInstance.getId());
int nextInstanceId = processInstance.getNextProcessInstanceId();
if (nextInstanceId == 0) {
ProcessInstance nextProcessInstance = this.processService.loadNextProcess4Serial(processInstance.getProcessDefinition().getCode(),ExecutionStatus.SERIAL_WAIT.getCode());
if (nextProcessInstance == null) {
return;
}
nextInstanceId = nextProcessInstance.getId();
}
ProcessInstance nextProcessInstance = this.processService.findProcessInstanceById(nextInstanceId);
if (nextProcessInstance.getState().typeIsFinished() || nextProcessInstance.getState().typeIsRunning()) {
return;
}
Map<String, Object> cmdParam = new HashMap<>();
cmdParam.put(CMD_PARAM_RECOVER_PROCESS_ID_STRING, nextInstanceId);
Command command = new Command();
command.setCommandType(CommandType.RECOVER_SERIAL_WAIT);
command.setProcessDefinitionCode(processDefinition.getCode());
command.setCommandParam(JSONUtils.toJsonString(cmdParam));
processService.createCommand(command);
}
/**
* generate process dag
*
* @throws Exception exception
*/
private void buildFlowDag() throws Exception {
if (this.dag != null) {
return;
}
processDefinition = processService.findProcessDefinition(processInstance.getProcessDefinitionCode(),
processInstance.getProcessDefinitionVersion());
recoverNodeIdList = getStartTaskInstanceList(processInstance.getCommandParam());
List<TaskNode> taskNodeList =
processService.transformTask(processService.findRelationByCode(processDefinition.getProjectCode(), processDefinition.getCode()), Lists.newArrayList());
forbiddenTaskList.clear();
taskNodeList.forEach(taskNode -> {
if (taskNode.isForbidden()) {
forbiddenTaskList.put(Long.toString(taskNode.getCode()), taskNode);
}
});
// generate process to get DAG info
List<String> recoveryNodeCodeList = getRecoveryNodeCodeList();
List<String> startNodeNameList = parseStartNodeName(processInstance.getCommandParam());
ProcessDag processDag = generateFlowDag(taskNodeList,
startNodeNameList, recoveryNodeCodeList, processInstance.getTaskDependType());
if (processDag == null) {
logger.error("processDag is null");
return;
}
// generate process dag
dag = DagHelper.buildDagGraph(processDag);
}
/**
* init task queue
*/
private void initTaskQueue() {
taskFailedSubmit = false;
activeTaskProcessorMaps.clear();
dependFailedTask.clear();
completeTaskList.clear();
errorTaskList.clear();
List<TaskInstance> taskInstanceList = processService.findValidTaskListByProcessId(processInstance.getId());
for (TaskInstance task : taskInstanceList) {
if (task.isTaskComplete()) {
completeTaskList.put(Long.toString(task.getTaskCode()), task);
}
if (task.isConditionsTask() || DagHelper.haveConditionsAfterNode(Long.toString(task.getTaskCode()), dag)) {
continue;
}
if (task.getState().typeIsFailure() && !task.taskCanRetry()) {
errorTaskList.put(Long.toString(task.getTaskCode()), task);
}
}
if (processInstance.isComplementData() && complementListDate.size() == 0) {
Map<String, String> cmdParam = JSONUtils.toMap(processInstance.getCommandParam());
if (cmdParam != null && cmdParam.containsKey(CMDPARAM_COMPLEMENT_DATA_START_DATE)) {
Date start = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_START_DATE));
Date end = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_END_DATE));
List<Schedule> schedules = processService.queryReleaseSchedulerListByProcessDefinitionCode(processInstance.getProcessDefinitionCode());
if (complementListDate.size() == 0 && needComplementProcess()) {
complementListDate = CronUtils.getSelfFireDateList(start, end, schedules);
logger.info(" process definition code:{} complement data: {}",
processInstance.getProcessDefinitionCode(), complementListDate.toString());
if (complementListDate.size() > 0 && Flag.NO == processInstance.getIsSubProcess()) {
processInstance.setScheduleTime(complementListDate.get(0));
processInstance.setGlobalParams(ParameterUtils.curingGlobalParams(
processDefinition.getGlobalParamMap(),
processDefinition.getGlobalParamList(),
CommandType.COMPLEMENT_DATA, processInstance.getScheduleTime()));
processService.updateProcessInstance(processInstance);
}
}
}
}
}
/**
* submit task to execute
*
* @param taskInstance task instance
* @return TaskInstance
*/
private TaskInstance submitTaskExec(TaskInstance taskInstance) {
try {
ITaskProcessor taskProcessor = TaskProcessorFactory.getTaskProcessor(taskInstance.getTaskType());
if (taskInstance.getState() == ExecutionStatus.RUNNING_EXECUTION
&& taskProcessor.getType().equalsIgnoreCase(Constants.COMMON_TASK_TYPE)) {
notifyProcessHostUpdate(taskInstance);
}
boolean submit = taskProcessor.submit(taskInstance, processInstance, masterConfig.getTaskCommitRetryTimes(), masterConfig.getTaskCommitInterval());
if (submit) {
this.taskInstanceHashMap.put(taskInstance.getId(), taskInstance.getTaskCode(), taskInstance);
activeTaskProcessorMaps.put(taskInstance.getId(), taskProcessor);
taskProcessor.run();
addTimeoutCheck(taskInstance);
TaskDefinition taskDefinition = processService.findTaskDefinition(
taskInstance.getTaskCode(),
taskInstance.getTaskDefinitionVersion());
taskInstance.setTaskDefine(taskDefinition);
if (taskProcessor.taskState().typeIsFinished()) {
StateEvent stateEvent = new StateEvent();
stateEvent.setProcessInstanceId(this.processInstance.getId());
stateEvent.setTaskInstanceId(taskInstance.getId());
stateEvent.setExecutionStatus(taskProcessor.taskState());
stateEvent.setType(StateEventType.TASK_STATE_CHANGE);
this.stateEvents.add(stateEvent);
}
return taskInstance;
} else {
logger.error("process id:{} name:{} submit standby task id:{} name:{} failed!",
processInstance.getId(), processInstance.getName(),
taskInstance.getId(), taskInstance.getName());
return null;
}
} catch (Exception e) {
logger.error("submit standby task error", e);
return null;
}
}
private void notifyProcessHostUpdate(TaskInstance taskInstance) {
if (StringUtils.isEmpty(taskInstance.getHost())) {
return;
}
try {
HostUpdateCommand hostUpdateCommand = new HostUpdateCommand();
hostUpdateCommand.setProcessHost(NetUtils.getAddr(masterConfig.getListenPort()));
hostUpdateCommand.setTaskInstanceId(taskInstance.getId());
Host host = new Host(taskInstance.getHost());
nettyExecutorManager.doExecute(host, hostUpdateCommand.convert2Command());
} catch (Exception e) {
logger.error("notify process host update", e);
}
}
private void addTimeoutCheck(TaskInstance taskInstance) {
if (taskTimeoutCheckList.containsKey(taskInstance.getId())) {
return;
}
TaskDefinition taskDefinition = processService.findTaskDefinition(
taskInstance.getTaskCode(),
taskInstance.getTaskDefinitionVersion()
);
taskInstance.setTaskDefine(taskDefinition);
if (TimeoutFlag.OPEN == taskDefinition.getTimeoutFlag() || taskInstance.taskCanRetry()) {
this.taskTimeoutCheckList.put(taskInstance.getId(), taskInstance);
} else {
if (taskInstance.isDependTask() || taskInstance.isSubProcess()) {
this.taskTimeoutCheckList.put(taskInstance.getId(), taskInstance);
}
}
}
/**
* find task instance in db.
* in case more than one task with the same name is submitted at the same time.
*
* @param taskCode task code
* @param taskVersion task version
* @return TaskInstance
*/
private TaskInstance findTaskIfExists(Long taskCode, int taskVersion) {
List<TaskInstance> taskInstanceList = processService.findValidTaskListByProcessId(this.processInstance.getId());
for (TaskInstance taskInstance : taskInstanceList) {
if (taskInstance.getTaskCode() == taskCode && taskInstance.getTaskDefinitionVersion() == taskVersion) {
return taskInstance;
}
}
return null;
}
/**
* encapsulate the task node into a task instance
*
* @param processInstance process instance
* @param taskNode taskNode
* @return TaskInstance
*/
private TaskInstance createTaskInstance(ProcessInstance processInstance, TaskNode taskNode) {
TaskInstance taskInstance = findTaskIfExists(taskNode.getCode(), taskNode.getVersion());
if (taskInstance == null) {
taskInstance = new TaskInstance();
taskInstance.setTaskCode(taskNode.getCode());
taskInstance.setTaskDefinitionVersion(taskNode.getVersion());
// task name
taskInstance.setName(taskNode.getName());
// task instance state
taskInstance.setState(ExecutionStatus.SUBMITTED_SUCCESS);
// process instance id
taskInstance.setProcessInstanceId(processInstance.getId());
// task instance type
taskInstance.setTaskType(taskNode.getType().toUpperCase());
// task instance whether alert
taskInstance.setAlertFlag(Flag.NO);
// task instance start time
taskInstance.setStartTime(null);
// task instance flag
taskInstance.setFlag(Flag.YES);
// task dry run flag
taskInstance.setDryRun(processInstance.getDryRun());
// task instance retry times
taskInstance.setRetryTimes(0);
// max task instance retry times
taskInstance.setMaxRetryTimes(taskNode.getMaxRetryTimes());
// retry task instance interval
taskInstance.setRetryInterval(taskNode.getRetryInterval());
//set task param
taskInstance.setTaskParams(taskNode.getTaskParams());
// task instance priority
if (taskNode.getTaskInstancePriority() == null) {
taskInstance.setTaskInstancePriority(Priority.MEDIUM);
} else {
taskInstance.setTaskInstancePriority(taskNode.getTaskInstancePriority());
}
String processWorkerGroup = processInstance.getWorkerGroup();
processWorkerGroup = StringUtils.isBlank(processWorkerGroup) ? DEFAULT_WORKER_GROUP : processWorkerGroup;
String taskWorkerGroup = StringUtils.isBlank(taskNode.getWorkerGroup()) ? processWorkerGroup : taskNode.getWorkerGroup();
Long processEnvironmentCode = Objects.isNull(processInstance.getEnvironmentCode()) ? -1 : processInstance.getEnvironmentCode();
Long taskEnvironmentCode = Objects.isNull(taskNode.getEnvironmentCode()) ? processEnvironmentCode : taskNode.getEnvironmentCode();
if (!processWorkerGroup.equals(DEFAULT_WORKER_GROUP) && taskWorkerGroup.equals(DEFAULT_WORKER_GROUP)) {
taskInstance.setWorkerGroup(processWorkerGroup);
taskInstance.setEnvironmentCode(processEnvironmentCode);
} else {
taskInstance.setWorkerGroup(taskWorkerGroup);
taskInstance.setEnvironmentCode(taskEnvironmentCode);
}
if (!taskInstance.getEnvironmentCode().equals(-1L)) {
Environment environment = processService.findEnvironmentByCode(taskInstance.getEnvironmentCode());
if (Objects.nonNull(environment) && StringUtils.isNotEmpty(environment.getConfig())) {
taskInstance.setEnvironmentConfig(environment.getConfig());
}
}
// delay execution time
taskInstance.setDelayTime(taskNode.getDelayTime());
}
return taskInstance;
}
public void getPreVarPool(TaskInstance taskInstance, Set<String> preTask) {
Map<String, Property> allProperty = new HashMap<>();
Map<String, TaskInstance> allTaskInstance = new HashMap<>();
if (CollectionUtils.isNotEmpty(preTask)) {
for (String preTaskCode : preTask) {
TaskInstance preTaskInstance = completeTaskList.get(preTaskCode);
if (preTaskInstance == null) {
continue;
}
String preVarPool = preTaskInstance.getVarPool();
if (StringUtils.isNotEmpty(preVarPool)) {
List<Property> properties = JSONUtils.toList(preVarPool, Property.class);
for (Property info : properties) {
setVarPoolValue(allProperty, allTaskInstance, preTaskInstance, info);
}
}
}
if (allProperty.size() > 0) {
taskInstance.setVarPool(JSONUtils.toJsonString(allProperty.values()));
}
}
}
private void setVarPoolValue(Map<String, Property> allProperty, Map<String, TaskInstance> allTaskInstance, TaskInstance preTaskInstance, Property thisProperty) {
// for this taskInstance, all the params in this part are IN.
thisProperty.setDirect(Direct.IN);
// get the name of the Property from the previous taskInstance
String proName = thisProperty.getProp();
// if a previous node already produced a Property with the same name
if (allProperty.containsKey(proName)) {
// compare the values of the two Properties
Property otherPro = allProperty.get(proName);
// if this property's value is empty, use the other one, whether the other's value is empty or not
if (StringUtils.isEmpty(thisProperty.getValue())) {
allProperty.put(proName, otherPro);
// if this property's value is not empty and the other's value is not empty either, use the value from the task that finished earlier
} else if (StringUtils.isNotEmpty(otherPro.getValue())) {
TaskInstance otherTask = allTaskInstance.get(proName);
if (otherTask.getEndTime().getTime() > preTaskInstance.getEndTime().getTime()) {
allProperty.put(proName, thisProperty);
allTaskInstance.put(proName, preTaskInstance);
} else {
allProperty.put(proName, otherPro);
}
} else {
allProperty.put(proName, thisProperty);
allTaskInstance.put(proName, preTaskInstance);
}
} else {
allProperty.put(proName, thisProperty);
allTaskInstance.put(proName, preTaskInstance);
}
}
private void submitPostNode(String parentNodeCode) {
Set<String> submitTaskNodeList = DagHelper.parsePostNodes(parentNodeCode, skipTaskNodeList, dag, completeTaskList);
List<TaskInstance> taskInstances = new ArrayList<>();
for (String taskNode : submitTaskNodeList) {
TaskNode taskNodeObject = dag.getNode(taskNode);
if (taskInstanceHashMap.containsColumn(taskNodeObject.getCode())) {
continue;
}
TaskInstance task = createTaskInstance(processInstance, taskNodeObject);
taskInstances.add(task);
}
// if the previous node succeeded, submit the post nodes
for (TaskInstance task : taskInstances) {
if (readyToSubmitTaskQueue.contains(task)) {
continue;
}
if (completeTaskList.containsKey(Long.toString(task.getTaskCode()))) {
logger.info("task {} has already run success", task.getName());
continue;
}
if (task.getState().typeIsPause() || task.getState().typeIsCancel()) {
logger.info("task {} stopped, the state is {}", task.getName(), task.getState());
} else {
addTaskToStandByList(task);
}
}
submitStandByTask();
updateProcessInstanceState();
}
/**
* determine whether the dependencies of the task node are complete
*
* @return DependResult
*/
private DependResult isTaskDepsComplete(String taskCode) {
Collection<String> startNodes = dag.getBeginNode();
// if it is a start vertex, return success directly
if (startNodes.contains(taskCode)) {
return DependResult.SUCCESS;
}
TaskNode taskNode = dag.getNode(taskCode);
List<String> depCodeList = taskNode.getDepList();
for (String depsNode : depCodeList) {
if (!dag.containsNode(depsNode)
|| forbiddenTaskList.containsKey(depsNode)
|| skipTaskNodeList.containsKey(depsNode)) {
continue;
}
// dependencies must be fully completed
if (!completeTaskList.containsKey(depsNode)) {
return DependResult.WAITING;
}
ExecutionStatus depTaskState = completeTaskList.get(depsNode).getState();
if (depTaskState.typeIsPause() || depTaskState.typeIsCancel()) {
return DependResult.NON_EXEC;
}
// ignore task state if current task is condition
if (taskNode.isConditionsTask()) {
continue;
}
if (!dependTaskSuccess(depsNode, taskCode)) {
return DependResult.FAILED;
}
}
logger.info("taskCode: {} completeDependTaskList: {}", taskCode, Arrays.toString(completeTaskList.keySet().toArray()));
return DependResult.SUCCESS;
}
/**
* the depend node is completed, but here we need to check whether the condition task branch leads to the next node
*/
private boolean dependTaskSuccess(String dependNodeName, String nextNodeName) {
if (dag.getNode(dependNodeName).isConditionsTask()) {
//condition task need check the branch to run
List<String> nextTaskList = DagHelper.parseConditionTask(dependNodeName, skipTaskNodeList, dag, completeTaskList);
if (!nextTaskList.contains(nextNodeName)) {
return false;
}
} else {
ExecutionStatus depTaskState = completeTaskList.get(dependNodeName).getState();
if (depTaskState.typeIsFailure()) {
return false;
}
}
return true;
}
/**
* query task instance by complete state
*
* @param state state
* @return task instance list
*/
private List<TaskInstance> getCompleteTaskByState(ExecutionStatus state) {
List<TaskInstance> resultList = new ArrayList<>();
for (Map.Entry<String, TaskInstance> entry : completeTaskList.entrySet()) {
if (entry.getValue().getState() == state) {
resultList.add(entry.getValue());
}
}
return resultList;
}
/**
* whether there are ongoing tasks
*
* @param state state
* @return ExecutionStatus
*/
private ExecutionStatus runningState(ExecutionStatus state) {
if (state == ExecutionStatus.READY_STOP
|| state == ExecutionStatus.READY_PAUSE
|| state == ExecutionStatus.WAITING_THREAD
|| state == ExecutionStatus.DELAY_EXECUTION) {
// if the running task is not completed, the state remains unchanged
return state;
} else {
return ExecutionStatus.RUNNING_EXECUTION;
}
}
/**
* whether a failed task exists, including submit failure, dependency failure, and execution failure (retried afterwards)
*
* @return Boolean whether has failed task
*/
private boolean hasFailedTask() {
if (this.taskFailedSubmit) {
return true;
}
if (this.errorTaskList.size() > 0) {
return true;
}
return this.dependFailedTask.size() > 0;
}
/**
* process instance failure
*
* @return Boolean whether process instance failed
*/
private boolean processFailed() {
if (hasFailedTask()) {
if (processInstance.getFailureStrategy() == FailureStrategy.END) {
return true;
}
if (processInstance.getFailureStrategy() == FailureStrategy.CONTINUE) {
return readyToSubmitTaskQueue.size() == 0 || activeTaskProcessorMaps.size() == 0;
}
}
return false;
}
/**
* whether task for waiting thread
*
* @return Boolean whether has waiting thread task
*/
private boolean hasWaitingThreadTask() {
List<TaskInstance> waitingList = getCompleteTaskByState(ExecutionStatus.WAITING_THREAD);
return CollectionUtils.isNotEmpty(waitingList);
}
/**
* prepare for pause
* 1. a failed retry task exists in the standby queue: return failure directly
* 2. a paused task exists, complement is not completed, or tasks are pending submission: return pause
* 3. otherwise: return success
*
* @return ExecutionStatus
*/
private ExecutionStatus processReadyPause() {
if (hasRetryTaskInStandBy()) {
return ExecutionStatus.FAILURE;
}
List<TaskInstance> pauseList = getCompleteTaskByState(ExecutionStatus.PAUSE);
if (CollectionUtils.isNotEmpty(pauseList)
|| !isComplementEnd()
|| readyToSubmitTaskQueue.size() > 0) {
return ExecutionStatus.PAUSE;
} else {
return ExecutionStatus.SUCCESS;
}
}
/**
* generate the latest process instance status by the tasks state
*
* @param instance process instance
* @return process instance execution status
*/
private ExecutionStatus getProcessInstanceState(ProcessInstance instance) {
ExecutionStatus state = instance.getState();
if (activeTaskProcessorMaps.size() > 0 || hasRetryTaskInStandBy()) {
// active task and retry task exists
return runningState(state);
}
// process failure
if (processFailed()) {
return ExecutionStatus.FAILURE;
}
// waiting thread
if (hasWaitingThreadTask()) {
return ExecutionStatus.WAITING_THREAD;
}
// pause
if (state == ExecutionStatus.READY_PAUSE) {
return processReadyPause();
}
// stop
if (state == ExecutionStatus.READY_STOP) {
List<TaskInstance> stopList = getCompleteTaskByState(ExecutionStatus.STOP);
List<TaskInstance> killList = getCompleteTaskByState(ExecutionStatus.KILL);
if (CollectionUtils.isNotEmpty(stopList)
|| CollectionUtils.isNotEmpty(killList)
|| !isComplementEnd()) {
return ExecutionStatus.STOP;
} else {
return ExecutionStatus.SUCCESS;
}
}
// success
if (state == ExecutionStatus.RUNNING_EXECUTION) {
List<TaskInstance> killTasks = getCompleteTaskByState(ExecutionStatus.KILL);
if (readyToSubmitTaskQueue.size() > 0) {
// tasks are pending submission with no retries, indicating a dependency is still waiting to complete
return ExecutionStatus.RUNNING_EXECUTION;
} else if (CollectionUtils.isNotEmpty(killTasks)) {
// tasks maybe killed manually
return ExecutionStatus.FAILURE;
} else {
// if the waiting queue is empty and the status is in progress, then success
return ExecutionStatus.SUCCESS;
}
}
return state;
}
/**
* whether complement end
*
* @return Boolean whether is complement end
*/
private boolean isComplementEnd() {
if (!processInstance.isComplementData()) {
return true;
}
try {
Map<String, String> cmdParam = JSONUtils.toMap(processInstance.getCommandParam());
Date endTime = DateUtils.getScheduleDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_END_DATE));
return processInstance.getScheduleTime().equals(endTime);
} catch (Exception e) {
logger.error("complement end failed ", e);
return false;
}
}
/**
* update the process instance state
* after each batch of tasks is executed, the status of the process instance is updated
*/
private void updateProcessInstanceState() {
ProcessInstance instance = processService.findProcessInstanceById(processInstance.getId());
ExecutionStatus state = getProcessInstanceState(instance);
if (processInstance.getState() != state) {
logger.info(
"work flow process instance [id: {}, name:{}], state change from {} to {}, cmd type: {}",
processInstance.getId(), processInstance.getName(),
processInstance.getState(), state,
processInstance.getCommandType());
instance.setState(state);
processService.updateProcessInstance(instance);
processInstance = instance;
StateEvent stateEvent = new StateEvent();
stateEvent.setExecutionStatus(processInstance.getState());
stateEvent.setProcessInstanceId(this.processInstance.getId());
stateEvent.setType(StateEventType.PROCESS_STATE_CHANGE);
this.processStateChangeHandler(stateEvent);
}
}
/**
* get task dependency result
*
* @param taskInstance task instance
* @return DependResult
*/
private DependResult getDependResultForTask(TaskInstance taskInstance) {
return isTaskDepsComplete(Long.toString(taskInstance.getTaskCode()));
}
/**
* add task to standby list
*
* @param taskInstance task instance
*/
private void addTaskToStandByList(TaskInstance taskInstance) {
logger.info("add task to stand by list: {}", taskInstance.getName());
try {
if (!readyToSubmitTaskQueue.contains(taskInstance)) {
readyToSubmitTaskQueue.put(taskInstance);
}
} catch (Exception e) {
logger.error("add task instance to readyToSubmitTaskQueue error, taskName: {}", taskInstance.getName(), e);
}
}
/**
* remove task from stand by list
*
* @param taskInstance task instance
*/
private void removeTaskFromStandbyList(TaskInstance taskInstance) {
logger.info("remove task from stand by list, id: {} name:{}",
taskInstance.getId(),
taskInstance.getName());
try {
readyToSubmitTaskQueue.remove(taskInstance);
} catch (Exception e) {
logger.error("remove task instance from readyToSubmitTaskQueue error, task id:{}, Name: {}",
taskInstance.getId(),
taskInstance.getName(), e);
}
}
/**
* has retry task in standby
*
* @return Boolean whether has retry task in standby
*/
private boolean hasRetryTaskInStandBy() {
for (Iterator<TaskInstance> iter = readyToSubmitTaskQueue.iterator(); iter.hasNext(); ) {
if (iter.next().getState().typeIsFailure()) {
return true;
}
}
return false;
}
/**
* close the ongoing tasks
*/
private void killAllTasks() {
logger.info("kill called on process instance id: {}, num: {}", processInstance.getId(),
activeTaskProcessorMaps.size());
for (int taskId : activeTaskProcessorMaps.keySet()) {
TaskInstance taskInstance = processService.findTaskInstanceById(taskId);
if (taskInstance == null || taskInstance.getState().typeIsFinished()) {
continue;
}
ITaskProcessor taskProcessor = activeTaskProcessorMaps.get(taskId);
taskProcessor.action(TaskAction.STOP);
if (taskProcessor.taskState().typeIsFinished()) {
StateEvent stateEvent = new StateEvent();
stateEvent.setType(StateEventType.TASK_STATE_CHANGE);
stateEvent.setProcessInstanceId(this.processInstance.getId());
stateEvent.setTaskInstanceId(taskInstance.getId());
stateEvent.setExecutionStatus(taskProcessor.taskState());
this.addStateEvent(stateEvent);
}
}
}
public boolean workFlowFinish() {
return this.processInstance.getState().typeIsFinished();
}
/**
* handling the list of tasks to be submitted
*/
private void submitStandByTask() {
try {
int length = readyToSubmitTaskQueue.size();
for (int i = 0; i < length; i++) {
TaskInstance task = readyToSubmitTaskQueue.peek();
if (task == null) {
continue;
}
// stop tasks which are retrying if forced success happens
if (task.taskCanRetry()) {
TaskInstance retryTask = processService.findTaskInstanceById(task.getId());
if (retryTask != null && retryTask.getState().equals(ExecutionStatus.FORCED_SUCCESS)) {
task.setState(retryTask.getState());
logger.info("task: {} has been forced success, put it into complete task list and stop retrying", task.getName());
removeTaskFromStandbyList(task);
completeTaskList.put(Long.toString(task.getTaskCode()), task);
submitPostNode(Long.toString(task.getTaskCode()));
continue;
}
}
// init varPool only when this task runs for the first time
if (task.isFirstRun()) {
// get pre tasks and merge all of their varPool into this task
Set<String> preTask = dag.getPreviousNodes(Long.toString(task.getTaskCode()));
getPreVarPool(task, preTask);
}
DependResult dependResult = getDependResultForTask(task);
if (DependResult.SUCCESS == dependResult) {
if (task.retryTaskIntervalOverTime()) {
int originalId = task.getId();
TaskInstance taskInstance = submitTaskExec(task);
if (taskInstance == null) {
this.taskFailedSubmit = true;
} else {
removeTaskFromStandbyList(task);
if (taskInstance.getId() != originalId) {
activeTaskProcessorMaps.remove(originalId);
}
}
}
} else if (DependResult.FAILED == dependResult) {
// if the dependency fails, the current node is not submitted and the state changes to failure.
dependFailedTask.put(Long.toString(task.getTaskCode()), task);
removeTaskFromStandbyList(task);
logger.info("task {},id:{} depend result : {}", task.getName(), task.getId(), dependResult);
} else if (DependResult.NON_EXEC == dependResult) {
// for some reason (depend task paused/stopped) this task will not be submitted
removeTaskFromStandbyList(task);
logger.info("remove task {},id:{} , because depend result : {}", task.getName(), task.getId(), dependResult);
}
}
} catch (Exception e) {
logger.error("submit standby task error", e);
}
}
/**
* get recovery task instance
*
* @param taskId task id
* @return recovery task instance
*/
private TaskInstance getRecoveryTaskInstance(String taskId) {
if (StringUtils.isEmpty(taskId)) {
return null;
}
try {
Integer intId = Integer.valueOf(taskId);
TaskInstance task = processService.findTaskInstanceById(intId);
if (task == null) {
logger.error("start node id cannot be found: {}", taskId);
} else {
return task;
}
} catch (Exception e) {
logger.error("get recovery task instance failed ", e);
}
return null;
}
/**
* get start task instance list
*
* @param cmdParam command param
* @return task instance list
*/
private List<TaskInstance> getStartTaskInstanceList(String cmdParam) {
List<TaskInstance> instanceList = new ArrayList<>();
Map<String, String> paramMap = JSONUtils.toMap(cmdParam);
if (paramMap != null && paramMap.containsKey(CMD_PARAM_RECOVERY_START_NODE_STRING)) {
String[] idList = paramMap.get(CMD_PARAM_RECOVERY_START_NODE_STRING).split(Constants.COMMA);
for (String nodeId : idList) {
TaskInstance task = getRecoveryTaskInstance(nodeId);
if (task != null) {
instanceList.add(task);
}
}
}
return instanceList;
}
/**
* parse "StartNodeNameList" from cmd param
*
* @param cmdParam command param
* @return start node name list
*/
private List<String> parseStartNodeName(String cmdParam) {
List<String> startNodeNameList = new ArrayList<>();
Map<String, String> paramMap = JSONUtils.toMap(cmdParam);
if (paramMap == null) {
return startNodeNameList;
}
if (paramMap.containsKey(CMD_PARAM_START_NODES)) {
startNodeNameList = Arrays.asList(paramMap.get(CMD_PARAM_START_NODES).split(Constants.COMMA));
}
return startNodeNameList;
}
/**
* generate the recovery node code list from the recovered task instances;
* if "StartNodeIdList" exists in the command param, the corresponding task codes are returned
*
* @return recovery node code list
*/
private List<String> getRecoveryNodeCodeList() {
List<String> recoveryNodeCodeList = new ArrayList<>();
if (CollectionUtils.isNotEmpty(recoverNodeIdList)) {
for (TaskInstance task : recoverNodeIdList) {
recoveryNodeCodeList.add(Long.toString(task.getTaskCode()));
}
}
return recoveryNodeCodeList;
}
/**
* generate flow dag
*
* @param totalTaskNodeList total task node list
* @param startNodeNameList start node name list
* @param recoveryNodeCodeList recovery node code list
* @param depNodeType depend node type
* @return ProcessDag process dag
* @throws Exception exception
*/
public ProcessDag generateFlowDag(List<TaskNode> totalTaskNodeList,
List<String> startNodeNameList,
List<String> recoveryNodeCodeList,
TaskDependType depNodeType) throws Exception {
return DagHelper.generateFlowDag(totalTaskNodeList, startNodeNameList, recoveryNodeCodeList, depNodeType);
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,600 | [Feature][API+UI] edit DAG as list page | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
There would be high resource occupancy when one DAG contains too many tasks.
Using a list page to edit the DAG could solve this problem.
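A list page would rely on paginated queries instead of rendering the whole graph at once. A minimal service-side sketch (the method and mapper names are assumptions modeled on the existing code base, not the final implementation); it returns the `totalList`/`total`/`currentPage` fields such a page would consume:

```java
// Hypothetical paging query for task definitions. Page/IPage are MyBatis-Plus
// types already used by the project; queryDefineListPaging is an assumed
// mapper method, not a confirmed one.
public Map<String, Object> queryTaskDefinitionsPaging(long projectCode, String searchVal,
                                                      String taskType, int pageNo, int pageSize) {
    Page<TaskDefinition> page = new Page<>(pageNo, pageSize);
    IPage<TaskDefinition> defs = taskDefinitionMapper
        .queryDefineListPaging(page, projectCode, searchVal, taskType);
    Map<String, Object> result = new HashMap<>();
    result.put("totalList", defs.getRecords());
    result.put("total", defs.getTotal());
    result.put("currentPage", pageNo);
    return result;
}
```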
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6600 | https://github.com/apache/dolphinscheduler/pull/6762 | 94f6c5c27ee88d5ec6f7f3856b2bc2aebab738e8 | 36c19748a6da24f23f59db7d08d61004e6dcbac5 | "2021-10-25T13:21:06Z" | java | "2021-11-15T05:54:28Z" | dolphinscheduler-ui/src/js/conf/home/pages/dag/_source/formModel/_source/referenceFromTask.vue | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
<template>
<list-box>
<div slot="text">{{ $t("Reference from") }}</div>
<div slot="content" class="copy-from" ref="copyFrom">
<div class="copy-from-content">
<el-input
class="copy-from-input"
v-model="searchVal"
:placeholder="getTaskName(value) || $t('Please choose')"
@focus="inputFocus"
@input="searchValChange"
size="small"
:suffix-icon="
dropdownVisible ? 'el-icon-arrow-up' : 'el-icon-arrow-down'
"
></el-input>
<div class="copy-from-dropdown" v-show="dropdownVisible">
<div class="scroll-box">
<ul v-infinite-scroll="load">
<li
v-for="taskDefinition in taskDefinitions"
:key="taskDefinition.code"
class="dropdown-item"
@click="itemClick(taskDefinition)"
>
{{ taskDefinition.name }}
</li>
</ul>
<p class="dropdown-msg" v-if="loading">{{ $t("Loading...") }}</p>
<p class="dropdown-msg" v-if="noMore">{{ $t("No more...") }}</p>
</div>
</div>
</div>
</div>
</list-box>
</template>
<script>
import ListBox from '../tasks/_source/listBox'
import { mapActions } from 'vuex'
export default {
name: 'copy-from-task',
props: {
taskType: String
},
inject: ['formModel'],
data () {
return {
pageNo: 1,
pageSize: 10,
searchVal: '',
value: '',
loading: false,
noMore: false,
taskDefinitions: [],
dropdownVisible: false
}
},
mounted () {
document.addEventListener('click', this.outsideClick)
this.load()
},
destroyed () {
document.removeEventListener('click', this.outsideClick)
},
methods: {
...mapActions('dag', ['getTaskDefinitions']),
outsideClick (e) {
const elem = this.$refs.copyFrom
if (!elem.contains(e.target) && this.dropdownVisible) {
this.dropdownVisible = false
}
},
searchValChange (val) {
this.load(true)
},
load (override) {
if (this.loading) return
if (override) {
this.noMore = false
this.pageNo = 1
}
if (this.noMore) return
this.loading = true
this.getTaskDefinitions({
pageNo: this.pageNo,
pageSize: this.pageSize,
searchVal: this.searchVal,
taskType: this.taskType
}).then((res) => {
this.taskDefinitions = override ? res.totalList : this.taskDefinitions.concat(res.totalList)
this.pageNo = res.currentPage + 1
this.noMore = this.taskDefinitions.length >= res.total
this.loading = false
})
},
itemClick (taskDefinition) {
this.value = taskDefinition.code
this.searchVal = taskDefinition.name
this.dropdownVisible = false
if (this.formModel) {
const backfillItem = this.formModel.taskToBackfillItem(taskDefinition)
this.formModel.backfillRefresh = false
this.$nextTick(() => {
this.formModel.backfillItem = backfillItem
this.formModel.backfill(backfillItem)
this.formModel.backfillRefresh = true
})
}
},
inputFocus () {
this.searchVal = ''
this.dropdownVisible = true
},
inputBlur () {
this.dropdownVisible = false
},
getTaskName (code) {
const taskDefinition = this.taskDefinitions.find(
(taskDefinition) => taskDefinition.code === code
)
return taskDefinition ? taskDefinition.name : ''
}
},
components: {
ListBox
}
}
</script>
<style lang="scss" scoped>
.copy-from {
position: relative;
&-content {
width: 100%;
}
&-input {
width: 100%;
}
&-dropdown {
width: 100%;
position: absolute;
padding: 6px 0;
top: 42px;
left: 0;
z-index: 10;
border: 1px solid #e4e7ed;
border-radius: 4px;
background-color: #fff;
box-shadow: 0 2px 12px 0 rgba(0, 0, 0, 0.1);
.scroll-box {
width: 100%;
max-height: 200px;
overflow: auto;
}
.dropdown-item {
font-size: 14px;
padding: 0 20px;
position: relative;
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
color: #606266;
height: 34px;
line-height: 34px;
box-sizing: border-box;
cursor: pointer;
&:hover {
background-color: #f5f7fa;
}
&.selected {
color: #409eff;
font-weight: 700;
}
}
&:before,
&:after {
content: "";
position: absolute;
display: block;
width: 0;
height: 0;
top: -6px;
left: 35px;
border-width: 6px;
border-top-width: 0;
border-color: transparent;
border-bottom-color: #fff;
border-style: solid;
z-index: 10;
}
&:before {
top: -8px;
left: 33px;
border-width: 8px;
border-top-width: 0;
border-bottom-color: #ebeef5;
z-index: 9;
}
.dropdown-msg {
text-align: center;
color: #666;
font-size: 12px;
line-height: 34px;
margin: 0;
}
}
}
</style>
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,600 | [Feature][API+UI] edit DAG as list page | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
There would be high resource occupancy when one DAG contains too many tasks.
Using a list page to edit the DAG could solve this problem.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6600 | https://github.com/apache/dolphinscheduler/pull/6762 | 94f6c5c27ee88d5ec6f7f3856b2bc2aebab738e8 | 36c19748a6da24f23f59db7d08d61004e6dcbac5 | "2021-10-25T13:21:06Z" | java | "2021-11-15T05:54:28Z" | dolphinscheduler-ui/src/js/conf/home/pages/dag/_source/formModel/formModel.vue | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
<template>
<div class="form-model-wrapper" v-clickoutside="_handleClose">
<div class="title-box">
<span class="name">{{ $t("Current node settings") }}</span>
<span class="go-subtask">
<!-- Component can't pop up box to do component processing -->
<m-log
v-if="type === 'instance' && taskInstance"
:item="backfillItem"
:task-instance-id="taskInstance.id"
>
<template slot="history"
><a href="javascript:" @click="_seeHistory"
><em class="ansicon el-icon-alarm-clock"></em
><em>{{ $t("View history") }}</em></a
></template
>
<template slot="log"
><a href="javascript:"
><em class="ansicon el-icon-document"></em
><em>{{ $t("View log") }}</em></a
></template
>
</m-log>
<a href="javascript:" @click="_goSubProcess" v-if="_isGoSubProcess"
><em class="ansicon ri-node-tree"></em
><em>{{ $t("Enter this child node") }}</em></a
>
</span>
</div>
<div class="content-box" v-if="isContentBox">
<div class="form-model">
<!-- Reference from task -->
<!-- <reference-from-task :taskType="nodeData.taskType" /> -->
<!-- Node name -->
<m-list-box>
<div slot="text">{{ $t("Node name") }}</div>
<div slot="content">
<el-input
type="text"
v-model="name"
size="small"
:disabled="isDetails"
:placeholder="$t('Please enter name (required)')"
maxlength="100"
@blur="_verifName()"
>
</el-input>
</div>
</m-list-box>
<!-- Running sign -->
<m-list-box>
<div slot="text">{{ $t("Run flag") }}</div>
<div slot="content">
<el-radio-group v-model="runFlag" size="small">
<el-radio :label="'YES'" :disabled="isDetails">{{
$t("Normal")
}}</el-radio>
<el-radio :label="'NO'" :disabled="isDetails">{{
$t("Prohibition execution")
}}</el-radio>
</el-radio-group>
</div>
</m-list-box>
<!-- description -->
<m-list-box>
<div slot="text">{{ $t("Description") }}</div>
<div slot="content">
<el-input
:rows="2"
type="textarea"
:disabled="isDetails"
v-model="desc"
:placeholder="$t('Please enter description')"
>
</el-input>
</div>
</m-list-box>
<!-- Task priority -->
<m-list-box>
<div slot="text">{{ $t("Task priority") }}</div>
<div slot="content">
<span class="label-box" style="width: 193px; display: inline-block">
<m-priority v-model="taskInstancePriority"></m-priority>
</span>
</div>
</m-list-box>
<!-- Worker group and environment -->
<m-list-box>
<div slot="text">{{ $t("Worker group") }}</div>
<div slot="content">
<span class="label-box" style="width: 193px; display: inline-block">
<m-worker-groups v-model="workerGroup"></m-worker-groups>
</span>
<span class="text-b">{{ $t("Environment Name") }}</span>
<m-related-environment
v-model="environmentCode"
:workerGroup="workerGroup"
:isNewCreate="isNewCreate"
v-on:environmentCodeEvent="_onUpdateEnvironmentCode"
></m-related-environment>
</div>
</m-list-box>
<!-- Number of failed retries -->
<m-list-box v-if="nodeData.taskType !== 'SUB_PROCESS'">
<div slot="text">{{ $t("Number of failed retries") }}</div>
<div slot="content">
<m-select-input
v-model="maxRetryTimes"
:list="[0, 1, 2, 3, 4]"
></m-select-input>
<span>({{ $t("Times") }})</span>
<span class="text-b">{{ $t("Failed retry interval") }}</span>
<m-select-input
v-model="retryInterval"
:list="[1, 10, 30, 60, 120]"
></m-select-input>
<span>({{ $t("Minute") }})</span>
</div>
</m-list-box>
<!-- Delay execution time -->
<m-list-box
v-if="
nodeData.taskType !== 'SUB_PROCESS' &&
nodeData.taskType !== 'CONDITIONS' &&
nodeData.taskType !== 'DEPENDENT' &&
nodeData.taskType !== 'SWITCH'
"
>
<div slot="text">{{ $t("Delay execution time") }}</div>
<div slot="content">
<m-select-input
v-model="delayTime"
:list="[0, 1, 5, 10]"
></m-select-input>
<span>({{ $t("Minute") }})</span>
</div>
</m-list-box>
<!-- Branch flow -->
<m-list-box v-if="nodeData.taskType === 'CONDITIONS'">
<div slot="text">{{ $t("State") }}</div>
<div slot="content">
<span class="label-box" style="width: 193px; display: inline-block">
<el-select
style="width: 157px"
size="small"
v-model="successNode"
:disabled="true"
>
<el-option
v-for="item in stateList"
:key="item.value"
:value="item.value"
:label="item.label"
></el-option>
</el-select>
</span>
<span class="text-b" style="padding-left: 38px">{{
$t("Branch flow")
}}</span>
<el-select
style="width: 157px"
size="small"
v-model="successBranch"
clearable
:disabled="isDetails"
>
<el-option
v-for="item in postTasks"
:key="item.code"
:value="item.code"
:label="item.name"
></el-option>
</el-select>
</div>
</m-list-box>
<m-list-box v-if="nodeData.taskType === 'CONDITIONS'">
<div slot="text">{{ $t("State") }}</div>
<div slot="content">
<span class="label-box" style="width: 193px; display: inline-block">
<el-select
style="width: 157px"
size="small"
v-model="failedNode"
:disabled="true"
>
<el-option
v-for="item in stateList"
:key="item.value"
:value="item.value"
:label="item.label"
></el-option>
</el-select>
</span>
<span class="text-b" style="padding-left: 38px">{{
$t("Branch flow")
}}</span>
<el-select
style="width: 157px"
size="small"
v-model="failedBranch"
clearable
:disabled="isDetails"
>
<el-option
v-for="item in postTasks"
:key="item.code"
:value="item.code"
:label="item.name"
></el-option>
</el-select>
</div>
</m-list-box>
<div v-if="backfillRefresh">
<!-- Task timeout alarm -->
<m-timeout-alarm
v-if="nodeData.taskType !== 'DEPENDENT'"
ref="timeout"
:backfill-item="backfillItem"
@on-timeout="_onTimeout"
>
</m-timeout-alarm>
<!-- Dependent timeout alarm -->
<m-dependent-timeout
v-if="nodeData.taskType === 'DEPENDENT'"
ref="dependentTimeout"
:backfill-item="backfillItem"
@on-timeout="_onDependentTimeout"
>
</m-dependent-timeout>
<!-- shell node -->
<m-shell
v-if="nodeData.taskType === 'SHELL'"
@on-params="_onParams"
@on-cache-params="_onCacheParams"
ref="SHELL"
:backfill-item="backfillItem"
>
</m-shell>
<!-- sub_process node -->
<m-sub-process
v-if="nodeData.taskType === 'SUB_PROCESS'"
@on-params="_onParams"
@on-cache-params="_onCacheParams"
@on-set-process-name="_onSetProcessName"
ref="SUB_PROCESS"
:backfill-item="backfillItem"
>
</m-sub-process>
<!-- procedure node -->
<m-procedure
v-if="nodeData.taskType === 'PROCEDURE'"
@on-params="_onParams"
@on-cache-params="_onCacheParams"
ref="PROCEDURE"
:backfill-item="backfillItem"
>
</m-procedure>
<!-- sql node -->
<m-sql
v-if="nodeData.taskType === 'SQL'"
@on-params="_onParams"
@on-cache-params="_onCacheParams"
ref="SQL"
:create-node-id="nodeData.id"
:backfill-item="backfillItem"
>
</m-sql>
<!-- spark node -->
<m-spark
v-if="nodeData.taskType === 'SPARK'"
@on-params="_onParams"
@on-cache-params="_onCacheParams"
ref="SPARK"
:backfill-item="backfillItem"
>
</m-spark>
<m-flink
v-if="nodeData.taskType === 'FLINK'"
@on-params="_onParams"
@on-cache-params="_onCacheParams"
ref="FLINK"
:backfill-item="backfillItem"
>
</m-flink>
<!-- mr node -->
<m-mr
v-if="nodeData.taskType === 'MR'"
@on-params="_onParams"
@on-cache-params="_onCacheParams"
ref="MR"
:backfill-item="backfillItem"
>
</m-mr>
<!-- python node -->
<m-python
v-if="nodeData.taskType === 'PYTHON'"
@on-params="_onParams"
@on-cache-params="_onCacheParams"
ref="PYTHON"
:backfill-item="backfillItem"
>
</m-python>
<!-- dependent node -->
<m-dependent
v-if="nodeData.taskType === 'DEPENDENT'"
@on-dependent="_onDependent"
@on-cache-dependent="_onCacheDependent"
ref="DEPENDENT"
:backfill-item="backfillItem"
>
</m-dependent>
<m-http
v-if="nodeData.taskType === 'HTTP'"
@on-params="_onParams"
@on-cache-params="_onCacheParams"
ref="HTTP"
:backfill-item="backfillItem"
>
</m-http>
<m-datax
v-if="nodeData.taskType === 'DATAX'"
@on-params="_onParams"
@on-cache-params="_onCacheParams"
ref="DATAX"
:backfill-item="backfillItem"
>
</m-datax>
<m-pigeon
v-if="nodeData.taskType === 'PIGEON'"
@on-params="_onParams"
@on-cache-params="_onCacheParams"
:backfill-item="backfillItem"
ref="PIGEON">
</m-pigeon>
<m-sqoop
v-if="nodeData.taskType === 'SQOOP'"
@on-params="_onParams"
@on-cache-params="_onCacheParams"
ref="SQOOP"
:backfill-item="backfillItem"
>
</m-sqoop>
<m-conditions
v-if="nodeData.taskType === 'CONDITIONS'"
ref="CONDITIONS"
@on-dependent="_onDependent"
@on-cache-dependent="_onCacheDependent"
:backfill-item="backfillItem"
:prev-tasks="prevTasks"
>
</m-conditions>
<m-switch
v-if="nodeData.taskType === 'SWITCH'"
ref="SWITCH"
@on-switch-result="_onSwitchResult"
:backfill-item="backfillItem"
:nodeData="nodeData"
:postTasks="postTasks"
></m-switch>
<!-- waterdrop node -->
<m-waterdrop
v-if="nodeData.taskType === 'WATERDROP'"
@on-params="_onParams"
@on-cache-params="_onCacheParams"
ref="WATERDROP"
:backfill-item="backfillItem"
>
</m-waterdrop>
</div>
<!-- Pre-tasks in workflow -->
<m-pre-tasks
ref="preTasks"
v-if="['SHELL', 'SUB_PROCESS'].indexOf(nodeData.taskType) > -1"
:code="code"
/>
</div>
</div>
<div class="bottom-box">
<div class="submit" style="background: #fff">
<el-button type="text" size="small" id="cancelBtn">
{{ $t("Cancel") }}
</el-button>
<el-button
type="primary"
size="small"
round
:loading="spinnerLoading"
@click="ok()"
:disabled="isDetails"
>{{ spinnerLoading ? $t("Loading...") : $t("Confirm add") }}
</el-button>
</div>
</div>
</div>
</template>
<script>
import _ from 'lodash'
import { mapActions, mapState } from 'vuex'
import mLog from './log'
import mMr from './tasks/mr'
import mSql from './tasks/sql'
import i18n from '@/module/i18n'
import mListBox from './tasks/_source/listBox'
import mShell from './tasks/shell'
import mWaterdrop from './tasks/waterdrop'
import mSpark from './tasks/spark'
import mFlink from './tasks/flink'
import mPython from './tasks/python'
import mProcedure from './tasks/procedure'
import mDependent from './tasks/dependent'
import mHttp from './tasks/http'
import mDatax from './tasks/datax'
import mPigeon from './tasks/pigeon'
import mConditions from './tasks/conditions'
import mSwitch from './tasks/switch.vue'
import mSqoop from './tasks/sqoop'
import mSubProcess from './tasks/sub_process'
import mSelectInput from './_source/selectInput'
import mTimeoutAlarm from './_source/timeoutAlarm'
import mDependentTimeout from './_source/dependentTimeout'
import mWorkerGroups from './_source/workerGroups'
import mRelatedEnvironment from './_source/relatedEnvironment'
import mPreTasks from './tasks/pre_tasks'
import clickoutside from '@/module/util/clickoutside'
import disabledState from '@/module/mixin/disabledState'
import mPriority from '@/module/components/priority/priority'
import { findComponentDownward } from '@/module/util/'
// import ReferenceFromTask from './_source/referenceFromTask.vue'
export default {
name: 'form-model',
data () {
return {
// loading
spinnerLoading: false,
// node name
name: '',
// description
desc: '',
// Node echo data
backfillItem: {},
cacheBackfillItem: {},
// Resource(list)
resourcesList: [],
successNode: 'success',
failedNode: 'failed',
successBranch: '',
failedBranch: '',
conditionResult: {
successNode: [],
failedNode: []
},
switchResult: {},
// dependence
dependence: {},
// cache dependence
cacheDependence: {},
// task code
code: 0,
// Current node params data
params: {},
// Running sign
runFlag: 'YES',
      // Rendering switch for the content box; kept false until backfill finishes to work around a double echo of node data
isContentBox: false,
// Number of failed retries
maxRetryTimes: '0',
// Failure retry interval
retryInterval: '1',
// Delay execution time
delayTime: '0',
// Task timeout alarm
timeout: {},
// (For Dependent nodes) Wait start timeout alarm
waitStartTimeout: {},
// Task priority
taskInstancePriority: 'MEDIUM',
// worker group id
workerGroup: 'default',
// selected environment
environmentCode: '',
selectedWorkerGroup: '',
stateList: [
{
value: 'success',
label: `${i18n.$t('Success')}`
},
{
value: 'failed',
label: `${i18n.$t('Failed')}`
}
],
// for CONDITIONS and SWITCH
postTasks: [],
prevTasks: [],
// refresh part of the formModel, after set backfillItem outside
backfillRefresh: true,
// whether this is a new Task
isNewCreate: true
}
},
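  // Expose this form instance to descendants; e.g. the copy-from-task
  // dropdown injects `formModel` to backfill the form on selection.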
provide () {
return {
formModel: this
}
},
/**
   * Handle clicks that originate outside the component (clickoutside directive)
*/
directives: { clickoutside },
mixins: [disabledState],
props: {
nodeData: Object,
type: {
type: String,
default: ''
}
},
inject: ['dagChart'],
methods: {
...mapActions('dag', ['getTaskInstanceList']),
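    /**
     * Flatten a task definition from the store into the `backfillItem`
     * shape this form consumes; dependence/condition/switch results are
     * lifted out of taskParams and the remainder is kept under `params`.
     */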
taskToBackfillItem (task) {
return {
code: task.code,
conditionResult: task.taskParams.conditionResult,
switchResult: task.taskParams.switchResult,
delayTime: task.delayTime,
dependence: task.taskParams.dependence,
desc: task.description,
id: task.id,
maxRetryTimes: task.failRetryTimes,
name: task.name,
params: _.omit(task.taskParams, [
'conditionResult',
'dependence',
'waitStartTimeout',
'switchResult'
]),
retryInterval: task.failRetryInterval,
runFlag: task.flag,
taskInstancePriority: task.taskPriority,
timeout: {
interval: task.timeout,
strategy: task.timeoutNotifyStrategy,
enable: task.timeoutFlag === 'OPEN'
},
type: task.taskType,
waitStartTimeout: task.taskParams.waitStartTimeout,
workerGroup: task.workerGroup,
environmentCode: task.environmentCode
}
},
/**
     * Dependence payload updated from the child component
*/
_onDependent (o) {
this.dependence = Object.assign(this.dependence, {}, o)
},
_onSwitchResult (o) {
this.switchResult = o
},
/**
* cache dependent
*/
_onCacheDependent (o) {
this.cacheDependence = Object.assign(this.cacheDependence, {}, o)
},
/**
* Task timeout alarm
*/
_onTimeout (o) {
this.timeout = Object.assign(this.timeout, {}, o)
},
/**
* Dependent timeout alarm
*/
_onDependentTimeout (o) {
this.timeout = Object.assign(this.timeout, {}, o.waitCompleteTimeout)
this.waitStartTimeout = Object.assign(
this.waitStartTimeout,
{},
o.waitStartTimeout
)
},
/**
* Click external to close the current component
*/
_handleClose () {
// this.close()
},
/**
* Jump to task instance
*/
_seeHistory () {
this.$emit('seeHistory', this.backfillItem.name)
},
/**
     * Enter the sub-process: jump to the sub process instance when viewing
     * an instance, otherwise to the sub process definition.
*/
_goSubProcess () {
if (_.isEmpty(this.backfillItem)) {
this.$message.warning(
`${i18n.$t(
'The newly created sub-Process has not yet been executed and cannot enter the sub-Process'
)}`
)
return
}
if (this.router.history.current.name === 'projects-instance-details') {
if (!this.taskInstance) {
this.$message.warning(
`${i18n.$t(
'The task has not been executed and cannot enter the sub-Process'
)}`
)
return
}
this.store
.dispatch('dag/getSubProcessId', { taskId: this.taskInstance.id })
.then((res) => {
this.$emit('onSubProcess', {
subInstanceId: res.data.subProcessInstanceId,
fromThis: this
})
})
.catch((e) => {
this.$message.error(e.msg || '')
})
} else {
const processDefinitionId =
this.backfillItem.params.processDefinitionId
const process = this.processListS.find(
(def) => def.id === processDefinitionId
)
this.$emit('onSubProcess', {
subProcessCode: process.code,
fromThis: this
})
}
},
_onUpdateWorkerGroup (o) {
this.selectedWorkerGroup = o
},
/**
* return params
*/
_onParams (o) {
this.params = Object.assign({}, o)
},
_onUpdateEnvironmentCode (o) {
this.environmentCode = o
},
/**
* _onCacheParams is reserved
*/
_onCacheParams (o) {
this.params = Object.assign(this.params, {}, o)
},
/**
* verification name
*/
_verifName () {
if (!_.trim(this.name)) {
this.$message.warning(`${i18n.$t('Please enter name (required)')}`)
return false
}
if (
this.successBranch &&
this.successBranch === this.failedBranch
) {
this.$message.warning(
`${i18n.$t(
'Cannot select the same node for successful branch flow and failed branch flow'
)}`
)
return false
}
      return true
},
_verifWorkGroup () {
let item = this.store.state.security.workerGroupsListAll.find((item) => {
return item.id === this.workerGroup
})
if (item === undefined) {
this.$message.warning(
`${i18n.$t(
'The Worker group no longer exists, please select the correct Worker group!'
)}`
)
return false
}
return true
},
/**
* Global verification procedure
*/
_verification () {
// Verify name
if (!this._verifName()) {
return
}
// verif workGroup
if (!this._verifWorkGroup()) {
return
}
// Verify task alarm parameters
if (this.nodeData.taskType === 'DEPENDENT') {
if (!this.$refs.dependentTimeout._verification()) {
return
}
} else {
if (!this.$refs.timeout._verification()) {
return
}
}
// Verify node parameters
if (!this.$refs[this.nodeData.taskType]._verification()) {
return
}
// set preTask
if (this.$refs.preTasks) {
this.$refs.preTasks.setPreNodes()
}
this.successBranch && (this.conditionResult.successNode[0] = this.successBranch)
this.failedBranch && (this.conditionResult.failedNode[0] = this.failedBranch)
this.$emit('addTaskInfo', {
item: {
code: this.nodeData.id,
name: this.name,
description: this.desc,
taskType: this.nodeData.taskType,
taskParams: {
...this.params,
dependence: this.cacheDependence,
conditionResult: this.conditionResult,
waitStartTimeout: this.waitStartTimeout,
switchResult: this.switchResult
},
flag: this.runFlag,
taskPriority: this.taskInstancePriority,
workerGroup: this.workerGroup,
failRetryTimes: this.maxRetryTimes,
failRetryInterval: this.retryInterval,
timeoutFlag: this.timeout.enable ? 'OPEN' : 'CLOSE',
timeoutNotifyStrategy: this.timeout.strategy,
timeout: this.timeout.interval || 0,
delayTime: this.delayTime,
environmentCode: this.environmentCode || -1,
status: this.status,
branch: this.branch
},
fromThis: this
})
// set run flag
this._setRunFlag()
// set edge label
this._setEdgeLabel()
},
/**
     * Echo the selected sub-workflow's name into the node name field
*/
_onSetProcessName (name) {
this.name = name
},
/**
* set run flag
* TODO
*/
_setRunFlag () {},
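    /**
     * Label the outgoing edges of a CONDITIONS node on the DAG canvas so
     * the success/failed branches stay visible after saving.
     */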
_setEdgeLabel () {
if (this.successBranch || this.failedBranch) {
const canvas = findComponentDownward(this.dagChart, 'dag-canvas')
const edges = canvas.getEdges()
        const successTask = this.postTasks.find(
          (t) => t.code === this.successBranch
        )
        const failedTask = this.postTasks.find(
          (t) => t.code === this.failedBranch
        )
const sEdge = edges.find(
(edge) =>
successTask &&
edge.sourceId === this.code &&
edge.targetId === successTask.code
)
const fEdge = edges.find(
(edge) =>
failedTask &&
edge.sourceId === this.code &&
edge.targetId === failedTask.code
)
sEdge && canvas.setEdgeLabel(sEdge.id, this.$t('Success'))
fEdge && canvas.setEdgeLabel(fEdge.id, this.$t('Failed'))
}
},
/**
* Submit verification
*/
ok () {
this._verification()
},
/**
* Close and destroy component and component internal events
*/
close () {
let flag = false
      // A node that was never saved should be removed when the form closes
      if (!this.backfillItem.name) {
        flag = true
      }
      this.isContentBox = false
      // flag: whether the parent should delete this (unsaved) node
this.$emit('close', {
item: this.cacheBackfillItem,
flag: flag,
fromThis: this
})
},
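    /**
     * Populate the form fields from a backfill item; an empty item means a
     * brand-new node, in which case only the default worker group is applied.
     */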
backfill (backfillItem) {
const o = backfillItem
      // A non-empty object means an existing node is being backfilled
if (!_.isEmpty(o)) {
this.code = o.code
this.name = o.name
this.taskInstancePriority = o.taskInstancePriority
this.runFlag = o.runFlag || 'YES'
this.desc = o.desc
this.maxRetryTimes = o.maxRetryTimes
this.retryInterval = o.retryInterval
this.delayTime = o.delayTime
if (o.conditionResult) {
this.successBranch = o.conditionResult.successNode[0]
this.failedBranch = o.conditionResult.failedNode[0]
}
if (o.switchResult) {
this.switchResult = o.switchResult
}
        // A missing worker group is recovered from the latest task instance
        // of this process instance below.
if (o.workerGroup === undefined) {
this.store
.dispatch('dag/getTaskInstanceList', {
pageSize: 10,
pageNo: 1,
processInstanceId: this.nodeData.instanceId,
name: o.name
})
.then((res) => {
this.workerGroup = res.totalList[0].workerGroup
})
} else {
this.workerGroup = o.workerGroup
}
this.environmentCode = o.environmentCode === -1 ? '' : o.environmentCode
this.params = o.params || {}
this.dependence = o.dependence || {}
this.cacheDependence = o.dependence || {}
} else {
this.workerGroup = this.store.state.security.workerGroupsListAll[0].id
}
this.cacheBackfillItem = JSON.parse(JSON.stringify(o))
this.isContentBox = true
}
},
created () {
// Backfill data
let taskList = this.store.state.dag.tasks
let o = {}
if (taskList.length) {
taskList.forEach((task) => {
if (task.code === this.nodeData.id) {
const backfillItem = this.taskToBackfillItem(task)
o = backfillItem
this.backfillItem = backfillItem
this.isNewCreate = false
}
})
}
this.code = this.nodeData.id
this.backfill(o)
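    // Derive the downstream/upstream task lists from the DAG canvas; they
    // back the CONDITIONS/SWITCH branch selectors and prev-task checks.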
if (this.dagChart) {
const canvas = findComponentDownward(this.dagChart, 'dag-canvas')
const postNodes = canvas.getPostNodes(this.code)
const prevNodes = canvas.getPrevNodes(this.code)
const buildTask = (node) => ({
code: node.id,
name: node.data.taskName,
type: node.data.taskType
})
this.postTasks = postNodes.map(buildTask)
this.prevTasks = prevNodes.map(buildTask)
}
},
mounted () {
let self = this
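    // Bind Cancel on mousedown and prevent the default so the button never
    // takes focus; otherwise blurring an active input could fire its
    // handlers before the form closes.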
$('#cancelBtn').mousedown(function (event) {
event.preventDefault()
self.close()
})
},
updated () {},
beforeDestroy () {},
destroyed () {},
computed: {
...mapState('dag', ['processListS', 'taskInstances']),
/**
* Child workflow entry show/hide
*/
_isGoSubProcess () {
return this.nodeData.taskType === 'SUB_PROCESS' && this.name && this.processListS && this.processListS.length > 0
},
taskInstance () {
if (this.taskInstances.length > 0) {
return this.taskInstances.find(
(instance) => instance.taskCode === this.nodeData.id
)
}
return null
}
},
components: {
mListBox,
mMr,
mShell,
mWaterdrop,
mSubProcess,
mProcedure,
mSql,
mLog,
mSpark,
mFlink,
mPython,
mDependent,
mHttp,
mDatax,
mPigeon,
mSqoop,
mConditions,
mSwitch,
mSelectInput,
mTimeoutAlarm,
mDependentTimeout,
mPriority,
mWorkerGroups,
mRelatedEnvironment,
mPreTasks
// ReferenceFromTask
}
}
</script>
<style lang="scss" rel="stylesheet/scss">
@import "./formModel";
.ans-radio-disabled {
.ans-radio-inner:after {
background-color: #6f8391;
}
}
</style>
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,600 | [Feature][API+UI] edit DAG as list page | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
There would be high resource occupancy when one DAG contains too many tasks.
Using a list page to edit the DAG could solve this problem.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6600 | https://github.com/apache/dolphinscheduler/pull/6762 | 94f6c5c27ee88d5ec6f7f3856b2bc2aebab738e8 | 36c19748a6da24f23f59db7d08d61004e6dcbac5 | "2021-10-25T13:21:06Z" | java | "2021-11-15T05:54:28Z" | dolphinscheduler-ui/src/js/conf/home/pages/projects/pages/taskDefinition/_source/list.vue | |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,600 | [Feature][API+UI] edit DAG as list page | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
There would be high resource occupancy when one DAG contains too many tasks.
Using a list page to edit the DAG could solve this problem.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6600 | https://github.com/apache/dolphinscheduler/pull/6762 | 94f6c5c27ee88d5ec6f7f3856b2bc2aebab738e8 | 36c19748a6da24f23f59db7d08d61004e6dcbac5 | "2021-10-25T13:21:06Z" | java | "2021-11-15T05:54:28Z" | dolphinscheduler-ui/src/js/conf/home/pages/projects/pages/taskDefinition/index.vue | |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,600 | [Feature][API+UI] edit DAG as list page | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
There would be high resource occupancy when one DAG contains too many tasks.
Using a list page to edit the DAG could solve this problem.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6600 | https://github.com/apache/dolphinscheduler/pull/6762 | 94f6c5c27ee88d5ec6f7f3856b2bc2aebab738e8 | 36c19748a6da24f23f59db7d08d61004e6dcbac5 | "2021-10-25T13:21:06Z" | java | "2021-11-15T05:54:28Z" | dolphinscheduler-ui/src/js/conf/home/router/index.js | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import Vue from 'vue'
import store from '@/conf/home/store'
import localStore from '@/module/util/localStorage'
import i18n from '@/module/i18n/index.js'
import config from '~/external/config'
import Router from 'vue-router'
Vue.use(Router)
const router = new Router({
routes: [
{
path: '/',
name: 'index',
redirect: {
name: 'home'
}
},
{
path: '/home',
name: 'home',
component: resolve => require(['../pages/home/index'], resolve),
meta: {
title: `${i18n.$t('Home')} - DolphinScheduler`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/projects',
name: 'projects',
component: resolve => require(['../pages/projects/index'], resolve),
meta: {
title: `${i18n.$t('Project')}`
},
redirect: {
name: 'projects-list'
},
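      // Resolve the project by code before entering project-scoped child
      // routes, cache it in the store/localStorage, and fall back to the
      // project list when the lookup fails.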
beforeEnter: (to, from, next) => {
const blacklist = ['projects', 'projects-list']
const { projectCode } = to.params || {}
if (!blacklist.includes(to.name) && projectCode && projectCode !== localStore.getItem('projectCode')) {
store.dispatch('projects/getProjectByCode', projectCode).then(res => {
store.commit('dag/setProjectId', res.id)
store.commit('dag/setProjectCode', res.code)
store.commit('dag/setProjectName', res.name)
localStore.setItem('projectId', res.id)
localStore.setItem('projectCode', res.code)
localStore.setItem('projectName', res.name)
next()
}).catch(e => {
next({ name: 'projects-list' })
})
} else {
next()
}
},
children: [
{
path: '/projects/list',
name: 'projects-list',
component: resolve => require(['../pages/projects/pages/list/index'], resolve),
meta: {
title: `${i18n.$t('Project')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/projects/:projectCode/index',
name: 'projects-index',
component: resolve => require(['../pages/projects/pages/index/index'], resolve),
meta: {
title: `${i18n.$t('Project Home')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/projects/:projectCode/kinship',
name: 'projects-kinship',
component: resolve => require(['../pages/projects/pages/kinship/index'], resolve),
meta: {
title: `${i18n.$t('Kinship')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/projects/:projectCode/definition',
name: 'definition',
component: resolve => require(['../pages/projects/pages/definition/index'], resolve),
meta: {
title: `${i18n.$t('Process definition')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
},
redirect: {
name: 'projects-definition-list'
},
children: [
{
path: '/projects/:projectCode/definition/list',
name: 'projects-definition-list',
component: resolve => require(['../pages/projects/pages/definition/pages/list/index'], resolve),
meta: {
title: `${i18n.$t('Process definition')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/projects/:projectCode/definition/list/:code',
name: 'projects-definition-details',
component: resolve => require(['../pages/projects/pages/definition/pages/details/index'], resolve),
meta: {
title: `${i18n.$t('Process definition details')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/projects/:projectCode/definition/create',
name: 'definition-create',
component: resolve => require(['../pages/projects/pages/definition/pages/create/index'], resolve),
meta: {
title: `${i18n.$t('Create process definition')}`
}
},
{
path: '/projects/:projectCode/definition/tree/:code',
name: 'definition-tree-view-index',
component: resolve => require(['../pages/projects/pages/definition/pages/tree/index'], resolve),
meta: {
title: `${i18n.$t('TreeView')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/projects/:projectCode/definition/list/timing/:code',
name: 'definition-timing-details',
component: resolve => require(['../pages/projects/pages/definition/timing/index'], resolve),
meta: {
title: `${i18n.$t('Scheduled task list')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
}
]
},
{
path: '/projects/:projectCode/instance',
name: 'instance',
component: resolve => require(['../pages/projects/pages/instance/index'], resolve),
meta: {
title: `${i18n.$t('Process Instance')}`
},
redirect: {
name: 'projects-instance-list'
},
children: [
{
path: '/projects/:projectCode/instance/list',
name: 'projects-instance-list',
component: resolve => require(['../pages/projects/pages/instance/pages/list/index'], resolve),
meta: {
title: `${i18n.$t('Process Instance')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/projects/:projectCode/instance/list/:id',
name: 'projects-instance-details',
component: resolve => require(['../pages/projects/pages/instance/pages/details/index'], resolve),
meta: {
title: `${i18n.$t('Process instance details')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/projects/:projectCode/instance/gantt/:id',
name: 'instance-gantt-index',
component: resolve => require(['../pages/projects/pages/instance/pages/gantt/index'], resolve),
meta: {
title: `${i18n.$t('Gantt')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
}
]
},
{
path: '/projects/:projectCode/task-instance',
name: 'task-instance',
component: resolve => require(['../pages/projects/pages/taskInstance'], resolve),
meta: {
title: `${i18n.$t('Task Instance')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/projects/:projectCode/task-record',
name: 'task-record',
component: resolve => require(['../pages/projects/pages/taskRecord'], resolve),
meta: {
title: `${i18n.$t('Task record')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/projects/:projectCode/history-task-record',
name: 'history-task-record',
component: resolve => require(['../pages/projects/pages/historyTaskRecord'], resolve),
meta: {
title: `${i18n.$t('History task record')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
}
]
},
{
path: '/resource',
name: 'resource',
component: resolve => require(['../pages/resource/index'], resolve),
redirect: {
name: 'file'
},
meta: {
title: `${i18n.$t('Resources')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
},
children: [
{
path: '/resource/file',
name: 'file',
component: resolve => require(['../pages/resource/pages/file/pages/list/index'], resolve),
meta: {
title: `${i18n.$t('File Manage')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/resource/file/create',
name: 'resource-file-create',
component: resolve => require(['../pages/resource/pages/file/pages/create/index'], resolve),
meta: {
title: `${i18n.$t('Create Resource')}`
}
},
{
path: '/resource/file/createFolder',
name: 'resource-file-createFolder',
component: resolve => require(['../pages/resource/pages/file/pages/createFolder/index'], resolve),
meta: {
title: `${i18n.$t('Create Resource')}`
}
},
{
path: '/resource/file/subFileFolder/:id',
name: 'resource-file-subFileFolder',
component: resolve => require(['../pages/resource/pages/file/pages/subFileFolder/index'], resolve),
meta: {
title: `${i18n.$t('Create Resource')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/resource/file/subFile/:id',
name: 'resource-file-subFile',
component: resolve => require(['../pages/resource/pages/file/pages/subFile/index'], resolve),
meta: {
title: `${i18n.$t('Create Resource')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/resource/file/list/:id',
name: 'resource-file-details',
component: resolve => require(['../pages/resource/pages/file/pages/details/index'], resolve),
meta: {
title: `${i18n.$t('File Details')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/resource/file/subdirectory/:id',
name: 'resource-file-subdirectory',
component: resolve => require(['../pages/resource/pages/file/pages/subdirectory/index'], resolve),
meta: {
title: `${i18n.$t('File Manage')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/resource/file/edit/:id',
name: 'resource-file-edit',
component: resolve => require(['../pages/resource/pages/file/pages/edit/index'], resolve),
meta: {
title: `${i18n.$t('File Details')}`
}
},
{
path: '/resource/udf',
name: 'udf',
component: resolve => require(['../pages/resource/pages/udf/index'], resolve),
meta: {
title: `${i18n.$t('UDF manage')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
},
children: [
{
path: '/resource/udf',
name: 'resource-udf',
component: resolve => require(['../pages/resource/pages/udf/pages/resource/index'], resolve),
meta: {
title: `${i18n.$t('UDF Resources')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/resource/udf/subUdfDirectory/:id',
name: 'resource-udf-subUdfDirectory',
component: resolve => require(['../pages/resource/pages/udf/pages/subUdfDirectory/index'], resolve),
meta: {
title: `${i18n.$t('UDF Resources')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/resource/udf/createUdfFolder',
name: 'resource-udf-createUdfFolder',
component: resolve => require(['../pages/resource/pages/udf/pages/createUdfFolder/index'], resolve),
meta: {
title: `${i18n.$t('Create Resource')}`
}
},
{
path: '/resource/udf/subCreateUdfFolder/:id',
name: 'resource-udf-subCreateUdfFolder',
component: resolve => require(['../pages/resource/pages/udf/pages/subUdfFolder/index'], resolve),
meta: {
title: `${i18n.$t('Create Resource')}`
}
},
{
path: '/resource/func',
name: 'resource-func',
component: resolve => require(['../pages/resource/pages/udf/pages/function/index'], resolve),
meta: {
title: `${i18n.$t('UDF Function')}`
}
}
]
}
]
},
{
path: '/datasource',
name: 'datasource',
component: resolve => require(['../pages/datasource/index'], resolve),
meta: {
title: `${i18n.$t('Datasource')}`
},
redirect: {
name: 'datasource-list'
},
children: [
{
path: '/datasource/list',
name: 'datasource-list',
component: resolve => require(['../pages/datasource/pages/list/index'], resolve),
meta: {
title: `${i18n.$t('Datasource')}`
}
}
]
},
{
path: '/security',
name: 'security',
component: resolve => require(['../pages/security/index'], resolve),
meta: {
title: `${i18n.$t('Security')}`
},
redirect: {
name: 'tenement-manage'
},
children: [
{
path: '/security/tenant',
name: 'tenement-manage',
component: resolve => require(['../pages/security/pages/tenement/index'], resolve),
meta: {
title: `${i18n.$t('Tenant Manage')}`
}
},
{
path: '/security/users',
name: 'users-manage',
component: resolve => require(['../pages/security/pages/users/index'], resolve),
meta: {
title: `${i18n.$t('User Manage')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/security/warning-groups',
name: 'warning-groups-manage',
component: resolve => require(['../pages/security/pages/warningGroups/index'], resolve),
meta: {
title: `${i18n.$t('Warning group manage')}`
}
},
{
path: '/security/warning-instance',
name: 'warning-instance-manage',
component: resolve => require(['../pages/security/pages/warningInstance/index'], resolve),
meta: {
title: `${i18n.$t('Warning instance manage')}`
}
},
{
path: '/security/queue',
name: 'queue-manage',
component: resolve => require(['../pages/security/pages/queue/index'], resolve),
meta: {
title: `${i18n.$t('Queue manage')}`
}
},
{
path: '/security/worker-groups',
name: 'worker-groups-manage',
component: resolve => require(['../pages/security/pages/workerGroups/index'], resolve),
meta: {
title: `${i18n.$t('Worker group manage')}`
}
},
{
path: '/security/environments',
name: 'environment-manage',
component: resolve => require(['../pages/security/pages/environment/index'], resolve),
meta: {
title: `${i18n.$t('Environment manage')}`
}
},
{
path: '/security/token',
name: 'token-manage',
component: resolve => require(['../pages/security/pages/token/index'], resolve),
meta: {
title: `${i18n.$t('Token manage')}`
}
}
]
},
{
path: '/user',
name: 'user',
component: resolve => require(['../pages/user/index'], resolve),
meta: {
title: `${i18n.$t('User Center')}`
},
redirect: {
name: 'account'
},
children: [
{
path: '/user/account',
name: 'account',
component: resolve => require(['../pages/user/pages/account/index'], resolve),
meta: {
title: `${i18n.$t('User Information')}`
}
},
{
path: '/user/password',
name: 'password',
component: resolve => require(['../pages/user/pages/password/index'], resolve),
meta: {
title: `${i18n.$t('Edit password')}`
}
},
{
path: '/user/token',
name: 'token',
component: resolve => require(['../pages/user/pages/token/index'], resolve),
meta: {
title: `${i18n.$t('Token manage')}`
}
}
]
},
{
path: '/monitor',
name: 'monitor',
component: resolve => require(['../pages/monitor/index'], resolve),
meta: {
title: 'monitor'
},
redirect: {
name: 'servers-master'
},
children: [
{
path: '/monitor/servers/master',
name: 'servers-master',
component: resolve => require(['../pages/monitor/pages/servers/master'], resolve),
meta: {
title: `${i18n.$t('Service-Master')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/monitor/servers/worker',
name: 'servers-worker',
component: resolve => require(['../pages/monitor/pages/servers/worker'], resolve),
meta: {
title: `${i18n.$t('Service-Worker')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/monitor/servers/alert',
name: 'servers-alert',
component: resolve => require(['../pages/monitor/pages/servers/alert'], resolve),
meta: {
title: 'Alert',
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/monitor/servers/rpcserver',
name: 'servers-rpcserver',
component: resolve => require(['../pages/monitor/pages/servers/rpcserver'], resolve),
meta: {
title: 'Rpcserver',
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/monitor/servers/apiserver',
name: 'servers-apiserver',
component: resolve => require(['../pages/monitor/pages/servers/apiserver'], resolve),
meta: {
title: 'Apiserver',
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/monitor/servers/db',
name: 'servers-db',
component: resolve => require(['../pages/monitor/pages/servers/db'], resolve),
meta: {
title: 'DB',
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/monitor/servers/statistics',
name: 'statistics',
component: resolve => require(['../pages/monitor/pages/servers/statistics'], resolve),
meta: {
title: 'statistics',
refreshInSwitchedTab: config.refreshInSwitchedTab
}
}
]
}
]
})
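// Wrap router.push so redundant-navigation rejections from vue-router are
// swallowed instead of surfacing as unhandled promise errors.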
const VueRouterPush = Router.prototype.push
Router.prototype.push = function push (to) {
return VueRouterPush.call(this, to).catch(err => err)
}
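// Before every navigation: remove any lingering tooltip elements and keep
// the document title in sync with the target route's meta.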
router.beforeEach((to, from, next) => {
const $body = $('body')
$body.find('.tooltip.fade.top.in').remove()
if (to.meta.title) {
document.title = `${to.meta.title} - DolphinScheduler`
}
next()
})
export default router
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,600 | [Feature][API+UI] edit DAG as list page | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
There would be high resource occupancy when one DAG contains too many tasks.
Using a list page to edit the DAG could solve this problem.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6600 | https://github.com/apache/dolphinscheduler/pull/6762 | 94f6c5c27ee88d5ec6f7f3856b2bc2aebab738e8 | 36c19748a6da24f23f59db7d08d61004e6dcbac5 | "2021-10-25T13:21:06Z" | java | "2021-11-15T05:54:28Z" | dolphinscheduler-ui/src/js/conf/home/store/dag/actions.js | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import _ from 'lodash'
import io from '@/module/io'
export default {
/**
* Task status acquisition
*/
getTaskState ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get(`projects/${state.projectCode}/process-instances/${payload}/tasks`, {
processInstanceId: payload
}, res => {
state.taskInstances = res.data.taskList
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
* Update process definition status
*/
editProcessState ({ state }, payload) {
return new Promise((resolve, reject) => {
io.post(`projects/${state.projectCode}/process-definition/${payload.code}/release`, {
name: payload.name,
releaseState: payload.releaseState
}, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
* get process definition versions pagination info
*/
getProcessDefinitionVersionsPage ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get(`projects/${state.projectCode}/process-definition/${payload.code}/versions`, payload, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
* switch process definition version
*/
switchProcessDefinitionVersion ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get(`projects/${state.projectCode}/process-definition/${payload.code}/versions/${payload.version}`, {}, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
* delete process definition version
*/
deleteProcessDefinitionVersion ({ state }, payload) {
return new Promise((resolve, reject) => {
io.delete(`projects/${state.projectCode}/process-definition/${payload.code}/versions/${payload.version}`, {}, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
* Update process instance status
*/
editExecutorsState ({ state }, payload) {
return new Promise((resolve, reject) => {
io.post(`projects/${state.projectCode}/executors/execute`, {
processInstanceId: payload.processInstanceId,
executeType: payload.executeType
}, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
   * Verify that the DAG name is available (duplicate-name check)
*/
verifDAGName ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get(`projects/${state.projectCode}/process-definition/verify-name`, {
name: payload
}, res => {
state.name = payload
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
* Get process definition DAG diagram details
*/
getProcessDetails ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get(`projects/${state.projectCode}/process-definition/${payload}`, {
}, res => {
// process definition code
state.code = res.data.processDefinition.code
// version
state.version = res.data.processDefinition.version
// name
state.name = res.data.processDefinition.name
// description
state.description = res.data.processDefinition.description
// taskRelationJson
state.connects = res.data.processTaskRelationList
// locations
state.locations = JSON.parse(res.data.processDefinition.locations)
// global params
state.globalParams = res.data.processDefinition.globalParamList
// timeout
state.timeout = res.data.processDefinition.timeout
// executionType
state.executionType = res.data.processDefinition.executionType
// tenantCode
state.tenantCode = res.data.processDefinition.tenantCode || 'default'
// tasks info
state.tasks = res.data.taskDefinitionList.map(task => _.pick(task, [
'code',
'name',
'version',
'description',
'delayTime',
'taskType',
'taskParams',
'flag',
'taskPriority',
'workerGroup',
'failRetryTimes',
'failRetryInterval',
'timeoutFlag',
'timeoutNotifyStrategy',
'timeout',
'environmentCode'
]))
resolve(res.data)
}).catch(res => {
reject(res)
})
})
},
/**
   * Batch copy process definitions to a target project
*/
copyProcess ({ state }, payload) {
return new Promise((resolve, reject) => {
io.post(`projects/${state.projectCode}/process-definition/batch-copy`, {
codes: payload.codes,
targetProjectCode: payload.targetProjectCode
}, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
   * Batch move process definitions to a target project
*/
moveProcess ({ state }, payload) {
return new Promise((resolve, reject) => {
io.post(`projects/${state.projectCode}/process-definition/batch-move`, {
codes: payload.codes,
targetProjectCode: payload.targetProjectCode
}, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
   * Get all projects created by or authorized to the logged-in user
*/
getAllItems ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get('projects/created-and-authed', {}, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
* Get the process instance DAG diagram details
*/
getInstancedetail ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get(`projects/${state.projectCode}/process-instances/${payload}`, {
}, res => {
const { processDefinition, processTaskRelationList, taskDefinitionList } = res.data.dagData
// code
state.code = processDefinition.code
// version
state.version = processDefinition.version
// name
state.name = res.data.name
// desc
state.description = processDefinition.description
// connects
state.connects = processTaskRelationList
// locations
state.locations = JSON.parse(processDefinition.locations)
// global params
state.globalParams = processDefinition.globalParamList
// timeout
state.timeout = processDefinition.timeout
// executionType
state.executionType = processDefinition.executionType
// tenantCode
state.tenantCode = res.data.tenantCode || 'default'
// tasks info
state.tasks = taskDefinitionList.map(task => _.pick(task, [
'code',
'name',
'version',
'description',
'delayTime',
'taskType',
'taskParams',
'flag',
'taskPriority',
'workerGroup',
'failRetryTimes',
'failRetryInterval',
'timeoutFlag',
'timeoutNotifyStrategy',
'timeout',
'environmentCode'
]))
// startup parameters
state.startup = _.assign(state.startup, _.pick(res.data, ['commandType', 'failureStrategy', 'processInstancePriority', 'workerGroup', 'warningType', 'warningGroupId', 'receivers', 'receiversCc']))
state.startup.commandParam = JSON.parse(res.data.commandParam)
resolve(res.data)
}).catch(res => {
reject(res)
})
})
},
/**
* Create process definition
*/
saveDAGchart ({ state }, payload) {
return new Promise((resolve, reject) => {
io.post(`projects/${state.projectCode}/process-definition`, {
locations: JSON.stringify(state.locations),
name: _.trim(state.name),
taskDefinitionJson: JSON.stringify(state.tasks),
taskRelationJson: JSON.stringify(state.connects),
tenantCode: state.tenantCode,
executionType: state.executionType,
description: _.trim(state.description),
globalParams: JSON.stringify(state.globalParams),
timeout: state.timeout
}, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
* Process definition update
*/
updateDefinition ({ state }, payload) {
return new Promise((resolve, reject) => {
io.put(`projects/${state.projectCode}/process-definition/${payload}`, {
locations: JSON.stringify(state.locations),
name: _.trim(state.name),
taskDefinitionJson: JSON.stringify(state.tasks),
taskRelationJson: JSON.stringify(state.connects),
tenantCode: state.tenantCode,
executionType: state.executionType,
description: _.trim(state.description),
globalParams: JSON.stringify(state.globalParams),
timeout: state.timeout,
releaseState: state.releaseState
}, res => {
resolve(res)
state.isEditDag = false
}).catch(e => {
reject(e)
})
})
},
/**
* Process instance update
*/
updateInstance ({ state }, instanceId) {
return new Promise((resolve, reject) => {
io.put(`projects/${state.projectCode}/process-instances/${instanceId}`, {
syncDefine: state.syncDefine,
globalParams: JSON.stringify(state.globalParams),
locations: JSON.stringify(state.locations),
taskDefinitionJson: JSON.stringify(state.tasks),
taskRelationJson: JSON.stringify(state.connects),
tenantCode: state.tenantCode,
timeout: state.timeout
}, res => {
resolve(res)
state.isEditDag = false
}).catch(e => {
reject(e)
})
})
},
/**
   * Get the full list of process definitions (not paginated; used by the sub-workflow selector)
*/
getProcessList ({ state }, payload) {
return new Promise((resolve, reject) => {
if (state.processListS.length) {
resolve()
return
}
io.get(`projects/${state.projectCode}/process-definition/simple-list`, payload, res => {
state.processListS = res.data
resolve(res.data)
}).catch(res => {
reject(res)
})
})
},
/**
* Get a list of process definitions (list page usage with pagination)
*/
getProcessListP ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get(`projects/${state.projectCode}/process-definition`, payload, res => {
resolve(res.data)
}).catch(res => {
reject(res)
})
})
},
/**
   * Get a list of projects
*/
getProjectList ({ state }, payload) {
return new Promise((resolve, reject) => {
if (state.projectListS.length) {
resolve()
return
}
io.get('projects/created-and-authed', payload, res => {
state.projectListS = res.data
resolve(res.data)
}).catch(res => {
reject(res)
})
})
},
/**
* Get a list of process definitions by project code
*/
getProcessByProjectCode ({ state }, code) {
return new Promise((resolve, reject) => {
io.get(`projects/${code}/process-definition/all`, res => {
resolve(res.data)
}).catch(res => {
reject(res)
})
})
},
/**
* get datasource
*/
getDatasourceList ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get('datasources/list', {
type: payload
}, res => {
resolve(res)
}).catch(res => {
reject(res)
})
})
},
/**
* get resources
*/
getResourcesList ({ state }) {
return new Promise((resolve, reject) => {
if (state.resourcesListS.length) {
resolve()
return
}
io.get('resources/list', {
type: 'FILE'
}, res => {
state.resourcesListS = res.data
resolve(res.data)
}).catch(res => {
reject(res)
})
})
},
/**
* get jar
*/
getResourcesListJar ({ state }) {
return new Promise((resolve, reject) => {
if (state.resourcesListJar.length) {
resolve()
return
}
io.get('resources/query-by-type', {
type: 'FILE'
}, res => {
state.resourcesListJar = res.data
resolve(res.data)
}).catch(res => {
reject(res)
})
})
},
/**
* Get process instance
*/
getProcessInstance ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get(`projects/${state.projectCode}/process-instances`, payload, res => {
state.instanceListS = res.data.totalList
resolve(res.data)
}).catch(res => {
reject(res)
})
})
},
/**
* Get alarm list
*/
getNotifyGroupList ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get('alert-groups/list', res => {
state.notifyGroupListS = _.map(res.data, v => {
return {
id: v.id,
code: v.groupName,
disabled: false
}
})
resolve(_.cloneDeep(state.notifyGroupListS))
}).catch(res => {
reject(res)
})
})
},
/**
* Process definition startup interface
*/
processStart ({ state }, payload) {
return new Promise((resolve, reject) => {
io.post(`projects/${state.projectCode}/executors/start-process-instance`, payload, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
* View log
*/
getLog ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get('log/detail', payload, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
   * Get the sub-process instance id according to the parent task instance id
* @param taskId
*/
getSubProcessId ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get(`projects/${state.projectCode}/process-instances/query-sub-by-parent`, payload, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
* Called before the process definition starts
*/
getStartCheck ({ state }, payload) {
return new Promise((resolve, reject) => {
io.post(`projects/${state.projectCode}/executors/start-check`, payload, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
* Create timing
*/
createSchedule ({ state }, payload) {
return new Promise((resolve, reject) => {
io.post(`projects/${state.projectCode}/schedules`, payload, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
* Preview timing
*/
previewSchedule ({ state }, payload) {
return new Promise((resolve, reject) => {
io.post(`projects/${state.projectCode}/schedules/preview`, payload, res => {
resolve(res.data)
}).catch(e => {
reject(e)
})
})
},
/**
* Timing list paging
*/
getScheduleList ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get(`projects/${state.projectCode}/schedules`, payload, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
   * Take a schedule offline
*/
scheduleOffline ({ state }, payload) {
return new Promise((resolve, reject) => {
io.post(`projects/${state.projectCode}/schedules/${payload.id}/offline`, payload, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
   * Put a schedule online
*/
scheduleOnline ({ state }, payload) {
return new Promise((resolve, reject) => {
io.post(`projects/${state.projectCode}/schedules/${payload.id}/online`, payload, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
* Edit timing
*/
updateSchedule ({ state }, payload) {
return new Promise((resolve, reject) => {
io.put(`projects/${state.projectCode}/schedules/${payload.id}`, payload, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
* Delete process instance
*/
deleteInstance ({ state }, payload) {
return new Promise((resolve, reject) => {
io.delete(`projects/${state.projectCode}/process-instances/${payload.processInstanceId}`, {}, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
* Batch delete process instance
*/
batchDeleteInstance ({ state }, payload) {
return new Promise((resolve, reject) => {
io.post(`projects/${state.projectCode}/process-instances/batch-delete`, payload, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
* Delete definition
*/
deleteDefinition ({ state }, payload) {
return new Promise((resolve, reject) => {
io.delete(`projects/${state.projectCode}/process-definition/${payload.code}`, {}, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
* Batch delete definition
*/
batchDeleteDefinition ({ state }, payload) {
return new Promise((resolve, reject) => {
io.post(`projects/${state.projectCode}/process-definition/batch-delete`, payload, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
* export definition
*/
exportDefinition ({ state }, payload) {
const downloadBlob = (data, fileNameS = 'json') => {
if (!data) {
return
}
const blob = new Blob([data])
const fileName = `${fileNameS}.json`
if ('download' in document.createElement('a')) { // not IE browser
const url = window.URL.createObjectURL(blob)
const link = document.createElement('a')
link.style.display = 'none'
link.href = url
link.setAttribute('download', fileName)
document.body.appendChild(link)
link.click()
document.body.removeChild(link) // remove the element after the download completes
window.URL.revokeObjectURL(url) // release the blob object
} else { // IE 10+
window.navigator.msSaveBlob(blob, fileName)
}
}
io.post(`projects/${state.projectCode}/process-definition/batch-export`, { codes: payload.codes }, res => {
downloadBlob(res, payload.fileName)
}, e => {
// export failures are silently ignored here
}, {
responseType: 'blob'
})
},
/**
* Get process instance variables
*/
getViewvariables ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get(`projects/${state.projectCode}/process-instances/${payload.processInstanceId}/view-variables`, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
* Get udfs function based on data source
*/
getUdfList ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get('resources/udf-func/list', payload, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
* Query task instance list
*/
getTaskInstanceList ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get(`projects/${state.projectCode}/task-instances`, payload, res => {
resolve(res.data)
}).catch(e => {
reject(e)
})
})
},
/**
* Force fail/kill/need_fault_tolerance task success
*/
forceTaskSuccess ({ state }, payload) {
return new Promise((resolve, reject) => {
io.post(`projects/${state.projectCode}/task-instances/${payload.taskInstanceId}/force-success`, payload, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
* Query task record list
*/
getTaskRecordList ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get('projects/task-record/list-paging', payload, res => {
resolve(res.data)
}).catch(e => {
reject(e)
})
})
},
/**
* Query history task record list
*/
getHistoryTaskRecordList ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get('projects/task-record/history-list-paging', payload, res => {
resolve(res.data)
}).catch(e => {
reject(e)
})
})
},
/**
* tree chart
*/
getViewTree ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get(`projects/${state.projectCode}/process-definition/${payload.code}/view-tree`, { limit: payload.limit }, res => {
resolve(res.data)
}).catch(e => {
reject(e)
})
})
},
/**
* gantt chart
*/
getViewGantt ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get(`projects/${state.projectCode}/process-instances/${payload.processInstanceId}/view-gantt`, payload, res => {
resolve(res.data)
}).catch(e => {
reject(e)
})
})
},
/**
* Query task node list
*/
getProcessTasksList ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get(`projects/${state.projectCode}/process-definition/${payload.code}/tasks`, payload, res => {
resolve(res.data)
}).catch(e => {
reject(e)
})
})
},
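/**
* Batch query task lists by process definition codes
*/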
getTaskListDefIdAll ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get(`projects/${state.projectCode}/process-definition/batch-query-tasks`, payload, res => {
resolve(res.data)
}).catch(e => {
reject(e)
})
})
},
/**
* remove timing
*/
deleteTiming ({ state }, payload) {
return new Promise((resolve, reject) => {
io.delete(`projects/${state.projectCode}/schedules/${payload.scheduleId}`, payload, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
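/**
* Get resource details by id
*/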
getResourceId ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get(`resources/${payload.id}`, payload, res => {
resolve(res.data)
}).catch(e => {
reject(e)
})
})
},
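/**
* Generate task codes in batch
*/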
genTaskCodeList ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get(`projects/${state.projectCode}/task-definition/gen-task-codes`, payload, res => {
resolve(res.data)
}).catch(e => {
reject(e)
})
})
},
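/**
* Query task definition list
*/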
getTaskDefinitions ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get(`projects/${state.projectCode}/task-definition`, payload, res => {
resolve(res.data)
}).catch(e => {
reject(e)
})
})
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,600 | [Feature][API+UI] edit DAG as list page | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
There would be high resource occupancy when one DAG contains too many tasks.
Using a list page to edit the DAG could solve this problem.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6600 | https://github.com/apache/dolphinscheduler/pull/6762 | 94f6c5c27ee88d5ec6f7f3856b2bc2aebab738e8 | 36c19748a6da24f23f59db7d08d61004e6dcbac5 | "2021-10-25T13:21:06Z" | java | "2021-11-15T05:54:28Z" | dolphinscheduler-ui/src/js/module/components/conditions/conditions.vue | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
<template>
<div class="conditions-model">
<div class="left">
<slot name="button-group"></slot>
</div>
<div class="right">
<div class="form-box">
<slot name="search-group" v-if="isShow"></slot>
<template v-if="!isShow">
<div class="list">
<el-button size="mini" @click="_ckQuery" icon="el-icon-search"></el-button>
</div>
<div class="list">
<el-input v-model="searchVal"
@keyup.enter="_ckQuery"
size="mini"
:placeholder="$t('Please enter keyword')"
type="text"
style="width:180px;">
</el-input>
</div>
</template>
</div>
</div>
</div>
</template>
<script>
import _ from 'lodash'
export default {
name: 'conditions',
data () {
return {
// search value
searchVal: ''
}
},
props: {
operation: Array
},
methods: {
/**
* Emit the query parameters
*/
_ckQuery () {
this.$emit('on-conditions', {
searchVal: _.trim(this.searchVal)
})
}
},
computed: {
// Whether the search-group slot was provided
isShow () {
return this.$slots['search-group']
}
},
created () {
// Merge route query parameters
if (!_.isEmpty(this.$route.query)) {
this.searchVal = this.$route.query.searchVal || ''
}
},
components: {}
}
</script>
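<!--
Illustrative usage sketch (the parent component, tag name, and handler below are
assumptions for demonstration, not part of this file):
<m-conditions @on-conditions="_onConditions">
  <template slot="button-group">
    <el-button size="mini">Create</el-button>
  </template>
</m-conditions>
The `on-conditions` event payload is { searchVal }, as emitted by _ckQuery above.
-->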
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,600 | [Feature][API+UI] edit DAG as list page | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
There would be high resource occupancy when one DAG contains too many tasks.
Using a list page to edit the DAG could solve this problem.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6600 | https://github.com/apache/dolphinscheduler/pull/6762 | 94f6c5c27ee88d5ec6f7f3856b2bc2aebab738e8 | 36c19748a6da24f23f59db7d08d61004e6dcbac5 | "2021-10-25T13:21:06Z" | java | "2021-11-15T05:54:28Z" | dolphinscheduler-ui/src/js/module/components/secondaryMenu/_source/menu.js | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import i18n from '@/module/i18n'
import config from '~/external/config'
import Permissions from '@/module/permissions'
const menu = {
projects: [
{
name: `${i18n.$t('Project Home')}`,
id: 0,
path: 'projects-index',
isOpen: true,
enabled: true,
icon: 'ri-home-4-line',
children: []
},
{
name: `${i18n.$t('Kinship')}`,
id: 1,
path: 'projects-kinship',
isOpen: true,
enabled: true,
icon: 'ri-node-tree',
children: []
},
{
name: `${i18n.$t('Process')}`,
id: 2,
path: '',
isOpen: true,
enabled: true,
icon: 'el-icon-s-tools',
children: [
{
name: `${i18n.$t('Process definition')}`,
path: 'definition',
id: 0,
enabled: true
},
{
name: `${i18n.$t('Process Instance')}`,
path: 'instance',
id: 1,
enabled: true
},
{
name: `${i18n.$t('Task Instance')}`,
path: 'task-instance',
id: 2,
enabled: true
},
{
name: `${i18n.$t('Task record')}`,
path: 'task-record',
id: 3,
enabled: config.recordSwitch
},
{
name: `${i18n.$t('History task record')}`,
path: 'history-task-record',
id: 4,
enabled: config.recordSwitch
}
]
}
],
security: [
{
name: `${i18n.$t('Tenant Manage')}`,
id: 0,
path: 'tenement-manage',
isOpen: true,
enabled: true,
icon: 'el-icon-user-solid',
children: []
},
{
name: `${i18n.$t('User Manage')}`,
id: 1,
path: 'users-manage',
isOpen: true,
enabled: true,
icon: 'el-icon-user-solid',
children: []
},
{
name: `${i18n.$t('Warning group manage')}`,
id: 2,
path: 'warning-groups-manage',
isOpen: true,
enabled: true,
icon: 'el-icon-warning',
children: []
},
{
name: `${i18n.$t('Warning instance manage')}`,
id: 3,
path: 'warning-instance-manage',
isOpen: true,
enabled: true,
icon: 'el-icon-warning-outline',
children: []
},
{
name: `${i18n.$t('Worker group manage')}`,
id: 4,
path: 'worker-groups-manage',
isOpen: true,
enabled: true,
icon: 'el-icon-s-custom',
children: []
},
{
name: `${i18n.$t('Queue manage')}`,
id: 5,
path: 'queue-manage',
isOpen: true,
enabled: true,
icon: 'el-icon-s-grid',
children: []
},
{
name: `${i18n.$t('Environment manage')}`,
id: 6,
path: 'environment-manage',
isOpen: true,
enabled: true,
icon: 'el-icon-setting',
children: []
},
{
name: `${i18n.$t('Token manage')}`,
id: 7,
path: 'token-manage',
isOpen: true,
icon: 'el-icon-document',
children: [],
enabled: true
}
],
resource: [
{
name: `${i18n.$t('File Manage')}`,
id: 0,
path: 'file',
isOpen: true,
icon: 'el-icon-document-copy',
children: [],
enabled: true
},
{
name: `${i18n.$t('UDF manage')}`,
id: 1,
path: '',
isOpen: true,
icon: 'el-icon-document',
enabled: true,
children: [
{
name: `${i18n.$t('Resource manage')}`,
path: 'resource-udf',
id: 0,
enabled: true
},
{
name: `${i18n.$t('Function manage')}`,
path: 'resource-func',
id: 1,
enabled: true
}
]
}
],
user: [
{
name: `${i18n.$t('User Information')}`,
id: 0,
path: 'account',
isOpen: true,
icon: 'el-icon-user-solid',
children: [],
enabled: true
},
{
name: `${i18n.$t('Edit password')}`,
id: 1,
path: 'password',
isOpen: true,
icon: 'el-icon-key',
children: [],
enabled: true
},
{
name: `${i18n.$t('Token manage')}`,
id: 2,
path: 'token',
isOpen: true,
icon: 'el-icon-s-custom',
children: [],
enabled: Permissions.getAuth()
}
],
monitor: [
{
name: `${i18n.$t('Servers manage')}`,
id: 1,
path: '',
isOpen: true,
enabled: true,
icon: 'el-icon-menu',
children: [
{
name: 'Master',
path: 'servers-master',
id: 0,
enabled: true
},
{
name: 'Worker',
path: 'servers-worker',
id: 1,
enabled: true
},
{
name: 'DB',
path: 'servers-db',
id: 6,
enabled: true
}
]
},
{
name: `${i18n.$t('Statistics manage')}`,
id: 0,
path: '',
isOpen: true,
enabled: true,
icon: 'el-icon-menu',
children: [
{
name: 'Statistics',
path: 'statistics',
id: 0,
enabled: true
}
]
}
]
}
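// Usage sketch (illustrative): the default export below is a lookup by module type,
// e.g. calling it with 'projects' returns the secondary-menu items for the Projects module.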
export default type => menu[type]
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,600 | [Feature][API+UI] edit DAG as list page | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
There would be high resource occupancy when one DAG contains too many tasks.
Using a list page to edit the DAG could solve this problem.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6600 | https://github.com/apache/dolphinscheduler/pull/6762 | 94f6c5c27ee88d5ec6f7f3856b2bc2aebab738e8 | 36c19748a6da24f23f59db7d08d61004e6dcbac5 | "2021-10-25T13:21:06Z" | java | "2021-11-15T05:54:28Z" | dolphinscheduler-ui/src/js/module/i18n/locale/en_US.js | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
export default {
'User Name': 'User Name',
'Please enter user name': 'Please enter user name',
Password: 'Password',
'Please enter your password': 'Please enter your password',
'Password consists of at least two combinations of numbers, letters, and characters, and the length is between 6-22': 'Password consists of at least two combinations of numbers, letters, and characters, and the length is between 6-22',
Login: 'Login',
Home: 'Home',
'Failed to create node to save': 'Failed to create node to save',
'Global parameters': 'Global parameters',
'Local parameters': 'Local parameters',
'Copy success': 'Copy success',
'The browser does not support automatic copying': 'The browser does not support automatic copying',
'Whether to save the DAG graph': 'Whether to save the DAG graph',
'Current node settings': 'Current node settings',
'View history': 'View history',
'View log': 'View log',
'Force success': 'Force success',
'Enter this child node': 'Enter this child node',
'Node name': 'Node name',
'Please enter name (required)': 'Please enter name (required)',
'Run flag': 'Run flag',
Normal: 'Normal',
'Prohibition execution': 'Prohibition execution',
'Please enter description': 'Please enter description',
'Number of failed retries': 'Number of failed retries',
Times: 'Times',
'Failed retry interval': 'Failed retry interval',
Minute: 'Minute',
'Delay execution time': 'Delay execution time',
'Delay execution': 'Delay execution',
'Forced success': 'Forced success',
Cancel: 'Cancel',
'Confirm add': 'Confirm add',
'The newly created sub-Process has not yet been executed and cannot enter the sub-Process': 'The newly created sub-Process has not yet been executed and cannot enter the sub-Process',
'The task has not been executed and cannot enter the sub-Process': 'The task has not been executed and cannot enter the sub-Process',
'Name already exists': 'Name already exists',
'Download Log': 'Download Log',
'Refresh Log': 'Refresh Log',
'Enter full screen': 'Enter full screen',
'Cancel full screen': 'Cancel full screen',
Close: 'Close',
'Update log success': 'Update log success',
'No more logs': 'No more logs',
'No log': 'No log',
'Loading Log...': 'Loading Log...',
'Set the DAG diagram name': 'Set the DAG diagram name',
'Please enter description(optional)': 'Please enter description(optional)',
'Set global': 'Set global',
'Whether to go online the process definition': 'Whether to go online the process definition',
'Whether to update the process definition': 'Whether to update the process definition',
Add: 'Add',
'DAG graph name cannot be empty': 'DAG graph name cannot be empty',
'Create Datasource': 'Create Datasource',
'Project Home': 'Workflow Monitor',
'Project Manage': 'Project',
'Create Project': 'Create Project',
'Cron Manage': 'Cron Manage',
'Copy Workflow': 'Copy Workflow',
'Tenant Manage': 'Tenant Manage',
'Create Tenant': 'Create Tenant',
'User Manage': 'User Manage',
'Create User': 'Create User',
'User Information': 'User Information',
'Edit Password': 'Edit Password',
Success: 'Success',
Failed: 'Failed',
Delete: 'Delete',
'Please choose': 'Please choose',
'Please enter a positive integer': 'Please enter a positive integer',
'Program Type': 'Program Type',
'Main Class': 'Main Class',
'Main Jar Package': 'Main Jar Package',
'Please enter main jar package': 'Please enter main jar package',
'Please enter main class': 'Please enter main class',
'Main Arguments': 'Main Arguments',
'Please enter main arguments': 'Please enter main arguments',
'Option Parameters': 'Option Parameters',
'Please enter option parameters': 'Please enter option parameters',
Resources: 'Resources',
'Custom Parameters': 'Custom Parameters',
'Custom template': 'Custom template',
Datasource: 'Datasource',
methods: 'methods',
'Please enter the procedure method': 'Please enter the procedure script \n\ncall procedure:{call <procedure-name>[(<arg1>,<arg2>, ...)]}\n\ncall function:{?= call <procedure-name>[(<arg1>,<arg2>, ...)]} ',
'The procedure method script example': 'example:{call <procedure-name>[(?,?, ...)]} or {?= call <procedure-name>[(?,?, ...)]}',
Script: 'Script',
'Please enter script(required)': 'Please enter script(required)',
'Deploy Mode': 'Deploy Mode',
'Driver Cores': 'Driver Cores',
'Please enter Driver cores': 'Please enter Driver cores',
'Driver Memory': 'Driver Memory',
'Please enter Driver memory': 'Please enter Driver memory',
'Executor Number': 'Executor Number',
'Please enter Executor number': 'Please enter Executor number',
'The Executor number should be a positive integer': 'The Executor number should be a positive integer',
'Executor Memory': 'Executor Memory',
'Please enter Executor memory': 'Please enter Executor memory',
'Executor Cores': 'Executor Cores',
'Please enter Executor cores': 'Please enter Executor cores',
'Memory should be a positive integer': 'Memory should be a positive integer',
'Core number should be positive integer': 'Core number should be positive integer',
'Flink Version': 'Flink Version',
'JobManager Memory': 'JobManager Memory',
'Please enter JobManager memory': 'Please enter JobManager memory',
'TaskManager Memory': 'TaskManager Memory',
'Please enter TaskManager memory': 'Please enter TaskManager memory',
'Slot Number': 'Slot Number',
'Please enter Slot number': 'Please enter Slot number',
Parallelism: 'Parallelism',
'Custom Parallelism': 'Configure parallelism',
'Please enter Parallelism': 'Please enter Parallelism',
'Parallelism tip': 'If there are a large number of tasks requiring complement, you can use the custom parallelism to ' +
'set the complement task thread to a reasonable value to avoid too large impact on the server.',
'Parallelism number should be positive integer': 'Parallelism number should be positive integer',
'TaskManager Number': 'TaskManager Number',
'Please enter TaskManager number': 'Please enter TaskManager number',
'App Name': 'App Name',
'Please enter app name(optional)': 'Please enter app name(optional)',
'SQL Type': 'SQL Type',
'Send Email': 'Send Email',
'Log display': 'Log display',
'rows of result': 'rows of result',
Title: 'Title',
'Please enter the title of email': 'Please enter the title of email',
Table: 'Table',
TableMode: 'Table',
Attachment: 'Attachment',
'SQL Parameter': 'SQL Parameter',
'SQL Statement': 'SQL Statement',
'UDF Function': 'UDF Function',
'Please enter a SQL Statement(required)': 'Please enter a SQL Statement(required)',
'Please enter a JSON Statement(required)': 'Please enter a JSON Statement(required)',
'One form or attachment must be selected': 'One form or attachment must be selected',
'Mail subject required': 'Mail subject required',
'Child Node': 'Child Node',
'Please select a sub-Process': 'Please select a sub-Process',
Edit: 'Edit',
'Switch To This Version': 'Switch To This Version',
'Datasource Name': 'Datasource Name',
'Please enter datasource name': 'Please enter datasource name',
IP: 'IP',
'Please enter IP': 'Please enter IP',
Port: 'Port',
'Please enter port': 'Please enter port',
'Database Name': 'Database Name',
'Please enter database name': 'Please enter database name',
'Oracle Connect Type': 'ServiceName or SID',
'Oracle Service Name': 'ServiceName',
'Oracle SID': 'SID',
'jdbc connect parameters': 'jdbc connect parameters',
'Test Connect': 'Test Connect',
'Please enter resource name': 'Please enter resource name',
'Please enter resource folder name': 'Please enter resource folder name',
'Please enter a non-query SQL statement': 'Please enter a non-query SQL statement',
'Please enter IP/hostname': 'Please enter IP/hostname',
'jdbc connection parameters is not a correct JSON format': 'jdbc connection parameters is not a correct JSON format',
'#': '#',
'Datasource Type': 'Datasource Type',
'Datasource Parameter': 'Datasource Parameter',
'Create Time': 'Create Time',
'Update Time': 'Update Time',
Operation: 'Operation',
'Current Version': 'Current Version',
'Click to view': 'Click to view',
'Delete?': 'Delete?',
'Switch Version Successfully': 'Switch Version Successfully',
'Confirm Switch To This Version?': 'Confirm Switch To This Version?',
Confirm: 'Confirm',
'Task status statistics': 'Task Status Statistics',
Number: 'Number',
State: 'State',
'Dry-run flag': 'Dry-run flag',
'Process Status Statistics': 'Process Status Statistics',
'Process Definition Statistics': 'Process Definition Statistics',
'Project Name': 'Project Name',
'Please enter name': 'Please enter name',
'Owned Users': 'Owned Users',
'Process Pid': 'Process Pid',
'Zk registration directory': 'Zk registration directory',
cpuUsage: 'cpuUsage',
memoryUsage: 'memoryUsage',
'Last heartbeat time': 'Last heartbeat time',
'Edit Tenant': 'Edit Tenant',
'OS Tenant Code': 'OS Tenant Code',
'Tenant Name': 'Tenant Name',
Queue: 'Yarn Queue',
'Please select a queue': 'default is tenant association queue',
'Please enter the os tenant code in English': 'Please enter the os tenant code in English',
'Please enter os tenant code in English': 'Please enter os tenant code in English',
'Please enter os tenant code': 'Please enter os tenant code',
'Please enter tenant Name': 'Please enter tenant Name',
'The os tenant code. Only letters or a combination of letters and numbers are allowed': 'The os tenant code. Only letters or a combination of letters and numbers are allowed',
'Edit User': 'Edit User',
Tenant: 'Tenant',
Email: 'Email',
Phone: 'Phone',
'User Type': 'User Type',
'Please enter phone number': 'Please enter phone number',
'Please enter email': 'Please enter email',
'Please enter the correct email format': 'Please enter the correct email format',
'Please enter the correct mobile phone format': 'Please enter the correct mobile phone format',
Project: 'Project',
Authorize: 'Authorize',
'File resources': 'File resources',
'UDF resources': 'UDF resources',
'UDF resources directory': 'UDF resources directory',
'Please select UDF resources directory': 'Please select UDF resources directory',
'Alarm group': 'Alarm group',
'Alarm group required': 'Alarm group required',
'Edit alarm group': 'Edit alarm group',
'Create alarm group': 'Create alarm group',
'Create Alarm Instance': 'Create Alarm Instance',
'Edit Alarm Instance': 'Edit Alarm Instance',
'Group Name': 'Group Name',
'Alarm instance name': 'Alarm instance name',
'Alarm plugin name': 'Alarm plugin name',
'Select plugin': 'Select plugin',
'Select Alarm plugin': 'Please select an Alarm plugin',
'Please enter group name': 'Please enter group name',
'Instance parameter exception': 'Instance parameter exception',
'Group Type': 'Group Type',
'Alarm plugin instance': 'Alarm plugin instance',
'Select Alarm plugin instance': 'Please select an Alarm plugin instance',
Remarks: 'Remarks',
SMS: 'SMS',
'Managing Users': 'Managing Users',
Permission: 'Permission',
Administrator: 'Administrator',
'Confirm Password': 'Confirm Password',
'Please enter confirm password': 'Please enter confirm password',
'Password cannot be in Chinese': 'Password cannot be in Chinese',
'Please enter a password (6-22) character password': 'Please enter a password (6-22) character password',
'Confirmation password cannot be in Chinese': 'Confirmation password cannot be in Chinese',
'Please enter a confirmation password (6-22) character password': 'Please enter a confirmation password (6-22) character password',
'The password is inconsistent with the confirmation password': 'The password is inconsistent with the confirmation password',
'Please select the datasource': 'Please select the datasource',
'Please select resources': 'Please select resources',
Query: 'Query',
'Non Query': 'Non Query',
'prop(required)': 'prop(required)',
'value(optional)': 'value(optional)',
'value(required)': 'value(required)',
'prop is empty': 'prop is empty',
'value is empty': 'value is empty',
'prop is repeat': 'prop is repeat',
'Start Time': 'Start Time',
'End Time': 'End Time',
crontab: 'crontab',
'Failure Strategy': 'Failure Strategy',
online: 'online',
offline: 'offline',
'Task Status': 'Task Status',
'Process Instance': 'Process Instance',
'Task Instance': 'Task Instance',
'Select date range': 'Select date range',
startDate: 'startDate',
endDate: 'endDate',
Date: 'Date',
Waiting: 'Waiting',
Execution: 'Execution',
Finish: 'Finish',
'Create File': 'Create File',
'Create folder': 'Create folder',
'File Name': 'File Name',
'Folder Name': 'Folder Name',
'File Format': 'File Format',
'Folder Format': 'Folder Format',
'File Content': 'File Content',
'Upload File Size': 'Upload file size cannot exceed 1G',
Create: 'Create',
'Please enter the resource content': 'Please enter the resource content',
'Resource content cannot exceed 3000 lines': 'Resource content cannot exceed 3000 lines',
'File Details': 'File Details',
'Download Details': 'Download Details',
Return: 'Return',
Save: 'Save',
'File Manage': 'File Manage',
'Upload Files': 'Upload Files',
'Create UDF Function': 'Create UDF Function',
'Upload UDF Resources': 'Upload UDF Resources',
'Service-Master': 'Service-Master',
'Service-Worker': 'Service-Worker',
'Process Name': 'Process Name',
Executor: 'Executor',
'Run Type': 'Run Type',
'Scheduling Time': 'Scheduling Time',
'Run Times': 'Run Times',
host: 'host',
'fault-tolerant sign': 'fault-tolerant sign',
Rerun: 'Rerun',
'Recovery Failed': 'Recovery Failed',
Stop: 'Stop',
Pause: 'Pause',
'Recovery Suspend': 'Recovery Suspend',
Gantt: 'Gantt',
'Node Type': 'Node Type',
'Submit Time': 'Submit Time',
Duration: 'Duration',
'Retry Count': 'Retry Count',
'Task Name': 'Task Name',
'Task Date': 'Task Date',
'Source Table': 'Source Table',
'Record Number': 'Record Number',
'Target Table': 'Target Table',
'Online viewing type is not supported': 'Online viewing type is not supported',
Size: 'Size',
Rename: 'Rename',
Download: 'Download',
Export: 'Export',
'Version Info': 'Version Info',
Submit: 'Submit',
'Edit UDF Function': 'Edit UDF Function',
type: 'type',
'UDF Function Name': 'UDF Function Name',
FILE: 'FILE',
UDF: 'UDF',
'File Subdirectory': 'File Subdirectory',
'Please enter a function name': 'Please enter a function name',
'Package Name': 'Package Name',
'Please enter a Package name': 'Please enter a Package name',
Parameter: 'Parameter',
'Please enter a parameter': 'Please enter a parameter',
'UDF Resources': 'UDF Resources',
'Upload Resources': 'Upload Resources',
Instructions: 'Instructions',
'Please enter a instructions': 'Please enter instructions',
'Please enter a UDF function name': 'Please enter a UDF function name',
'Select UDF Resources': 'Select UDF Resources',
'Class Name': 'Class Name',
'Jar Package': 'Jar Package',
'Library Name': 'Library Name',
'UDF Resource Name': 'UDF Resource Name',
'File Size': 'File Size',
Description: 'Description',
'Drag Nodes and Selected Items': 'Drag Nodes and Selected Items',
'Select Line Connection': 'Select Line Connection',
'Delete selected lines or nodes': 'Delete selected lines or nodes',
'Full Screen': 'Full Screen',
Unpublished: 'Unpublished',
'Start Process': 'Start Process',
'Execute from the current node': 'Execute from the current node',
'Recover tolerance fault process': 'Recover tolerance fault process',
'Resume the suspension process': 'Resume the suspension process',
'Execute from the failed nodes': 'Execute from the failed nodes',
'Complement Data': 'Complement Data',
'Scheduling execution': 'Scheduling execution',
'Recovery waiting thread': 'Recovery waiting thread',
'Submitted successfully': 'Submitted successfully',
Executing: 'Executing',
'Ready to pause': 'Ready to pause',
'Ready to stop': 'Ready to stop',
'Need fault tolerance': 'Need fault tolerance',
Kill: 'Kill',
'Waiting for thread': 'Waiting for thread',
'Waiting for dependence': 'Waiting for dependence',
Start: 'Start',
Copy: 'Copy',
'Copy name': 'Copy name',
'Copy path': 'Copy path',
'Please enter keyword': 'Please enter keyword',
'File Upload': 'File Upload',
'Drag the file into the current upload window': 'Drag the file into the current upload window',
'Drag area upload': 'Drag area upload',
Upload: 'Upload',
'ReUpload File': 'ReUpload File',
'Please enter file name': 'Please enter file name',
'Please select the file to upload': 'Please select the file to upload',
'Resources manage': 'Resources',
Security: 'Security',
Logout: 'Logout',
'No data': 'No data',
'Uploading...': 'Uploading...',
'Loading...': 'Loading...',
List: 'List',
'Unable to download without proper url': 'Unable to download without proper url',
Process: 'Process',
'Process definition': 'Process definition',
'Task record': 'Task record',
'Warning group manage': 'Warning group manage',
'Warning instance manage': 'Warning instance manage',
'Servers manage': 'Servers manage',
'UDF manage': 'UDF manage',
'Resource manage': 'Resource manage',
'Function manage': 'Function manage',
'Edit password': 'Edit password',
'Ordinary users': 'Ordinary users',
'Create process': 'Create process',
'Import process': 'Import process',
'Timing state': 'Timing state',
Timing: 'Timing',
Timezone: 'Timezone',
TreeView: 'TreeView',
'Mailbox already exists! Recipients and copyers cannot repeat': 'Mailbox already exists! Recipients and copyers cannot repeat',
'Mailbox input is illegal': 'Mailbox input is illegal',
'Please set the parameters before starting': 'Please set the parameters before starting',
Continue: 'Continue',
End: 'End',
'Node execution': 'Node execution',
'Backward execution': 'Backward execution',
'Forward execution': 'Forward execution',
'Execute only the current node': 'Execute only the current node',
'Notification strategy': 'Notification strategy',
'Notification group': 'Notification group',
'Please select a notification group': 'Please select a notification group',
'Whether it is a complement process?': 'Whether it is a complement process?',
'Schedule date': 'Schedule date',
'Mode of execution': 'Mode of execution',
'Serial execution': 'Serial execution',
'Parallel execution': 'Parallel execution',
'Set parameters before timing': 'Set parameters before timing',
'Start and stop time': 'Start and stop time',
'Please select time': 'Please select time',
'Please enter crontab': 'Please enter crontab',
none_1: 'none',
success_1: 'success',
failure_1: 'failure',
All_1: 'All',
Toolbar: 'Toolbar',
'View variables': 'View variables',
'Format DAG': 'Format DAG',
'Refresh DAG status': 'Refresh DAG status',
Return_1: 'Return',
'Please enter format': 'Please enter format',
'connection parameter': 'connection parameter',
'Process definition details': 'Process definition details',
'Create process definition': 'Create process definition',
'Scheduled task list': 'Scheduled task list',
'Process instance details': 'Process instance details',
'Create Resource': 'Create Resource',
'User Center': 'User Center',
AllStatus: 'All',
None: 'None',
Name: 'Name',
'Process priority': 'Process priority',
'Task priority': 'Task priority',
'Task timeout alarm': 'Task timeout alarm',
'Timeout strategy': 'Timeout strategy',
'Timeout alarm': 'Timeout alarm',
'Timeout failure': 'Timeout failure',
'Timeout period': 'Timeout period',
'Waiting Dependent complete': 'Waiting Dependent complete',
'Waiting Dependent start': 'Waiting Dependent start',
'Check interval': 'Check interval',
'Timeout must be longer than check interval': 'Timeout must be longer than check interval',
'Timeout strategy must be selected': 'Timeout strategy must be selected',
'Timeout must be a positive integer': 'Timeout must be a positive integer',
'Add dependency': 'Add dependency',
'Whether dry-run': 'Whether dry-run',
and: 'and',
or: 'or',
month: 'month',
week: 'week',
day: 'day',
hour: 'hour',
Running: 'Running',
'Waiting for dependency to complete': 'Waiting for dependency to complete',
Selected: 'Selected',
CurrentHour: 'CurrentHour',
Last1Hour: 'Last1Hour',
Last2Hours: 'Last2Hours',
Last3Hours: 'Last3Hours',
Last24Hours: 'Last24Hours',
today: 'today',
Last1Days: 'Last1Days',
Last2Days: 'Last2Days',
Last3Days: 'Last3Days',
Last7Days: 'Last7Days',
ThisWeek: 'ThisWeek',
LastWeek: 'LastWeek',
LastMonday: 'LastMonday',
LastTuesday: 'LastTuesday',
LastWednesday: 'LastWednesday',
LastThursday: 'LastThursday',
LastFriday: 'LastFriday',
LastSaturday: 'LastSaturday',
LastSunday: 'LastSunday',
ThisMonth: 'ThisMonth',
LastMonth: 'LastMonth',
LastMonthBegin: 'LastMonthBegin',
LastMonthEnd: 'LastMonthEnd',
'Refresh status succeeded': 'Refresh status succeeded',
'Queue manage': 'Yarn Queue manage',
'Create queue': 'Create queue',
'Edit queue': 'Edit queue',
'Datasource manage': 'Datasource',
'History task record': 'History task record',
'Please go online': 'Please go online',
'Queue value': 'Queue value',
'Please enter queue value': 'Please enter queue value',
'Worker group manage': 'Worker group manage',
'Create worker group': 'Create worker group',
'Edit worker group': 'Edit worker group',
'Token manage': 'Token manage',
'Create token': 'Create token',
'Edit token': 'Edit token',
Addresses: 'Addresses',
'Worker Addresses': 'Worker Addresses',
'Please select the worker addresses': 'Please select the worker addresses',
'Failure time': 'Failure time',
'Expiration time': 'Expiration time',
User: 'User',
'Please enter token': 'Please enter token',
'Generate token': 'Generate token',
Monitor: 'Monitor',
Group: 'Group',
'Queue statistics': 'Queue statistics',
'Command status statistics': 'Command status statistics',
'Task kill': 'Task Kill',
'Task queue': 'Task queue',
'Error command count': 'Error command count',
'Normal command count': 'Normal command count',
Manage: ' Manage',
'Number of connections': 'Number of connections',
Sent: 'Sent',
Received: 'Received',
'Min latency': 'Min latency',
'Avg latency': 'Avg latency',
'Max latency': 'Max latency',
'Node count': 'Node count',
'Query time': 'Query time',
'Node self-test status': 'Node self-test status',
'Health status': 'Health status',
'Max connections': 'Max connections',
'Threads connections': 'Threads connections',
'Max used connections': 'Max used connections',
'Threads running connections': 'Threads running connections',
'Worker group': 'Worker group',
'Please enter a positive integer greater than 0': 'Please enter a positive integer greater than 0',
'Pre Statement': 'Pre Statement',
'Post Statement': 'Post Statement',
'Statement cannot be empty': 'Statement cannot be empty',
'Process Define Count': 'Work flow Define Count',
'Process Instance Running Count': 'Process Instance Running Count',
'command number of waiting for running': 'command number of waiting for running',
'failure command number': 'failure command number',
'tasks number of waiting running': 'tasks number of waiting running',
'task number of ready to kill': 'task number of ready to kill',
'Statistics manage': 'Statistics Manage',
statistics: 'Statistics',
'select tenant': 'select tenant',
'Please enter Principal': 'Please enter Principal',
'Please enter the kerberos authentication parameter java.security.krb5.conf': 'Please enter the kerberos authentication parameter java.security.krb5.conf',
'Please enter the kerberos authentication parameter login.user.keytab.username': 'Please enter the kerberos authentication parameter login.user.keytab.username',
'Please enter the kerberos authentication parameter login.user.keytab.path': 'Please enter the kerberos authentication parameter login.user.keytab.path',
'The start time must not be the same as the end': 'The start time must not be the same as the end',
'Startup parameter': 'Startup parameter',
'Startup type': 'Startup type',
'warning of timeout': 'warning of timeout',
'Next five execution times': 'Next five execution times',
'Execute time': 'Execute time',
'Complement range': 'Complement range',
'Http Url': 'Http Url',
'Http Method': 'Http Method',
'Http Parameters': 'Http Parameters',
'Http Parameters Key': 'Http Parameters Key',
'Http Parameters Position': 'Http Parameters Position',
'Http Parameters Value': 'Http Parameters Value',
'Http Check Condition': 'Http Check Condition',
'Http Condition': 'Http Condition',
'Please Enter Http Url': 'Please Enter Http Url(required)',
'Please Enter Http Condition': 'Please Enter Http Condition',
'There is no data for this period of time': 'There is no data for this period of time',
'Worker addresses cannot be empty': 'Worker addresses cannot be empty',
'Please generate token': 'Please generate token',
'Please Select token': 'Please select the expiration time of token',
'Spark Version': 'Spark Version',
TargetDataBase: 'target database',
TargetTable: 'target table',
TargetJobName: 'target job name',
'Please enter Pigeon job name': 'Please enter Pigeon job name',
'Please enter the table of target': 'Please enter the table of target',
'Please enter a Target Table(required)': 'Please enter a Target Table(required)',
SpeedByte: 'speed(byte count)',
SpeedRecord: 'speed(record count)',
'0 means unlimited by byte': '0 means unlimited',
'0 means unlimited by count': '0 means unlimited',
'Modify User': 'Modify User',
'Whether directory': 'Whether directory',
Yes: 'Yes',
No: 'No',
'Hadoop Custom Params': 'Hadoop Params',
'Sqoop Advanced Parameters': 'Sqoop Params',
'Sqoop Job Name': 'Job Name',
'Please enter Mysql Database(required)': 'Please enter Mysql Database(required)',
'Please enter Mysql Table(required)': 'Please enter Mysql Table(required)',
'Please enter Columns (Comma separated)': 'Please enter Columns (Comma separated)',
'Please enter Target Dir(required)': 'Please enter Target Dir(required)',
'Please enter Export Dir(required)': 'Please enter Export Dir(required)',
'Please enter Hive Database(required)': 'Please enter Hive Database(required)',
'Please enter Hive Table(required)': 'Please enter Hive Table(required)',
'Please enter Hive Partition Keys': 'Please enter Hive Partition Keys',
'Please enter Hive Partition Values': 'Please enter Hive Partition Values',
'Please enter Replace Delimiter': 'Please enter Replace Delimiter',
'Please enter Fields Terminated': 'Please enter Fields Terminated',
'Please enter Lines Terminated': 'Please enter Lines Terminated',
'Please enter Concurrency': 'Please enter Concurrency',
'Please enter Update Key': 'Please enter Update Key',
'Please enter Job Name(required)': 'Please enter Job Name(required)',
'Please enter Custom Shell(required)': 'Please enter Custom Shell(required)',
Direct: 'Direct',
Type: 'Type',
ModelType: 'ModelType',
ColumnType: 'ColumnType',
Database: 'Database',
Column: 'Column',
'Map Column Hive': 'Map Column Hive',
'Map Column Java': 'Map Column Java',
'Export Dir': 'Export Dir',
'Hive partition Keys': 'Hive partition Keys',
'Hive partition Values': 'Hive partition Values',
FieldsTerminated: 'FieldsTerminated',
LinesTerminated: 'LinesTerminated',
IsUpdate: 'IsUpdate',
UpdateKey: 'UpdateKey',
UpdateMode: 'UpdateMode',
'Target Dir': 'Target Dir',
DeleteTargetDir: 'DeleteTargetDir',
FileType: 'FileType',
CompressionCodec: 'CompressionCodec',
CreateHiveTable: 'CreateHiveTable',
DropDelimiter: 'DropDelimiter',
OverWriteSrc: 'OverWriteSrc',
ReplaceDelimiter: 'ReplaceDelimiter',
Concurrency: 'Concurrency',
Form: 'Form',
OnlyUpdate: 'OnlyUpdate',
AllowInsert: 'AllowInsert',
'Data Source': 'Data Source',
'Data Target': 'Data Target',
'All Columns': 'All Columns',
'Some Columns': 'Some Columns',
'Branch flow': 'Branch flow',
'Custom Job': 'Custom Job',
'Custom Script': 'Custom Script',
'Cannot select the same node for successful branch flow and failed branch flow': 'Cannot select the same node for successful branch flow and failed branch flow',
'Successful branch flow and failed branch flow are required': 'Successful and failed branch flows are required for the conditions node',
'No resources exist': 'No resources exist',
'Please delete all non-existing resources': 'Please delete all non-existing resources',
'Unauthorized or deleted resources': 'Unauthorized or deleted resources',
'Please delete all non-existent resources': 'Please delete all non-existent resources',
Kinship: 'Workflow relationship',
Reset: 'Reset',
KinshipStateActive: 'Active',
KinshipState1: 'Online',
KinshipState0: 'Workflow is not online',
KinshipState10: 'Scheduling is not online',
'Dag label display control': 'Dag label display control',
Enable: 'Enable',
Disable: 'Disable',
'The Worker group no longer exists, please select the correct Worker group!': 'The Worker group no longer exists, please select the correct Worker group!',
'Please confirm whether the workflow has been saved before downloading': 'Please confirm whether the workflow has been saved before downloading',
'User name length is between 3 and 39': 'User name length is between 3 and 39',
'Timeout Settings': 'Timeout Settings',
'Connect Timeout': 'Connect Timeout',
'Socket Timeout': 'Socket Timeout',
'Connect timeout be a positive integer': 'Connect timeout must be a positive integer',
'Socket Timeout be a positive integer': 'Socket timeout must be a positive integer',
ms: 'ms',
'Please Enter Url': 'Please enter URL, e.g. 127.0.0.1:7077',
Master: 'Master',
'Please select the waterdrop resources': 'Please select the waterdrop resources',
zkDirectory: 'zkDirectory',
'Directory detail': 'Directory detail',
'Connection name': 'Connection name',
'Current connection settings': 'Current connection settings',
'Please save the DAG before formatting': 'Please save the DAG before formatting',
'Batch copy': 'Batch copy',
'Related items': 'Related items',
'Project name is required': 'Project name is required',
'Batch move': 'Batch move',
Version: 'Version',
'Pre tasks': 'Pre tasks',
'Running Memory': 'Running Memory',
'Max Memory': 'Max Memory',
'Min Memory': 'Min Memory',
'The workflow canvas is abnormal and cannot be saved, please recreate': 'The workflow canvas is abnormal and cannot be saved, please recreate',
Info: 'Info',
'Datasource userName': 'owner',
'Resource userName': 'owner',
'Environment manage': 'Environment manage',
'Create environment': 'Create environment',
'Edit environment': 'Edit environment',
'Environment value': 'Environment value',
'Environment Name': 'Environment Name',
'Environment Code': 'Environment Code',
'Environment Config': 'Environment Config',
'Environment Desc': 'Environment Desc',
'Environment Worker Group': 'Worker Groups',
'Please enter environment config': 'Please enter environment config',
'Please enter environment desc': 'Please enter environment desc',
'Please select worker groups': 'Please select worker groups',
condition: 'condition',
'The condition content cannot be empty': 'The condition content cannot be empty',
'Reference from': 'Reference from',
'No more...': 'No more...',
'Process execute type': 'Process execute type',
parallel: 'parallel',
'Serial wait': 'Serial wait',
'Serial discard': 'Serial discard',
'Serial priority': 'Serial priority',
'Recover serial wait': 'Recover serial wait',
IsEnableProxy: 'Enable Proxy',
WebHook: 'WebHook',
Keyword: 'Keyword',
Proxy: 'Proxy',
receivers: 'Receivers',
receiverCcs: 'ReceiverCcs',
transportProtocol: 'Transport Protocol',
serverHost: 'SMTP Host',
serverPort: 'SMTP Port',
sender: 'Sender',
enableSmtpAuth: 'SMTP Auth',
starttlsEnable: 'SMTP STARTTLS Enable',
sslEnable: 'SMTP SSL Enable',
smtpSslTrust: 'SMTP SSL Trust',
url: 'URL',
requestType: 'Request Type',
headerParams: 'Headers',
bodyParams: 'Body',
contentField: 'Content Field',
path: 'Script Path',
userParams: 'User Params',
corpId: 'CorpId',
secret: 'Secret',
userSendMsg: 'UserSendMsg',
agentId: 'AgentId',
users: 'Users',
Username: 'Username',
showType: 'Show Type'
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,600 | [Feature][API+UI] edit DAG as list page | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
There would be high resource occupancy when one DAG contains too many tasks.
Using a list page to edit the DAG could solve this problem.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6600 | https://github.com/apache/dolphinscheduler/pull/6762 | 94f6c5c27ee88d5ec6f7f3856b2bc2aebab738e8 | 36c19748a6da24f23f59db7d08d61004e6dcbac5 | "2021-10-25T13:21:06Z" | java | "2021-11-15T05:54:28Z" | dolphinscheduler-ui/src/js/module/i18n/locale/zh_CN.js | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
export default {
'User Name': '用户名',
'Please enter user name': '请输入用户名',
Password: '密码',
'Please enter your password': '请输入密码',
'Password consists of at least two combinations of numbers, letters, and characters, and the length is between 6-22': '密码至少包含数字,字母和字符的两种组合,长度在6-22之间',
Login: '登录',
Home: '首页',
'Failed to create node to save': '未创建节点保存失败',
'Global parameters': '全局参数',
'Local parameters': '局部参数',
'Copy success': '复制成功',
'The browser does not support automatic copying': '该浏览器不支持自动复制',
'Whether to save the DAG graph': '是否保存DAG图',
'Current node settings': '当前节点设置',
'View history': '查看历史',
'View log': '查看日志',
'Force success': '强制成功',
'Enter this child node': '进入该子节点',
'Node name': '节点名称',
'Please enter name (required)': '请输入名称(必填)',
'Run flag': '运行标志',
Normal: '正常',
'Prohibition execution': '禁止执行',
'Please enter description': '请输入描述',
'Number of failed retries': '失败重试次数',
Times: '次',
'Failed retry interval': '失败重试间隔',
Minute: '分',
'Delay execution time': '延时执行时间',
'Delay execution': '延时执行',
'Forced success': '强制成功',
Cancel: '取消',
'Confirm add': '确认添加',
'The newly created sub-Process has not yet been executed and cannot enter the sub-Process': '新创建子工作流还未执行,不能进入子工作流',
'The task has not been executed and cannot enter the sub-Process': '该任务还未执行,不能进入子工作流',
'Name already exists': '名称已存在请重新输入',
'Download Log': '下载日志',
'Refresh Log': '刷新日志',
'Enter full screen': '进入全屏',
'Cancel full screen': '取消全屏',
Close: '关闭',
'Update log success': '更新日志成功',
'No more logs': '暂无更多日志',
'No log': '暂无日志',
'Loading Log...': '正在努力请求日志中...',
'Set the DAG diagram name': '设置DAG图名称',
'Please enter description(optional)': '请输入描述(选填)',
'Set global': '设置全局',
'Whether to go online the process definition': '是否上线流程定义',
'Whether to update the process definition': '是否更新流程定义',
Add: '添加',
'DAG graph name cannot be empty': 'DAG图名称不能为空',
'Create Datasource': '创建数据源',
'Project Home': '工作流监控',
'Project Manage': '项目管理',
'Create Project': '创建项目',
'Cron Manage': '定时管理',
'Copy Workflow': '复制工作流',
'Tenant Manage': '租户管理',
'Create Tenant': '创建租户',
'User Manage': '用户管理',
'Create User': '创建用户',
'User Information': '用户信息',
'Edit Password': '密码修改',
Success: '成功',
Failed: '失败',
Delete: '删除',
'Please choose': '请选择',
'Please enter a positive integer': '请输入正整数',
'Program Type': '程序类型',
'Main Class': '主函数的Class',
'Main Jar Package': '主Jar包',
'Please enter main jar package': '请选择主Jar包',
'Please enter main class': '请填写主函数的Class',
'Main Arguments': '主程序参数',
'Please enter main arguments': '请输入主程序参数',
'Option Parameters': '选项参数',
'Please enter option parameters': '请输入选项参数',
Resources: '资源',
'Custom Parameters': '自定义参数',
'Custom template': '自定义模版',
Datasource: '数据源',
methods: '方法',
'Please enter the procedure method': '请输入存储脚本 \n\n调用存储过程:{call <procedure-name>[(<arg1>,<arg2>, ...)]}\n\n调用存储函数:{?= call <procedure-name>[(<arg1>,<arg2>, ...)]} ',
'The procedure method script example': '示例:{call <procedure-name>[(?,?, ...)]} 或 {?= call <procedure-name>[(?,?, ...)]}',
Script: '脚本',
'Please enter script(required)': '请输入脚本(必填)',
'Deploy Mode': '部署方式',
'Driver Cores': 'Driver核心数',
'Please enter Driver cores': '请输入Driver核心数',
'Driver Memory': 'Driver内存数',
'Please enter Driver memory': '请输入Driver内存数',
'Executor Number': 'Executor数量',
'Please enter Executor number': '请输入Executor数量',
'The Executor number should be a positive integer': 'Executor数量为正整数',
'Executor Memory': 'Executor内存数',
'Please enter Executor memory': '请输入Executor内存数',
'Executor Cores': 'Executor核心数',
'Please enter Executor cores': '请输入Executor核心数',
'Memory should be a positive integer': '内存数为数字',
'Core number should be positive integer': '核心数为正整数',
'Flink Version': 'Flink版本',
'JobManager Memory': 'JobManager内存数',
'Please enter JobManager memory': '请输入JobManager内存数',
'TaskManager Memory': 'TaskManager内存数',
'Please enter TaskManager memory': '请输入TaskManager内存数',
'Slot Number': 'Slot数量',
'Please enter Slot number': '请输入Slot数量',
Parallelism: '并行度',
'Custom Parallelism': '自定义并行度',
'Please enter Parallelism': '请输入并行度',
'Parallelism number should be positive integer': '并行度必须为正整数',
'Parallelism tip': '如果存在大量任务需要补数时,可以利用自定义并行度将补数的任务线程设置成合理的数值,避免对服务器造成过大的影响',
'TaskManager Number': 'TaskManager数量',
'Please enter TaskManager number': '请输入TaskManager数量',
'App Name': '任务名称',
'Please enter app name(optional)': '请输入任务名称(选填)',
'SQL Type': 'sql类型',
'Send Email': '发送邮件',
'Log display': '日志显示',
'rows of result': '行查询结果',
Title: '主题',
'Please enter the title of email': '请输入邮件主题',
Table: '表名',
TableMode: '表格',
Attachment: '附件',
'SQL Parameter': 'sql参数',
'SQL Statement': 'sql语句',
'UDF Function': 'UDF函数',
'Please enter a SQL Statement(required)': '请输入sql语句(必填)',
'Please enter a JSON Statement(required)': '请输入json语句(必填)',
'One form or attachment must be selected': '表格、附件必须勾选一个',
'Mail subject required': '邮件主题必填',
'Child Node': '子节点',
'Please select a sub-Process': '请选择子工作流',
Edit: '编辑',
'Switch To This Version': '切换到该版本',
'Datasource Name': '数据源名称',
'Please enter datasource name': '请输入数据源名称',
IP: 'IP主机名',
'Please enter IP': '请输入IP主机名',
Port: '端口',
'Please enter port': '请输入端口',
'Database Name': '数据库名',
'Please enter database name': '请输入数据库名',
'Oracle Connect Type': '服务名或SID',
'Oracle Service Name': '服务名',
'Oracle SID': 'SID',
'jdbc connect parameters': 'jdbc连接参数',
'Test Connect': '测试连接',
'Please enter resource name': '请输入数据源名称',
'Please enter resource folder name': '请输入资源文件夹名称',
'Please enter a non-query SQL statement': '请输入非查询sql语句',
'Please enter IP/hostname': '请输入IP/主机名',
'jdbc connection parameters is not a correct JSON format': 'jdbc连接参数不是一个正确的JSON格式',
'#': '编号',
'Datasource Type': '数据源类型',
'Datasource Parameter': '数据源参数',
'Create Time': '创建时间',
'Update Time': '更新时间',
Operation: '操作',
'Current Version': '当前版本',
'Click to view': '点击查看',
'Delete?': '确定删除吗?',
'Switch Version Successfully': '切换版本成功',
'Confirm Switch To This Version?': '确定切换到该版本吗?',
Confirm: '确定',
'Task status statistics': '任务状态统计',
Number: '数量',
State: '状态',
'Dry-run flag': '空跑标识',
'Process Status Statistics': '流程状态统计',
'Process Definition Statistics': '流程定义统计',
'Project Name': '项目名称',
'Please enter name': '请输入名称',
'Owned Users': '所属用户',
'Process Pid': '进程Pid',
'Zk registration directory': 'zk注册目录',
cpuUsage: 'cpuUsage',
memoryUsage: 'memoryUsage',
'Last heartbeat time': '最后心跳时间',
'Edit Tenant': '编辑租户',
'OS Tenant Code': '操作系统租户',
'Tenant Name': '租户名称',
Queue: '队列',
'Please select a queue': '默认为租户关联队列',
'Please enter the os tenant code in English': '请输入操作系统租户只允许英文',
'Please enter os tenant code in English': '请输入英文操作系统租户',
'Please enter os tenant code': '请输入操作系统租户',
'Please enter tenant Name': '请输入租户名称',
'The os tenant code. Only letters or a combination of letters and numbers are allowed': '操作系统租户只允许字母或字母与数字组合',
'Edit User': '编辑用户',
Tenant: '租户',
Email: '邮件',
Phone: '手机',
'User Type': '用户类型',
'Please enter phone number': '请输入手机',
'Please enter email': '请输入邮箱',
'Please enter the correct email format': '请输入正确的邮箱格式',
'Please enter the correct mobile phone format': '请输入正确的手机格式',
Project: '项目',
Authorize: '授权',
'File resources': '文件资源',
'UDF resources': 'UDF资源',
'UDF resources directory': 'UDF资源目录',
'Please select UDF resources directory': '请选择UDF资源目录',
'Alarm group': '告警组',
'Alarm group required': '告警组必填',
'Edit alarm group': '编辑告警组',
'Create alarm group': '创建告警组',
'Create Alarm Instance': '创建告警实例',
'Edit Alarm Instance': '编辑告警实例',
'Group Name': '组名称',
'Alarm instance name': '告警实例名称',
'Alarm plugin name': '告警插件名称',
'Select plugin': '选择插件',
'Select Alarm plugin': '请选择告警插件',
'Please enter group name': '请输入组名称',
'Instance parameter exception': '实例参数异常',
'Group Type': '组类型',
'Alarm plugin instance': '告警插件实例',
'Select Alarm plugin instance': '请选择告警插件实例',
Remarks: '备注',
SMS: '短信',
'Managing Users': '管理用户',
Permission: '权限',
Administrator: '管理员',
'Confirm Password': '确认密码',
'Please enter confirm password': '请输入确认密码',
'Password cannot be in Chinese': '密码不能为中文',
'Please enter a password (6-22) character password': '请输入密码(6-22)字符密码',
'Confirmation password cannot be in Chinese': '确认密码不能为中文',
'Please enter a confirmation password (6-22) character password': '请输入确认密码(6-22)字符密码',
'The password is inconsistent with the confirmation password': '密码与确认密码不一致,请重新确认',
'Please select the datasource': '请选择数据源',
'Please select resources': '请选择资源',
Query: '查询',
'Non Query': '非查询',
'prop(required)': 'prop(必填)',
'value(optional)': 'value(选填)',
'value(required)': 'value(必填)',
'prop is empty': 'prop不能为空',
'value is empty': 'value不能为空',
'prop is repeat': 'prop中有重复',
'Start Time': '开始时间',
'End Time': '结束时间',
crontab: 'crontab',
'Failure Strategy': '失败策略',
online: '上线',
offline: '下线',
'Task Status': '任务状态',
'Process Instance': '工作流实例',
'Task Instance': '任务实例',
'Select date range': '选择日期区间',
startDate: '开始日期',
endDate: '结束日期',
Date: '日期',
Waiting: '等待',
Execution: '执行中',
Finish: '完成',
'Create File': '创建文件',
'Create folder': '创建文件夹',
'File Name': '文件名称',
'Folder Name': '文件夹名称',
'File Format': '文件格式',
'Folder Format': '文件夹格式',
'File Content': '文件内容',
'Upload File Size': '文件大小不能超过1G',
Create: '创建',
'Please enter the resource content': '请输入资源内容',
'Resource content cannot exceed 3000 lines': '资源内容不能超过3000行',
'File Details': '文件详情',
'Download Details': '下载详情',
Return: '返回',
Save: '保存',
'File Manage': '文件管理',
'Upload Files': '上传文件',
'Create UDF Function': '创建UDF函数',
'Upload UDF Resources': '上传UDF资源',
'Service-Master': '服务管理-Master',
'Service-Worker': '服务管理-Worker',
'Process Name': '工作流名称',
Executor: '执行用户',
'Run Type': '运行类型',
'Scheduling Time': '调度时间',
'Run Times': '运行次数',
host: 'host',
'fault-tolerant sign': '容错标识',
Rerun: '重跑',
'Recovery Failed': '恢复失败',
Stop: '停止',
Pause: '暂停',
'Recovery Suspend': '恢复运行',
Gantt: '甘特图',
'Node Type': '节点类型',
'Submit Time': '提交时间',
Duration: '运行时长',
'Retry Count': '重试次数',
'Task Name': '任务名称',
'Task Date': '任务日期',
'Source Table': '源表',
'Record Number': '记录数',
'Target Table': '目标表',
'Online viewing type is not supported': '不支持在线查看类型',
Size: '大小',
Rename: '重命名',
Download: '下载',
Export: '导出',
'Version Info': '版本信息',
Submit: '提交',
'Edit UDF Function': '编辑UDF函数',
type: '类型',
'UDF Function Name': 'UDF函数名称',
FILE: '文件',
UDF: 'UDF',
'File Subdirectory': '文件子目录',
'Please enter a function name': '请输入函数名',
'Package Name': '包名类名',
'Please enter a Package name': '请输入包名类名',
Parameter: '参数',
'Please enter a parameter': '请输入参数',
'UDF Resources': 'UDF资源',
'Upload Resources': '上传资源',
Instructions: '使用说明',
'Please enter a instructions': '请输入使用说明',
'Please enter a UDF function name': '请输入UDF函数名称',
'Select UDF Resources': '请选择UDF资源',
'Class Name': '类名',
'Jar Package': 'jar包',
'Library Name': '库名',
'UDF Resource Name': 'UDF资源名称',
'File Size': '文件大小',
Description: '描述',
'Drag Nodes and Selected Items': '拖动节点和选中项',
'Select Line Connection': '选择线条连接',
'Delete selected lines or nodes': '删除选中的线或节点',
'Full Screen': '全屏',
Unpublished: '未发布',
'Start Process': '启动工作流',
'Execute from the current node': '从当前节点开始执行',
'Recover tolerance fault process': '恢复被容错的工作流',
'Resume the suspension process': '恢复运行流程',
'Execute from the failed nodes': '从失败节点开始执行',
'Complement Data': '补数',
'Scheduling execution': '调度执行',
'Recovery waiting thread': '恢复等待线程',
'Submitted successfully': '提交成功',
Executing: '正在执行',
'Ready to pause': '准备暂停',
'Ready to stop': '准备停止',
'Need fault tolerance': '需要容错',
Kill: 'Kill',
'Waiting for thread': '等待线程',
'Waiting for dependence': '等待依赖',
Start: '运行',
Copy: '复制节点',
'Copy name': '复制名称',
'Copy path': '复制路径',
'Please enter keyword': '请输入关键词',
'File Upload': '文件上传',
'Drag the file into the current upload window': '请将文件拖拽到当前上传窗口内!',
'Drag area upload': '拖动区域上传',
Upload: '上传',
'ReUpload File': '重新上传文件',
'Please enter file name': '请输入文件名',
'Please select the file to upload': '请选择要上传的文件',
'Resources manage': '资源中心',
Security: '安全中心',
Logout: '退出',
'No data': '查询无数据',
'Uploading...': '文件上传中',
'Loading...': '正在努力加载中...',
List: '列表',
'Unable to download without proper url': '无下载url无法下载',
Process: '工作流',
'Process definition': '工作流定义',
'Task record': '任务记录',
'Warning group manage': '告警组管理',
'Warning instance manage': '告警实例管理',
'Servers manage': '服务管理',
'UDF manage': 'UDF管理',
'Resource manage': '资源管理',
'Function manage': '函数管理',
'Edit password': '修改密码',
'Ordinary users': '普通用户',
'Create process': '创建工作流',
'Import process': '导入工作流',
'Timing state': '定时状态',
Timing: '定时',
Timezone: '时区',
TreeView: '树形图',
'Mailbox already exists! Recipients and copyers cannot repeat': '邮箱已存在!收件人和抄送人不能重复',
'Mailbox input is illegal': '邮箱输入不合法',
'Please set the parameters before starting': '启动前请先设置参数',
Continue: '继续',
End: '结束',
'Node execution': '节点执行',
'Backward execution': '向后执行',
'Forward execution': '向前执行',
'Execute only the current node': '仅执行当前节点',
'Notification strategy': '通知策略',
'Notification group': '通知组',
'Please select a notification group': '请选择通知组',
'Whether it is a complement process?': '是否补数',
'Schedule date': '调度日期',
'Mode of execution': '执行方式',
'Serial execution': '串行执行',
'Parallel execution': '并行执行',
'Set parameters before timing': '定时前请先设置参数',
'Start and stop time': '起止时间',
'Please select time': '请选择时间',
'Please enter crontab': '请输入crontab',
none_1: '都不发',
success_1: '成功发',
failure_1: '失败发',
All_1: '成功或失败都发',
Toolbar: '工具栏',
'View variables': '查看变量',
'Format DAG': '格式化DAG',
'Refresh DAG status': '刷新DAG状态',
Return_1: '返回上一节点',
'Please enter format': '请输入格式为',
'connection parameter': '连接参数',
'Process definition details': '流程定义详情',
'Create process definition': '创建流程定义',
'Scheduled task list': '定时任务列表',
'Process instance details': '流程实例详情',
'Create Resource': '创建资源',
'User Center': '用户中心',
AllStatus: '全部状态',
None: '无',
Name: '名称',
'Process priority': '流程优先级',
'Task priority': '任务优先级',
'Task timeout alarm': '任务超时告警',
'Timeout strategy': '超时策略',
'Timeout alarm': '超时告警',
'Timeout failure': '超时失败',
'Timeout period': '超时时长',
'Waiting Dependent complete': '等待依赖完成',
'Waiting Dependent start': '等待依赖启动',
'Check interval': '检查间隔',
'Timeout must be longer than check interval': '超时时间必须比检查间隔长',
'Timeout strategy must be selected': '超时策略必须选一个',
'Timeout must be a positive integer': '超时时长必须为正整数',
'Add dependency': '添加依赖',
'Whether dry-run': '是否空跑',
and: '且',
or: '或',
month: '月',
week: '周',
day: '日',
hour: '时',
Running: '正在运行',
'Waiting for dependency to complete': '等待依赖完成',
Selected: '已选',
CurrentHour: '当前小时',
Last1Hour: '前1小时',
Last2Hours: '前2小时',
Last3Hours: '前3小时',
Last24Hours: '前24小时',
today: '今天',
Last1Days: '昨天',
Last2Days: '前两天',
Last3Days: '前三天',
Last7Days: '前七天',
ThisWeek: '本周',
LastWeek: '上周',
LastMonday: '上周一',
LastTuesday: '上周二',
LastWednesday: '上周三',
LastThursday: '上周四',
LastFriday: '上周五',
LastSaturday: '上周六',
LastSunday: '上周日',
ThisMonth: '本月',
LastMonth: '上月',
LastMonthBegin: '上月初',
LastMonthEnd: '上月末',
'Refresh status succeeded': '刷新状态成功',
'Queue manage': 'Yarn 队列管理',
'Create queue': '创建队列',
'Edit queue': '编辑队列',
'Datasource manage': '数据源中心',
'History task record': '历史任务记录',
'Please go online': '不要忘记上线',
'Queue value': '队列值',
'Please enter queue value': '请输入队列值',
'Worker group manage': 'Worker分组管理',
'Create worker group': '创建Worker分组',
'Edit worker group': '编辑Worker分组',
'Token manage': '令牌管理',
'Create token': '创建令牌',
'Edit token': '编辑令牌',
Addresses: '地址',
'Worker Addresses': 'Worker地址',
'Please select the worker addresses': '请选择Worker地址',
'Failure time': '失效时间',
'Expiration time': '失效时间',
User: '用户',
'Please enter token': '请输入令牌',
'Generate token': '生成令牌',
Monitor: '监控中心',
Group: '分组',
'Queue statistics': '队列统计',
'Command status statistics': '命令状态统计',
'Task kill': '等待kill任务',
'Task queue': '等待执行任务',
'Error command count': '错误指令数',
'Normal command count': '正确指令数',
Manage: '管理',
'Number of connections': '连接数',
Sent: '发送量',
Received: '接收量',
'Min latency': '最低延时',
'Avg latency': '平均延时',
'Max latency': '最大延时',
'Node count': '节点数',
'Query time': '当前查询时间',
'Node self-test status': '节点自检状态',
'Health status': '健康状态',
'Max connections': '最大连接数',
'Threads connections': '当前连接数',
'Max used connections': '同时使用连接最大数',
'Threads running connections': '数据库当前活跃连接数',
'Worker group': 'Worker分组',
'Please enter a positive integer greater than 0': '请输入大于 0 的正整数',
'Pre Statement': '前置sql',
'Post Statement': '后置sql',
'Statement cannot be empty': '语句不能为空',
'Process Define Count': '工作流定义数',
'Process Instance Running Count': '正在运行的流程数',
'command number of waiting for running': '待执行的命令数',
'failure command number': '执行失败的命令数',
'tasks number of waiting running': '待运行任务数',
'task number of ready to kill': '待杀死任务数',
'Statistics manage': '统计管理',
statistics: '统计',
'select tenant': '选择租户',
'Please enter Principal': '请输入Principal',
'Please enter the kerberos authentication parameter java.security.krb5.conf': '请输入kerberos认证参数 java.security.krb5.conf',
'Please enter the kerberos authentication parameter login.user.keytab.username': '请输入kerberos认证参数 login.user.keytab.username',
'Please enter the kerberos authentication parameter login.user.keytab.path': '请输入kerberos认证参数 login.user.keytab.path',
'The start time must not be the same as the end': '开始时间和结束时间不能相同',
'Startup parameter': '启动参数',
'Startup type': '启动类型',
'warning of timeout': '超时告警',
'Next five execution times': '接下来五次执行时间',
'Execute time': '执行时间',
'Complement range': '补数范围',
'Http Url': '请求地址',
'Http Method': '请求类型',
'Http Parameters': '请求参数',
'Http Parameters Key': '参数名',
'Http Parameters Position': '参数位置',
'Http Parameters Value': '参数值',
'Http Check Condition': '校验条件',
'Http Condition': '校验内容',
'Please Enter Http Url': '请填写请求地址(必填)',
'Please Enter Http Condition': '请填写校验内容',
'There is no data for this period of time': '该时间段无数据',
'Worker addresses cannot be empty': 'Worker地址不能为空',
'Please generate token': '请生成Token',
'Please Select token': '请选择Token失效时间',
'Spark Version': 'Spark版本',
TargetDataBase: '目标库',
TargetTable: '目标表',
TargetJobName: '目标任务名',
'Please enter Pigeon job name': '请输入Pigeon任务名',
'Please enter the table of target': '请输入目标表名',
'Please enter a Target Table(required)': '请输入目标表(必填)',
SpeedByte: '限流(字节数)',
SpeedRecord: '限流(记录数)',
'0 means unlimited by byte': 'KB,0代表不限制',
'0 means unlimited by count': '0代表不限制',
'Modify User': '修改用户',
'Whether directory': '是否文件夹',
Yes: '是',
No: '否',
'Hadoop Custom Params': 'Hadoop参数',
'Sqoop Advanced Parameters': 'Sqoop参数',
'Sqoop Job Name': '任务名称',
'Please enter Mysql Database(required)': '请输入Mysql数据库(必填)',
'Please enter Mysql Table(required)': '请输入Mysql表名(必填)',
'Please enter Columns (Comma separated)': '请输入列名,用 , 隔开',
'Please enter Target Dir(required)': '请输入目标路径(必填)',
'Please enter Export Dir(required)': '请输入数据源路径(必填)',
'Please enter Hive Database(required)': '请输入Hive数据库(必填)',
'Please enter Hive Table(required)': '请输入Hive表名(必填)',
'Please enter Hive Partition Keys': '请输入分区键',
'Please enter Hive Partition Values': '请输入分区值',
'Please enter Replace Delimiter': '请输入替换分隔符',
'Please enter Fields Terminated': '请输入列分隔符',
'Please enter Lines Terminated': '请输入行分隔符',
'Please enter Concurrency': '请输入并发度',
'Please enter Update Key': '请输入更新列',
'Please enter Job Name(required)': '请输入任务名称(必填)',
'Please enter Custom Shell(required)': '请输入自定义脚本',
Direct: '流向',
Type: '类型',
ModelType: '模式',
ColumnType: '列类型',
Database: '数据库',
Column: '列',
'Map Column Hive': 'Hive类型映射',
'Map Column Java': 'Java类型映射',
'Export Dir': '数据源路径',
'Hive partition Keys': 'Hive 分区键',
'Hive partition Values': 'Hive 分区值',
FieldsTerminated: '列分隔符',
LinesTerminated: '行分隔符',
IsUpdate: '是否更新',
UpdateKey: '更新列',
UpdateMode: '更新类型',
'Target Dir': '目标路径',
DeleteTargetDir: '是否删除目录',
FileType: '保存格式',
CompressionCodec: '压缩类型',
CreateHiveTable: '是否创建新表',
DropDelimiter: '是否删除分隔符',
OverWriteSrc: '是否覆盖数据源',
ReplaceDelimiter: '替换分隔符',
Concurrency: '并发度',
Form: '表单',
OnlyUpdate: '只更新',
AllowInsert: '无更新便插入',
'Data Source': '数据来源',
'Data Target': '数据目的',
'All Columns': '全表导入',
'Some Columns': '选择列',
'Branch flow': '分支流转',
'Custom Job': '自定义任务',
'Custom Script': '自定义脚本',
'Cannot select the same node for successful branch flow and failed branch flow': '成功分支流转和失败分支流转不能选择同一个节点',
'Successful branch flow and failed branch flow are required': 'conditions节点成功和失败分支流转必填',
'No resources exist': '不存在资源',
'Please delete all non-existing resources': '请删除所有不存在资源',
'Unauthorized or deleted resources': '未授权或已删除资源',
'Please delete all non-existent resources': '请删除所有未授权或已删除资源',
Kinship: '工作流关系',
Reset: '重置',
KinshipStateActive: '当前选择',
KinshipState1: '已上线',
KinshipState0: '工作流未上线',
KinshipState10: '调度未上线',
'Dag label display control': 'Dag节点名称显隐',
Enable: '启用',
Disable: '停用',
'The Worker group no longer exists, please select the correct Worker group!': '该Worker分组已经不存在,请选择正确的Worker分组!',
'Please confirm whether the workflow has been saved before downloading': '下载前请确定工作流是否已保存',
'User name length is between 3 and 39': '用户名长度在3~39之间',
'Timeout Settings': '超时设置',
'Connect Timeout': '连接超时',
'Socket Timeout': 'Socket超时',
'Connect timeout be a positive integer': '连接超时必须为数字',
'Socket Timeout be a positive integer': 'Socket超时必须为数字',
ms: '毫秒',
'Please Enter Url': '请直接填写地址,例如:127.0.0.1:7077',
Master: 'Master',
'Please select the waterdrop resources': '请选择waterdrop配置文件',
zkDirectory: 'zk注册目录',
'Directory detail': '查看目录详情',
'Connection name': '连线名',
'Current connection settings': '当前连线设置',
'Please save the DAG before formatting': '格式化前请先保存DAG',
'Batch copy': '批量复制',
'Related items': '关联项目',
'Project name is required': '项目名称必填',
'Batch move': '批量移动',
Version: '版本',
'Pre tasks': '前置任务',
'Running Memory': '运行内存',
'Max Memory': '最大内存',
'Min Memory': '最小内存',
'The workflow canvas is abnormal and cannot be saved, please recreate': '该工作流画布异常,无法保存,请重新创建',
Info: '提示',
'Datasource userName': '所属用户',
'Resource userName': '所属用户',
'Environment manage': '环境管理',
'Create environment': '创建环境',
'Edit environment': '编辑',
'Environment value': 'Environment value',
'Environment Name': '环境名称',
'Environment Code': '环境编码',
'Environment Config': '环境配置',
'Environment Desc': '详细描述',
'Environment Worker Group': 'Worker组',
'Please enter environment config': '请输入环境配置信息',
'Please enter environment desc': '请输入详细描述',
'Please select worker groups': '请选择Worker分组',
condition: '条件',
'The condition content cannot be empty': '条件内容不能为空',
'Reference from': '使用已有任务',
'No more...': '没有更多了...',
'Process execute type': '执行策略',
parallel: '并行',
'Serial wait': '串行等待',
'Serial discard': '串行抛弃',
'Serial priority': '串行优先',
'Recover serial wait': '串行恢复',
IsEnableProxy: '启用代理',
WebHook: 'Web钩子',
Keyword: '密钥',
Proxy: '代理',
receivers: '收件人',
receiverCcs: '抄送人',
transportProtocol: '邮件协议',
serverHost: 'SMTP服务器',
serverPort: 'SMTP端口',
sender: '发件人',
enableSmtpAuth: '请求认证',
starttlsEnable: 'STARTTLS连接',
sslEnable: 'SSL连接',
smtpSslTrust: 'SSL证书信任',
url: 'URL',
requestType: '请求方式',
headerParams: '请求头',
bodyParams: '请求体',
contentField: '内容字段',
path: '脚本路径',
userParams: '自定义参数',
corpId: '企业ID',
secret: '密钥',
teamSendMsg: '群发信息',
userSendMsg: '群员信息',
agentId: '应用ID',
users: '群员',
Username: '用户名',
showType: '内容展示类型'
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,668 | [Feature][UI] Data source selection drop-down box optimization | ![image](https://user-images.githubusercontent.com/58576849/122723939-109c1280-d2a6-11eb-8005-22fe09f3e872.png)
![image](https://user-images.githubusercontent.com/58576849/122724209-4b9e4600-d2a6-11eb-9d4e-8c8b8a419adf.png)
When opening this dropdown to edit an entry, it does not show the currently selected item by default; it always starts from the first option and has to be scrolled down.
Could this be optimized so that clicking the dropdown positions it on the currently selected (echoed) entry?
Or
optimize such dropdowns to support searching.
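A minimal sketch of the search option, assuming the dropdown keeps using Element UI's `el-select` (which this UI already uses); `filterable` is el-select's built-in prop for type-to-search, and the markup below is illustrative rather than the actual component:

```html
<!-- Illustrative sketch only: "filterable" lets the user type to filter the
     options, so the desired entry can be reached without scrolling -->
<el-select v-model="datasource"
           filterable
           :placeholder="$t('Please select the datasource')"
           size="small">
  <el-option v-for="item in datasourceList"
             :key="item.id"
             :value="item.id"
             :label="item.code">
  </el-option>
</el-select>
```

Enabling search is likely the smaller change of the two, since positioning the popup on the echoed entry would need extra scroll handling inside the dropdown popper.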
| https://github.com/apache/dolphinscheduler/issues/5668 | https://github.com/apache/dolphinscheduler/pull/6851 | 36c19748a6da24f23f59db7d08d61004e6dcbac5 | 44b24cd3df8a42f0809beb019baacd072ef800f7 | "2021-06-21T07:35:36Z" | java | "2021-11-15T09:21:34Z" | dolphinscheduler-ui/src/js/conf/home/pages/dag/_source/formModel/tasks/_source/datasource.vue | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
<template>
<div class="datasource-model">
<div class="select-listpp">
<el-select v-model="type"
style="width: 160px;"
size="small"
@change="_handleTypeChanged"
:disabled="isDetails">
<el-option
v-for="city in typeList"
:key="city.code"
:value="city.code"
:label="city.code">
</el-option>
</el-select>
<el-select :placeholder="$t('Please select the datasource')"
v-model="datasource"
style="width: 288px;"
size="small"
:disabled="isDetails">
<el-option
v-for="city in datasourceList"
:key="city.id"
:value="city.id"
:label="city.code">
</el-option>
</el-select>
</div>
</div>
</template>
<script>
import _ from 'lodash'
import i18n from '@/module/i18n'
import disabledState from '@/module/mixin/disabledState'
export default {
name: 'datasource',
data () {
return {
// Data source type
type: '',
// Data source type(List)
typeList: [],
// data source
datasource: '',
// data source(List)
datasourceList: []
}
},
mixins: [disabledState],
props: {
data: Object,
supportType: Array
},
methods: {
/**
* Verify data source
*/
_verifDatasource () {
if (!this.datasource) {
this.$message.warning(`${i18n.$t('Please select the datasource')}`)
return false
}
this.$emit('on-dsData', {
type: this.type,
datasource: this.datasource
})
return true
},
/**
* Get the corresponding datasource data according to type
*/
_getDatasourceData () {
return new Promise((resolve, reject) => {
this.store.dispatch('dag/getDatasourceList', this.type).then(res => {
this.datasourceList = _.map(res.data, v => {
return {
id: v.id,
code: v.name,
disabled: false
}
})
resolve()
})
})
},
/**
     * Handle datasource type change and reload the datasource list
*/
_handleTypeChanged (value) {
this.type = value
this._getDatasourceData().then(res => {
        this.datasource = this.datasourceList.length ? this.datasourceList[0].id : ''
this.$emit('on-dsData', {
type: this.type,
datasource: this.datasource
})
})
}
},
computed: {
cacheParams () {
return {
type: this.type,
datasource: this.datasource
}
}
},
// Watch the cacheParams
watch: {
datasource (val) {
this.$emit('on-dsData', {
type: this.type,
datasource: val
})
}
},
created () {
let supportType = this.supportType || []
this.typeList = this.data.typeList || _.cloneDeep(this.store.state.dag.dsTypeListS)
// Have a specified data source
if (supportType.length) {
let is = (type) => {
return !!_.filter(supportType, v => v === type).length
}
this.typeList = _.filter(this.typeList, v => is(v.code))
}
this.type = _.cloneDeep(this.data.type) || this.typeList[0].code
// init data
this._getDatasourceData().then(res => {
if (_.isEmpty(this.data)) {
this.$nextTick(() => {
this.datasource = this.datasourceList[0].id
})
} else {
this.$nextTick(() => {
this.datasource = this.data.datasource
})
}
this.$emit('on-dsData', {
type: this.type,
datasource: this.datasource
})
})
},
mounted () {
},
components: { }
}
</script>
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,597 | [Feature][Install] Upgrade dolphinscheduler version 1. X to version 2.0 | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
we want upgrade dolphinscheduler from version 1.x to 2.0
1. some db table structure changes.
2. some definition data need to be translated
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6597 | https://github.com/apache/dolphinscheduler/pull/6858 | 44b24cd3df8a42f0809beb019baacd072ef800f7 | 1160b53940c8cc4457b1604514c5c6ba1a13b153 | "2021-10-25T08:59:23Z" | java | "2021-11-15T12:27:19Z" | dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.service.process;
import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_END_DATE;
import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_START_DATE;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_EMPTY_SUB_PROCESS;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_FATHER_PARAMS;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_SUB_PROCESS;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_SUB_PROCESS_DEFINE_CODE;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_SUB_PROCESS_PARENT_INSTANCE_ID;
import static org.apache.dolphinscheduler.common.Constants.LOCAL_PARAMS;
import static java.util.stream.Collectors.toSet;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.AuthorizationType;
import org.apache.dolphinscheduler.common.enums.CommandType;
import org.apache.dolphinscheduler.common.enums.Direct;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.enums.FailureStrategy;
import org.apache.dolphinscheduler.common.enums.Flag;
import org.apache.dolphinscheduler.common.enums.ReleaseState;
import org.apache.dolphinscheduler.spi.enums.ResourceType;
import org.apache.dolphinscheduler.common.enums.TaskDependType;
import org.apache.dolphinscheduler.common.enums.TimeoutFlag;
import org.apache.dolphinscheduler.common.enums.WarningType;
import org.apache.dolphinscheduler.common.graph.DAG;
import org.apache.dolphinscheduler.common.model.DateInterval;
import org.apache.dolphinscheduler.common.model.TaskNode;
import org.apache.dolphinscheduler.common.model.TaskNodeRelation;
import org.apache.dolphinscheduler.common.process.ProcessDag;
import org.apache.dolphinscheduler.common.process.Property;
import org.apache.dolphinscheduler.common.process.ResourceInfo;
import org.apache.dolphinscheduler.common.task.AbstractParameters;
import org.apache.dolphinscheduler.common.task.TaskTimeoutParameter;
import org.apache.dolphinscheduler.common.task.subprocess.SubProcessParameters;
import org.apache.dolphinscheduler.common.utils.DateUtils;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.common.utils.ParameterUtils;
import org.apache.dolphinscheduler.common.utils.SnowFlakeUtils;
import org.apache.dolphinscheduler.common.utils.SnowFlakeUtils.SnowFlakeException;
import org.apache.dolphinscheduler.common.utils.TaskParametersUtils;
import org.apache.dolphinscheduler.dao.entity.Command;
import org.apache.dolphinscheduler.dao.entity.DagData;
import org.apache.dolphinscheduler.dao.entity.DataSource;
import org.apache.dolphinscheduler.dao.entity.Environment;
import org.apache.dolphinscheduler.dao.entity.ErrorCommand;
import org.apache.dolphinscheduler.dao.entity.ProcessDefinition;
import org.apache.dolphinscheduler.dao.entity.ProcessDefinitionLog;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.dao.entity.ProcessInstanceMap;
import org.apache.dolphinscheduler.dao.entity.ProcessTaskRelation;
import org.apache.dolphinscheduler.dao.entity.ProcessTaskRelationLog;
import org.apache.dolphinscheduler.dao.entity.Project;
import org.apache.dolphinscheduler.dao.entity.ProjectUser;
import org.apache.dolphinscheduler.dao.entity.Resource;
import org.apache.dolphinscheduler.dao.entity.Schedule;
import org.apache.dolphinscheduler.dao.entity.TaskDefinition;
import org.apache.dolphinscheduler.dao.entity.TaskDefinitionLog;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.dao.entity.Tenant;
import org.apache.dolphinscheduler.dao.entity.UdfFunc;
import org.apache.dolphinscheduler.dao.entity.User;
import org.apache.dolphinscheduler.dao.mapper.CommandMapper;
import org.apache.dolphinscheduler.dao.mapper.DataSourceMapper;
import org.apache.dolphinscheduler.dao.mapper.EnvironmentMapper;
import org.apache.dolphinscheduler.dao.mapper.ErrorCommandMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionLogMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessInstanceMapMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessInstanceMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessTaskRelationLogMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessTaskRelationMapper;
import org.apache.dolphinscheduler.dao.mapper.ProjectMapper;
import org.apache.dolphinscheduler.dao.mapper.ResourceMapper;
import org.apache.dolphinscheduler.dao.mapper.ResourceUserMapper;
import org.apache.dolphinscheduler.dao.mapper.ScheduleMapper;
import org.apache.dolphinscheduler.dao.mapper.TaskDefinitionLogMapper;
import org.apache.dolphinscheduler.dao.mapper.TaskDefinitionMapper;
import org.apache.dolphinscheduler.dao.mapper.TaskInstanceMapper;
import org.apache.dolphinscheduler.dao.mapper.TenantMapper;
import org.apache.dolphinscheduler.dao.mapper.UdfFuncMapper;
import org.apache.dolphinscheduler.dao.mapper.UserMapper;
import org.apache.dolphinscheduler.dao.utils.DagHelper;
import org.apache.dolphinscheduler.remote.command.StateEventChangeCommand;
import org.apache.dolphinscheduler.remote.processor.StateEventCallbackService;
import org.apache.dolphinscheduler.remote.utils.Host;
import org.apache.dolphinscheduler.service.log.LogClientService;
import org.apache.dolphinscheduler.service.quartz.cron.CronUtils;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang.StringUtils;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Date;
import java.util.EnumMap;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Objects;
import java.util.Set;
import java.util.stream.Collectors;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;
import com.facebook.presto.jdbc.internal.guava.collect.Lists;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.node.ObjectNode;
/**
* process relative dao that some mappers in this.
*/
@Component
public class ProcessService {
private final Logger logger = LoggerFactory.getLogger(getClass());
private final int[] stateArray = new int[]{ExecutionStatus.SUBMITTED_SUCCESS.ordinal(),
ExecutionStatus.RUNNING_EXECUTION.ordinal(),
ExecutionStatus.DELAY_EXECUTION.ordinal(),
ExecutionStatus.READY_PAUSE.ordinal(),
ExecutionStatus.READY_STOP.ordinal()};
@Autowired
private UserMapper userMapper;
@Autowired
private ProcessDefinitionMapper processDefineMapper;
@Autowired
private ProcessDefinitionLogMapper processDefineLogMapper;
@Autowired
private ProcessInstanceMapper processInstanceMapper;
@Autowired
private DataSourceMapper dataSourceMapper;
@Autowired
private ProcessInstanceMapMapper processInstanceMapMapper;
@Autowired
private TaskInstanceMapper taskInstanceMapper;
@Autowired
private CommandMapper commandMapper;
@Autowired
private ScheduleMapper scheduleMapper;
@Autowired
private UdfFuncMapper udfFuncMapper;
@Autowired
private ResourceMapper resourceMapper;
@Autowired
private ResourceUserMapper resourceUserMapper;
@Autowired
private ErrorCommandMapper errorCommandMapper;
@Autowired
private TenantMapper tenantMapper;
@Autowired
private ProjectMapper projectMapper;
@Autowired
private TaskDefinitionMapper taskDefinitionMapper;
@Autowired
private TaskDefinitionLogMapper taskDefinitionLogMapper;
@Autowired
private ProcessTaskRelationMapper processTaskRelationMapper;
@Autowired
private ProcessTaskRelationLogMapper processTaskRelationLogMapper;
@Autowired
StateEventCallbackService stateEventCallbackService;
@Autowired
private EnvironmentMapper environmentMapper;
/**
     * handle Command (construct ProcessInstance from Command), wrapped in transaction
*
* @param logger logger
* @param host host
* @param command found command
     * @param processDefinitionCacheMaps cache of process definitions, keyed by "code-version"
* @return process instance
*/
public ProcessInstance handleCommand(Logger logger, String host, Command command, HashMap<String, ProcessDefinition> processDefinitionCacheMaps) {
ProcessInstance processInstance = constructProcessInstance(command, host, processDefinitionCacheMaps);
// cannot construct process instance, return null
if (processInstance == null) {
logger.error("scan command, command parameter is error: {}", command);
moveToErrorCommand(command, "process instance is null");
return null;
}
processInstance.setCommandType(command.getCommandType());
processInstance.addHistoryCmd(command.getCommandType());
        // if the process definition execution type is serial
ProcessDefinition processDefinition = this.findProcessDefinition(processInstance.getProcessDefinitionCode(), processInstance.getProcessDefinitionVersion());
if (processDefinition.getExecutionType().typeIsSerial()) {
            saveSerialProcess(processInstance, processDefinition);
if (processInstance.getState() != ExecutionStatus.SUBMITTED_SUCCESS) {
this.setSubProcessParam(processInstance);
this.commandMapper.deleteById(command.getId());
return null;
}
} else {
saveProcessInstance(processInstance);
}
this.setSubProcessParam(processInstance);
this.commandMapper.deleteById(command.getId());
return processInstance;
}
private void saveSerialProcess(ProcessInstance processInstance,ProcessDefinition processDefinition) {
processInstance.setState(ExecutionStatus.SERIAL_WAIT);
saveProcessInstance(processInstance);
//serial wait
//when we get the running instance(or waiting instance) only get the priority instance(by id)
if (processDefinition.getExecutionType().typeIsSerialWait()) {
while (true) {
List<ProcessInstance> runningProcessInstances = this.processInstanceMapper.queryByProcessDefineCodeAndStatusAndNextId(processInstance.getProcessDefinitionCode(),
                    Constants.RUNNING_PROCESS_STATE, processInstance.getId());
if (CollectionUtils.isEmpty(runningProcessInstances)) {
processInstance.setState(ExecutionStatus.SUBMITTED_SUCCESS);
saveProcessInstance(processInstance);
return;
}
ProcessInstance runningProcess = runningProcessInstances.get(0);
if (this.processInstanceMapper.updateNextProcessIdById(processInstance.getId(), runningProcess.getId())) {
return;
}
}
} else if (processDefinition.getExecutionType().typeIsSerialDiscard()) {
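            // serial discard: the new instance is discarded (stopped) instead of being queued behind a running one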
List<ProcessInstance> runningProcessInstances = this.processInstanceMapper.queryByProcessDefineCodeAndStatusAndNextId(processInstance.getProcessDefinitionCode(),
                Constants.RUNNING_PROCESS_STATE, processInstance.getId());
if (CollectionUtils.isEmpty(runningProcessInstances)) {
processInstance.setState(ExecutionStatus.STOP);
saveProcessInstance(processInstance);
}
} else if (processDefinition.getExecutionType().typeIsSerialPriority()) {
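            // serial priority: stop the currently running instances so this new instance can take over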
List<ProcessInstance> runningProcessInstances = this.processInstanceMapper.queryByProcessDefineCodeAndStatusAndNextId(processInstance.getProcessDefinitionCode(),
                Constants.RUNNING_PROCESS_STATE, processInstance.getId());
if (CollectionUtils.isNotEmpty(runningProcessInstances)) {
for (ProcessInstance info : runningProcessInstances) {
info.setCommandType(CommandType.STOP);
info.addHistoryCmd(CommandType.STOP);
info.setState(ExecutionStatus.READY_STOP);
int update = updateProcessInstance(info);
                    // if the state update succeeded, notify the host running the instance to stop it
if (update > 0) {
String host = info.getHost();
String address = host.split(":")[0];
int port = Integer.parseInt(host.split(":")[1]);
StateEventChangeCommand stateEventChangeCommand = new StateEventChangeCommand(
info.getId(), 0, info.getState(), info.getId(), 0
);
try {
stateEventCallbackService.sendResult(address, port, stateEventChangeCommand.convert2Command());
} catch (Exception e) {
logger.error("sendResultError");
}
}
}
}
}
}
/**
* save error command, and delete original command
*
* @param command command
* @param message message
*/
public void moveToErrorCommand(Command command, String message) {
ErrorCommand errorCommand = new ErrorCommand(command, message);
this.errorCommandMapper.insert(errorCommand);
this.commandMapper.deleteById(command.getId());
}
/**
* set process waiting thread
*
* @param command command
* @param processInstance processInstance
* @return process instance
*/
private ProcessInstance setWaitingThreadProcess(Command command, ProcessInstance processInstance) {
processInstance.setState(ExecutionStatus.WAITING_THREAD);
if (command.getCommandType() != CommandType.RECOVER_WAITING_THREAD) {
processInstance.addHistoryCmd(command.getCommandType());
}
saveProcessInstance(processInstance);
this.setSubProcessParam(processInstance);
createRecoveryWaitingThreadCommand(command, processInstance);
return null;
}
/**
* insert one command
*
* @param command command
* @return create result
*/
public int createCommand(Command command) {
int result = 0;
if (command != null) {
result = commandMapper.insert(command);
}
return result;
}
/**
* get command page
*
     * @param pageSize page size
     * @param pageNumber page number, starting from 0
     * @return command list for the requested page
*/
public List<Command> findCommandPage(int pageSize, int pageNumber) {
return commandMapper.queryCommandPage(pageSize, pageNumber * pageSize);
}
/**
     * check whether the input command already exists in the command list
     *
     * @param command command
     * @return whether a new command needs to be created
*/
public boolean verifyIsNeedCreateCommand(Command command) {
boolean isNeedCreate = true;
EnumMap<CommandType, Integer> cmdTypeMap = new EnumMap<>(CommandType.class);
cmdTypeMap.put(CommandType.REPEAT_RUNNING, 1);
cmdTypeMap.put(CommandType.RECOVER_SUSPENDED_PROCESS, 1);
cmdTypeMap.put(CommandType.START_FAILURE_TASK_PROCESS, 1);
CommandType commandType = command.getCommandType();
if (cmdTypeMap.containsKey(commandType)) {
ObjectNode cmdParamObj = JSONUtils.parseObject(command.getCommandParam());
int processInstanceId = cmdParamObj.path(CMD_PARAM_RECOVER_PROCESS_ID_STRING).asInt();
List<Command> commands = commandMapper.selectList(null);
// for all commands
for (Command tmpCommand : commands) {
if (cmdTypeMap.containsKey(tmpCommand.getCommandType())) {
ObjectNode tempObj = JSONUtils.parseObject(tmpCommand.getCommandParam());
if (tempObj != null && processInstanceId == tempObj.path(CMD_PARAM_RECOVER_PROCESS_ID_STRING).asInt()) {
isNeedCreate = false;
break;
}
}
}
}
return isNeedCreate;
}
/**
* find process instance detail by id
*
* @param processId processId
* @return process instance
*/
public ProcessInstance findProcessInstanceDetailById(int processId) {
return processInstanceMapper.queryDetailById(processId);
}
/**
     * get task node list by definition code
*/
public List<TaskDefinition> getTaskNodeListByDefinition(long defineCode) {
ProcessDefinition processDefinition = processDefineMapper.queryByCode(defineCode);
if (processDefinition == null) {
logger.error("process define not exists");
return new ArrayList<>();
}
List<ProcessTaskRelationLog> processTaskRelations = processTaskRelationLogMapper.queryByProcessCodeAndVersion(processDefinition.getCode(), processDefinition.getVersion());
Set<TaskDefinition> taskDefinitionSet = new HashSet<>();
for (ProcessTaskRelationLog processTaskRelation : processTaskRelations) {
if (processTaskRelation.getPostTaskCode() > 0) {
taskDefinitionSet.add(new TaskDefinition(processTaskRelation.getPostTaskCode(), processTaskRelation.getPostTaskVersion()));
}
}
List<TaskDefinitionLog> taskDefinitionLogs = taskDefinitionLogMapper.queryByTaskDefinitions(taskDefinitionSet);
return new ArrayList<>(taskDefinitionLogs);
}
/**
* find process instance by id
*
* @param processId processId
* @return process instance
*/
public ProcessInstance findProcessInstanceById(int processId) {
return processInstanceMapper.selectById(processId);
}
/**
* find process define by id.
*
* @param processDefinitionId processDefinitionId
* @return process definition
*/
public ProcessDefinition findProcessDefineById(int processDefinitionId) {
return processDefineMapper.selectById(processDefinitionId);
}
/**
* find process define by code and version.
*
* @param processDefinitionCode processDefinitionCode
* @return process definition
*/
public ProcessDefinition findProcessDefinition(Long processDefinitionCode, int version) {
ProcessDefinition processDefinition = processDefineMapper.queryByCode(processDefinitionCode);
if (processDefinition == null || processDefinition.getVersion() != version) {
processDefinition = processDefineLogMapper.queryByDefinitionCodeAndVersion(processDefinitionCode, version);
if (processDefinition != null) {
processDefinition.setId(0);
}
}
return processDefinition;
}
/**
* find process define by code.
*
* @param processDefinitionCode processDefinitionCode
* @return process definition
*/
public ProcessDefinition findProcessDefinitionByCode(Long processDefinitionCode) {
return processDefineMapper.queryByCode(processDefinitionCode);
}
/**
* delete work process instance by id
*
* @param processInstanceId processInstanceId
* @return delete process instance result
*/
public int deleteWorkProcessInstanceById(int processInstanceId) {
return processInstanceMapper.deleteById(processInstanceId);
}
/**
* delete all sub process by parent instance id
*
* @param processInstanceId processInstanceId
* @return delete all sub process instance result
*/
public int deleteAllSubWorkProcessByParentId(int processInstanceId) {
List<Integer> subProcessIdList = processInstanceMapMapper.querySubIdListByParentId(processInstanceId);
for (Integer subId : subProcessIdList) {
deleteAllSubWorkProcessByParentId(subId);
deleteWorkProcessMapByParentId(subId);
removeTaskLogFile(subId);
deleteWorkProcessInstanceById(subId);
}
return 1;
}
/**
* remove task log file
*
* @param processInstanceId processInstanceId
*/
public void removeTaskLogFile(Integer processInstanceId) {
List<TaskInstance> taskInstanceList = findValidTaskListByProcessId(processInstanceId);
if (CollectionUtils.isEmpty(taskInstanceList)) {
return;
}
try (LogClientService logClient = new LogClientService()) {
for (TaskInstance taskInstance : taskInstanceList) {
String taskLogPath = taskInstance.getLogPath();
if (StringUtils.isEmpty(taskInstance.getHost())) {
continue;
}
int port = Constants.RPC_PORT;
String ip = "";
try {
ip = Host.of(taskInstance.getHost()).getIp();
} catch (Exception e) {
                    // compatible with old versions where the host is a bare ip without port
ip = taskInstance.getHost();
}
// remove task log from loggerserver
logClient.removeTaskLog(ip, port, taskLogPath);
}
}
}
/**
     * recursively query sub process definition codes by parent code.
*
* @param parentCode parentCode
* @param ids ids
*/
public void recurseFindSubProcess(long parentCode, List<Long> ids) {
List<TaskDefinition> taskNodeList = this.getTaskNodeListByDefinition(parentCode);
if (taskNodeList != null && !taskNodeList.isEmpty()) {
for (TaskDefinition taskNode : taskNodeList) {
String parameter = taskNode.getTaskParams();
ObjectNode parameterJson = JSONUtils.parseObject(parameter);
if (parameterJson.get(CMD_PARAM_SUB_PROCESS_DEFINE_CODE) != null) {
SubProcessParameters subProcessParam = JSONUtils.parseObject(parameter, SubProcessParameters.class);
ids.add(subProcessParam.getProcessDefinitionCode());
recurseFindSubProcess(subProcessParam.getProcessDefinitionCode(), ids);
}
}
}
}
/**
* create recovery waiting thread command when thread pool is not enough for the process instance.
     * a sub work process instance need not create a recovery command.
     * create the recovery waiting thread command and delete the origin command at the same time.
     * if the recovery command already exists, only update the update_time field
*
* @param originCommand originCommand
* @param processInstance processInstance
*/
public void createRecoveryWaitingThreadCommand(Command originCommand, ProcessInstance processInstance) {
        // a sub process does not need to create a waiting-thread command
if (processInstance.getIsSubProcess() == Flag.YES) {
if (originCommand != null) {
commandMapper.deleteById(originCommand.getId());
}
return;
}
Map<String, String> cmdParam = new HashMap<>();
cmdParam.put(Constants.CMD_PARAM_RECOVERY_WAITING_THREAD, String.valueOf(processInstance.getId()));
        // the process instance exited in the "waiting thread" state
if (originCommand == null) {
Command command = new Command(
CommandType.RECOVER_WAITING_THREAD,
processInstance.getTaskDependType(),
processInstance.getFailureStrategy(),
processInstance.getExecutorId(),
processInstance.getProcessDefinition().getCode(),
JSONUtils.toJsonString(cmdParam),
processInstance.getWarningType(),
processInstance.getWarningGroupId(),
processInstance.getScheduleTime(),
processInstance.getWorkerGroup(),
processInstance.getEnvironmentCode(),
processInstance.getProcessInstancePriority(),
processInstance.getDryRun(),
processInstance.getId(),
processInstance.getProcessDefinitionVersion()
);
saveCommand(command);
return;
}
        // only update the command time if the current command is already a recover-waiting-thread command
if (originCommand.getCommandType() == CommandType.RECOVER_WAITING_THREAD) {
originCommand.setUpdateTime(new Date());
saveCommand(originCommand);
} else {
// delete old command and create new waiting thread command
commandMapper.deleteById(originCommand.getId());
originCommand.setId(0);
originCommand.setCommandType(CommandType.RECOVER_WAITING_THREAD);
originCommand.setUpdateTime(new Date());
originCommand.setCommandParam(JSONUtils.toJsonString(cmdParam));
originCommand.setProcessInstancePriority(processInstance.getProcessInstancePriority());
saveCommand(originCommand);
}
}
/**
* get schedule time from command
*
* @param command command
* @param cmdParam cmdParam map
* @return date
*/
private Date getScheduleTime(Command command, Map<String, String> cmdParam) {
Date scheduleTime = command.getScheduleTime();
if (scheduleTime == null
&& cmdParam != null
&& cmdParam.containsKey(CMDPARAM_COMPLEMENT_DATA_START_DATE)) {
Date start = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_START_DATE));
Date end = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_END_DATE));
List<Schedule> schedules = queryReleaseSchedulerListByProcessDefinitionCode(command.getProcessDefinitionCode());
List<Date> complementDateList = CronUtils.getSelfFireDateList(start, end, schedules);
if (complementDateList.size() > 0) {
scheduleTime = complementDateList.get(0);
} else {
logger.error("set scheduler time error: complement date list is empty, command: {}",
command.toString());
}
}
return scheduleTime;
}
/**
* generate a new work process instance from command.
*
* @param processDefinition processDefinition
* @param command command
* @param cmdParam cmdParam map
* @return process instance
*/
private ProcessInstance generateNewProcessInstance(ProcessDefinition processDefinition,
Command command,
Map<String, String> cmdParam) {
ProcessInstance processInstance = new ProcessInstance(processDefinition);
processInstance.setProcessDefinitionCode(processDefinition.getCode());
processInstance.setProcessDefinitionVersion(processDefinition.getVersion());
processInstance.setState(ExecutionStatus.RUNNING_EXECUTION);
processInstance.setRecovery(Flag.NO);
processInstance.setStartTime(new Date());
processInstance.setRunTimes(1);
processInstance.setMaxTryTimes(0);
processInstance.setCommandParam(command.getCommandParam());
processInstance.setCommandType(command.getCommandType());
processInstance.setIsSubProcess(Flag.NO);
processInstance.setTaskDependType(command.getTaskDependType());
processInstance.setFailureStrategy(command.getFailureStrategy());
processInstance.setExecutorId(command.getExecutorId());
WarningType warningType = command.getWarningType() == null ? WarningType.NONE : command.getWarningType();
processInstance.setWarningType(warningType);
Integer warningGroupId = command.getWarningGroupId() == null ? 0 : command.getWarningGroupId();
processInstance.setWarningGroupId(warningGroupId);
processInstance.setDryRun(command.getDryRun());
if (command.getScheduleTime() != null) {
processInstance.setScheduleTime(command.getScheduleTime());
}
processInstance.setCommandStartTime(command.getStartTime());
processInstance.setLocations(processDefinition.getLocations());
// reset global params while there are start parameters
setGlobalParamIfCommanded(processDefinition, cmdParam);
// curing global params
processInstance.setGlobalParams(ParameterUtils.curingGlobalParams(
processDefinition.getGlobalParamMap(),
processDefinition.getGlobalParamList(),
getCommandTypeIfComplement(processInstance, command),
processInstance.getScheduleTime()));
// set process instance priority
processInstance.setProcessInstancePriority(command.getProcessInstancePriority());
String workerGroup = StringUtils.isBlank(command.getWorkerGroup()) ? Constants.DEFAULT_WORKER_GROUP : command.getWorkerGroup();
processInstance.setWorkerGroup(workerGroup);
processInstance.setEnvironmentCode(Objects.isNull(command.getEnvironmentCode()) ? -1 : command.getEnvironmentCode());
processInstance.setTimeout(processDefinition.getTimeout());
processInstance.setTenantId(processDefinition.getTenantId());
return processInstance;
}
private void setGlobalParamIfCommanded(ProcessDefinition processDefinition, Map<String, String> cmdParam) {
// get start params from command param
Map<String, String> startParamMap = new HashMap<>();
if (cmdParam != null && cmdParam.containsKey(Constants.CMD_PARAM_START_PARAMS)) {
String startParamJson = cmdParam.get(Constants.CMD_PARAM_START_PARAMS);
startParamMap = JSONUtils.toMap(startParamJson);
}
Map<String, String> fatherParamMap = new HashMap<>();
if (cmdParam != null && cmdParam.containsKey(Constants.CMD_PARAM_FATHER_PARAMS)) {
String fatherParamJson = cmdParam.get(Constants.CMD_PARAM_FATHER_PARAMS);
fatherParamMap = JSONUtils.toMap(fatherParamJson);
}
startParamMap.putAll(fatherParamMap);
// set start param into global params
if (startParamMap.size() > 0
&& processDefinition.getGlobalParamMap() != null) {
for (Map.Entry<String, String> param : processDefinition.getGlobalParamMap().entrySet()) {
String val = startParamMap.get(param.getKey());
if (val != null) {
param.setValue(val);
}
}
}
}
/**
     * get the tenant for a process.
     * if there is a tenant id in the definition, use the tenant of the definition.
     * if there is no tenant id in the definition, or that tenant does not exist,
     * use the definition creator's tenant.
*
* @param tenantId tenantId
* @param userId userId
* @return tenant
*/
public Tenant getTenantForProcess(int tenantId, int userId) {
Tenant tenant = null;
if (tenantId >= 0) {
tenant = tenantMapper.queryById(tenantId);
}
if (userId == 0) {
return null;
}
if (tenant == null) {
User user = userMapper.selectById(userId);
tenant = tenantMapper.queryById(user.getTenantId());
}
return tenant;
}
/**
* get an environment
     * use the environment code to look up the environment.
*
* @param environmentCode environmentCode
* @return Environment
*/
public Environment findEnvironmentByCode(Long environmentCode) {
Environment environment = null;
if (environmentCode >= 0) {
environment = environmentMapper.queryByEnvironmentCode(environmentCode);
}
return environment;
}
/**
* check command parameters is valid
*
* @param command command
* @param cmdParam cmdParam map
* @return whether command param is valid
*/
private Boolean checkCmdParam(Command command, Map<String, String> cmdParam) {
if (command.getTaskDependType() == TaskDependType.TASK_ONLY || command.getTaskDependType() == TaskDependType.TASK_PRE) {
if (cmdParam == null
|| !cmdParam.containsKey(Constants.CMD_PARAM_START_NODES)
|| cmdParam.get(Constants.CMD_PARAM_START_NODES).isEmpty()) {
logger.error("command node depend type is {}, but start nodes is null ", command.getTaskDependType());
return false;
}
}
return true;
}
/**
* construct process instance according to one command.
*
* @param command command
* @param host host
     * @param processDefinitionCacheMaps cache of process definitions, keyed by "code-version"
* @return process instance
*/
private ProcessInstance constructProcessInstance(Command command, String host, HashMap<String, ProcessDefinition> processDefinitionCacheMaps) {
ProcessInstance processInstance;
ProcessDefinition processDefinition;
CommandType commandType = command.getCommandType();
String key = String.format("%d-%d", command.getProcessDefinitionCode(), command.getProcessDefinitionVersion());
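        // look in the per-scan cache first to avoid repeated process definition queries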
if (processDefinitionCacheMaps.containsKey(key)) {
processDefinition = processDefinitionCacheMaps.get(key);
} else {
processDefinition = this.findProcessDefinition(command.getProcessDefinitionCode(), command.getProcessDefinitionVersion());
if (processDefinition != null) {
processDefinitionCacheMaps.put(key, processDefinition);
}
}
if (processDefinition == null) {
logger.error("cannot find the work process define! define code : {}", command.getProcessDefinitionCode());
return null;
}
Map<String, String> cmdParam = JSONUtils.toMap(command.getCommandParam());
int processInstanceId = command.getProcessInstanceId();
if (processInstanceId == 0) {
processInstance = generateNewProcessInstance(processDefinition, command, cmdParam);
} else {
processInstance = this.findProcessInstanceDetailById(processInstanceId);
if (processInstance == null) {
return processInstance;
}
}
if (cmdParam != null) {
CommandType commandTypeIfComplement = getCommandTypeIfComplement(processInstance, command);
// reset global params while repeat running is needed by cmdParam
if (commandTypeIfComplement == CommandType.REPEAT_RUNNING) {
setGlobalParamIfCommanded(processDefinition, cmdParam);
}
// Recalculate global parameters after rerun.
processInstance.setGlobalParams(ParameterUtils.curingGlobalParams(
processDefinition.getGlobalParamMap(),
processDefinition.getGlobalParamList(),
commandTypeIfComplement,
processInstance.getScheduleTime()));
processInstance.setProcessDefinition(processDefinition);
}
//reset command parameter
if (processInstance.getCommandParam() != null) {
Map<String, String> processCmdParam = JSONUtils.toMap(processInstance.getCommandParam());
for (Map.Entry<String, String> entry : processCmdParam.entrySet()) {
if (!cmdParam.containsKey(entry.getKey())) {
cmdParam.put(entry.getKey(), entry.getValue());
}
}
}
// reset command parameter if sub process
if (cmdParam != null && cmdParam.containsKey(Constants.CMD_PARAM_SUB_PROCESS)) {
processInstance.setCommandParam(command.getCommandParam());
}
if (Boolean.FALSE.equals(checkCmdParam(command, cmdParam))) {
logger.error("command parameter check failed!");
return null;
}
if (command.getScheduleTime() != null) {
processInstance.setScheduleTime(command.getScheduleTime());
}
processInstance.setHost(host);
ExecutionStatus runStatus = ExecutionStatus.RUNNING_EXECUTION;
int runTime = processInstance.getRunTimes();
switch (commandType) {
case START_PROCESS:
break;
case START_FAILURE_TASK_PROCESS:
// find failed tasks and init these tasks
List<Integer> failedList = this.findTaskIdByInstanceState(processInstance.getId(), ExecutionStatus.FAILURE);
List<Integer> toleranceList = this.findTaskIdByInstanceState(processInstance.getId(), ExecutionStatus.NEED_FAULT_TOLERANCE);
List<Integer> killedList = this.findTaskIdByInstanceState(processInstance.getId(), ExecutionStatus.KILL);
cmdParam.remove(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING);
failedList.addAll(killedList);
failedList.addAll(toleranceList);
for (Integer taskId : failedList) {
initTaskInstance(this.findTaskInstanceById(taskId));
}
cmdParam.put(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING,
String.join(Constants.COMMA, convertIntListToString(failedList)));
processInstance.setCommandParam(JSONUtils.toJsonString(cmdParam));
processInstance.setRunTimes(runTime + 1);
break;
case START_CURRENT_TASK_PROCESS:
break;
case RECOVER_WAITING_THREAD:
break;
case RECOVER_SUSPENDED_PROCESS:
// find pause tasks and init task's state
cmdParam.remove(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING);
List<Integer> suspendedNodeList = this.findTaskIdByInstanceState(processInstance.getId(), ExecutionStatus.PAUSE);
List<Integer> stopNodeList = findTaskIdByInstanceState(processInstance.getId(),
ExecutionStatus.KILL);
suspendedNodeList.addAll(stopNodeList);
for (Integer taskId : suspendedNodeList) {
// initialize the pause state
initTaskInstance(this.findTaskInstanceById(taskId));
}
cmdParam.put(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING, String.join(",", convertIntListToString(suspendedNodeList)));
processInstance.setCommandParam(JSONUtils.toJsonString(cmdParam));
processInstance.setRunTimes(runTime + 1);
break;
case RECOVER_TOLERANCE_FAULT_PROCESS:
// recover tolerance fault process
processInstance.setRecovery(Flag.YES);
runStatus = processInstance.getState();
break;
case COMPLEMENT_DATA:
// delete all the valid tasks when complement data if id is not null
if (processInstance.getId() != 0) {
List<TaskInstance> taskInstanceList = this.findValidTaskListByProcessId(processInstance.getId());
for (TaskInstance taskInstance : taskInstanceList) {
taskInstance.setFlag(Flag.NO);
this.updateTaskInstance(taskInstance);
}
}
break;
case REPEAT_RUNNING:
// delete the recover task names from command parameter
if (cmdParam.containsKey(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING)) {
cmdParam.remove(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING);
processInstance.setCommandParam(JSONUtils.toJsonString(cmdParam));
}
// delete all the valid tasks when repeat running
List<TaskInstance> validTaskList = findValidTaskListByProcessId(processInstance.getId());
for (TaskInstance taskInstance : validTaskList) {
taskInstance.setFlag(Flag.NO);
updateTaskInstance(taskInstance);
}
processInstance.setStartTime(new Date());
processInstance.setEndTime(null);
processInstance.setRunTimes(runTime + 1);
initComplementDataParam(processDefinition, processInstance, cmdParam);
break;
case SCHEDULER:
break;
default:
break;
}
processInstance.setState(runStatus);
return processInstance;
}
/**
* get process definition by command
* If it is a fault-tolerant command, get the specified version of ProcessDefinition through ProcessInstance
* Otherwise, get the latest version of ProcessDefinition
*
* @return ProcessDefinition
*/
private ProcessDefinition getProcessDefinitionByCommand(long processDefinitionCode, Map<String, String> cmdParam) {
if (cmdParam != null) {
int processInstanceId = 0;
if (cmdParam.containsKey(Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING)) {
processInstanceId = Integer.parseInt(cmdParam.get(Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING));
} else if (cmdParam.containsKey(Constants.CMD_PARAM_SUB_PROCESS)) {
processInstanceId = Integer.parseInt(cmdParam.get(Constants.CMD_PARAM_SUB_PROCESS));
} else if (cmdParam.containsKey(Constants.CMD_PARAM_RECOVERY_WAITING_THREAD)) {
processInstanceId = Integer.parseInt(cmdParam.get(Constants.CMD_PARAM_RECOVERY_WAITING_THREAD));
}
if (processInstanceId != 0) {
ProcessInstance processInstance = this.findProcessInstanceDetailById(processInstanceId);
if (processInstance == null) {
return null;
}
return processDefineLogMapper.queryByDefinitionCodeAndVersion(
processInstance.getProcessDefinitionCode(), processInstance.getProcessDefinitionVersion());
}
}
return processDefineMapper.queryByCode(processDefinitionCode);
}
/**
* return complement data if the process start with complement data
*
* @param processInstance processInstance
* @param command command
* @return command type
*/
private CommandType getCommandTypeIfComplement(ProcessInstance processInstance, Command command) {
if (CommandType.COMPLEMENT_DATA == processInstance.getCmdTypeIfComplement()) {
return CommandType.COMPLEMENT_DATA;
} else {
return command.getCommandType();
}
}
/**
* initialize complement data parameters
*
* @param processDefinition processDefinition
* @param processInstance processInstance
* @param cmdParam cmdParam
*/
private void initComplementDataParam(ProcessDefinition processDefinition,
ProcessInstance processInstance,
Map<String, String> cmdParam) {
if (!processInstance.isComplementData()) {
return;
}
Date start = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_START_DATE));
Date end = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_END_DATE));
List<Schedule> listSchedules = queryReleaseSchedulerListByProcessDefinitionCode(processInstance.getProcessDefinitionCode());
List<Date> complementDate = CronUtils.getSelfFireDateList(start, end, listSchedules);
if (complementDate.size() > 0
&& Flag.NO == processInstance.getIsSubProcess()) {
processInstance.setScheduleTime(complementDate.get(0));
}
processInstance.setGlobalParams(ParameterUtils.curingGlobalParams(
processDefinition.getGlobalParamMap(),
processDefinition.getGlobalParamList(),
CommandType.COMPLEMENT_DATA, processInstance.getScheduleTime()));
}
/**
* set sub work process parameters.
* handle sub work process instance, update relation table and command parameters
* set sub work process flag, extends parent work process command parameters
*
* @param subProcessInstance subProcessInstance
*/
public void setSubProcessParam(ProcessInstance subProcessInstance) {
String cmdParam = subProcessInstance.getCommandParam();
if (StringUtils.isEmpty(cmdParam)) {
return;
}
Map<String, String> paramMap = JSONUtils.toMap(cmdParam);
// write sub process id into cmd param.
if (paramMap.containsKey(CMD_PARAM_SUB_PROCESS)
&& CMD_PARAM_EMPTY_SUB_PROCESS.equals(paramMap.get(CMD_PARAM_SUB_PROCESS))) {
paramMap.remove(CMD_PARAM_SUB_PROCESS);
paramMap.put(CMD_PARAM_SUB_PROCESS, String.valueOf(subProcessInstance.getId()));
subProcessInstance.setCommandParam(JSONUtils.toJsonString(paramMap));
subProcessInstance.setIsSubProcess(Flag.YES);
this.saveProcessInstance(subProcessInstance);
}
// copy parent instance user-defined params to sub process.
String parentInstanceId = paramMap.get(CMD_PARAM_SUB_PROCESS_PARENT_INSTANCE_ID);
if (StringUtils.isNotEmpty(parentInstanceId)) {
ProcessInstance parentInstance = findProcessInstanceDetailById(Integer.parseInt(parentInstanceId));
if (parentInstance != null) {
subProcessInstance.setGlobalParams(
joinGlobalParams(parentInstance.getGlobalParams(), subProcessInstance.getGlobalParams()));
this.saveProcessInstance(subProcessInstance);
} else {
logger.error("sub process command params error, cannot find parent instance: {} ", cmdParam);
}
}
ProcessInstanceMap processInstanceMap = JSONUtils.parseObject(cmdParam, ProcessInstanceMap.class);
if (processInstanceMap == null || processInstanceMap.getParentProcessInstanceId() == 0) {
return;
}
// update sub process id to process map table
processInstanceMap.setProcessInstanceId(subProcessInstance.getId());
this.updateWorkProcessInstanceMap(processInstanceMap);
}
/**
* join parent global params into sub process.
* only keys that do not already exist in the sub process global params are joined.
*
* @param parentGlobalParams parentGlobalParams
* @param subGlobalParams subGlobalParams
* @return global params join
*/
private String joinGlobalParams(String parentGlobalParams, String subGlobalParams) {
List<Property> parentPropertyList = JSONUtils.toList(parentGlobalParams, Property.class);
List<Property> subPropertyList = JSONUtils.toList(subGlobalParams, Property.class);
Map<String, String> subMap = subPropertyList.stream().collect(Collectors.toMap(Property::getProp, Property::getValue));
for (Property parent : parentPropertyList) {
if (!subMap.containsKey(parent.getProp())) {
subPropertyList.add(parent);
}
}
return JSONUtils.toJsonString(subPropertyList);
}
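/*
 * Worked example (illustrative only): given
 *   parentGlobalParams = [{"prop":"dt","value":"20211101"},{"prop":"bizId","value":"7"}]
 *   subGlobalParams    = [{"prop":"dt","value":"override"}]
 * the parent "dt" is skipped because the sub process already defines it, while
 * the parent-only "bizId" is appended, so the sub process value always wins.
 */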
/**
* initialize task instance
*
* @param taskInstance taskInstance
*/
private void initTaskInstance(TaskInstance taskInstance) {
if (!taskInstance.isSubProcess()
&& (taskInstance.getState().typeIsCancel() || taskInstance.getState().typeIsFailure())) {
taskInstance.setFlag(Flag.NO);
updateTaskInstance(taskInstance);
return;
}
taskInstance.setState(ExecutionStatus.SUBMITTED_SUCCESS);
updateTaskInstance(taskInstance);
}
/**
* retry submit task to db
*
* @param taskInstance taskInstance
* @param commitRetryTimes max number of commit attempts
* @param commitInterval interval between attempts in milliseconds
* @return the persisted task instance, or null if every attempt failed
*/
public TaskInstance submitTask(TaskInstance taskInstance, int commitRetryTimes, int commitInterval) {
int retryTimes = 1;
boolean submitDB = false;
TaskInstance task = null;
while (retryTimes <= commitRetryTimes) {
try {
if (!submitDB) {
// submit task to db
task = submitTask(taskInstance);
if (task != null && task.getId() != 0) {
submitDB = true;
break;
}
}
if (!submitDB) {
logger.error("task commit to db failed , taskId {} has already retry {} times, please check the database", taskInstance.getId(), retryTimes);
}
Thread.sleep(commitInterval);
} catch (Exception e) {
logger.error("task commit to mysql failed", e);
}
retryTimes += 1;
}
return task;
}
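/*
 * Hypothetical caller sketch (retry settings are illustrative and would
 * normally come from the master configuration):
 *
 *   TaskInstance persisted = processService.submitTask(taskInstance, 5, 1000);
 *   if (persisted == null || persisted.getId() == 0) {
 *       // all commit attempts failed; the caller must treat the submit as failed
 *   }
 */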
/**
* submit task to db
* submit sub process to command
*
* @param taskInstance taskInstance
* @return task instance
*/
@Transactional(rollbackFor = Exception.class)
public TaskInstance submitTask(TaskInstance taskInstance) {
ProcessInstance processInstance = this.findProcessInstanceDetailById(taskInstance.getProcessInstanceId());
logger.info("start submit task : {}, instance id:{}, state: {}",
taskInstance.getName(), taskInstance.getProcessInstanceId(), processInstance.getState());
//submit to db
TaskInstance task = submitTaskInstanceToDB(taskInstance, processInstance);
if (task == null) {
logger.error("end submit task to db error, task name:{}, process id:{} state: {} ",
taskInstance.getName(), taskInstance.getProcessInstanceId(), processInstance.getState());
return task;
}
if (!task.getState().typeIsFinished()) {
createSubWorkProcess(processInstance, task);
}
logger.info("end submit task to db successfully:{} {} state:{} complete, instance id:{} state: {} ",
taskInstance.getId(), taskInstance.getName(), task.getState(), processInstance.getId(), processInstance.getState());
return task;
}
/**
* set work process instance map
* considerations:
* repeat running does not generate a new sub process instance
* set map {parent instance id, task instance id, 0(child instance id)}
*
* @param parentInstance parentInstance
* @param parentTask parentTask
* @return process instance map
*/
private ProcessInstanceMap setProcessInstanceMap(ProcessInstance parentInstance, TaskInstance parentTask) {
ProcessInstanceMap processMap = findWorkProcessMapByParent(parentInstance.getId(), parentTask.getId());
if (processMap != null) {
return processMap;
}
if (parentInstance.getCommandType() == CommandType.REPEAT_RUNNING) {
// update current task id to map
processMap = findPreviousTaskProcessMap(parentInstance, parentTask);
if (processMap != null) {
processMap.setParentTaskInstanceId(parentTask.getId());
updateWorkProcessInstanceMap(processMap);
return processMap;
}
}
// new task
processMap = new ProcessInstanceMap();
processMap.setParentProcessInstanceId(parentInstance.getId());
processMap.setParentTaskInstanceId(parentTask.getId());
createWorkProcessInstanceMap(processMap);
return processMap;
}
/**
* find previous task work process map.
*
* @param parentProcessInstance parentProcessInstance
* @param parentTask parentTask
* @return process instance map
*/
private ProcessInstanceMap findPreviousTaskProcessMap(ProcessInstance parentProcessInstance,
TaskInstance parentTask) {
Integer preTaskId = 0;
List<TaskInstance> preTaskList = this.findPreviousTaskListByWorkProcessId(parentProcessInstance.getId());
for (TaskInstance task : preTaskList) {
if (task.getName().equals(parentTask.getName())) {
preTaskId = task.getId();
ProcessInstanceMap map = findWorkProcessMapByParent(parentProcessInstance.getId(), preTaskId);
if (map != null) {
return map;
}
}
}
logger.info("sub process instance is not found,parent task:{},parent instance:{}",
parentTask.getId(), parentProcessInstance.getId());
return null;
}
/**
* create sub work process command
*
* @param parentProcessInstance parentProcessInstance
* @param task task
*/
public void createSubWorkProcess(ProcessInstance parentProcessInstance, TaskInstance task) {
if (!task.isSubProcess()) {
return;
}
//check create sub work flow firstly
ProcessInstanceMap instanceMap = findWorkProcessMapByParent(parentProcessInstance.getId(), task.getId());
if (null != instanceMap && CommandType.RECOVER_TOLERANCE_FAULT_PROCESS == parentProcessInstance.getCommandType()) {
// recover tolerance fault would not create a new command when the sub command has already been created
return;
}
instanceMap = setProcessInstanceMap(parentProcessInstance, task);
ProcessInstance childInstance = null;
if (instanceMap.getProcessInstanceId() != 0) {
childInstance = findProcessInstanceById(instanceMap.getProcessInstanceId());
}
Command subProcessCommand = createSubProcessCommand(parentProcessInstance, childInstance, instanceMap, task);
updateSubProcessDefinitionByParent(parentProcessInstance, subProcessCommand.getProcessDefinitionCode());
initSubInstanceState(childInstance);
createCommand(subProcessCommand);
logger.info("sub process command created: {} ", subProcessCommand);
}
/**
* complement data needs transform parent parameter to child.
*/
private String getSubWorkFlowParam(ProcessInstanceMap instanceMap, ProcessInstance parentProcessInstance, Map<String, String> fatherParams) {
// set sub work process command
String processMapStr = JSONUtils.toJsonString(instanceMap);
Map<String, String> cmdParam = JSONUtils.toMap(processMapStr);
if (parentProcessInstance.isComplementData()) {
Map<String, String> parentParam = JSONUtils.toMap(parentProcessInstance.getCommandParam());
String endTime = parentParam.get(CMDPARAM_COMPLEMENT_DATA_END_DATE);
String startTime = parentParam.get(CMDPARAM_COMPLEMENT_DATA_START_DATE);
cmdParam.put(CMDPARAM_COMPLEMENT_DATA_END_DATE, endTime);
cmdParam.put(CMDPARAM_COMPLEMENT_DATA_START_DATE, startTime);
processMapStr = JSONUtils.toJsonString(cmdParam);
}
if (fatherParams.size() != 0) {
cmdParam.put(CMD_PARAM_FATHER_PARAMS, JSONUtils.toJsonString(fatherParams));
processMapStr = JSONUtils.toJsonString(cmdParam);
}
return processMapStr;
}
public Map<String, String> getGlobalParamMap(String globalParams) {
List<Property> propList;
Map<String, String> globalParamMap = new HashMap<>();
if (StringUtils.isNotEmpty(globalParams)) {
propList = JSONUtils.toList(globalParams, Property.class);
globalParamMap = propList.stream().collect(Collectors.toMap(Property::getProp, Property::getValue));
}
return globalParamMap;
}
/**
* create sub work process command
*/
public Command createSubProcessCommand(ProcessInstance parentProcessInstance,
ProcessInstance childInstance,
ProcessInstanceMap instanceMap,
TaskInstance task) {
CommandType commandType = getSubCommandType(parentProcessInstance, childInstance);
Map<String, String> subProcessParam = JSONUtils.toMap(task.getTaskParams());
int childDefineCode = Integer.parseInt(subProcessParam.get(Constants.CMD_PARAM_SUB_PROCESS_DEFINE_CODE));
ProcessDefinition subProcessDefinition = processDefineMapper.queryByCode(childDefineCode);
Object localParams = subProcessParam.get(Constants.LOCAL_PARAMS);
List<Property> allParam = JSONUtils.toList(JSONUtils.toJsonString(localParams), Property.class);
Map<String, String> globalMap = this.getGlobalParamMap(parentProcessInstance.getGlobalParams());
Map<String, String> fatherParams = new HashMap<>();
if (CollectionUtils.isNotEmpty(allParam)) {
for (Property info : allParam) {
fatherParams.put(info.getProp(), globalMap.get(info.getProp()));
}
}
String processParam = getSubWorkFlowParam(instanceMap, parentProcessInstance, fatherParams);
int subProcessInstanceId = childInstance == null ? 0 : childInstance.getId();
return new Command(
commandType,
TaskDependType.TASK_POST,
parentProcessInstance.getFailureStrategy(),
parentProcessInstance.getExecutorId(),
subProcessDefinition.getCode(),
processParam,
parentProcessInstance.getWarningType(),
parentProcessInstance.getWarningGroupId(),
parentProcessInstance.getScheduleTime(),
task.getWorkerGroup(),
task.getEnvironmentCode(),
parentProcessInstance.getProcessInstancePriority(),
parentProcessInstance.getDryRun(),
subProcessInstanceId,
subProcessDefinition.getVersion()
);
}
/**
* initialize sub work flow state
* child instance state would be initialized when 'recovery from pause/stop/failure'
*/
private void initSubInstanceState(ProcessInstance childInstance) {
if (childInstance != null) {
childInstance.setState(ExecutionStatus.RUNNING_EXECUTION);
updateProcessInstance(childInstance);
}
}
/**
* get sub work flow command type
* child instance exist: child command = fatherCommand
* child instance not exists: child command = fatherCommand[0]
*/
private CommandType getSubCommandType(ProcessInstance parentProcessInstance, ProcessInstance childInstance) {
CommandType commandType = parentProcessInstance.getCommandType();
if (childInstance == null) {
String fatherHistoryCommand = parentProcessInstance.getHistoryCmd();
commandType = CommandType.valueOf(fatherHistoryCommand.split(Constants.COMMA)[0]);
}
return commandType;
}
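/*
 * Example (illustrative): if the parent's historyCmd is
 * "SCHEDULER,RECOVER_SUSPENDED_PROCESS" and no child instance exists yet, the
 * first recorded type SCHEDULER is used for the sub command; once a child
 * instance exists, the parent's current command type is reused as-is.
 */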
/**
* update sub process definition
*
* @param parentProcessInstance parentProcessInstance
* @param childDefinitionCode childDefinitionId
*/
private void updateSubProcessDefinitionByParent(ProcessInstance parentProcessInstance, long childDefinitionCode) {
ProcessDefinition fatherDefinition = this.findProcessDefinition(parentProcessInstance.getProcessDefinitionCode(),
parentProcessInstance.getProcessDefinitionVersion());
ProcessDefinition childDefinition = this.findProcessDefinitionByCode(childDefinitionCode);
if (childDefinition != null && fatherDefinition != null) {
childDefinition.setWarningGroupId(fatherDefinition.getWarningGroupId());
processDefineMapper.updateById(childDefinition);
}
}
/**
* submit task to mysql
*
* @param taskInstance taskInstance
* @param processInstance processInstance
* @return task instance
*/
public TaskInstance submitTaskInstanceToDB(TaskInstance taskInstance, ProcessInstance processInstance) {
ExecutionStatus processInstanceState = processInstance.getState();
if (taskInstance.getState().typeIsFailure()) {
if (taskInstance.isSubProcess()) {
taskInstance.setRetryTimes(taskInstance.getRetryTimes() + 1);
} else {
if (processInstanceState != ExecutionStatus.READY_STOP
&& processInstanceState != ExecutionStatus.READY_PAUSE) {
// failure task set invalid
taskInstance.setFlag(Flag.NO);
updateTaskInstance(taskInstance);
// create new task instance
if (taskInstance.getState() != ExecutionStatus.NEED_FAULT_TOLERANCE) {
taskInstance.setRetryTimes(taskInstance.getRetryTimes() + 1);
}
taskInstance.setSubmitTime(null);
taskInstance.setLogPath(null);
taskInstance.setExecutePath(null);
taskInstance.setStartTime(null);
taskInstance.setEndTime(null);
taskInstance.setFlag(Flag.YES);
taskInstance.setHost(null);
taskInstance.setId(0);
}
}
}
taskInstance.setExecutorId(processInstance.getExecutorId());
taskInstance.setProcessInstancePriority(processInstance.getProcessInstancePriority());
taskInstance.setState(getSubmitTaskState(taskInstance, processInstanceState));
if (taskInstance.getSubmitTime() == null) {
taskInstance.setSubmitTime(new Date());
}
if (taskInstance.getFirstSubmitTime() == null) {
taskInstance.setFirstSubmitTime(taskInstance.getSubmitTime());
}
boolean saveResult = saveTaskInstance(taskInstance);
if (!saveResult) {
return null;
}
return taskInstance;
}
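/*
 * Note on the failure branch above (derived from the code, not authoritative):
 * a failed non-sub-process task is first invalidated (flag NO, old row kept
 * for history) and then re-saved as a fresh row (id reset to 0, host and paths
 * cleared), with retryTimes incremented unless the failure was
 * NEED_FAULT_TOLERANCE, in which case the retry does not count.
 */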
/**
* get submit task instance state by the work process state
* the task state cannot be modified when it is running/killed/submitted successfully,
* because the task instance already exists in the task queue.
* return pause if work process state is ready pause
* return stop if work process state is ready stop
* if all of above are not satisfied, return submit success
*
* @param taskInstance taskInstance
* @param processInstanceState processInstanceState
* @return process instance state
*/
public ExecutionStatus getSubmitTaskState(TaskInstance taskInstance, ExecutionStatus processInstanceState) {
ExecutionStatus state = taskInstance.getState();
// running, delayed or killed
// the task already exists in task queue
// return state
if (
state == ExecutionStatus.RUNNING_EXECUTION
|| state == ExecutionStatus.DELAY_EXECUTION
|| state == ExecutionStatus.KILL
) {
return state;
}
// return pause/stop if the process instance state is ready pause / stop,
// or return submit success
if (processInstanceState == ExecutionStatus.READY_PAUSE) {
state = ExecutionStatus.PAUSE;
} else if (processInstanceState == ExecutionStatus.READY_STOP
|| !checkProcessStrategy(taskInstance)) {
state = ExecutionStatus.KILL;
} else {
state = ExecutionStatus.SUBMITTED_SUCCESS;
}
return state;
}
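/*
 * Decision summary of the logic above (for reference only):
 *   task state RUNNING_EXECUTION / DELAY_EXECUTION / KILL -> returned unchanged
 *   process READY_PAUSE                                   -> PAUSE
 *   process READY_STOP or failure strategy violated       -> KILL
 *   anything else                                         -> SUBMITTED_SUCCESS
 */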
/**
* check process instance strategy
*
* @param taskInstance taskInstance
* @return check strategy result
*/
private boolean checkProcessStrategy(TaskInstance taskInstance) {
ProcessInstance processInstance = this.findProcessInstanceById(taskInstance.getProcessInstanceId());
FailureStrategy failureStrategy = processInstance.getFailureStrategy();
if (failureStrategy == FailureStrategy.CONTINUE) {
return true;
}
List<TaskInstance> taskInstances = this.findValidTaskListByProcessId(taskInstance.getProcessInstanceId());
for (TaskInstance task : taskInstances) {
if (task.getState() == ExecutionStatus.FAILURE
&& task.getRetryTimes() >= task.getMaxRetryTimes()) {
return false;
}
}
return true;
}
/**
* insert or update work process instance to data base
*
* @param processInstance processInstance
*/
public void saveProcessInstance(ProcessInstance processInstance) {
if (processInstance == null) {
logger.error("save error, process instance is null!");
return;
}
if (processInstance.getId() != 0) {
processInstanceMapper.updateById(processInstance);
} else {
processInstanceMapper.insert(processInstance);
}
}
/**
* insert or update command
*
* @param command command
* @return save command result
*/
public int saveCommand(Command command) {
if (command.getId() != 0) {
return commandMapper.updateById(command);
} else {
return commandMapper.insert(command);
}
}
/**
* insert or update task instance
*
* @param taskInstance taskInstance
* @return save task instance result
*/
public boolean saveTaskInstance(TaskInstance taskInstance) {
if (taskInstance.getId() != 0) {
return updateTaskInstance(taskInstance);
} else {
return createTaskInstance(taskInstance);
}
}
/**
* insert task instance
*
* @param taskInstance taskInstance
* @return create task instance result
*/
public boolean createTaskInstance(TaskInstance taskInstance) {
int count = taskInstanceMapper.insert(taskInstance);
return count > 0;
}
/**
* update task instance
*
* @param taskInstance taskInstance
* @return update task instance result
*/
public boolean updateTaskInstance(TaskInstance taskInstance) {
int count = taskInstanceMapper.updateById(taskInstance);
return count > 0;
}
/**
* find task instance by id
*
* @param taskId task id
* @return task instance
*/
public TaskInstance findTaskInstanceById(Integer taskId) {
return taskInstanceMapper.selectById(taskId);
}
/**
* package task instance,associate processInstance and processDefine
*
* @param taskInstId taskInstId
* @return task instance
*/
public TaskInstance getTaskInstanceDetailByTaskId(int taskInstId) {
// get task instance
TaskInstance taskInstance = findTaskInstanceById(taskInstId);
if (taskInstance == null) {
return null;
}
setTaskInstanceDetail(taskInstance);
return taskInstance;
}
/**
* package task instance,associate processInstance and processDefine
*
* @param taskInstance taskInstance
* @return task instance
*/
public void setTaskInstanceDetail(TaskInstance taskInstance) {
// get process instance
ProcessInstance processInstance = findProcessInstanceDetailById(taskInstance.getProcessInstanceId());
// get process define
ProcessDefinition processDefine = findProcessDefinition(processInstance.getProcessDefinitionCode(),
processInstance.getProcessDefinitionVersion());
taskInstance.setProcessInstance(processInstance);
taskInstance.setProcessDefine(processDefine);
TaskDefinition taskDefinition = taskDefinitionLogMapper.queryByDefinitionCodeAndVersion(
taskInstance.getTaskCode(),
taskInstance.getTaskDefinitionVersion());
updateTaskDefinitionResources(taskDefinition);
taskInstance.setTaskDefine(taskDefinition);
}
/**
* Update {@link ResourceInfo} information in {@link TaskDefinition}
*
* @param taskDefinition the given {@link TaskDefinition}
*/
private void updateTaskDefinitionResources(TaskDefinition taskDefinition) {
Map<String, Object> taskParameters = JSONUtils.parseObject(
taskDefinition.getTaskParams(),
new TypeReference<Map<String, Object>>() { });
if (taskParameters != null) {
// if contains mainJar field, query resource from database
// Flink, Spark, MR
if (taskParameters.containsKey("mainJar")) {
Object mainJarObj = taskParameters.get("mainJar");
ResourceInfo mainJar = JSONUtils.parseObject(
JSONUtils.toJsonString(mainJarObj),
ResourceInfo.class);
ResourceInfo resourceInfo = updateResourceInfo(mainJar);
if (resourceInfo != null) {
taskParameters.put("mainJar", resourceInfo);
}
}
// update resourceList information
if (taskParameters.containsKey("resourceList")) {
String resourceListStr = JSONUtils.toJsonString(taskParameters.get("resourceList"));
List<ResourceInfo> resourceInfos = JSONUtils.toList(resourceListStr, ResourceInfo.class);
List<ResourceInfo> updatedResourceInfos = resourceInfos
.stream()
.map(this::updateResourceInfo)
.filter(Objects::nonNull)
.collect(Collectors.toList());
taskParameters.put("resourceList", updatedResourceInfos);
}
// set task parameters
taskDefinition.setTaskParams(JSONUtils.toJsonString(taskParameters));
}
}
/**
* update {@link ResourceInfo} by given original ResourceInfo
*
* @param res origin resource info
* @return {@link ResourceInfo}
*/
private ResourceInfo updateResourceInfo(ResourceInfo res) {
ResourceInfo resourceInfo = null;
// only applies if the resource info is not null and carries a valid id; "resourceName" is refilled from the database
if (res != null) {
int resourceId = res.getId();
if (resourceId <= 0) {
logger.error("invalid resourceId, {}", resourceId);
return null;
}
resourceInfo = new ResourceInfo();
// get resource from database, only one resource should be returned
Resource resource = getResourceById(resourceId);
resourceInfo.setId(resourceId);
resourceInfo.setRes(resource.getFileName());
resourceInfo.setResourceName(resource.getFullName());
if (logger.isInfoEnabled()) {
logger.info("updated resource info {}",
JSONUtils.toJsonString(resourceInfo));
}
}
return resourceInfo;
}
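/*
 * Example (illustrative values): an input ResourceInfo carrying only {id: 12}
 * is enriched from the resource table into {id: 12, res: "wordcount.jar",
 * resourceName: "/jars/wordcount.jar"}; a non-positive id is rejected and
 * logged. Assumption worth noting: getResourceById(resourceId) could return
 * null for a deleted resource, which would fail here.
 */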
/**
* get id list by task state
*
* @param instanceId instanceId
* @param state state
* @return task instance states
*/
public List<Integer> findTaskIdByInstanceState(int instanceId, ExecutionStatus state) {
return taskInstanceMapper.queryTaskByProcessIdAndState(instanceId, state.ordinal());
}
/**
* find valid task list by process definition id
*
* @param processInstanceId processInstanceId
* @return task instance list
*/
public List<TaskInstance> findValidTaskListByProcessId(Integer processInstanceId) {
return taskInstanceMapper.findValidTaskListByProcessId(processInstanceId, Flag.YES);
}
/**
* find previous task list by work process id
*
* @param processInstanceId processInstanceId
* @return task instance list
*/
public List<TaskInstance> findPreviousTaskListByWorkProcessId(Integer processInstanceId) {
return taskInstanceMapper.findValidTaskListByProcessId(processInstanceId, Flag.NO);
}
/**
* update work process instance map
*
* @param processInstanceMap processInstanceMap
* @return update process instance result
*/
public int updateWorkProcessInstanceMap(ProcessInstanceMap processInstanceMap) {
return processInstanceMapMapper.updateById(processInstanceMap);
}
/**
* create work process instance map
*
* @param processInstanceMap processInstanceMap
* @return create process instance result
*/
public int createWorkProcessInstanceMap(ProcessInstanceMap processInstanceMap) {
int count = 0;
if (processInstanceMap != null) {
return processInstanceMapMapper.insert(processInstanceMap);
}
return count;
}
/**
* find work process map by parent process id and parent task id.
*
* @param parentWorkProcessId parentWorkProcessId
* @param parentTaskId parentTaskId
* @return process instance map
*/
public ProcessInstanceMap findWorkProcessMapByParent(Integer parentWorkProcessId, Integer parentTaskId) {
return processInstanceMapMapper.queryByParentId(parentWorkProcessId, parentTaskId);
}
/**
* delete work process map by parent process id
*
* @param parentWorkProcessId parentWorkProcessId
* @return delete process map result
*/
public int deleteWorkProcessMapByParentId(int parentWorkProcessId) {
return processInstanceMapMapper.deleteByParentProcessId(parentWorkProcessId);
}
/**
* find sub process instance
*
* @param parentProcessId parentProcessId
* @param parentTaskId parentTaskId
* @return process instance
*/
public ProcessInstance findSubProcessInstance(Integer parentProcessId, Integer parentTaskId) {
ProcessInstance processInstance = null;
ProcessInstanceMap processInstanceMap = processInstanceMapMapper.queryByParentId(parentProcessId, parentTaskId);
if (processInstanceMap == null || processInstanceMap.getProcessInstanceId() == 0) {
return processInstance;
}
processInstance = findProcessInstanceById(processInstanceMap.getProcessInstanceId());
return processInstance;
}
/**
* find parent process instance
*
* @param subProcessId subProcessId
* @return process instance
*/
public ProcessInstance findParentProcessInstance(Integer subProcessId) {
ProcessInstance processInstance = null;
ProcessInstanceMap processInstanceMap = processInstanceMapMapper.queryBySubProcessId(subProcessId);
if (processInstanceMap == null || processInstanceMap.getProcessInstanceId() == 0) {
return processInstance;
}
processInstance = findProcessInstanceById(processInstanceMap.getParentProcessInstanceId());
return processInstance;
}
/**
* change task state
*
* @param taskInstance taskInstance
* @param state state
* @param startTime startTime
* @param host host
* @param executePath executePath
* @param logPath logPath
* @param taskInstId taskInstId
*/
public void changeTaskState(TaskInstance taskInstance, ExecutionStatus state, Date startTime, String host,
String executePath,
String logPath,
int taskInstId) {
taskInstance.setState(state);
taskInstance.setStartTime(startTime);
taskInstance.setHost(host);
taskInstance.setExecutePath(executePath);
taskInstance.setLogPath(logPath);
saveTaskInstance(taskInstance);
}
/**
* update process instance
*
* @param processInstance processInstance
* @return update process instance result
*/
public int updateProcessInstance(ProcessInstance processInstance) {
return processInstanceMapper.updateById(processInstance);
}
/**
* change task state
*
* @param taskInstance taskInstance
* @param state state
* @param endTime endTime
* @param processId processId
* @param appIds appIds
* @param taskInstId taskInstId
* @param varPool varPool
*/
public void changeTaskState(TaskInstance taskInstance, ExecutionStatus state,
Date endTime,
int processId,
String appIds,
int taskInstId,
String varPool) {
taskInstance.setPid(processId);
taskInstance.setAppLink(appIds);
taskInstance.setState(state);
taskInstance.setEndTime(endTime);
taskInstance.setVarPool(varPool);
changeOutParam(taskInstance);
saveTaskInstance(taskInstance);
}
/**
* fill OUT params into task params so they can be shown on the task instance page
*
* @param taskInstance
*/
public void changeOutParam(TaskInstance taskInstance) {
if (StringUtils.isEmpty(taskInstance.getVarPool())) {
return;
}
List<Property> properties = JSONUtils.toList(taskInstance.getVarPool(), Property.class);
if (CollectionUtils.isEmpty(properties)) {
return;
}
// if the result has more than one line, just take the first one.
Map<String, Object> taskParams = JSONUtils.parseObject(taskInstance.getTaskParams(), new TypeReference<Map<String, Object>>() {});
Object localParams = taskParams.get(LOCAL_PARAMS);
if (localParams == null) {
return;
}
List<Property> allParam = JSONUtils.toList(JSONUtils.toJsonString(localParams), Property.class);
Map<String, String> outProperty = new HashMap<>();
for (Property info : properties) {
if (info.getDirect() == Direct.OUT) {
outProperty.put(info.getProp(), info.getValue());
}
}
for (Property info : allParam) {
if (info.getDirect() == Direct.OUT) {
String paramName = info.getProp();
info.setValue(outProperty.get(paramName));
}
}
taskParams.put(LOCAL_PARAMS, allParam);
taskInstance.setTaskParams(JSONUtils.toJsonString(taskParams));
}
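/*
 * Illustrative var-pool round trip: with
 *   varPool     = [{"prop":"count","value":"42","direct":"OUT"}]
 *   localParams = [{"prop":"count","value":"","direct":"OUT"}]
 * the OUT property "count" in localParams is filled with "42" and the merged
 * localParams are written back into taskParams for display.
 */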
/**
* convert integer list to string list
*
* @param intList intList
* @return string list
*/
public List<String> convertIntListToString(List<Integer> intList) {
if (intList == null) {
return new ArrayList<>();
}
List<String> result = new ArrayList<>(intList.size());
for (Integer intVar : intList) {
result.add(String.valueOf(intVar));
}
return result;
}
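/*
 * Example: convertIntListToString(Arrays.asList(1, 2, 3)) returns
 * ["1", "2", "3"]; a null input yields an empty list.
 */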
/**
* query schedule by id
*
* @param id id
* @return schedule
*/
public Schedule querySchedule(int id) {
return scheduleMapper.selectById(id);
}
/**
* query Schedule by processDefinitionCode
*
* @param processDefinitionCode processDefinitionCode
* @see Schedule
*/
public List<Schedule> queryReleaseSchedulerListByProcessDefinitionCode(long processDefinitionCode) {
return scheduleMapper.queryReleaseSchedulerListByProcessDefinitionCode(processDefinitionCode);
}
/**
* query need failover process instance
*
* @param host host
* @return process instance list
*/
public List<ProcessInstance> queryNeedFailoverProcessInstances(String host) {
return processInstanceMapper.queryByHostAndStatus(host, stateArray);
}
/**
* process need failover process instance
*
* @param processInstance processInstance
*/
@Transactional(rollbackFor = RuntimeException.class)
public void processNeedFailoverProcessInstances(ProcessInstance processInstance) {
//1 update processInstance host is null
processInstance.setHost(Constants.NULL);
processInstanceMapper.updateById(processInstance);
ProcessDefinition processDefinition = findProcessDefinition(processInstance.getProcessDefinitionCode(), processInstance.getProcessDefinitionVersion());
//2 insert into recover command
Command cmd = new Command();
cmd.setProcessDefinitionCode(processDefinition.getCode());
cmd.setProcessDefinitionVersion(processDefinition.getVersion());
cmd.setProcessInstanceId(processInstance.getId());
cmd.setCommandParam(String.format("{\"%s\":%d}", Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING, processInstance.getId()));
cmd.setExecutorId(processInstance.getExecutorId());
cmd.setCommandType(CommandType.RECOVER_TOLERANCE_FAULT_PROCESS);
createCommand(cmd);
}
/**
* query all need failover task instances by host
*
* @param host host
* @return task instance list
*/
public List<TaskInstance> queryNeedFailoverTaskInstances(String host) {
return taskInstanceMapper.queryByHostAndStatus(host,
stateArray);
}
/**
* find data source by id
*
* @param id id
* @return datasource
*/
public DataSource findDataSourceById(int id) {
return dataSourceMapper.selectById(id);
}
/**
* update process instance state by id
*
* @param processInstanceId processInstanceId
* @param executionStatus executionStatus
* @return update process result
*/
public int updateProcessInstanceState(Integer processInstanceId, ExecutionStatus executionStatus) {
ProcessInstance instance = processInstanceMapper.selectById(processInstanceId);
instance.setState(executionStatus);
return processInstanceMapper.updateById(instance);
}
/**
* find process instance by the task id
*
* @param taskId taskId
* @return process instance
*/
public ProcessInstance findProcessInstanceByTaskId(int taskId) {
TaskInstance taskInstance = taskInstanceMapper.selectById(taskId);
if (taskInstance != null) {
return processInstanceMapper.selectById(taskInstance.getProcessInstanceId());
}
return null;
}
/**
* find udf function list by id list string
*
* @param ids ids
* @return udf function list
*/
public List<UdfFunc> queryUdfFunListByIds(int[] ids) {
return udfFuncMapper.queryUdfByIdStr(ids, null);
}
/**
* find tenant code by resource name
*
* @param resName resource name
* @param resourceType resource type
* @return tenant code
*/
public String queryTenantCodeByResName(String resName, ResourceType resourceType) {
// normalize the name so the tenant code can still be queried for resources saved by older versions (stored without a leading "/")
String fullName = resName.startsWith("/") ? resName : String.format("/%s", resName);
List<Resource> resourceList = resourceMapper.queryResource(fullName, resourceType.ordinal());
if (CollectionUtils.isEmpty(resourceList)) {
return StringUtils.EMPTY;
}
int userId = resourceList.get(0).getUserId();
User user = userMapper.selectById(userId);
if (Objects.isNull(user)) {
return StringUtils.EMPTY;
}
Tenant tenant = tenantMapper.selectById(user.getTenantId());
if (Objects.isNull(tenant)) {
return StringUtils.EMPTY;
}
return tenant.getTenantCode();
}
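/*
 * Example (illustrative): both "udf/func.jar" and "/udf/func.jar" are
 * normalized to "/udf/func.jar" before the lookup, so resources stored by
 * older versions without the leading slash still resolve to a tenant code.
 */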
/**
* find schedule list by process define codes.
*
* @param codes codes
* @return schedule list
*/
public List<Schedule> selectAllByProcessDefineCode(long[] codes) {
return scheduleMapper.selectAllByProcessDefineArray(codes);
}
/**
* find last scheduler process instance in the date interval
*
* @param definitionCode definitionCode
* @param dateInterval dateInterval
* @return process instance
*/
public ProcessInstance findLastSchedulerProcessInterval(Long definitionCode, DateInterval dateInterval) {
return processInstanceMapper.queryLastSchedulerProcess(definitionCode,
dateInterval.getStartTime(),
dateInterval.getEndTime());
}
/**
* find last manual process instance interval
*
* @param definitionCode process definition code
* @param dateInterval dateInterval
* @return process instance
*/
public ProcessInstance findLastManualProcessInterval(Long definitionCode, DateInterval dateInterval) {
return processInstanceMapper.queryLastManualProcess(definitionCode,
dateInterval.getStartTime(),
dateInterval.getEndTime());
}
/**
* find last running process instance
*
* @param definitionCode process definition code
* @param startTime start time
* @param endTime end time
* @return process instance
*/
public ProcessInstance findLastRunningProcess(Long definitionCode, Date startTime, Date endTime) {
return processInstanceMapper.queryLastRunningProcess(definitionCode,
startTime,
endTime,
stateArray);
}
/**
* query user queue by process instance id
*
* @param processInstanceId processInstanceId
* @return queue
*/
public String queryUserQueueByProcessInstanceId(int processInstanceId) {
String queue = "";
ProcessInstance processInstance = processInstanceMapper.selectById(processInstanceId);
if (processInstance == null) {
return queue;
}
User executor = userMapper.selectById(processInstance.getExecutorId());
if (executor != null) {
queue = executor.getQueue();
}
return queue;
}
/**
* query project name and user name by processInstanceId.
*
* @param processInstanceId processInstanceId
* @return projectName and userName
*/
public ProjectUser queryProjectWithUserByProcessInstanceId(int processInstanceId) {
return projectMapper.queryProjectWithUserByProcessInstanceId(processInstanceId);
}
/**
* get task worker group
*
* @param taskInstance taskInstance
* @return workerGroupId
*/
public String getTaskWorkerGroup(TaskInstance taskInstance) {
String workerGroup = taskInstance.getWorkerGroup();
if (StringUtils.isNotBlank(workerGroup)) {
return workerGroup;
}
int processInstanceId = taskInstance.getProcessInstanceId();
ProcessInstance processInstance = findProcessInstanceById(processInstanceId);
if (processInstance != null) {
return processInstance.getWorkerGroup();
}
logger.info("task : {} will use default worker group", taskInstance.getId());
return Constants.DEFAULT_WORKER_GROUP;
}
/**
* get have perm project list
*
* @param userId userId
* @return project list
*/
public List<Project> getProjectListHavePerm(int userId) {
List<Project> createProjects = projectMapper.queryProjectCreatedByUser(userId);
List<Project> authedProjects = projectMapper.queryAuthedProjectListByUserId(userId);
if (createProjects == null) {
createProjects = new ArrayList<>();
}
if (authedProjects != null) {
createProjects.addAll(authedProjects);
}
return createProjects;
}
/**
* list unauthorized udf function
*
* @param userId user id
* @param needChecks data source id array
* @return unauthorized udf function list
*/
public <T> List<T> listUnauthorized(int userId, T[] needChecks, AuthorizationType authorizationType) {
List<T> resultList = new ArrayList<>();
if (Objects.nonNull(needChecks) && needChecks.length > 0) {
Set<T> originResSet = new HashSet<>(Arrays.asList(needChecks));
switch (authorizationType) {
case RESOURCE_FILE_ID:
case UDF_FILE:
List<Resource> ownUdfResources = resourceMapper.listAuthorizedResourceById(userId, needChecks);
addAuthorizedResources(ownUdfResources, userId);
Set<Integer> authorizedResourceFiles = ownUdfResources.stream().map(Resource::getId).collect(toSet());
originResSet.removeAll(authorizedResourceFiles);
break;
case RESOURCE_FILE_NAME:
List<Resource> ownResources = resourceMapper.listAuthorizedResource(userId, needChecks);
addAuthorizedResources(ownResources, userId);
Set<String> authorizedResources = ownResources.stream().map(Resource::getFullName).collect(toSet());
originResSet.removeAll(authorizedResources);
break;
case DATASOURCE:
Set<Integer> authorizedDatasources = dataSourceMapper.listAuthorizedDataSource(userId, needChecks).stream().map(DataSource::getId).collect(toSet());
originResSet.removeAll(authorizedDatasources);
break;
case UDF:
Set<Integer> authorizedUdfs = udfFuncMapper.listAuthorizedUdfFunc(userId, needChecks).stream().map(UdfFunc::getId).collect(toSet());
originResSet.removeAll(authorizedUdfs);
break;
default:
break;
}
resultList.addAll(originResSet);
}
return resultList;
}
/**
* get user by user id
*
* @param userId user id
* @return User
*/
public User getUserById(int userId) {
return userMapper.selectById(userId);
}
/**
* get resource by resource id
*
* @param resourceId resource id
* @return Resource
*/
public Resource getResourceById(int resourceId) {
return resourceMapper.selectById(resourceId);
}
/**
* list resources by ids
*
* @param resIds resIds
* @return resource list
*/
public List<Resource> listResourceByIds(Integer[] resIds) {
return resourceMapper.listResourceByIds(resIds);
}
/**
* format task app id in task instance
*/
public String formatTaskAppId(TaskInstance taskInstance) {
ProcessInstance processInstance = findProcessInstanceById(taskInstance.getProcessInstanceId());
if (processInstance == null) {
return "";
}
ProcessDefinition definition = findProcessDefinition(processInstance.getProcessDefinitionCode(), processInstance.getProcessDefinitionVersion());
if (definition == null) {
return "";
}
return String.format("%s_%s_%s", definition.getId(), processInstance.getId(), taskInstance.getId());
}
/**
* switch process definition version to process definition log version
*/
public int switchVersion(ProcessDefinition processDefinition, ProcessDefinitionLog processDefinitionLog) {
if (null == processDefinition || null == processDefinitionLog) {
return Constants.DEFINITION_FAILURE;
}
processDefinitionLog.setId(processDefinition.getId());
processDefinitionLog.setReleaseState(ReleaseState.OFFLINE);
processDefinitionLog.setFlag(Flag.YES);
int result = processDefineMapper.updateById(processDefinitionLog);
if (result > 0) {
result = switchProcessTaskRelationVersion(processDefinitionLog);
if (result <= 0) {
return Constants.DEFINITION_FAILURE;
}
}
return result;
}
public int switchProcessTaskRelationVersion(ProcessDefinition processDefinition) {
List<ProcessTaskRelation> processTaskRelationList = processTaskRelationMapper.queryByProcessCode(processDefinition.getProjectCode(), processDefinition.getCode());
if (!processTaskRelationList.isEmpty()) {
processTaskRelationMapper.deleteByCode(processDefinition.getProjectCode(), processDefinition.getCode());
}
List<ProcessTaskRelationLog> processTaskRelationLogList = processTaskRelationLogMapper.queryByProcessCodeAndVersion(processDefinition.getCode(), processDefinition.getVersion());
return processTaskRelationMapper.batchInsert(processTaskRelationLogList);
}
/**
* get resource ids
*
* @param taskDefinition taskDefinition
* @return resource ids
*/
public String getResourceIds(TaskDefinition taskDefinition) {
Set<Integer> resourceIds = null;
AbstractParameters params = TaskParametersUtils.getParameters(taskDefinition.getTaskType(), taskDefinition.getTaskParams());
if (params != null && CollectionUtils.isNotEmpty(params.getResourceFilesList())) {
resourceIds = params.getResourceFilesList().
stream()
.filter(t -> t.getId() != 0)
.map(ResourceInfo::getId)
.collect(Collectors.toSet());
}
if (CollectionUtils.isEmpty(resourceIds)) {
return StringUtils.EMPTY;
}
return StringUtils.join(resourceIds, ",");
}
public int saveTaskDefine(User operator, long projectCode, List<TaskDefinitionLog> taskDefinitionLogs) {
Date now = new Date();
List<TaskDefinitionLog> newTaskDefinitionLogs = new ArrayList<>();
List<TaskDefinitionLog> updateTaskDefinitionLogs = new ArrayList<>();
for (TaskDefinitionLog taskDefinitionLog : taskDefinitionLogs) {
taskDefinitionLog.setProjectCode(projectCode);
taskDefinitionLog.setUpdateTime(now);
taskDefinitionLog.setOperateTime(now);
taskDefinitionLog.setOperator(operator.getId());
taskDefinitionLog.setResourceIds(getResourceIds(taskDefinitionLog));
if (taskDefinitionLog.getCode() > 0 && taskDefinitionLog.getVersion() > 0) {
TaskDefinitionLog definitionCodeAndVersion = taskDefinitionLogMapper
.queryByDefinitionCodeAndVersion(taskDefinitionLog.getCode(), taskDefinitionLog.getVersion());
if (definitionCodeAndVersion != null) {
if (!taskDefinitionLog.equals(definitionCodeAndVersion)) {
taskDefinitionLog.setUserId(definitionCodeAndVersion.getUserId());
Integer version = taskDefinitionLogMapper.queryMaxVersionForDefinition(taskDefinitionLog.getCode());
taskDefinitionLog.setVersion(version + 1);
taskDefinitionLog.setCreateTime(definitionCodeAndVersion.getCreateTime());
updateTaskDefinitionLogs.add(taskDefinitionLog);
}
continue;
}
}
taskDefinitionLog.setUserId(operator.getId());
taskDefinitionLog.setVersion(Constants.VERSION_FIRST);
taskDefinitionLog.setCreateTime(now);
if (taskDefinitionLog.getCode() == 0) {
try {
taskDefinitionLog.setCode(SnowFlakeUtils.getInstance().nextId());
} catch (SnowFlakeException e) {
logger.error("Task code get error, ", e);
return Constants.DEFINITION_FAILURE;
}
}
newTaskDefinitionLogs.add(taskDefinitionLog);
}
int insertResult = 0;
int updateResult = 0;
for (TaskDefinitionLog taskDefinitionToUpdate : updateTaskDefinitionLogs) {
TaskDefinition task = taskDefinitionMapper.queryByCode(taskDefinitionToUpdate.getCode());
if (task == null) {
newTaskDefinitionLogs.add(taskDefinitionToUpdate);
} else {
insertResult += taskDefinitionLogMapper.insert(taskDefinitionToUpdate);
taskDefinitionToUpdate.setId(task.getId());
updateResult += taskDefinitionMapper.updateById(taskDefinitionToUpdate);
}
}
if (!newTaskDefinitionLogs.isEmpty()) {
updateResult += taskDefinitionMapper.batchInsert(newTaskDefinitionLogs);
insertResult += taskDefinitionLogMapper.batchInsert(newTaskDefinitionLogs);
}
return (insertResult & updateResult) > 0 ? 1 : Constants.EXIT_CODE_SUCCESS;
}
/**
* save processDefinition (including create or update processDefinition)
*/
public int saveProcessDefine(User operator, ProcessDefinition processDefinition, Boolean isFromProcessDefine) {
ProcessDefinitionLog processDefinitionLog = new ProcessDefinitionLog(processDefinition);
Integer version = processDefineLogMapper.queryMaxVersionForDefinition(processDefinition.getCode());
int insertVersion = version == null || version == 0 ? Constants.VERSION_FIRST : version + 1;
processDefinitionLog.setVersion(insertVersion);
processDefinitionLog.setReleaseState(isFromProcessDefine ? ReleaseState.OFFLINE : ReleaseState.ONLINE);
processDefinitionLog.setOperator(operator.getId());
processDefinitionLog.setOperateTime(processDefinition.getUpdateTime());
int insertLog = processDefineLogMapper.insert(processDefinitionLog);
int result;
if (0 == processDefinition.getId()) {
result = processDefineMapper.insert(processDefinitionLog);
} else {
processDefinitionLog.setId(processDefinition.getId());
result = processDefineMapper.updateById(processDefinitionLog);
}
return (insertLog & result) > 0 ? insertVersion : 0;
}
/**
* save task relations
*/
public int saveTaskRelation(User operator, long projectCode, long processDefinitionCode, int processDefinitionVersion,
List<ProcessTaskRelationLog> taskRelationList, List<TaskDefinitionLog> taskDefinitionLogs) {
Map<Long, TaskDefinitionLog> taskDefinitionLogMap = null;
if (CollectionUtils.isNotEmpty(taskDefinitionLogs)) {
taskDefinitionLogMap = taskDefinitionLogs.stream()
.collect(Collectors.toMap(TaskDefinition::getCode, taskDefinitionLog -> taskDefinitionLog));
}
Date now = new Date();
for (ProcessTaskRelationLog processTaskRelationLog : taskRelationList) {
processTaskRelationLog.setProjectCode(projectCode);
processTaskRelationLog.setProcessDefinitionCode(processDefinitionCode);
processTaskRelationLog.setProcessDefinitionVersion(processDefinitionVersion);
if (taskDefinitionLogMap != null) {
TaskDefinitionLog taskDefinitionLog = taskDefinitionLogMap.get(processTaskRelationLog.getPreTaskCode());
if (taskDefinitionLog != null) {
processTaskRelationLog.setPreTaskVersion(taskDefinitionLog.getVersion());
}
processTaskRelationLog.setPostTaskVersion(taskDefinitionLogMap.get(processTaskRelationLog.getPostTaskCode()).getVersion());
}
processTaskRelationLog.setCreateTime(now);
processTaskRelationLog.setUpdateTime(now);
processTaskRelationLog.setOperator(operator.getId());
processTaskRelationLog.setOperateTime(now);
}
List<ProcessTaskRelation> processTaskRelationList = processTaskRelationMapper.queryByProcessCode(projectCode, processDefinitionCode);
if (!processTaskRelationList.isEmpty()) {
Set<Integer> processTaskRelationSet = processTaskRelationList.stream().map(ProcessTaskRelation::hashCode).collect(toSet());
Set<Integer> taskRelationSet = taskRelationList.stream().map(ProcessTaskRelationLog::hashCode).collect(toSet());
boolean result = CollectionUtils.isEqualCollection(processTaskRelationSet, taskRelationSet);
if (result) {
return Constants.EXIT_CODE_SUCCESS;
}
processTaskRelationMapper.deleteByCode(projectCode, processDefinitionCode);
}
int result = processTaskRelationMapper.batchInsert(taskRelationList);
int resultLog = processTaskRelationLogMapper.batchInsert(taskRelationList);
return (result & resultLog) > 0 ? Constants.EXIT_CODE_SUCCESS : Constants.EXIT_CODE_FAILURE;
}
public boolean isTaskOnline(long taskCode) {
List<ProcessTaskRelation> processTaskRelationList = processTaskRelationMapper.queryByTaskCode(taskCode);
if (!processTaskRelationList.isEmpty()) {
Set<Long> processDefinitionCodes = processTaskRelationList
.stream()
.map(ProcessTaskRelation::getProcessDefinitionCode)
.collect(Collectors.toSet());
List<ProcessDefinition> processDefinitionList = processDefineMapper.queryByCodes(processDefinitionCodes);
// check process definition is already online
for (ProcessDefinition processDefinition : processDefinitionList) {
if (processDefinition.getReleaseState() == ReleaseState.ONLINE) {
return true;
}
}
}
return false;
}
/**
* Generate the DAG Graph based on the process definition id
*
* @param processDefinition process definition
* @return dag graph
*/
public DAG<String, TaskNode, TaskNodeRelation> genDagGraph(ProcessDefinition processDefinition) {
List<ProcessTaskRelation> processTaskRelations = processTaskRelationMapper.queryByProcessCode(processDefinition.getProjectCode(), processDefinition.getCode());
List<TaskNode> taskNodeList = transformTask(processTaskRelations, Lists.newArrayList());
ProcessDag processDag = DagHelper.getProcessDag(taskNodeList, new ArrayList<>(processTaskRelations));
// Generate concrete Dag to be executed
return DagHelper.buildDagGraph(processDag);
}
/**
* generate DagData
*/
public DagData genDagData(ProcessDefinition processDefinition) {
List<ProcessTaskRelation> processTaskRelations = processTaskRelationMapper.queryByProcessCode(processDefinition.getProjectCode(), processDefinition.getCode());
List<TaskDefinitionLog> taskDefinitionLogList = genTaskDefineList(processTaskRelations);
List<TaskDefinition> taskDefinitions = taskDefinitionLogList.stream()
.map(taskDefinitionLog -> JSONUtils.parseObject(JSONUtils.toJsonString(taskDefinitionLog), TaskDefinition.class))
.collect(Collectors.toList());
return new DagData(processDefinition, processTaskRelations, taskDefinitions);
}
public List<TaskDefinitionLog> genTaskDefineList(List<ProcessTaskRelation> processTaskRelations) {
Set<TaskDefinition> taskDefinitionSet = new HashSet<>();
for (ProcessTaskRelation processTaskRelation : processTaskRelations) {
if (processTaskRelation.getPreTaskCode() > 0) {
taskDefinitionSet.add(new TaskDefinition(processTaskRelation.getPreTaskCode(), processTaskRelation.getPreTaskVersion()));
}
if (processTaskRelation.getPostTaskCode() > 0) {
taskDefinitionSet.add(new TaskDefinition(processTaskRelation.getPostTaskCode(), processTaskRelation.getPostTaskVersion()));
}
}
return taskDefinitionLogMapper.queryByTaskDefinitions(taskDefinitionSet);
}
/**
* find task definition by code and version
*/
public TaskDefinition findTaskDefinition(long taskCode, int taskDefinitionVersion) {
return taskDefinitionLogMapper.queryByDefinitionCodeAndVersion(taskCode, taskDefinitionVersion);
}
/**
* find process task relation list by projectCode and processDefinitionCode
*/
public List<ProcessTaskRelation> findRelationByCode(long projectCode, long processDefinitionCode) {
return processTaskRelationMapper.queryByProcessCode(projectCode, processDefinitionCode);
}
/**
* add authorized resources
*
* @param ownResources own resources
* @param userId userId
*/
private void addAuthorizedResources(List<Resource> ownResources, int userId) {
List<Integer> relationResourceIds = resourceUserMapper.queryResourcesIdListByUserIdAndPerm(userId, 7);
List<Resource> relationResources = CollectionUtils.isNotEmpty(relationResourceIds) ? resourceMapper.queryResourceListById(relationResourceIds) : new ArrayList<>();
ownResources.addAll(relationResources);
}
/**
* Used temporarily until TaskNode is refactored
*/
public List<TaskNode> transformTask(List<ProcessTaskRelation> taskRelationList, List<TaskDefinitionLog> taskDefinitionLogs) {
Map<Long, List<Long>> taskCodeMap = new HashMap<>();
for (ProcessTaskRelation processTaskRelation : taskRelationList) {
taskCodeMap.compute(processTaskRelation.getPostTaskCode(), (k, v) -> {
if (v == null) {
v = new ArrayList<>();
}
if (processTaskRelation.getPreTaskCode() != 0L) {
v.add(processTaskRelation.getPreTaskCode());
}
return v;
});
}
if (CollectionUtils.isEmpty(taskDefinitionLogs)) {
taskDefinitionLogs = genTaskDefineList(taskRelationList);
}
Map<Long, TaskDefinitionLog> taskDefinitionLogMap = taskDefinitionLogs.stream()
.collect(Collectors.toMap(TaskDefinitionLog::getCode, taskDefinitionLog -> taskDefinitionLog));
List<TaskNode> taskNodeList = new ArrayList<>();
for (Entry<Long, List<Long>> code : taskCodeMap.entrySet()) {
TaskDefinitionLog taskDefinitionLog = taskDefinitionLogMap.get(code.getKey());
if (taskDefinitionLog != null) {
TaskNode taskNode = new TaskNode();
taskNode.setCode(taskDefinitionLog.getCode());
taskNode.setVersion(taskDefinitionLog.getVersion());
taskNode.setName(taskDefinitionLog.getName());
taskNode.setDesc(taskDefinitionLog.getDescription());
taskNode.setType(taskDefinitionLog.getTaskType().toUpperCase());
taskNode.setRunFlag(taskDefinitionLog.getFlag() == Flag.YES ? Constants.FLOWNODE_RUN_FLAG_NORMAL : Constants.FLOWNODE_RUN_FLAG_FORBIDDEN);
taskNode.setMaxRetryTimes(taskDefinitionLog.getFailRetryTimes());
taskNode.setRetryInterval(taskDefinitionLog.getFailRetryInterval());
Map<String, Object> taskParamsMap = taskNode.taskParamsToJsonObj(taskDefinitionLog.getTaskParams());
taskNode.setConditionResult(JSONUtils.toJsonString(taskParamsMap.get(Constants.CONDITION_RESULT)));
taskNode.setSwitchResult(JSONUtils.toJsonString(taskParamsMap.get(Constants.SWITCH_RESULT)));
taskNode.setDependence(JSONUtils.toJsonString(taskParamsMap.get(Constants.DEPENDENCE)));
taskParamsMap.remove(Constants.CONDITION_RESULT);
taskParamsMap.remove(Constants.DEPENDENCE);
taskNode.setParams(JSONUtils.toJsonString(taskParamsMap));
taskNode.setTaskInstancePriority(taskDefinitionLog.getTaskPriority());
taskNode.setWorkerGroup(taskDefinitionLog.getWorkerGroup());
taskNode.setEnvironmentCode(taskDefinitionLog.getEnvironmentCode());
taskNode.setTimeout(JSONUtils.toJsonString(new TaskTimeoutParameter(taskDefinitionLog.getTimeoutFlag() == TimeoutFlag.OPEN,
taskDefinitionLog.getTimeoutNotifyStrategy(),
taskDefinitionLog.getTimeout())));
taskNode.setDelayTime(taskDefinitionLog.getDelayTime());
taskNode.setPreTasks(JSONUtils.toJsonString(code.getValue().stream().map(taskDefinitionLogMap::get).map(TaskDefinition::getCode).collect(Collectors.toList())));
taskNodeList.add(taskNode);
}
}
return taskNodeList;
}
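/*
 * Illustrative mapping: the relations (pre=0, post=A) and (pre=A, post=B)
 * produce taskCodeMap = {A: [], B: [A]}, so node A is built with no
 * predecessors while node B lists A in its preTasks.
 */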
public Map<ProcessInstance, TaskInstance> notifyProcessList(int processId, int taskId) {
HashMap<ProcessInstance, TaskInstance> processTaskMap = new HashMap<>();
//find sub tasks
ProcessInstanceMap processInstanceMap = processInstanceMapMapper.queryBySubProcessId(processId);
if (processInstanceMap == null) {
return processTaskMap;
}
ProcessInstance fatherProcess = this.findProcessInstanceById(processInstanceMap.getParentProcessInstanceId());
TaskInstance fatherTask = this.findTaskInstanceById(processInstanceMap.getParentTaskInstanceId());
if (fatherProcess != null) {
processTaskMap.put(fatherProcess, fatherTask);
}
return processTaskMap;
}
public ProcessInstance loadNextProcess4Serial(long code, int state) {
return this.processInstanceMapper.loadNextProcess4Serial(code,state);
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,882 | [Bug] [Master] process cannot finish and its status is always running. | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
branch: 2.0
process cannot finish and its status is always running.
![image](https://user-images.githubusercontent.com/29528966/142152624-f312bc37-f42e-432c-b861-8bfe61fa82c8.png)
### What you expected to happen
process end normally.
### How to reproduce
1. run a process with a sub process (only a shell task in it)
2. sometimes (about 1 in 20 runs) the process status stays running forever.
### Anything else
_No response_
### Version
2.0.0-alpha
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6882 | https://github.com/apache/dolphinscheduler/pull/6886 | f564687a8988c8ccdb2f138b1b994cce9eb914e9 | 653eae24195957b01d1a911aada020372d1742e6 | "2021-11-17T07:22:20Z" | java | "2021-11-17T09:39:22Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/EventExecuteService.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.master.runner;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.enums.StateEvent;
import org.apache.dolphinscheduler.common.enums.StateEventType;
import org.apache.dolphinscheduler.common.thread.Stopper;
import org.apache.dolphinscheduler.common.thread.ThreadUtils;
import org.apache.dolphinscheduler.common.utils.NetUtils;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.remote.command.StateEventChangeCommand;
import org.apache.dolphinscheduler.remote.processor.StateEventCallbackService;
import org.apache.dolphinscheduler.server.master.cache.ProcessInstanceExecCacheManager;
import org.apache.dolphinscheduler.server.master.config.MasterConfig;
import org.apache.dolphinscheduler.service.bean.SpringApplicationContext;
import org.apache.dolphinscheduler.service.process.ProcessService;
import org.apache.commons.lang.StringUtils;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.ListeningExecutorService;
import com.google.common.util.concurrent.MoreExecutors;
@Service
public class EventExecuteService extends Thread {
private static final Logger logger = LoggerFactory.getLogger(EventExecuteService.class);
/**
* dolphinscheduler database interface
*/
@Autowired
private ProcessService processService;
@Autowired
private MasterConfig masterConfig;
private ExecutorService eventExecService;
/**
* state event callback service
*/
private StateEventCallbackService stateEventCallbackService;
@Autowired
private ProcessInstanceExecCacheManager processInstanceExecCacheManager;
private ConcurrentHashMap<String, WorkflowExecuteThread> eventHandlerMap = new ConcurrentHashMap<>();
ListeningExecutorService listeningExecutorService;
public void init() {
eventExecService = ThreadUtils.newDaemonFixedThreadExecutor("MasterEventExecution", masterConfig.getExecThreads());
listeningExecutorService = MoreExecutors.listeningDecorator(eventExecService);
this.stateEventCallbackService = SpringApplicationContext.getBean(StateEventCallbackService.class);
}
@Override
public synchronized void start() {
super.setName("EventServiceStarted");
super.start();
}
public void close() {
eventExecService.shutdown();
logger.info("event service stopped...");
}
@Override
public void run() {
logger.info("Event service started");
while (Stopper.isRunning()) {
try {
eventHandler();
TimeUnit.MILLISECONDS.sleep(Constants.SLEEP_TIME_MILLIS);
} catch (Exception e) {
logger.error("Event service thread error", e);
}
}
}
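/**
 * scan all cached workflow execute threads and submit those that still have
 * pending events; the eventHandlerMap entry prevents submitting the same
 * workflow to the thread pool twice.
 */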
private void eventHandler() {
for (WorkflowExecuteThread workflowExecuteThread : this.processInstanceExecCacheManager.getAll()) {
if (workflowExecuteThread.eventSize() == 0
|| StringUtils.isEmpty(workflowExecuteThread.getKey())
|| eventHandlerMap.containsKey(workflowExecuteThread.getKey())) {
continue;
}
int processInstanceId = workflowExecuteThread.getProcessInstance().getId();
logger.info("handle process instance : {} , events count:{}",
processInstanceId,
workflowExecuteThread.eventSize());
logger.info("already exists handler process size:{}", this.eventHandlerMap.size());
eventHandlerMap.put(workflowExecuteThread.getKey(), workflowExecuteThread);
ListenableFuture<?> future = this.listeningExecutorService.submit(workflowExecuteThread);
FutureCallback<Object> futureCallback = new FutureCallback<Object>() {
@Override
public void onSuccess(Object o) {
if (workflowExecuteThread.workFlowFinish()) {
processInstanceExecCacheManager.removeByProcessInstanceId(processInstanceId);
notifyProcessChanged();
logger.info("process instance {} finished.", processInstanceId);
}
if (workflowExecuteThread.getProcessInstance().getId() != processInstanceId) {
processInstanceExecCacheManager.removeByProcessInstanceId(processInstanceId);
processInstanceExecCacheManager.cache(workflowExecuteThread.getProcessInstance().getId(), workflowExecuteThread);
}
eventHandlerMap.remove(workflowExecuteThread.getKey());
}
private void notifyProcessChanged() {
Map<ProcessInstance, TaskInstance> fatherMaps
= processService.notifyProcessList(processInstanceId, 0);
for (ProcessInstance processInstance : fatherMaps.keySet()) {
String address = NetUtils.getAddr(masterConfig.getListenPort());
if (processInstance.getHost().equalsIgnoreCase(address)) {
notifyMyself(processInstance, fatherMaps.get(processInstance));
} else {
notifyProcess(processInstance, fatherMaps.get(processInstance));
}
}
}
private void notifyMyself(ProcessInstance processInstance, TaskInstance taskInstance) {
logger.info("notify process {} task {} state change", processInstance.getId(), taskInstance.getId());
if (!processInstanceExecCacheManager.contains(processInstance.getId())) {
return;
}
WorkflowExecuteThread workflowExecuteThreadNotify = processInstanceExecCacheManager.getByProcessInstanceId(processInstance.getId());
StateEvent stateEvent = new StateEvent();
stateEvent.setTaskInstanceId(taskInstance.getId());
stateEvent.setType(StateEventType.TASK_STATE_CHANGE);
stateEvent.setProcessInstanceId(processInstance.getId());
stateEvent.setExecutionStatus(ExecutionStatus.RUNNING_EXECUTION);
workflowExecuteThreadNotify.addStateEvent(stateEvent);
}
private void notifyProcess(ProcessInstance processInstance, TaskInstance taskInstance) {
String host = processInstance.getHost();
if (StringUtils.isEmpty(host)) {
logger.info("process {} host is empty, cannot notify task {} now.",
processInstance.getId(), taskInstance.getId());
return;
}
String address = host.split(":")[0];
int port = Integer.parseInt(host.split(":")[1]);
logger.info("notify process {} task {} state change, host:{}",
processInstance.getId(), taskInstance.getId(), host);
StateEventChangeCommand stateEventChangeCommand = new StateEventChangeCommand(
processInstanceId, 0, workflowExecuteThread.getProcessInstance().getState(), processInstance.getId(), taskInstance.getId()
);
stateEventCallbackService.sendResult(address, port, stateEventChangeCommand.convert2Command());
}
@Override
public void onFailure(Throwable throwable) {
}
};
Futures.addCallback(future, futureCallback, this.listeningExecutorService);
}
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Since we have upgraded the MySQL connector to 8.0.15 in #6484, it's necessary to use `com.mysql.cj.jdbc.Driver`, because `com.mysql.jdbc.Driver` has been deprecated.
Also note that we may need to modify the current connection parameters, e.g. `useSSL=false`.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
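For reference, a minimal connection sketch against Connector/J 8.x; the host, credentials and database are placeholders, not values from this repository:
```java
import java.sql.Connection;
import java.sql.DriverManager;

public class CjDriverSmokeTest {
    public static void main(String[] args) throws Exception {
        // Connector/J 8.x driver class; com.mysql.jdbc.Driver still exists there,
        // but only as a deprecated shim that logs a warning and delegates to this one.
        Class.forName("com.mysql.cj.jdbc.Driver");
        // useSSL and serverTimezone are the parameters most likely to need
        // attention after the upgrade.
        String url = "jdbc:mysql://localhost:3306/dolphinscheduler"
                + "?useUnicode=true&characterEncoding=UTF-8&useSSL=false&serverTimezone=UTC";
        try (Connection conn = DriverManager.getConnection(url, "root", "root")) {
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}
```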
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | ambari_plugin/common-services/DOLPHIN/1.3.0/package/scripts/params.py | """
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import sys
from resource_management import *
from resource_management.core.logger import Logger
from resource_management.libraries.functions import default
Logger.initialize_logger()
reload(sys)
sys.setdefaultencoding('utf-8')
# server configurations
config = Script.get_config()
# conf_dir = "/etc/"
dolphin_home = "/opt/soft/dolphinscheduler"
dolphin_conf_dir = dolphin_home + "/conf"
dolphin_log_dir = dolphin_home + "/logs"
dolphin_bin_dir = dolphin_home + "/bin"
dolphin_lib_jars = dolphin_home + "/lib/*"
dolphin_pidfile_dir = "/opt/soft/run/dolphinscheduler"
rmHosts = default("/clusterHostInfo/rm_host", [])
# dolphin-env
dolphin_env_map = {}
dolphin_env_map.update(config['configurations']['dolphin-env'])
# which user to install and admin dolphin scheduler
dolphin_user = dolphin_env_map['dolphin.user']
dolphin_group = dolphin_env_map['dolphin.group']
# .dolphinscheduler_env.sh
dolphin_env_path = dolphin_conf_dir + '/env/dolphinscheduler_env.sh'
dolphin_env_content = dolphin_env_map['dolphinscheduler-env-content']
# database config
dolphin_database_config = {}
dolphin_database_config['dolphin_database_type'] = dolphin_env_map['dolphin.database.type']
dolphin_database_config['dolphin_database_username'] = dolphin_env_map['dolphin.database.username']
dolphin_database_config['dolphin_database_password'] = dolphin_env_map['dolphin.database.password']
if 'mysql' == dolphin_database_config['dolphin_database_type']:
dolphin_database_config['dolphin_database_driver'] = 'com.mysql.jdbc.Driver'
dolphin_database_config['driverDelegateClass'] = 'org.quartz.impl.jdbcjobstore.StdJDBCDelegate'
dolphin_database_config['dolphin_database_url'] = 'jdbc:mysql://' + dolphin_env_map['dolphin.database.host'] \
+ ':' + dolphin_env_map['dolphin.database.port'] \
+ '/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8'
else:
dolphin_database_config['dolphin_database_driver'] = 'org.postgresql.Driver'
dolphin_database_config['driverDelegateClass'] = 'org.quartz.impl.jdbcjobstore.PostgreSQLDelegate'
dolphin_database_config['dolphin_database_url'] = 'jdbc:postgresql://' + dolphin_env_map['dolphin.database.host'] \
+ ':' + dolphin_env_map['dolphin.database.port'] \
+ '/dolphinscheduler'
# application-alert.properties
dolphin_alert_map = {}
wechat_push_url = 'https://qyapi.weixin.qq.com/cgi-bin/message/send?access_token=$token'
wechat_token_url = 'https://qyapi.weixin.qq.com/cgi-bin/gettoken?corpid=$corpId&corpsecret=$secret'
wechat_team_send_msg = '{\"toparty\":\"{toParty}\",\"agentid\":\"{agentId}\",\"msgtype\":\"text\",\"text\":{\"content\":\"{msg}\"},\"safe\":\"0\"}'
wechat_user_send_msg = '{\"touser\":\"{toUser}\",\"agentid\":\"{agentId}\",\"msgtype\":\"markdown\",\"markdown\":{\"content\":\"{msg}\"}}'
dolphin_alert_config_map = config['configurations']['dolphin-alert']
if dolphin_alert_config_map['enterprise.wechat.enable']:
dolphin_alert_map['enterprise.wechat.push.ur'] = wechat_push_url
dolphin_alert_map['enterprise.wechat.token.url'] = wechat_token_url
dolphin_alert_map['enterprise.wechat.team.send.msg'] = wechat_team_send_msg
dolphin_alert_map['enterprise.wechat.user.send.msg'] = wechat_user_send_msg
dolphin_alert_map.update(dolphin_alert_config_map)
# application-api.properties
dolphin_app_api_map = {}
dolphin_app_api_map.update(config['configurations']['dolphin-application-api'])
# common.properties
dolphin_common_map = {}
if 'yarn-site' in config['configurations'] and \
'yarn.resourcemanager.webapp.address' in config['configurations']['yarn-site']:
yarn_resourcemanager_webapp_address = config['configurations']['yarn-site']['yarn.resourcemanager.webapp.address']
yarn_application_status_address = 'http://' + yarn_resourcemanager_webapp_address + '/ws/v1/cluster/apps/%s'
dolphin_common_map['yarn.application.status.address'] = yarn_application_status_address
rmHosts = default("/clusterHostInfo/rm_host", [])
if len(rmHosts) > 1:
dolphin_common_map['yarn.resourcemanager.ha.rm.ids'] = ','.join(rmHosts)
else:
dolphin_common_map['yarn.resourcemanager.ha.rm.ids'] = ''
dolphin_common_map_tmp = config['configurations']['dolphin-common']
data_basedir_path = dolphin_common_map_tmp['data.basedir.path']
dolphin_common_map['dolphinscheduler.env.path'] = dolphin_env_path
dolphin_common_map.update(config['configurations']['dolphin-common'])
# datasource.properties
dolphin_datasource_map = {}
dolphin_datasource_map['spring.datasource.type'] = 'com.alibaba.druid.pool.DruidDataSource'
dolphin_datasource_map['spring.datasource.driver-class-name'] = dolphin_database_config['dolphin_database_driver']
dolphin_datasource_map['spring.datasource.url'] = dolphin_database_config['dolphin_database_url']
dolphin_datasource_map['spring.datasource.username'] = dolphin_database_config['dolphin_database_username']
dolphin_datasource_map['spring.datasource.password'] = dolphin_database_config['dolphin_database_password']
dolphin_datasource_map.update(config['configurations']['dolphin-datasource'])
# master.properties
dolphin_master_map = config['configurations']['dolphin-master']
# quartz.properties
dolphin_quartz_map = {}
dolphin_quartz_map['org.quartz.jobStore.driverDelegateClass'] = dolphin_database_config['driverDelegateClass']
dolphin_quartz_map.update(config['configurations']['dolphin-quartz'])
# worker.properties
dolphin_worker_map = config['configurations']['dolphin-worker']
# zookeeper.properties
dolphin_zookeeper_map = {}
zookeeperHosts = default("/clusterHostInfo/zookeeper_hosts", [])
if len(zookeeperHosts) > 0 and "clientPort" in config['configurations']['zoo.cfg']:
clientPort = config['configurations']['zoo.cfg']['clientPort']
zookeeperPort = ":" + clientPort + ","
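# the join yields "host1:port,host2:port,...,hostN"; the last host gets its ":port" appended on the next line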
dolphin_zookeeper_map['zookeeper.quorum'] = zookeeperPort.join(zookeeperHosts) + ":" + clientPort
dolphin_zookeeper_map.update(config['configurations']['dolphin-zookeeper'])
if 'spring.servlet.multipart.max-file-size' in dolphin_app_api_map:
file_size = dolphin_app_api_map['spring.servlet.multipart.max-file-size']
dolphin_app_api_map['spring.servlet.multipart.max-file-size'] = file_size + "MB"
if 'spring.servlet.multipart.max-request-size' in dolphin_app_api_map:
request_size = dolphin_app_api_map['spring.servlet.multipart.max-request-size']
dolphin_app_api_map['spring.servlet.multipart.max-request-size'] = request_size + "MB"
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Since we have upgraded the MySQL connector to 8.0.15 in #6484, it's necessary to use `com.mysql.cj.jdbc.Driver`, because `com.mysql.jdbc.Driver` has been deprecated.
Also note that we may need to modify the current connection parameters, e.g. `useSSL=false`.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
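During a transition period, deployments may still ship Connector/J 5.x; a purely illustrative helper (not part of this codebase) that probes for the new class and falls back to the legacy name:
```java
public final class MysqlDriverResolver {

    private MysqlDriverResolver() {
    }

    /**
     * Returns the Connector/J 8.x driver class name when it is on the
     * classpath, otherwise the legacy 5.x name.
     */
    public static String resolveDriverClassName() {
        try {
            Class.forName("com.mysql.cj.jdbc.Driver");
            return "com.mysql.cj.jdbc.Driver";
        } catch (ClassNotFoundException e) {
            // Connector/J 5.x: only the old class exists.
            return "com.mysql.jdbc.Driver";
        }
    }
}
```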
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | ambari_plugin/common-services/DOLPHIN/1.3.3/package/scripts/params.py | """
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import sys
from resource_management import *
from resource_management.core.logger import Logger
from resource_management.libraries.functions import default
Logger.initialize_logger()
reload(sys)
sys.setdefaultencoding('utf-8')
# server configurations
config = Script.get_config()
# conf_dir = "/etc/"
dolphin_home = "/opt/soft/dolphinscheduler"
dolphin_conf_dir = dolphin_home + "/conf"
dolphin_log_dir = dolphin_home + "/logs"
dolphin_bin_dir = dolphin_home + "/bin"
dolphin_lib_jars = dolphin_home + "/lib/*"
dolphin_pidfile_dir = "/opt/soft/run/dolphinscheduler"
rmHosts = default("/clusterHostInfo/rm_host", [])
# dolphin-env
dolphin_env_map = {}
dolphin_env_map.update(config['configurations']['dolphin-env'])
# which user to install and admin dolphin scheduler
dolphin_user = dolphin_env_map['dolphin.user']
dolphin_group = dolphin_env_map['dolphin.group']
# .dolphinscheduler_env.sh
dolphin_env_path = dolphin_conf_dir + '/env/dolphinscheduler_env.sh'
dolphin_env_content = dolphin_env_map['dolphinscheduler-env-content']
# database config
dolphin_database_config = {}
dolphin_database_config['dolphin_database_type'] = dolphin_env_map['dolphin.database.type']
dolphin_database_config['dolphin_database_username'] = dolphin_env_map['dolphin.database.username']
dolphin_database_config['dolphin_database_password'] = dolphin_env_map['dolphin.database.password']
if 'mysql' == dolphin_database_config['dolphin_database_type']:
dolphin_database_config['dolphin_database_driver'] = 'com.mysql.jdbc.Driver'
dolphin_database_config['driverDelegateClass'] = 'org.quartz.impl.jdbcjobstore.StdJDBCDelegate'
dolphin_database_config['dolphin_database_url'] = 'jdbc:mysql://' + dolphin_env_map['dolphin.database.host'] \
+ ':' + dolphin_env_map['dolphin.database.port'] \
+ '/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8'
else:
dolphin_database_config['dolphin_database_driver'] = 'org.postgresql.Driver'
dolphin_database_config['driverDelegateClass'] = 'org.quartz.impl.jdbcjobstore.PostgreSQLDelegate'
dolphin_database_config['dolphin_database_url'] = 'jdbc:postgresql://' + dolphin_env_map['dolphin.database.host'] \
+ ':' + dolphin_env_map['dolphin.database.port'] \
+ '/dolphinscheduler'
# application-alert.properties
dolphin_alert_map = {}
wechat_push_url = 'https://qyapi.weixin.qq.com/cgi-bin/message/send?access_token=$token'
wechat_token_url = 'https://qyapi.weixin.qq.com/cgi-bin/gettoken?corpid=$corpId&corpsecret=$secret'
wechat_team_send_msg = '{\"toparty\":\"{toParty}\",\"agentid\":\"{agentId}\",\"msgtype\":\"text\",\"text\":{\"content\":\"{msg}\"},\"safe\":\"0\"}'
wechat_user_send_msg = '{\"touser\":\"{toUser}\",\"agentid\":\"{agentId}\",\"msgtype\":\"markdown\",\"markdown\":{\"content\":\"{msg}\"}}'
dolphin_alert_config_map = config['configurations']['dolphin-alert']
if dolphin_alert_config_map['enterprise.wechat.enable']:
dolphin_alert_map['enterprise.wechat.push.ur'] = wechat_push_url
dolphin_alert_map['enterprise.wechat.token.url'] = wechat_token_url
dolphin_alert_map['enterprise.wechat.team.send.msg'] = wechat_team_send_msg
dolphin_alert_map['enterprise.wechat.user.send.msg'] = wechat_user_send_msg
dolphin_alert_map.update(dolphin_alert_config_map)
# application-api.properties
dolphin_app_api_map = {}
dolphin_app_api_map.update(config['configurations']['dolphin-application-api'])
# common.properties
dolphin_common_map = {}
if 'yarn-site' in config['configurations'] and \
'yarn.resourcemanager.webapp.address' in config['configurations']['yarn-site']:
yarn_resourcemanager_webapp_address = config['configurations']['yarn-site']['yarn.resourcemanager.webapp.address']
yarn_application_status_address = 'http://' + yarn_resourcemanager_webapp_address + '/ws/v1/cluster/apps/%s'
dolphin_common_map['yarn.application.status.address'] = yarn_application_status_address
rmHosts = default("/clusterHostInfo/rm_host", [])
if len(rmHosts) > 1:
dolphin_common_map['yarn.resourcemanager.ha.rm.ids'] = ','.join(rmHosts)
else:
dolphin_common_map['yarn.resourcemanager.ha.rm.ids'] = ''
dolphin_common_map_tmp = config['configurations']['dolphin-common']
data_basedir_path = dolphin_common_map_tmp['data.basedir.path']
dolphin_common_map['dolphinscheduler.env.path'] = dolphin_env_path
dolphin_common_map.update(config['configurations']['dolphin-common'])
# datasource.properties
dolphin_datasource_map = {}
dolphin_datasource_map['spring.datasource.type'] = 'com.alibaba.druid.pool.DruidDataSource'
dolphin_datasource_map['spring.datasource.driver-class-name'] = dolphin_database_config['dolphin_database_driver']
dolphin_datasource_map['spring.datasource.url'] = dolphin_database_config['dolphin_database_url']
dolphin_datasource_map['spring.datasource.username'] = dolphin_database_config['dolphin_database_username']
dolphin_datasource_map['spring.datasource.password'] = dolphin_database_config['dolphin_database_password']
dolphin_datasource_map.update(config['configurations']['dolphin-datasource'])
# master.properties
dolphin_master_map = config['configurations']['dolphin-master']
# quartz.properties
dolphin_quartz_map = {}
dolphin_quartz_map['org.quartz.jobStore.driverDelegateClass'] = dolphin_database_config['driverDelegateClass']
dolphin_quartz_map.update(config['configurations']['dolphin-quartz'])
# worker.properties
dolphin_worker_map = config['configurations']['dolphin-worker']
# zookeeper.properties
dolphin_zookeeper_map = {}
zookeeperHosts = default("/clusterHostInfo/zookeeper_hosts", [])
if len(zookeeperHosts) > 0 and "clientPort" in config['configurations']['zoo.cfg']:
clientPort = config['configurations']['zoo.cfg']['clientPort']
zookeeperPort = ":" + clientPort + ","
dolphin_zookeeper_map['zookeeper.quorum'] = zookeeperPort.join(zookeeperHosts) + ":" + clientPort
dolphin_zookeeper_map.update(config['configurations']['dolphin-zookeeper'])
if 'spring.servlet.multipart.max-file-size' in dolphin_app_api_map:
file_size = dolphin_app_api_map['spring.servlet.multipart.max-file-size']
dolphin_app_api_map['spring.servlet.multipart.max-file-size'] = file_size + "MB"
if 'spring.servlet.multipart.max-request-size' in dolphin_app_api_map:
request_size = dolphin_app_api_map['spring.servlet.multipart.max-request-size']
dolphin_app_api_map['spring.servlet.multipart.max-request-size'] = request_size + "MB"
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Since we have upgraded the MySQL connector to 8.0.15 in #6484, it's necessary to use `com.mysql.cj.jdbc.Driver`, because `com.mysql.jdbc.Driver` has been deprecated.
Also note that we may need to modify the current connection parameters, e.g. `useSSL=false`.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
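A sketch of how the mysql settings in config.env.sh combine into a JDBC URL; the assembly helper below is hypothetical, only the variable names mirror the file:
```java
public class DatasourceUrlSketch {
    public static void main(String[] args) {
        String host = "dolphinscheduler-mysql";  // DATABASE_HOST
        int port = 3306;                         // DATABASE_PORT
        String database = "dolphinscheduler";    // DATABASE_DATABASE
        // DATABASE_PARAMS, extended with useSSL=false for Connector/J 8.x
        String params = "useUnicode=true&characterEncoding=UTF-8&useSSL=false";
        String url = String.format("jdbc:mysql://%s:%d/%s?%s", host, port, database, params);
        // jdbc:mysql://dolphinscheduler-mysql:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8&useSSL=false
        System.out.println(url);
    }
}
```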
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | docker/docker-swarm/config.env.sh | # Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#============================================================================
# Database
#============================================================================
# postgresql
DATABASE_TYPE=postgresql
DATABASE_DRIVER=org.postgresql.Driver
DATABASE_HOST=dolphinscheduler-postgresql
DATABASE_PORT=5432
DATABASE_USERNAME=root
DATABASE_PASSWORD=root
DATABASE_DATABASE=dolphinscheduler
DATABASE_PARAMS=characterEncoding=utf8
# mysql
# DATABASE_TYPE=mysql
# DATABASE_DRIVER=com.mysql.jdbc.Driver
# DATABASE_HOST=dolphinscheduler-mysql
# DATABASE_PORT=3306
# DATABASE_USERNAME=root
# DATABASE_PASSWORD=root
# DATABASE_DATABASE=dolphinscheduler
# DATABASE_PARAMS=useUnicode=true&characterEncoding=UTF-8
#============================================================================
# Registry
#============================================================================
REGISTRY_PLUGIN_NAME=zookeeper
REGISTRY_SERVERS=dolphinscheduler-zookeeper:2181
#============================================================================
# Common
#============================================================================
# common opts
DOLPHINSCHEDULER_OPTS=
# common env
DATA_BASEDIR_PATH=/tmp/dolphinscheduler
RESOURCE_STORAGE_TYPE=HDFS
RESOURCE_UPLOAD_PATH=/dolphinscheduler
FS_DEFAULT_FS=file:///
FS_S3A_ENDPOINT=s3.xxx.amazonaws.com
FS_S3A_ACCESS_KEY=xxxxxxx
FS_S3A_SECRET_KEY=xxxxxxx
HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE=false
JAVA_SECURITY_KRB5_CONF_PATH=/opt/krb5.conf
LOGIN_USER_KEYTAB_USERNAME=hdfs@HADOOP.COM
LOGIN_USER_KEYTAB_PATH=/opt/hdfs.keytab
KERBEROS_EXPIRE_TIME=2
HDFS_ROOT_USER=hdfs
RESOURCE_MANAGER_HTTPADDRESS_PORT=8088
YARN_RESOURCEMANAGER_HA_RM_IDS=
YARN_APPLICATION_STATUS_ADDRESS=http://ds1:%s/ws/v1/cluster/apps/%s
YARN_JOB_HISTORY_STATUS_ADDRESS=http://ds1:19888/ws/v1/history/mapreduce/jobs/%s
DATASOURCE_ENCRYPTION_ENABLE=false
DATASOURCE_ENCRYPTION_SALT=!@#$%^&*
SUDO_ENABLE=true
# dolphinscheduler env
HADOOP_HOME=/opt/soft/hadoop
HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
SPARK_HOME1=/opt/soft/spark1
SPARK_HOME2=/opt/soft/spark2
PYTHON_HOME=/usr/bin/python
JAVA_HOME=/usr/local/openjdk-8
HIVE_HOME=/opt/soft/hive
FLINK_HOME=/opt/soft/flink
DATAX_HOME=/opt/soft/datax
#============================================================================
# Master Server
#============================================================================
MASTER_SERVER_OPTS=-Xms1g -Xmx1g -Xmn512m
MASTER_EXEC_THREADS=100
MASTER_EXEC_TASK_NUM=20
MASTER_DISPATCH_TASK_NUM=3
MASTER_HOST_SELECTOR=LowerWeight
MASTER_HEARTBEAT_INTERVAL=10
MASTER_TASK_COMMIT_RETRYTIMES=5
MASTER_TASK_COMMIT_INTERVAL=1000
MASTER_MAX_CPULOAD_AVG=-1
MASTER_RESERVED_MEMORY=0.3
#============================================================================
# Worker Server
#============================================================================
WORKER_SERVER_OPTS=-Xms1g -Xmx1g -Xmn512m
WORKER_EXEC_THREADS=100
WORKER_HEARTBEAT_INTERVAL=10
WORKER_HOST_WEIGHT=100
WORKER_MAX_CPULOAD_AVG=-1
WORKER_RESERVED_MEMORY=0.3
WORKER_GROUPS=default
ALERT_LISTEN_HOST=dolphinscheduler-alert
#============================================================================
# Alert Server
#============================================================================
ALERT_SERVER_OPTS=-Xms512m -Xmx512m -Xmn256m
#============================================================================
# Api Server
#============================================================================
API_SERVER_OPTS=-Xms512m -Xmx512m -Xmn256m
#============================================================================
# Logger Server
#============================================================================
LOGGER_SERVER_OPTS=-Xms512m -Xmx512m -Xmn256m
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Since we have upgraded the MySQL connector to 8.0.15 in #6484, it's necessary to use `com.mysql.cj.jdbc.Driver`, because `com.mysql.jdbc.Driver` has been deprecated.
Also note that we may need to modify the current connection parameters, e.g. `useSSL=false`.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
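The expected-JSON strings in the test below hard-code the driver class name, so they change with the upgrade; a minimal sketch of the post-upgrade expectation (the assertion only holds once the shared constant has been switched):
```java
import org.apache.dolphinscheduler.spi.utils.Constants;

import org.junit.Assert;
import org.junit.Test;

public class MysqlDriverConstantTest {
    @Test
    public void driverClassNameShouldBeTheCjDriver() {
        // post-upgrade expectation: the shared constant points at the cj driver
        Assert.assertEquals("com.mysql.cj.jdbc.Driver", Constants.COM_MYSQL_JDBC_DRIVER);
    }
}
```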
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/service/DataSourceServiceTest.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.api.service;
import org.apache.dolphinscheduler.api.enums.Status;
import org.apache.dolphinscheduler.api.service.impl.DataSourceServiceImpl;
import org.apache.dolphinscheduler.api.utils.Result;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.UserType;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.dao.entity.DataSource;
import org.apache.dolphinscheduler.dao.entity.User;
import org.apache.dolphinscheduler.dao.mapper.DataSourceMapper;
import org.apache.dolphinscheduler.dao.mapper.DataSourceUserMapper;
import org.apache.dolphinscheduler.plugin.datasource.api.datasource.hive.HiveDataSourceParamDTO;
import org.apache.dolphinscheduler.plugin.datasource.api.datasource.mysql.MysqlDatasourceParamDTO;
import org.apache.dolphinscheduler.plugin.datasource.api.datasource.oracle.OracleDatasourceParamDTO;
import org.apache.dolphinscheduler.plugin.datasource.api.datasource.postgresql.PostgreSqlDatasourceParamDTO;
import org.apache.dolphinscheduler.plugin.datasource.api.plugin.DataSourceClientProvider;
import org.apache.dolphinscheduler.plugin.datasource.api.utils.CommonUtils;
import org.apache.dolphinscheduler.plugin.datasource.api.utils.DatasourceUtil;
import org.apache.dolphinscheduler.plugin.datasource.api.utils.PasswordUtils;
import org.apache.dolphinscheduler.spi.datasource.ConnectionParam;
import org.apache.dolphinscheduler.spi.enums.DbConnectType;
import org.apache.dolphinscheduler.spi.enums.DbType;
import org.apache.dolphinscheduler.spi.utils.PropertyUtils;
import java.sql.Connection;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PowerMockIgnore;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;
/**
* data source service test
*/
@RunWith(PowerMockRunner.class)
@PowerMockIgnore({"sun.security.*", "javax.net.*"})
@PrepareForTest({DatasourceUtil.class, CommonUtils.class, DataSourceClientProvider.class, PasswordUtils.class})
public class DataSourceServiceTest {
@InjectMocks
private DataSourceServiceImpl dataSourceService;
@Mock
private DataSourceMapper dataSourceMapper;
@Mock
private DataSourceUserMapper datasourceUserMapper;
public void createDataSourceTest() {
User loginUser = getAdminUser();
String dataSourceName = "dataSource01";
String dataSourceDesc = "test dataSource";
PostgreSqlDatasourceParamDTO postgreSqlDatasourceParam = new PostgreSqlDatasourceParamDTO();
postgreSqlDatasourceParam.setDatabase(dataSourceName);
postgreSqlDatasourceParam.setNote(dataSourceDesc);
postgreSqlDatasourceParam.setHost("172.16.133.200");
postgreSqlDatasourceParam.setPort(5432);
postgreSqlDatasourceParam.setDatabase("dolphinscheduler");
postgreSqlDatasourceParam.setUserName("postgres");
postgreSqlDatasourceParam.setPassword("");
// data source already exists
List<DataSource> dataSourceList = new ArrayList<>();
DataSource dataSource = new DataSource();
dataSource.setName(dataSourceName);
dataSourceList.add(dataSource);
PowerMockito.when(dataSourceMapper.queryDataSourceByName(dataSourceName.trim())).thenReturn(dataSourceList);
Result dataSourceExitsResult = dataSourceService.createDataSource(loginUser, postgreSqlDatasourceParam);
Assert.assertEquals(Status.DATASOURCE_EXIST.getCode(), dataSourceExitsResult.getCode().intValue());
ConnectionParam connectionParam = DatasourceUtil.buildConnectionParams(postgreSqlDatasourceParam);
DbType dataSourceType = postgreSqlDatasourceParam.getType();
// data source connect failed
PowerMockito.when(dataSourceMapper.queryDataSourceByName(dataSourceName.trim())).thenReturn(null);
Result connectionResult = new Result(Status.DATASOURCE_CONNECT_FAILED.getCode(), Status.DATASOURCE_CONNECT_FAILED.getMsg());
//PowerMockito.when(dataSourceService.checkConnection(dataSourceType, parameter)).thenReturn(connectionResult);
PowerMockito.doReturn(connectionResult).when(dataSourceService).checkConnection(dataSourceType, connectionParam);
Result connectFailedResult = dataSourceService.createDataSource(loginUser, postgreSqlDatasourceParam);
Assert.assertEquals(Status.DATASOURCE_CONNECT_FAILED.getCode(), connectFailedResult.getCode().intValue());
// request params not valid
PowerMockito.when(dataSourceMapper.queryDataSourceByName(dataSourceName.trim())).thenReturn(null);
connectionResult = new Result(Status.SUCCESS.getCode(), Status.SUCCESS.getMsg());
PowerMockito.when(dataSourceService.checkConnection(dataSourceType, connectionParam)).thenReturn(connectionResult);
Result notValidError = dataSourceService.createDataSource(loginUser, postgreSqlDatasourceParam);
Assert.assertEquals(Status.REQUEST_PARAMS_NOT_VALID_ERROR.getCode(), notValidError.getCode().intValue());
// success
PowerMockito.when(dataSourceMapper.queryDataSourceByName(dataSourceName.trim())).thenReturn(null);
PowerMockito.when(dataSourceService.checkConnection(dataSourceType, connectionParam)).thenReturn(connectionResult);
Result success = dataSourceService.createDataSource(loginUser, postgreSqlDatasourceParam);
Assert.assertEquals(Status.SUCCESS.getCode(), success.getCode().intValue());
}
public void updateDataSourceTest() {
User loginUser = getAdminUser();
int dataSourceId = 12;
String dataSourceName = "dataSource01";
String dataSourceDesc = "test dataSource";
PostgreSqlDatasourceParamDTO postgreSqlDatasourceParam = new PostgreSqlDatasourceParamDTO();
postgreSqlDatasourceParam.setDatabase(dataSourceName);
postgreSqlDatasourceParam.setNote(dataSourceDesc);
postgreSqlDatasourceParam.setHost("172.16.133.200");
postgreSqlDatasourceParam.setPort(5432);
postgreSqlDatasourceParam.setDatabase("dolphinscheduler");
postgreSqlDatasourceParam.setUserName("postgres");
postgreSqlDatasourceParam.setPassword("");
// data source not exits
PowerMockito.when(dataSourceMapper.selectById(dataSourceId)).thenReturn(null);
Result resourceNotExits = dataSourceService.updateDataSource(dataSourceId, loginUser, postgreSqlDatasourceParam);
Assert.assertEquals(Status.RESOURCE_NOT_EXIST.getCode(), resourceNotExits.getCode().intValue());
// user no operation perm
DataSource dataSource = new DataSource();
dataSource.setUserId(0);
PowerMockito.when(dataSourceMapper.selectById(dataSourceId)).thenReturn(dataSource);
Result userNoOperationPerm = dataSourceService.updateDataSource(dataSourceId, loginUser, postgreSqlDatasourceParam);
Assert.assertEquals(Status.USER_NO_OPERATION_PERM.getCode(), userNoOperationPerm.getCode().intValue());
// data source name already exists
dataSource.setUserId(-1);
List<DataSource> dataSourceList = new ArrayList<>();
dataSourceList.add(dataSource);
PowerMockito.when(dataSourceMapper.selectById(dataSourceId)).thenReturn(dataSource);
PowerMockito.when(dataSourceMapper.queryDataSourceByName(dataSourceName)).thenReturn(dataSourceList);
Result dataSourceNameExist = dataSourceService.updateDataSource(dataSourceId, loginUser, postgreSqlDatasourceParam);
Assert.assertEquals(Status.DATASOURCE_EXIST.getCode(), dataSourceNameExist.getCode().intValue());
// data source connect failed
DbType dataSourceType = postgreSqlDatasourceParam.getType();
ConnectionParam connectionParam = DatasourceUtil.buildConnectionParams(postgreSqlDatasourceParam);
PowerMockito.when(dataSourceMapper.selectById(dataSourceId)).thenReturn(dataSource);
PowerMockito.when(dataSourceMapper.queryDataSourceByName(dataSourceName)).thenReturn(null);
Result connectionResult = new Result(Status.SUCCESS.getCode(), Status.SUCCESS.getMsg());
PowerMockito.when(dataSourceService.checkConnection(dataSourceType, connectionParam)).thenReturn(connectionResult);
Result connectFailed = dataSourceService.updateDataSource(dataSourceId, loginUser, postgreSqlDatasourceParam);
Assert.assertEquals(Status.DATASOURCE_CONNECT_FAILED.getCode(), connectFailed.getCode().intValue());
//success
PowerMockito.when(dataSourceMapper.selectById(dataSourceId)).thenReturn(dataSource);
PowerMockito.when(dataSourceMapper.queryDataSourceByName(dataSourceName)).thenReturn(null);
connectionResult = new Result(Status.DATASOURCE_CONNECT_FAILED.getCode(), Status.DATASOURCE_CONNECT_FAILED.getMsg());
PowerMockito.when(dataSourceService.checkConnection(dataSourceType, connectionParam)).thenReturn(connectionResult);
Result success = dataSourceService.updateDataSource(dataSourceId, loginUser, postgreSqlDatasourceParam);
Assert.assertEquals(Status.SUCCESS.getCode(), success.getCode().intValue());
}
@Test
public void queryDataSourceListPagingTest() {
User loginUser = getAdminUser();
String searchVal = "";
int pageNo = 1;
int pageSize = 10;
Result result = dataSourceService.queryDataSourceListPaging(loginUser, searchVal, pageNo, pageSize);
Assert.assertEquals(Status.SUCCESS.getCode(),(int)result.getCode());
}
@Test
public void connectionTest() {
int dataSourceId = -1;
PowerMockito.when(dataSourceMapper.selectById(dataSourceId)).thenReturn(null);
Result result = dataSourceService.connectionTest(dataSourceId);
Assert.assertEquals(Status.RESOURCE_NOT_EXIST.getCode(), result.getCode().intValue());
}
@Test
public void deleteTest() {
User loginUser = getAdminUser();
int dataSourceId = 1;
Result result = new Result();
//resource not exist
dataSourceService.putMsg(result, Status.RESOURCE_NOT_EXIST);
PowerMockito.when(dataSourceMapper.selectById(dataSourceId)).thenReturn(null);
Assert.assertEquals(result.getCode(), dataSourceService.delete(loginUser, dataSourceId).getCode());
// user no operation perm
dataSourceService.putMsg(result, Status.USER_NO_OPERATION_PERM);
DataSource dataSource = new DataSource();
dataSource.setUserId(0);
PowerMockito.when(dataSourceMapper.selectById(dataSourceId)).thenReturn(dataSource);
Assert.assertEquals(result.getCode(), dataSourceService.delete(loginUser, dataSourceId).getCode());
// success
dataSourceService.putMsg(result, Status.SUCCESS);
dataSource.setUserId(-1);
PowerMockito.when(dataSourceMapper.selectById(dataSourceId)).thenReturn(dataSource);
Assert.assertEquals(result.getCode(), dataSourceService.delete(loginUser, dataSourceId).getCode());
}
@Test
public void unauthDatasourceTest() {
User loginUser = getAdminUser();
int userId = -1;
//user no operation perm
Map<String, Object> noOperationPerm = dataSourceService.unauthDatasource(loginUser, userId);
Assert.assertEquals(Status.USER_NO_OPERATION_PERM, noOperationPerm.get(Constants.STATUS));
//success
loginUser.setUserType(UserType.ADMIN_USER);
Map<String, Object> success = dataSourceService.unauthDatasource(loginUser, userId);
Assert.assertEquals(Status.SUCCESS, success.get(Constants.STATUS));
}
@Test
public void authedDatasourceTest() {
User loginUser = getAdminUser();
int userId = -1;
//user no operation perm
Map<String, Object> noOperationPerm = dataSourceService.authedDatasource(loginUser, userId);
Assert.assertEquals(Status.USER_NO_OPERATION_PERM, noOperationPerm.get(Constants.STATUS));
//success
loginUser.setUserType(UserType.ADMIN_USER);
Map<String, Object> success = dataSourceService.authedDatasource(loginUser, userId);
Assert.assertEquals(Status.SUCCESS, success.get(Constants.STATUS));
}
@Test
public void queryDataSourceListTest() {
User loginUser = new User();
loginUser.setUserType(UserType.GENERAL_USER);
Map<String, Object> map = dataSourceService.queryDataSourceList(loginUser, DbType.MYSQL.ordinal());
Assert.assertEquals(Status.SUCCESS, map.get(Constants.STATUS));
}
@Test
public void verifyDataSourceNameTest() {
User loginUser = new User();
loginUser.setUserType(UserType.GENERAL_USER);
String dataSourceName = "dataSource1";
PowerMockito.when(dataSourceMapper.queryDataSourceByName(dataSourceName)).thenReturn(getDataSourceList());
Result result = dataSourceService.verifyDataSourceName(dataSourceName);
Assert.assertEquals(Status.DATASOURCE_EXIST.getMsg(), result.getMsg());
}
@Test
public void queryDataSourceTest() {
PowerMockito.when(dataSourceMapper.selectById(Mockito.anyInt())).thenReturn(null);
Map<String, Object> result = dataSourceService.queryDataSource(Mockito.anyInt());
Assert.assertEquals(((Status) result.get(Constants.STATUS)).getCode(), Status.RESOURCE_NOT_EXIST.getCode());
PowerMockito.when(dataSourceMapper.selectById(Mockito.anyInt())).thenReturn(getOracleDataSource());
result = dataSourceService.queryDataSource(Mockito.anyInt());
Assert.assertEquals(((Status) result.get(Constants.STATUS)).getCode(), Status.SUCCESS.getCode());
}
private List<DataSource> getDataSourceList() {
List<DataSource> dataSources = new ArrayList<>();
dataSources.add(getOracleDataSource());
return dataSources;
}
private DataSource getOracleDataSource() {
DataSource dataSource = new DataSource();
dataSource.setName("test");
dataSource.setNote("Note");
dataSource.setType(DbType.ORACLE);
dataSource.setConnectionParams("{\"connectType\":\"ORACLE_SID\",\"address\":\"jdbc:oracle:thin:@192.168.xx.xx:49161\",\"database\":\"XE\","
+ "\"jdbcUrl\":\"jdbc:oracle:thin:@192.168.xx.xx:49161/XE\",\"user\":\"system\",\"password\":\"oracle\"}");
return dataSource;
}
@Test
public void buildParameter() {
OracleDatasourceParamDTO oracleDatasourceParamDTO = new OracleDatasourceParamDTO();
oracleDatasourceParamDTO.setHost("192.168.9.1");
oracleDatasourceParamDTO.setPort(1521);
oracleDatasourceParamDTO.setDatabase("im");
oracleDatasourceParamDTO.setUserName("test");
oracleDatasourceParamDTO.setPassword("test");
oracleDatasourceParamDTO.setConnectType(DbConnectType.ORACLE_SERVICE_NAME);
ConnectionParam connectionParam = DatasourceUtil.buildConnectionParams(oracleDatasourceParamDTO);
String expected = "{\"user\":\"test\",\"password\":\"test\",\"address\":\"jdbc:oracle:thin:@//192.168.9.1:1521\",\"database\":\"im\",\"jdbcUrl\":\"jdbc:oracle:thin:@//192.168.9.1:1521/im\","
+ "\"driverClassName\":\"oracle.jdbc.OracleDriver\",\"validationQuery\":\"select 1 from dual\",\"connectType\":\"ORACLE_SERVICE_NAME\"}";
Assert.assertEquals(expected, JSONUtils.toJsonString(connectionParam));
PowerMockito.mockStatic(CommonUtils.class);
PowerMockito.mockStatic(PasswordUtils.class);
PowerMockito.when(CommonUtils.getKerberosStartupState()).thenReturn(true);
PowerMockito.when(PasswordUtils.encodePassword(Mockito.anyString())).thenReturn("test");
HiveDataSourceParamDTO hiveDataSourceParamDTO = new HiveDataSourceParamDTO();
hiveDataSourceParamDTO.setHost("192.168.9.1");
hiveDataSourceParamDTO.setPort(10000);
hiveDataSourceParamDTO.setDatabase("im");
hiveDataSourceParamDTO.setPrincipal("hive/hdfs-mycluster@ESZ.COM");
hiveDataSourceParamDTO.setUserName("test");
hiveDataSourceParamDTO.setPassword("test");
hiveDataSourceParamDTO.setJavaSecurityKrb5Conf("/opt/krb5.conf");
hiveDataSourceParamDTO.setLoginUserKeytabPath("/opt/hdfs.headless.keytab");
hiveDataSourceParamDTO.setLoginUserKeytabUsername("test2/hdfs-mycluster@ESZ.COM");
connectionParam = DatasourceUtil.buildConnectionParams(hiveDataSourceParamDTO);
expected = "{\"user\":\"test\",\"password\":\"test\",\"address\":\"jdbc:hive2://192.168.9.1:10000\",\"database\":\"im\",\"jdbcUrl\":\"jdbc:hive2://192.168.9.1:10000/im;"
+ "principal=hive/hdfs-mycluster@ESZ.COM\",\"driverClassName\":\"org.apache.hive.jdbc.HiveDriver\",\"validationQuery\":\"select 1\",\"principal\":\"hive/hdfs-mycluster@ESZ.COM\","
+ "\"javaSecurityKrb5Conf\":\"/opt/krb5.conf\",\"loginUserKeytabUsername\":\"test2/hdfs-mycluster@ESZ.COM\",\"loginUserKeytabPath\":\"/opt/hdfs.headless.keytab\"}";
Assert.assertEquals(expected, JSONUtils.toJsonString(connectionParam));
}
@Test
public void buildParameterWithDecodePassword() {
PropertyUtils.setValue(Constants.DATASOURCE_ENCRYPTION_ENABLE, "true");
Map<String, String> other = new HashMap<>();
other.put("autoDeserialize", "yes");
other.put("allowUrlInLocalInfile", "true");
MysqlDatasourceParamDTO mysqlDatasourceParamDTO = new MysqlDatasourceParamDTO();
mysqlDatasourceParamDTO.setHost("192.168.9.1");
mysqlDatasourceParamDTO.setPort(1521);
mysqlDatasourceParamDTO.setDatabase("im");
mysqlDatasourceParamDTO.setUserName("test");
mysqlDatasourceParamDTO.setPassword("123456");
mysqlDatasourceParamDTO.setOther(other);
ConnectionParam connectionParam = DatasourceUtil.buildConnectionParams(mysqlDatasourceParamDTO);
String expected = "{\"user\":\"test\",\"password\":\"IUAjJCVeJipNVEl6TkRVMg==\",\"address\":\"jdbc:mysql://192.168.9.1:1521\",\"database\":\"im\",\"jdbcUrl\":\"jdbc:mysql://192.168.9.1:1521/"
+ "im\",\"driverClassName\":\"com.mysql.jdbc.Driver\",\"validationQuery\":\"select 1\",\"props\":{\"autoDeserialize\":\"yes\",\"allowUrlInLocalInfile\":\"true\"}}";
Assert.assertEquals(expected, JSONUtils.toJsonString(connectionParam));
PropertyUtils.setValue(Constants.DATASOURCE_ENCRYPTION_ENABLE, "false");
mysqlDatasourceParamDTO = new MysqlDatasourceParamDTO();
mysqlDatasourceParamDTO.setHost("192.168.9.1");
mysqlDatasourceParamDTO.setPort(1521);
mysqlDatasourceParamDTO.setDatabase("im");
mysqlDatasourceParamDTO.setUserName("test");
mysqlDatasourceParamDTO.setPassword("123456");
connectionParam = DatasourceUtil.buildConnectionParams(mysqlDatasourceParamDTO);
expected = "{\"user\":\"test\",\"password\":\"123456\",\"address\":\"jdbc:mysql://192.168.9.1:1521\",\"database\":\"im\","
+ "\"jdbcUrl\":\"jdbc:mysql://192.168.9.1:1521/im\",\"driverClassName\":\"com.mysql.jdbc.Driver\",\"validationQuery\":\"select 1\"}";
Assert.assertEquals(expected, JSONUtils.toJsonString(connectionParam));
}
/**
* get Mock Admin User
*
* @return admin user
*/
private User getAdminUser() {
User loginUser = new User();
loginUser.setId(-1);
loginUser.setUserName("admin");
loginUser.setUserType(UserType.GENERAL_USER);
return loginUser;
}
/**
* test check connection
*/
@Test
public void testCheckConnection() throws Exception {
DbType dataSourceType = DbType.POSTGRESQL;
String dataSourceName = "dataSource01";
String dataSourceDesc = "test dataSource";
PostgreSqlDatasourceParamDTO postgreSqlDatasourceParam = new PostgreSqlDatasourceParamDTO();
postgreSqlDatasourceParam.setDatabase(dataSourceName);
postgreSqlDatasourceParam.setNote(dataSourceDesc);
postgreSqlDatasourceParam.setHost("172.16.133.200");
postgreSqlDatasourceParam.setPort(5432);
postgreSqlDatasourceParam.setDatabase("dolphinscheduler");
postgreSqlDatasourceParam.setUserName("postgres");
postgreSqlDatasourceParam.setPassword("");
ConnectionParam connectionParam = DatasourceUtil.buildConnectionParams(postgreSqlDatasourceParam);
PowerMockito.mockStatic(DatasourceUtil.class);
PowerMockito.mockStatic(DataSourceClientProvider.class);
DataSourceClientProvider clientProvider = PowerMockito.mock(DataSourceClientProvider.class);
PowerMockito.when(DataSourceClientProvider.getInstance()).thenReturn(clientProvider);
Result result = dataSourceService.checkConnection(dataSourceType, connectionParam);
Assert.assertEquals(Status.CONNECTION_TEST_FAILURE.getCode(), result.getCode().intValue());
Connection connection = PowerMockito.mock(Connection.class);
PowerMockito.when(clientProvider.getConnection(Mockito.any(), Mockito.any())).thenReturn(connection);
result = dataSourceService.checkConnection(dataSourceType, connectionParam);
Assert.assertEquals(Status.SUCCESS.getCode(), result.getCode().intValue());
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Since we have upgraded the MySQL connector to 8.0.15 in #6484, it's necessary to use `com.mysql.cj.jdbc.Driver`, because `com.mysql.jdbc.Driver` has been deprecated.
Also note that we may need to modify the current connection parameters, e.g. `useSSL=false`.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
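A small driver program showing what getJdbcUrl in the processor below produces; a sketch that relies only on the setters visible in this file:
```java
public class GetJdbcUrlDemo {
    public static void main(String[] args) {
        MysqlConnectionParam param = new MysqlConnectionParam();
        param.setJdbcUrl("jdbc:mysql://localhost:3306/dolphinscheduler");
        param.setOther("characterEncoding=UTF-8");
        // APPEND_PARAMS is always added, so local-infile and deserialization stay disabled:
        // jdbc:mysql://localhost:3306/dolphinscheduler?characterEncoding=UTF-8&allowLoadLocalInfile=false&autoDeserialize=false&allowLocalInfile=false&allowUrlInLocalInfile=false
        System.out.println(new MysqlDatasourceProcessor().getJdbcUrl(param));
    }
}
```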
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-datasource-plugin/dolphinscheduler-datasource-api/src/main/java/org/apache/dolphinscheduler/plugin/datasource/api/datasource/mysql/MysqlDatasourceProcessor.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.plugin.datasource.api.datasource.mysql;
import org.apache.dolphinscheduler.plugin.datasource.api.datasource.AbstractDatasourceProcessor;
import org.apache.dolphinscheduler.plugin.datasource.api.datasource.BaseDataSourceParamDTO;
import org.apache.dolphinscheduler.plugin.datasource.api.utils.PasswordUtils;
import org.apache.dolphinscheduler.spi.datasource.BaseConnectionParam;
import org.apache.dolphinscheduler.spi.datasource.ConnectionParam;
import org.apache.dolphinscheduler.spi.enums.DbType;
import org.apache.dolphinscheduler.spi.utils.Constants;
import org.apache.dolphinscheduler.spi.utils.JSONUtils;
import org.apache.dolphinscheduler.spi.utils.StringUtils;
import org.apache.commons.collections4.MapUtils;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class MysqlDatasourceProcessor extends AbstractDatasourceProcessor {
private final Logger logger = LoggerFactory.getLogger(MysqlDatasourceProcessor.class);
private static final String ALLOW_LOAD_LOCAL_IN_FILE_NAME = "allowLoadLocalInfile";
private static final String AUTO_DESERIALIZE = "autoDeserialize";
private static final String ALLOW_LOCAL_IN_FILE_NAME = "allowLocalInfile";
private static final String ALLOW_URL_IN_LOCAL_IN_FILE_NAME = "allowUrlInLocalInfile";
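// always appended to the JDBC URL so that local-infile loading and client-side deserialization stay disabled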
private static final String APPEND_PARAMS = "allowLoadLocalInfile=false&autoDeserialize=false&allowLocalInfile=false&allowUrlInLocalInfile=false";
@Override
public BaseDataSourceParamDTO createDatasourceParamDTO(String connectionJson) {
MysqlConnectionParam connectionParams = (MysqlConnectionParam) createConnectionParams(connectionJson);
MysqlDatasourceParamDTO mysqlDatasourceParamDTO = new MysqlDatasourceParamDTO();
mysqlDatasourceParamDTO.setUserName(connectionParams.getUser());
mysqlDatasourceParamDTO.setDatabase(connectionParams.getDatabase());
mysqlDatasourceParamDTO.setOther(parseOther(connectionParams.getOther()));
String address = connectionParams.getAddress();
String[] hostSeparator = address.split(Constants.DOUBLE_SLASH);
String[] hostPortArray = hostSeparator[hostSeparator.length - 1].split(Constants.COMMA);
mysqlDatasourceParamDTO.setPort(Integer.parseInt(hostPortArray[0].split(Constants.COLON)[1]));
mysqlDatasourceParamDTO.setHost(hostPortArray[0].split(Constants.COLON)[0]);
return mysqlDatasourceParamDTO;
}
@Override
public BaseConnectionParam createConnectionParams(BaseDataSourceParamDTO dataSourceParam) {
MysqlDatasourceParamDTO mysqlDatasourceParam = (MysqlDatasourceParamDTO) dataSourceParam;
String address = String.format("%s%s:%s", Constants.JDBC_MYSQL, mysqlDatasourceParam.getHost(), mysqlDatasourceParam.getPort());
String jdbcUrl = String.format("%s/%s", address, mysqlDatasourceParam.getDatabase());
MysqlConnectionParam mysqlConnectionParam = new MysqlConnectionParam();
mysqlConnectionParam.setJdbcUrl(jdbcUrl);
mysqlConnectionParam.setDatabase(mysqlDatasourceParam.getDatabase());
mysqlConnectionParam.setAddress(address);
mysqlConnectionParam.setUser(mysqlDatasourceParam.getUserName());
mysqlConnectionParam.setPassword(PasswordUtils.encodePassword(mysqlDatasourceParam.getPassword()));
mysqlConnectionParam.setDriverClassName(getDatasourceDriver());
mysqlConnectionParam.setValidationQuery(getValidationQuery());
mysqlConnectionParam.setOther(transformOther(mysqlDatasourceParam.getOther()));
mysqlConnectionParam.setProps(mysqlDatasourceParam.getOther());
return mysqlConnectionParam;
}
@Override
public ConnectionParam createConnectionParams(String connectionJson) {
return JSONUtils.parseObject(connectionJson, MysqlConnectionParam.class);
}
@Override
public String getDatasourceDriver() {
return Constants.COM_MYSQL_JDBC_DRIVER;
}
@Override
public String getValidationQuery() {
return Constants.MYSQL_VALIDATION_QUERY;
}
@Override
public String getJdbcUrl(ConnectionParam connectionParam) {
MysqlConnectionParam mysqlConnectionParam = (MysqlConnectionParam) connectionParam;
String jdbcUrl = mysqlConnectionParam.getJdbcUrl();
if (!StringUtils.isEmpty(mysqlConnectionParam.getOther())) {
return String.format("%s?%s&%s", jdbcUrl, mysqlConnectionParam.getOther(), APPEND_PARAMS);
}
return String.format("%s?%s", jdbcUrl, APPEND_PARAMS);
}
@Override
public Connection getConnection(ConnectionParam connectionParam) throws ClassNotFoundException, SQLException {
MysqlConnectionParam mysqlConnectionParam = (MysqlConnectionParam) connectionParam;
Class.forName(getDatasourceDriver());
String user = mysqlConnectionParam.getUser();
if (user.contains(AUTO_DESERIALIZE)) {
logger.warn("sensitive param : {} in username field is filtered", AUTO_DESERIALIZE);
user = user.replace(AUTO_DESERIALIZE, "");
}
String password = PasswordUtils.decodePassword(mysqlConnectionParam.getPassword());
if (password.contains(AUTO_DESERIALIZE)) {
logger.warn("sensitive param : {} in password field is filtered", AUTO_DESERIALIZE);
password = password.replace(AUTO_DESERIALIZE, "");
}
return DriverManager.getConnection(getJdbcUrl(connectionParam), user, password);
}
@Override
public DbType getDbType() {
return DbType.MYSQL;
}
private String transformOther(Map<String, String> paramMap) {
if (MapUtils.isEmpty(paramMap)) {
return null;
}
Map<String, String> otherMap = new HashMap<>();
paramMap.forEach((k, v) -> {
if (!checkKeyIsLegitimate(k)) {
return;
}
otherMap.put(k, v);
});
if (MapUtils.isEmpty(otherMap)) {
return null;
}
StringBuilder stringBuilder = new StringBuilder();
otherMap.forEach((key, value) -> stringBuilder.append(String.format("%s=%s&", key, value)));
return stringBuilder.toString();
}
private static boolean checkKeyIsLegitimate(String key) {
return !key.contains(ALLOW_LOAD_LOCAL_IN_FILE_NAME)
&& !key.contains(AUTO_DESERIALIZE)
&& !key.contains(ALLOW_LOCAL_IN_FILE_NAME)
&& !key.contains(ALLOW_URL_IN_LOCAL_IN_FILE_NAME);
}
private Map<String, String> parseOther(String other) {
if (StringUtils.isEmpty(other)) {
return null;
}
Map<String, String> otherMap = new LinkedHashMap<>();
for (String config : other.split("&")) {
otherMap.put(config.split("=")[0], config.split("=")[1]);
}
return otherMap;
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Since we have upgraded the MySQL connector to 8.0.15 in #6484, it's necessary to use `com.mysql.cj.jdbc.Driver`, because `com.mysql.jdbc.Driver` has been deprecated.
Also note that we may need to modify the current connection parameters, e.g. `useSSL=false`.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
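The other-params handling exercised by the test below reduces to splitting on `&` and `=`; a standalone round-trip sketch of that logic:
```java
import java.util.LinkedHashMap;
import java.util.Map;

public class OtherParamsRoundTrip {
    public static void main(String[] args) {
        String other = "serverTimezone=utc&useSSL=false";
        Map<String, String> parsed = new LinkedHashMap<>();
        for (String config : other.split("&")) {
            String[] kv = config.split("=");
            parsed.put(kv[0], kv[1]);
        }
        System.out.println(parsed);  // {serverTimezone=utc, useSSL=false}
    }
}
```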
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-datasource-plugin/dolphinscheduler-datasource-api/src/test/java/org/apache/dolphinscheduler/plugin/datasource/api/datasource/mysql/MysqlDatasourceProcessorTest.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.plugin.datasource.api.datasource.mysql;
import org.apache.dolphinscheduler.plugin.datasource.api.plugin.DataSourceClientProvider;
import org.apache.dolphinscheduler.plugin.datasource.api.utils.CommonUtils;
import org.apache.dolphinscheduler.plugin.datasource.api.utils.DatasourceUtil;
import org.apache.dolphinscheduler.plugin.datasource.api.utils.PasswordUtils;
import org.apache.dolphinscheduler.spi.enums.DbType;
import org.apache.dolphinscheduler.spi.utils.Constants;
import org.apache.dolphinscheduler.spi.utils.JSONUtils;
import java.sql.DriverManager;
import java.util.HashMap;
import java.util.Map;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mockito;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;
@RunWith(PowerMockRunner.class)
@PrepareForTest({Class.class, DriverManager.class, DatasourceUtil.class, CommonUtils.class, DataSourceClientProvider.class, PasswordUtils.class})
public class MysqlDatasourceProcessorTest {
private MysqlDatasourceProcessor mysqlDatasourceProcessor = new MysqlDatasourceProcessor();
@Test
public void testCreateConnectionParams() {
Map<String, String> props = new HashMap<>();
props.put("serverTimezone", "utc");
MysqlDatasourceParamDTO mysqlDatasourceParamDTO = new MysqlDatasourceParamDTO();
mysqlDatasourceParamDTO.setUserName("root");
mysqlDatasourceParamDTO.setPassword("123456");
mysqlDatasourceParamDTO.setHost("localhost");
mysqlDatasourceParamDTO.setPort(3306);
mysqlDatasourceParamDTO.setDatabase("default");
mysqlDatasourceParamDTO.setOther(props);
PowerMockito.mockStatic(PasswordUtils.class);
PowerMockito.when(PasswordUtils.encodePassword(Mockito.anyString())).thenReturn("test");
MysqlConnectionParam connectionParams = (MysqlConnectionParam) mysqlDatasourceProcessor
.createConnectionParams(mysqlDatasourceParamDTO);
Assert.assertEquals("jdbc:mysql://localhost:3306", connectionParams.getAddress());
Assert.assertEquals("jdbc:mysql://localhost:3306/default", connectionParams.getJdbcUrl());
}
@Test
public void testCreateConnectionParams2() {
String connectionJson = "{\"user\":\"root\",\"password\":\"123456\",\"address\":\"jdbc:mysql://localhost:3306\""
+ ",\"database\":\"default\",\"jdbcUrl\":\"jdbc:mysql://localhost:3306/default\"}";
MysqlConnectionParam connectionParams = (MysqlConnectionParam) mysqlDatasourceProcessor
.createConnectionParams(connectionJson);
Assert.assertNotNull(connectionJson);
Assert.assertEquals("root", connectionParams.getUser());
}
@Test
public void testGetDatasourceDriver() {
Assert.assertEquals(Constants.COM_MYSQL_JDBC_DRIVER, mysqlDatasourceProcessor.getDatasourceDriver());
}
@Test
public void testGetJdbcUrl() {
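        // the processor always appends the hard-coded safety parameters (APPEND_PARAMS)
        // that disable local-infile loading and deserialization, as asserted below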
MysqlConnectionParam mysqlConnectionParam = new MysqlConnectionParam();
mysqlConnectionParam.setJdbcUrl("jdbc:mysql://localhost:3306/default");
Assert.assertEquals("jdbc:mysql://localhost:3306/default?allowLoadLocalInfile=false&autoDeserialize=false&allowLocalInfile=false&allowUrlInLocalInfile=false",
mysqlDatasourceProcessor.getJdbcUrl(mysqlConnectionParam));
}
@Test
public void testGetDbType() {
Assert.assertEquals(DbType.MYSQL, mysqlDatasourceProcessor.getDbType());
}
@Test
public void testGetValidationQuery() {
Assert.assertEquals(Constants.MYSQL_VALIDATION_QUERY, mysqlDatasourceProcessor.getValidationQuery());
}
@Test
public void testGetDatasourceUniqueId() {
MysqlConnectionParam mysqlConnectionParam = new MysqlConnectionParam();
mysqlConnectionParam.setJdbcUrl("jdbc:mysql://localhost:3306/default");
mysqlConnectionParam.setUser("root");
Assert.assertEquals("mysql@root@jdbc:mysql://localhost:3306/default", mysqlDatasourceProcessor.getDatasourceUniqueId(mysqlConnectionParam, DbType.MYSQL));
}
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Since we have upgraded the MySQL connector to 8.0.15 in #6484, it's necessary to use `com.mysql.cj.jdbc.Driver`, because `com.mysql.jdbc.Driver` has been deprecated.
Also note whether we need to modify the current connection parameters, e.g. `useSSL=false`.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-spi/src/main/java/org/apache/dolphinscheduler/spi/task/TaskConstants.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.spi.task;
public class TaskConstants {
private TaskConstants() {
throw new IllegalStateException("Utility class");
}
public static final String APPLICATION_REGEX = "application_\\d+_\\d+";
/**
* string false
*/
public static final String STRING_FALSE = "false";
/**
* exit code kill
*/
public static final int EXIT_CODE_KILL = 137;
public static final String PID = "pid";
/**
* comma ,
*/
public static final String COMMA = ",";
/**
* slash /
*/
public static final String SLASH = "/";
/**
* COLON :
*/
public static final String COLON = ":";
/**
* SPACE " "
*/
public static final String SPACE = " ";
/**
* SINGLE_SLASH /
*/
public static final String SINGLE_SLASH = "/";
/**
* DOUBLE_SLASH //
*/
public static final String DOUBLE_SLASH = "//";
/**
* SINGLE_QUOTES "'"
*/
public static final String SINGLE_QUOTES = "'";
/**
* DOUBLE_QUOTES "\""
*/
public static final String DOUBLE_QUOTES = "\"";
/**
* SEMICOLON ;
*/
public static final String SEMICOLON = ";";
/**
* EQUAL SIGN
*/
public static final String EQUAL_SIGN = "=";
/**
* AT SIGN
*/
public static final String AT_SIGN = "@";
/**
* sleep time
*/
public static final int SLEEP_TIME_MILLIS = 1000;
/**
* exit code failure
*/
public static final int EXIT_CODE_FAILURE = -1;
/**
* exit code success
*/
public static final int EXIT_CODE_SUCCESS = 0;
public static final String SH = "sh";
/**
 * default log cache rows num, output when the number is reached
*/
public static final int DEFAULT_LOG_ROWS_NUM = 4 * 16;
/**
 * log flush interval, output when the interval is reached
*/
public static final int DEFAULT_LOG_FLUSH_INTERVAL = 1000;
/**
 * pstree, get pid and sub pids
*/
public static final String PSTREE = "pstree";
public static final String RWXR_XR_X = "rwxr-xr-x";
/**
* task log info format
*/
public static final String TASK_LOG_INFO_FORMAT = "TaskLogInfo-%s";
/**
* date format of yyyyMMdd
*/
public static final String PARAMETER_FORMAT_DATE = "yyyyMMdd";
/**
* date format of yyyyMMddHHmmss
*/
public static final String PARAMETER_FORMAT_TIME = "yyyyMMddHHmmss";
/**
* new
* schedule time
*/
public static final String PARAMETER_SHECDULE_TIME = "schedule.time";
/**
* system date(yyyyMMddHHmmss)
*/
public static final String PARAMETER_DATETIME = "system.datetime";
/**
* system date(yyyymmdd) today
*/
public static final String PARAMETER_CURRENT_DATE = "system.biz.curdate";
/**
* system date(yyyymmdd) yesterday
*/
public static final String PARAMETER_BUSINESS_DATE = "system.biz.date";
/**
* the absolute path of current executing task
*/
public static final String PARAMETER_TASK_EXECUTE_PATH = "system.task.execute.path";
/**
* the instance id of current task
*/
public static final String PARAMETER_TASK_INSTANCE_ID = "system.task.instance.id";
/**
* month_begin
*/
public static final String MONTH_BEGIN = "month_begin";
/**
* add_months
*/
public static final String ADD_MONTHS = "add_months";
/**
* month_end
*/
public static final String MONTH_END = "month_end";
/**
* week_begin
*/
public static final String WEEK_BEGIN = "week_begin";
/**
* week_end
*/
public static final String WEEK_END = "week_end";
/**
* timestamp
*/
public static final String TIMESTAMP = "timestamp";
public static final char SUBTRACT_CHAR = '-';
public static final char ADD_CHAR = '+';
public static final char MULTIPLY_CHAR = '*';
public static final char DIVISION_CHAR = '/';
public static final char LEFT_BRACE_CHAR = '(';
public static final char RIGHT_BRACE_CHAR = ')';
public static final String ADD_STRING = "+";
public static final String MULTIPLY_STRING = "*";
public static final String DIVISION_STRING = "/";
public static final String LEFT_BRACE_STRING = "(";
public static final char P = 'P';
public static final char N = 'N';
public static final String SUBTRACT_STRING = "-";
public static final String GLOBAL_PARAMS = "globalParams";
public static final String LOCAL_PARAMS = "localParams";
public static final String LOCAL_PARAMS_LIST = "localParamsList";
public static final String SUBPROCESS_INSTANCE_ID = "subProcessInstanceId";
public static final String PROCESS_INSTANCE_STATE = "processInstanceState";
public static final String PARENT_WORKFLOW_INSTANCE = "parentWorkflowInstance";
public static final String CONDITION_RESULT = "conditionResult";
public static final String SWITCH_RESULT = "switchResult";
public static final String DEPENDENCE = "dependence";
public static final String TASK_TYPE = "taskType";
public static final String TASK_LIST = "taskList";
public static final String QUEUE = "queue";
public static final String QUEUE_NAME = "queueName";
public static final int LOG_QUERY_SKIP_LINE_NUMBER = 0;
public static final int LOG_QUERY_LIMIT = 4096;
/**
* default display rows
*/
public static final int DEFAULT_DISPLAY_ROWS = 10;
/**
* jar
*/
public static final String JAR = "jar";
/**
* hadoop
*/
public static final String HADOOP = "hadoop";
/**
* -D <property>=<value>
*/
public static final String D = "-D";
/**
* jdbc url
*/
public static final String JDBC_MYSQL = "jdbc:mysql://";
public static final String JDBC_POSTGRESQL = "jdbc:postgresql://";
public static final String JDBC_HIVE_2 = "jdbc:hive2://";
public static final String JDBC_CLICKHOUSE = "jdbc:clickhouse://";
public static final String JDBC_ORACLE_SID = "jdbc:oracle:thin:@";
public static final String JDBC_ORACLE_SERVICE_NAME = "jdbc:oracle:thin:@//";
public static final String JDBC_SQLSERVER = "jdbc:sqlserver://";
public static final String JDBC_DB2 = "jdbc:db2://";
public static final String JDBC_PRESTO = "jdbc:presto://";
/**
* driver
*/
public static final String ORG_POSTGRESQL_DRIVER = "org.postgresql.Driver";
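    // NOTE: com.mysql.jdbc.Driver is deprecated as of MySQL Connector/J 8.x;
    // its replacement is com.mysql.cj.jdbc.Driver (see issue #6505 above).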
public static final String COM_MYSQL_JDBC_DRIVER = "com.mysql.jdbc.Driver";
public static final String ORG_APACHE_HIVE_JDBC_HIVE_DRIVER = "org.apache.hive.jdbc.HiveDriver";
public static final String COM_CLICKHOUSE_JDBC_DRIVER = "ru.yandex.clickhouse.ClickHouseDriver";
public static final String COM_ORACLE_JDBC_DRIVER = "oracle.jdbc.driver.OracleDriver";
public static final String COM_SQLSERVER_JDBC_DRIVER = "com.microsoft.sqlserver.jdbc.SQLServerDriver";
public static final String COM_DB2_JDBC_DRIVER = "com.ibm.db2.jcc.DB2Driver";
public static final String COM_PRESTO_JDBC_DRIVER = "com.facebook.presto.jdbc.PrestoDriver";
/**
* datasource encryption salt
*/
public static final String DATASOURCE_ENCRYPTION_SALT_DEFAULT = "!@#$%^&*";
public static final String DATASOURCE_ENCRYPTION_ENABLE = "datasource.encryption.enable";
public static final String DATASOURCE_ENCRYPTION_SALT = "datasource.encryption.salt";
/**
* resource storage type
*/
public static final String RESOURCE_STORAGE_TYPE = "resource.storage.type";
/**
* kerberos
*/
public static final String KERBEROS = "kerberos";
/**
* kerberos expire time
*/
public static final String KERBEROS_EXPIRE_TIME = "kerberos.expire.time";
/**
* java.security.krb5.conf
*/
public static final String JAVA_SECURITY_KRB5_CONF = "java.security.krb5.conf";
/**
* java.security.krb5.conf.path
*/
public static final String JAVA_SECURITY_KRB5_CONF_PATH = "java.security.krb5.conf.path";
/**
* loginUserFromKeytab user
*/
public static final String LOGIN_USER_KEY_TAB_USERNAME = "login.user.keytab.username";
/**
* loginUserFromKeytab path
*/
public static final String LOGIN_USER_KEY_TAB_PATH = "login.user.keytab.path";
/**
* hadoop.security.authentication
*/
public static final String HADOOP_SECURITY_AUTHENTICATION = "hadoop.security.authentication";
/**
* hadoop.security.authentication
*/
public static final String HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE = "hadoop.security.authentication.startup.state";
/**
* Task Logger Thread's name
*/
public static final String TASK_LOGGER_THREAD_NAME = "TaskLogInfo";
/**
* hdfs/s3 configuration
* resource.upload.path
*/
public static final String RESOURCE_UPLOAD_PATH = "resource.upload.path";
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,505 | [Feature][Dao] Upgrade com.mysql.jdbc.Driver | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Since we have upgraded the MySQL connector to 8.0.15 in #6484, it's necessary to use `com.mysql.cj.jdbc.Driver`, because `com.mysql.jdbc.Driver` has been deprecated.
Also note whether we need to modify the current connection parameters, e.g. `useSSL=false`.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6505 | https://github.com/apache/dolphinscheduler/pull/6708 | 653eae24195957b01d1a911aada020372d1742e6 | 861aaaf9712ec7141417a270710a7941438245d9 | "2021-10-12T06:49:53Z" | java | "2021-11-18T00:39:11Z" | dolphinscheduler-spi/src/main/java/org/apache/dolphinscheduler/spi/utils/Constants.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.spi.utils;
/**
* constants
*/
public class Constants {
private Constants() {
throw new IllegalStateException("Constants class");
}
/** alert plugin param field string **/
public static final String STRING_PLUGIN_PARAM_FIELD = "field";
/** alert plugin param name string **/
public static final String STRING_PLUGIN_PARAM_NAME = "name";
/** alert plugin param props string **/
public static final String STRING_PLUGIN_PARAM_PROPS = "props";
/** alert plugin param type string **/
public static final String STRING_PLUGIN_PARAM_TYPE = "type";
/** alert plugin param title string **/
public static final String STRING_PLUGIN_PARAM_TITLE = "title";
/** alert plugin param value string **/
public static final String STRING_PLUGIN_PARAM_VALUE = "value";
/** alert plugin param validate string **/
public static final String STRING_PLUGIN_PARAM_VALIDATE = "validate";
/** alert plugin param options string **/
public static final String STRING_PLUGIN_PARAM_OPTIONS = "options";
/** string true */
public static final String STRING_TRUE = "true";
/** string false */
public static final String STRING_FALSE = "false";
/** string yes */
public static final String STRING_YES = "YES";
/** string no */
public static final String STRING_NO = "NO";
/**
* common properties path
*/
public static final String COMMON_PROPERTIES_PATH = "/common.properties";
/**
* date format of yyyy-MM-dd HH:mm:ss
*/
public static final String YYYY_MM_DD_HH_MM_SS = "yyyy-MM-dd HH:mm:ss";
/**
* date format of yyyyMMddHHmmss
*/
public static final String YYYYMMDDHHMMSS = "yyyyMMddHHmmss";
/**
* date format of yyyyMMddHHmmssSSS
*/
public static final String YYYYMMDDHHMMSSSSS = "yyyyMMddHHmmssSSS";
public static final String SPRING_DATASOURCE_MIN_IDLE = "spring.datasource.minIdle";
public static final String SPRING_DATASOURCE_MAX_ACTIVE = "spring.datasource.maxActive";
public static final String SPRING_DATASOURCE_TEST_ON_BORROW = "spring.datasource.testOnBorrow";
/**
* java.security.krb5.conf
*/
public static final String JAVA_SECURITY_KRB5_CONF = "java.security.krb5.conf";
/**
* java.security.krb5.conf.path
*/
public static final String JAVA_SECURITY_KRB5_CONF_PATH = "java.security.krb5.conf.path";
/**
* hadoop.security.authentication
*/
public static final String HADOOP_SECURITY_AUTHENTICATION = "hadoop.security.authentication";
/**
* hadoop.security.authentication
*/
public static final String HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE = "hadoop.security.authentication.startup.state";
/**
* loginUserFromKeytab user
*/
public static final String LOGIN_USER_KEY_TAB_USERNAME = "login.user.keytab.username";
/**
* loginUserFromKeytab path
*/
public static final String LOGIN_USER_KEY_TAB_PATH = "login.user.keytab.path";
/**
* resource storage type
*/
public static final String RESOURCE_STORAGE_TYPE = "resource.storage.type";
/**
* kerberos
*/
public static final String KERBEROS = "kerberos";
/**
* driver
*/
public static final String ORG_POSTGRESQL_DRIVER = "org.postgresql.Driver";
public static final String COM_MYSQL_JDBC_DRIVER = "com.mysql.jdbc.Driver";
public static final String ORG_APACHE_HIVE_JDBC_HIVE_DRIVER = "org.apache.hive.jdbc.HiveDriver";
public static final String COM_CLICKHOUSE_JDBC_DRIVER = "ru.yandex.clickhouse.ClickHouseDriver";
public static final String COM_ORACLE_JDBC_DRIVER = "oracle.jdbc.OracleDriver";
public static final String COM_SQLSERVER_JDBC_DRIVER = "com.microsoft.sqlserver.jdbc.SQLServerDriver";
public static final String COM_DB2_JDBC_DRIVER = "com.ibm.db2.jcc.DB2Driver";
public static final String COM_PRESTO_JDBC_DRIVER = "com.facebook.presto.jdbc.PrestoDriver";
/**
* validation Query
*/
public static final String POSTGRESQL_VALIDATION_QUERY = "select version()";
public static final String MYSQL_VALIDATION_QUERY = "select 1";
public static final String HIVE_VALIDATION_QUERY = "select 1";
public static final String CLICKHOUSE_VALIDATION_QUERY = "select 1";
public static final String ORACLE_VALIDATION_QUERY = "select 1 from dual";
public static final String SQLSERVER_VALIDATION_QUERY = "select 1";
public static final String DB2_VALIDATION_QUERY = "select 1 from sysibm.sysdummy1";
public static final String PRESTO_VALIDATION_QUERY = "select 1";
/**
* jdbc url
*/
public static final String JDBC_MYSQL = "jdbc:mysql://";
public static final String JDBC_POSTGRESQL = "jdbc:postgresql://";
public static final String JDBC_HIVE_2 = "jdbc:hive2://";
public static final String JDBC_CLICKHOUSE = "jdbc:clickhouse://";
public static final String JDBC_ORACLE_SID = "jdbc:oracle:thin:@";
public static final String JDBC_ORACLE_SERVICE_NAME = "jdbc:oracle:thin:@//";
public static final String JDBC_SQLSERVER = "jdbc:sqlserver://";
public static final String JDBC_DB2 = "jdbc:db2://";
public static final String JDBC_PRESTO = "jdbc:presto://";
public static final String ADDRESS = "address";
public static final String DATABASE = "database";
public static final String JDBC_URL = "jdbcUrl";
public static final String PRINCIPAL = "principal";
public static final String OTHER = "other";
public static final String ORACLE_DB_CONNECT_TYPE = "connectType";
public static final String KERBEROS_KRB5_CONF_PATH = "javaSecurityKrb5Conf";
public static final String KERBEROS_KEY_TAB_USERNAME = "loginUserKeytabUsername";
public static final String KERBEROS_KEY_TAB_PATH = "loginUserKeytabPath";
/**
* DOUBLE_SLASH //
*/
public static final String DOUBLE_SLASH = "//";
/**
* comma ,
*/
public static final String COMMA = ",";
/**
* COLON :
*/
public static final String COLON = ":";
/**
* AT SIGN
*/
public static final String AT_SIGN = "@";
/**
* datasource encryption salt
*/
public static final String DATASOURCE_ENCRYPTION_SALT_DEFAULT = "!@#$%^&*";
public static final String DATASOURCE_ENCRYPTION_ENABLE = "datasource.encryption.enable";
public static final String DATASOURCE_ENCRYPTION_SALT = "datasource.encryption.salt";
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,889 | [Improvement][router] Split the routing module to increase subsequent maintainability. | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
At present, all routing configuration is placed in the single `index.js` file, which makes the file too large and hard to search and maintain.
### Use case
Split routing file
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6889 | https://github.com/apache/dolphinscheduler/pull/6892 | 80dcf9b03ec0c37c01e7323a265dc0728012af52 | 00813b0a696bcd50d484670cf191efcb8921648f | "2021-11-17T10:51:40Z" | java | "2021-11-19T01:38:22Z" | dolphinscheduler-ui/src/js/conf/home/router/index.js | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import Vue from 'vue'
import store from '@/conf/home/store'
import localStore from '@/module/util/localStorage'
import i18n from '@/module/i18n/index.js'
import config from '~/external/config'
import Router from 'vue-router'
Vue.use(Router)
const router = new Router({
routes: [
{
path: '/',
name: 'index',
redirect: {
name: 'home'
}
},
{
path: '/home',
name: 'home',
component: resolve => require(['../pages/home/index'], resolve),
meta: {
title: `${i18n.$t('Home')} - DolphinScheduler`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/projects',
name: 'projects',
component: resolve => require(['../pages/projects/index'], resolve),
meta: {
title: `${i18n.$t('Project')}`
},
redirect: {
name: 'projects-list'
},
beforeEnter: (to, from, next) => {
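        // Resolve the project from the code in the URL and keep the Vuex store and
        // localStorage in sync; fall back to the project list if the lookup fails.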
const blacklist = ['projects', 'projects-list']
const { projectCode } = to.params || {}
if (!blacklist.includes(to.name) && projectCode && projectCode !== localStore.getItem('projectCode')) {
store.dispatch('projects/getProjectByCode', projectCode).then(res => {
store.commit('dag/setProjectId', res.id)
store.commit('dag/setProjectCode', res.code)
store.commit('dag/setProjectName', res.name)
localStore.setItem('projectId', res.id)
localStore.setItem('projectCode', res.code)
localStore.setItem('projectName', res.name)
next()
}).catch(e => {
next({ name: 'projects-list' })
})
} else {
next()
}
},
children: [
{
path: '/projects/list',
name: 'projects-list',
component: resolve => require(['../pages/projects/pages/list/index'], resolve),
meta: {
title: `${i18n.$t('Project')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/projects/:projectCode/index',
name: 'projects-index',
component: resolve => require(['../pages/projects/pages/index/index'], resolve),
meta: {
title: `${i18n.$t('Project Home')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/projects/:projectCode/kinship',
name: 'projects-kinship',
component: resolve => require(['../pages/projects/pages/kinship/index'], resolve),
meta: {
title: `${i18n.$t('Kinship')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/projects/:projectCode/definition',
name: 'definition',
component: resolve => require(['../pages/projects/pages/definition/index'], resolve),
meta: {
title: `${i18n.$t('Process definition')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
},
redirect: {
name: 'projects-definition-list'
},
children: [
{
path: '/projects/:projectCode/definition/list',
name: 'projects-definition-list',
component: resolve => require(['../pages/projects/pages/definition/pages/list/index'], resolve),
meta: {
title: `${i18n.$t('Process definition')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/projects/:projectCode/definition/list/:code',
name: 'projects-definition-details',
component: resolve => require(['../pages/projects/pages/definition/pages/details/index'], resolve),
meta: {
title: `${i18n.$t('Process definition details')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/projects/:projectCode/definition/create',
name: 'definition-create',
component: resolve => require(['../pages/projects/pages/definition/pages/create/index'], resolve),
meta: {
title: `${i18n.$t('Create process definition')}`
}
},
{
path: '/projects/:projectCode/definition/tree/:code',
name: 'definition-tree-view-index',
component: resolve => require(['../pages/projects/pages/definition/pages/tree/index'], resolve),
meta: {
title: `${i18n.$t('TreeView')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/projects/:projectCode/definition/list/timing/:code',
name: 'definition-timing-details',
component: resolve => require(['../pages/projects/pages/definition/timing/index'], resolve),
meta: {
title: `${i18n.$t('Scheduled task list')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
}
]
},
{
path: '/projects/:projectCode/instance',
name: 'instance',
component: resolve => require(['../pages/projects/pages/instance/index'], resolve),
meta: {
title: `${i18n.$t('Process Instance')}`
},
redirect: {
name: 'projects-instance-list'
},
children: [
{
path: '/projects/:projectCode/instance/list',
name: 'projects-instance-list',
component: resolve => require(['../pages/projects/pages/instance/pages/list/index'], resolve),
meta: {
title: `${i18n.$t('Process Instance')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/projects/:projectCode/instance/list/:id',
name: 'projects-instance-details',
component: resolve => require(['../pages/projects/pages/instance/pages/details/index'], resolve),
meta: {
title: `${i18n.$t('Process instance details')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/projects/:projectCode/instance/gantt/:id',
name: 'instance-gantt-index',
component: resolve => require(['../pages/projects/pages/instance/pages/gantt/index'], resolve),
meta: {
title: `${i18n.$t('Gantt')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
}
]
},
{
path: '/projects/:projectCode/task-instance',
name: 'task-instance',
component: resolve => require(['../pages/projects/pages/taskInstance'], resolve),
meta: {
title: `${i18n.$t('Task Instance')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/projects/:projectCode/task-definition',
name: 'task-definition',
component: resolve => require(['../pages/projects/pages/taskDefinition/index'], resolve),
meta: {
title: `${i18n.$t('Task Definition')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/projects/:projectCode/task-record',
name: 'task-record',
component: resolve => require(['../pages/projects/pages/taskRecord'], resolve),
meta: {
title: `${i18n.$t('Task record')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/projects/:projectCode/history-task-record',
name: 'history-task-record',
component: resolve => require(['../pages/projects/pages/historyTaskRecord'], resolve),
meta: {
title: `${i18n.$t('History task record')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
}
]
},
{
path: '/resource',
name: 'resource',
component: resolve => require(['../pages/resource/index'], resolve),
redirect: {
name: 'file'
},
meta: {
title: `${i18n.$t('Resources')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
},
children: [
{
path: '/resource/file',
name: 'file',
component: resolve => require(['../pages/resource/pages/file/pages/list/index'], resolve),
meta: {
title: `${i18n.$t('File Manage')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/resource/file/create',
name: 'resource-file-create',
component: resolve => require(['../pages/resource/pages/file/pages/create/index'], resolve),
meta: {
title: `${i18n.$t('Create Resource')}`
}
},
{
path: '/resource/file/createFolder',
name: 'resource-file-createFolder',
component: resolve => require(['../pages/resource/pages/file/pages/createFolder/index'], resolve),
meta: {
title: `${i18n.$t('Create Resource')}`
}
},
{
path: '/resource/file/subFileFolder/:id',
name: 'resource-file-subFileFolder',
component: resolve => require(['../pages/resource/pages/file/pages/subFileFolder/index'], resolve),
meta: {
title: `${i18n.$t('Create Resource')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/resource/file/subFile/:id',
name: 'resource-file-subFile',
component: resolve => require(['../pages/resource/pages/file/pages/subFile/index'], resolve),
meta: {
title: `${i18n.$t('Create Resource')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/resource/file/list/:id',
name: 'resource-file-details',
component: resolve => require(['../pages/resource/pages/file/pages/details/index'], resolve),
meta: {
title: `${i18n.$t('File Details')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/resource/file/subdirectory/:id',
name: 'resource-file-subdirectory',
component: resolve => require(['../pages/resource/pages/file/pages/subdirectory/index'], resolve),
meta: {
title: `${i18n.$t('File Manage')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/resource/file/edit/:id',
name: 'resource-file-edit',
component: resolve => require(['../pages/resource/pages/file/pages/edit/index'], resolve),
meta: {
title: `${i18n.$t('File Details')}`
}
},
{
path: '/resource/udf',
name: 'udf',
component: resolve => require(['../pages/resource/pages/udf/index'], resolve),
meta: {
title: `${i18n.$t('UDF manage')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
},
children: [
{
path: '/resource/udf',
name: 'resource-udf',
component: resolve => require(['../pages/resource/pages/udf/pages/resource/index'], resolve),
meta: {
title: `${i18n.$t('UDF Resources')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/resource/udf/subUdfDirectory/:id',
name: 'resource-udf-subUdfDirectory',
component: resolve => require(['../pages/resource/pages/udf/pages/subUdfDirectory/index'], resolve),
meta: {
title: `${i18n.$t('UDF Resources')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/resource/udf/createUdfFolder',
name: 'resource-udf-createUdfFolder',
component: resolve => require(['../pages/resource/pages/udf/pages/createUdfFolder/index'], resolve),
meta: {
title: `${i18n.$t('Create Resource')}`
}
},
{
path: '/resource/udf/subCreateUdfFolder/:id',
name: 'resource-udf-subCreateUdfFolder',
component: resolve => require(['../pages/resource/pages/udf/pages/subUdfFolder/index'], resolve),
meta: {
title: `${i18n.$t('Create Resource')}`
}
},
{
path: '/resource/func',
name: 'resource-func',
component: resolve => require(['../pages/resource/pages/udf/pages/function/index'], resolve),
meta: {
title: `${i18n.$t('UDF Function')}`
}
}
]
}
]
},
{
path: '/datasource',
name: 'datasource',
component: resolve => require(['../pages/datasource/index'], resolve),
meta: {
title: `${i18n.$t('Datasource')}`
},
redirect: {
name: 'datasource-list'
},
children: [
{
path: '/datasource/list',
name: 'datasource-list',
component: resolve => require(['../pages/datasource/pages/list/index'], resolve),
meta: {
title: `${i18n.$t('Datasource')}`
}
}
]
},
{
path: '/security',
name: 'security',
component: resolve => require(['../pages/security/index'], resolve),
meta: {
title: `${i18n.$t('Security')}`
},
redirect: {
name: 'tenement-manage'
},
children: [
{
path: '/security/tenant',
name: 'tenement-manage',
component: resolve => require(['../pages/security/pages/tenement/index'], resolve),
meta: {
title: `${i18n.$t('Tenant Manage')}`
}
},
{
path: '/security/users',
name: 'users-manage',
component: resolve => require(['../pages/security/pages/users/index'], resolve),
meta: {
title: `${i18n.$t('User Manage')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/security/warning-groups',
name: 'warning-groups-manage',
component: resolve => require(['../pages/security/pages/warningGroups/index'], resolve),
meta: {
title: `${i18n.$t('Warning group manage')}`
}
},
{
path: '/security/warning-instance',
name: 'warning-instance-manage',
component: resolve => require(['../pages/security/pages/warningInstance/index'], resolve),
meta: {
title: `${i18n.$t('Warning instance manage')}`
}
},
{
path: '/security/queue',
name: 'queue-manage',
component: resolve => require(['../pages/security/pages/queue/index'], resolve),
meta: {
title: `${i18n.$t('Queue manage')}`
}
},
{
path: '/security/worker-groups',
name: 'worker-groups-manage',
component: resolve => require(['../pages/security/pages/workerGroups/index'], resolve),
meta: {
title: `${i18n.$t('Worker group manage')}`
}
},
{
path: '/security/environments',
name: 'environment-manage',
component: resolve => require(['../pages/security/pages/environment/index'], resolve),
meta: {
title: `${i18n.$t('Environment manage')}`
}
},
{
path: '/security/token',
name: 'token-manage',
component: resolve => require(['../pages/security/pages/token/index'], resolve),
meta: {
title: `${i18n.$t('Token manage')}`
}
}
]
},
{
path: '/user',
name: 'user',
component: resolve => require(['../pages/user/index'], resolve),
meta: {
title: `${i18n.$t('User Center')}`
},
redirect: {
name: 'account'
},
children: [
{
path: '/user/account',
name: 'account',
component: resolve => require(['../pages/user/pages/account/index'], resolve),
meta: {
title: `${i18n.$t('User Information')}`
}
},
{
path: '/user/password',
name: 'password',
component: resolve => require(['../pages/user/pages/password/index'], resolve),
meta: {
title: `${i18n.$t('Edit password')}`
}
},
{
path: '/user/token',
name: 'token',
component: resolve => require(['../pages/user/pages/token/index'], resolve),
meta: {
title: `${i18n.$t('Token manage')}`
}
}
]
},
{
path: '/monitor',
name: 'monitor',
component: resolve => require(['../pages/monitor/index'], resolve),
meta: {
title: 'monitor'
},
redirect: {
name: 'servers-master'
},
children: [
{
path: '/monitor/servers/master',
name: 'servers-master',
component: resolve => require(['../pages/monitor/pages/servers/master'], resolve),
meta: {
title: `${i18n.$t('Service-Master')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/monitor/servers/worker',
name: 'servers-worker',
component: resolve => require(['../pages/monitor/pages/servers/worker'], resolve),
meta: {
title: `${i18n.$t('Service-Worker')}`,
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/monitor/servers/alert',
name: 'servers-alert',
component: resolve => require(['../pages/monitor/pages/servers/alert'], resolve),
meta: {
title: 'Alert',
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/monitor/servers/rpcserver',
name: 'servers-rpcserver',
component: resolve => require(['../pages/monitor/pages/servers/rpcserver'], resolve),
meta: {
title: 'Rpcserver',
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/monitor/servers/apiserver',
name: 'servers-apiserver',
component: resolve => require(['../pages/monitor/pages/servers/apiserver'], resolve),
meta: {
title: 'Apiserver',
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/monitor/servers/db',
name: 'servers-db',
component: resolve => require(['../pages/monitor/pages/servers/db'], resolve),
meta: {
title: 'DB',
refreshInSwitchedTab: config.refreshInSwitchedTab
}
},
{
path: '/monitor/servers/statistics',
name: 'statistics',
component: resolve => require(['../pages/monitor/pages/servers/statistics'], resolve),
meta: {
title: 'statistics',
refreshInSwitchedTab: config.refreshInSwitchedTab
}
}
]
}
]
})
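// vue-router (>= 3.1) returns a promise from push() that rejects on duplicate
// navigation; swallow that rejection so repeated pushes to the current route
// do not surface as unhandled errors.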
const VueRouterPush = Router.prototype.push
Router.prototype.push = function push (to) {
return VueRouterPush.call(this, to).catch(err => err)
}
router.beforeEach((to, from, next) => {
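  // Remove any tooltip left over from the previous page and sync the document
  // title with the target route before navigating.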
const $body = $('body')
$body.find('.tooltip.fade.top.in').remove()
if (to.meta.title) {
document.title = `${to.meta.title} - DolphinScheduler`
}
next()
})
export default router
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,889 | [Improvement][router] Split the routing module to increase subsequent maintainability. | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
At present, all routing configuration is placed in the single `index.js` file, which makes the file too large and hard to search and maintain.
### Use case
Split routing file
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6889 | https://github.com/apache/dolphinscheduler/pull/6892 | 80dcf9b03ec0c37c01e7323a265dc0728012af52 | 00813b0a696bcd50d484670cf191efcb8921648f | "2021-11-17T10:51:40Z" | java | "2021-11-19T01:38:22Z" | dolphinscheduler-ui/src/js/conf/home/router/module/datasource.js | |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,889 | [Improvement][router] Split the routing module to increase subsequent maintainability. | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
At present, all routing configuration is placed in the single `index.js` file, which makes the file too large and hard to search and maintain.
### Use case
Split routing file
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6889 | https://github.com/apache/dolphinscheduler/pull/6892 | 80dcf9b03ec0c37c01e7323a265dc0728012af52 | 00813b0a696bcd50d484670cf191efcb8921648f | "2021-11-17T10:51:40Z" | java | "2021-11-19T01:38:22Z" | dolphinscheduler-ui/src/js/conf/home/router/module/home.js | |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,889 | [Improvement][router] Split the routing module to increase subsequent maintainability. | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
At present, all routing configuration is placed in the single `index.js` file, which makes the file too large and hard to search and maintain.
### Use case
Split routing file
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6889 | https://github.com/apache/dolphinscheduler/pull/6892 | 80dcf9b03ec0c37c01e7323a265dc0728012af52 | 00813b0a696bcd50d484670cf191efcb8921648f | "2021-11-17T10:51:40Z" | java | "2021-11-19T01:38:22Z" | dolphinscheduler-ui/src/js/conf/home/router/module/monitor.js | |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,889 | [Improvement][router] Split the routing module to increase subsequent maintainability. | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
At present, all routing configuration is placed in the single `index.js` file, which makes the file too large and hard to search and maintain.
### Use case
Split routing file
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6889 | https://github.com/apache/dolphinscheduler/pull/6892 | 80dcf9b03ec0c37c01e7323a265dc0728012af52 | 00813b0a696bcd50d484670cf191efcb8921648f | "2021-11-17T10:51:40Z" | java | "2021-11-19T01:38:22Z" | dolphinscheduler-ui/src/js/conf/home/router/module/projects.js | |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,889 | [Improvement][router] Split the routing module to increase subsequent maintainability. | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
At present, all routing configuration is placed in the single `index.js` file, which makes the file too large and hard to search and maintain.
### Use case
Split routing file
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6889 | https://github.com/apache/dolphinscheduler/pull/6892 | 80dcf9b03ec0c37c01e7323a265dc0728012af52 | 00813b0a696bcd50d484670cf191efcb8921648f | "2021-11-17T10:51:40Z" | java | "2021-11-19T01:38:22Z" | dolphinscheduler-ui/src/js/conf/home/router/module/resource.js | |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,889 | [Improvement][router] Split the routing module to increase subsequent maintainability. | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
At present, all routing configuration is placed in the single `index.js` file, which makes the file too large and hard to search and maintain.
### Use case
Split routing file
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6889 | https://github.com/apache/dolphinscheduler/pull/6892 | 80dcf9b03ec0c37c01e7323a265dc0728012af52 | 00813b0a696bcd50d484670cf191efcb8921648f | "2021-11-17T10:51:40Z" | java | "2021-11-19T01:38:22Z" | dolphinscheduler-ui/src/js/conf/home/router/module/security.js | |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,889 | [Improvement][router] Split the routing module to increase subsequent maintainability. | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
At present, all routing configuration is placed in the single `index.js` file, which makes the file too large and hard to search and maintain.
### Use case
Split routing file
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6889 | https://github.com/apache/dolphinscheduler/pull/6892 | 80dcf9b03ec0c37c01e7323a265dc0728012af52 | 00813b0a696bcd50d484670cf191efcb8921648f | "2021-11-17T10:51:40Z" | java | "2021-11-19T01:38:22Z" | dolphinscheduler-ui/src/js/conf/home/router/module/user.js | |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
### operate
When I run a shell task to test MapReduce in DS (see image below), the DS web log shows: yarn status query failed.
Shell content:
`hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10`
![360截图17520712172535](https://user-images.githubusercontent.com/20452924/138829616-ce15ed7b-6ad1-46b3-be4d-c45025294689.png)
### ds web log
[INFO] 2021-10-26 10:34:28.745 - [taskAppId=TASK-1-6-89]:[115] - create dir success /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.754 - [taskAppId=TASK-1-6-89]:[88] - shell task params {"rawScript":"hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10","localParams":[],"resourceList":[]}
[INFO] 2021-10-26 10:34:28.758 - [taskAppId=TASK-1-6-89]:[154] - raw script : hadoop jar /opt/app/hadoop-2.9.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar pi 10 10
[INFO] 2021-10-26 10:34:28.759 - [taskAppId=TASK-1-6-89]:[155] - task execute path : /exec/process/1/1/6/89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[87] - tenantCode user:root, task dir:1_6_89
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[92] - create command file:/exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.760 - [taskAppId=TASK-1-6-89]:[111] - command : #!/bin/sh
BASEDIR=$(cd `dirname $0`; pwd)
cd $BASEDIR
source /opt/app/dolphinscheduler/conf/env/dolphinscheduler_env.sh
/exec/process/1/1/6/89/1_6_89_node.sh
[INFO] 2021-10-26 10:34:28.764 - [taskAppId=TASK-1-6-89]:[330] - task run command:
sudo -u root sh /exec/process/1/1/6/89/1_6_89.command
[INFO] 2021-10-26 10:34:28.773 - [taskAppId=TASK-1-6-89]:[211] - process start, process id is: 19627
[INFO] 2021-10-26 10:34:29.774 - [taskAppId=TASK-1-6-89]:[138] - -> SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/app/hadoop-2.9.2/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/app/tez/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Number of Maps = 10
Samples per Map = 10
[INFO] 2021-10-26 10:34:31.775 - [taskAppId=TASK-1-6-89]:[138] - -> Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
21/10/26 10:34:31 INFO client.RMProxy: Connecting to ResourceManager at hadoop47/192.168.80.47:8032
[INFO] 2021-10-26 10:34:32.776 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:32 INFO input.FileInputFormat: Total input files to process : 10
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: number of splits:10
21/10/26 10:34:32 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
21/10/26 10:34:32 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1634958933716_0113
21/10/26 10:34:32 INFO impl.YarnClientImpl: Submitted application application_1634958933716_0113
21/10/26 10:34:32 INFO mapreduce.Job: The url to track the job: http://hadoop47:8088/proxy/application_1634958933716_0113/
21/10/26 10:34:32 INFO mapreduce.Job: Running job: job_1634958933716_0113
[INFO] 2021-10-26 10:34:40.785 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:39 INFO mapreduce.Job: Job job_1634958933716_0113 running in uber mode : false
21/10/26 10:34:39 INFO mapreduce.Job: map 0% reduce 0%
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
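In the worker debug log below, the status URL is still printed with its `%s` placeholders (`http://hadoop47:%s/ws/v1/cluster/apps/%s`) and the GET lands on `hadoop47:80` rather than the ResourceManager web port (8088, per the tracking URL above), so the response body is presumably null when `HadoopUtils.getApplicationStatus` dereferences it. A minimal sketch of that chain, with everything except the class names taken from the stack trace assumed:
```java
// Hypothetical reconstruction of the failure chain; not the actual DolphinScheduler code.
String template = "http://hadoop47:%s/ws/v1/cluster/apps/%s"; // printed verbatim in the debug log

// What a filled-in request should look like (port and app id taken from the logs):
String url = String.format(template, "8088", "application_1634958933716_0113");

// Per the stack trace, HttpUtils.get logs the ConnectException and execution
// continues, so a refused connection hands the caller a null body:
String responseContent = HttpUtils.get(url);
if (responseContent == null) {
    // Missing guard: without it, parsing the null body raises the NPE reported
    // at HadoopUtils.getApplicationStatus(HadoopUtils.java:423).
    throw new IllegalStateException("yarn application status query returned no body");
}
```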
### worker debug log
[DEBUG] 2021-10-26 10:34:56.708 org.apache.zookeeper.ClientCnxn:[846] - Reading reply sessionid:0x20015bfe8a400c9, packet:: clientPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 serverPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 finished:false header:: 2933,4 replyHeader:: 2933,17180717039,0 request:: '/dolphinscheduler/nodes/worker/default/192.168.80.49:1234,T response:: #302e332c302e39312c302e35392c312e33372c382e302c302e332c323032312d31302d32362030393a32373a30362c323032312d31302d32362031303a33343a35362c302c34303937,s{17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701}
[DEBUG] 2021-10-26 10:34:56.708 org.apache.dolphinscheduler.service.zk.ZookeeperCachedOperator:[62] - zookeeperListener:org.apache.dolphinscheduler.server.master.registry.ServerNodeManager$WorkerGroupNodeListener triggered
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[396] - processResult: CuratorEventImpl{type=GET_DATA, resultCode=0, path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', name='null', children=null, context=null, stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55], watchedEvent=null, aclList=null, opResults=null}
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[857] - publishEvent: TreeCacheEvent{type=NODE_UPDATED, data=ChildData{path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55]}}
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[DEBUG] 2021-10-26 10:34:58.313 org.apache.zookeeper.ClientCnxn:[745] - Got ping response for sessionid: 0x30015c0a38d009d after 0ms
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[DEBUG] 2021-10-26 10:35:02.715 org.apache.dolphinscheduler.common.utils.HadoopUtils:[211] - yarn application url:http://hadoop47:%s/ws/v1/cluster/apps/%s, applicationId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.common.utils.HttpUtils:[73] - Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
org.apache.http.conn.HttpHostConnectException: Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at org.apache.dolphinscheduler.common.utils.HttpUtils.get(HttpUtils.java:60)
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:420)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:476)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:218)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:394)
at java.net.Socket.connect(Socket.java:606)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:74)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
... 20 common frames omitted
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[147] - task instance id : 89,task final status : FAILURE
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[185] - develop mode is: false
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - exec local path: /exec/process/1/1/6/89 cleared.
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=226
FILE: Number of bytes written=2205654
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=2630
HDFS: Number of bytes written=215
HDFS: Number of read operations=43
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=10
Launched reduce tasks=1
Data-local map tasks=10
Total time spent by all maps in occupied slots (ms)=149819
Total time spent by all reduces in occupied slots (ms)=3113
Total time spent by all map tasks (ms)=149819
Total time spent by all reduce tasks (ms)=3113
Total vcore-milliseconds taken by all map tasks=149819
Total vcore-milliseconds taken by all reduce tasks=3113
Total megabyte-milliseconds taken by all map tasks=153414656
Total megabyte-milliseconds taken by all reduce tasks=3187712
Map-Reduce Framework
Map input records=10
Map output records=20
Map output bytes=180
Map output materialized bytes=280
Input split bytes=1450
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=280
Reduce input records=20
Reduce output records=0
Spilled Records=40
Shuffled Maps =10
Failed Shuffles=0
Merged Map outputs=10
GC time elapsed (ms)=6825
CPU time spent (ms)=4980
Physical memory (bytes) snapshot=3529900032
Virtual memory (bytes) snapshot=22377988096
Total committed heap usage (bytes)=2413297664
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1180
File Output Format Counters
Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### What you expected to happen
The status of yarn application_1634958933716_0113 should always be retrievable;
![360截图16380508205068](https://user-images.githubusercontent.com/20452924/138833771-460534f6-a830-44d1-86aa-5970217eb712.png)
### How to reproduce
Server: KunPeng
OS: CentOS 7
DS release: 1.3.9
Hadoop version: 2.9.2
Yarn HA: False
conf/common.properties:
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
yarn.application.status.address=http://hadoop47:%s/ws/v1/cluster/apps/%s
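A minimal sketch of the suspected failure mode, assuming the 1.3.9 code substitutes the configured port string into the first `%s` of `yarn.application.status.address` (the class name and values below are illustrative):

```java
public class YarnUrlFormatDemo {
    public static void main(String[] args) {
        String template = "http://hadoop47:%s/ws/v1/cluster/apps/%s";
        String port = ""; // resource.manager.httpaddress.port left empty in common.properties
        String url = String.format(template, port, "application_1634958933716_0113");
        // Prints "http://hadoop47:/ws/v1/cluster/apps/application_1634958933716_0113";
        // HTTP clients typically fall back to the default port 80 for such a URL,
        // which matches the "Connect to hadoop47:80 ... Connection refused" error above.
        System.out.println(url);
    }
}
```

Explicitly setting `resource.manager.httpaddress.port=8088` should avoid the malformed URL.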
### Anything else
Sometimes it fails; there is a high probability of hitting this error.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/HadoopUtils.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.common.utils;
import static org.apache.dolphinscheduler.common.Constants.RESOURCE_UPLOAD_PATH;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.enums.ResUploadType;
import org.apache.dolphinscheduler.spi.enums.ResourceType;
import org.apache.dolphinscheduler.common.exception.BaseException;
import org.apache.commons.io.IOUtils;
import org.apache.commons.lang.StringUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.client.cli.RMAdminCLI;
import java.io.BufferedReader;
import java.io.Closeable;
import java.io.File;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.security.PrivilegedExceptionAction;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.fasterxml.jackson.databind.node.ObjectNode;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
/**
* hadoop utils
* single instance
*/
public class HadoopUtils implements Closeable {
private static final Logger logger = LoggerFactory.getLogger(HadoopUtils.class);
private static String hdfsUser = PropertyUtils.getString(Constants.HDFS_ROOT_USER);
public static final String resourceUploadPath = PropertyUtils.getString(RESOURCE_UPLOAD_PATH, "/dolphinscheduler");
public static final String rmHaIds = PropertyUtils.getString(Constants.YARN_RESOURCEMANAGER_HA_RM_IDS);
public static final String appAddress = PropertyUtils.getString(Constants.YARN_APPLICATION_STATUS_ADDRESS);
public static final String jobHistoryAddress = PropertyUtils.getString(Constants.YARN_JOB_HISTORY_STATUS_ADDRESS);
private static final String HADOOP_UTILS_KEY = "HADOOP_UTILS_KEY";
private static final LoadingCache<String, HadoopUtils> cache = CacheBuilder
.newBuilder()
.expireAfterWrite(PropertyUtils.getInt(Constants.KERBEROS_EXPIRE_TIME, 2), TimeUnit.HOURS)
.build(new CacheLoader<String, HadoopUtils>() {
@Override
public HadoopUtils load(String key) throws Exception {
return new HadoopUtils();
}
});
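// NOTE: cache entries expire after the configured Kerberos expire time (default 2 hours),
// so a fresh HadoopUtils instance, and presumably a renewed Kerberos login, is created
// on the next getInstance() call after expiry.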
private static volatile boolean yarnEnabled = false;
private Configuration configuration;
private FileSystem fs;
private HadoopUtils() {
init();
initHdfsPath();
}
public static HadoopUtils getInstance() {
return cache.getUnchecked(HADOOP_UTILS_KEY);
}
/**
* init dolphinscheduler root path in hdfs
*/
private void initHdfsPath() {
Path path = new Path(resourceUploadPath);
try {
if (!fs.exists(path)) {
fs.mkdirs(path);
}
} catch (Exception e) {
logger.error(e.getMessage(), e);
}
}
/**
* init hadoop configuration
*/
private void init() {
try {
configuration = new HdfsConfiguration();
String resourceStorageType = PropertyUtils.getUpperCaseString(Constants.RESOURCE_STORAGE_TYPE);
ResUploadType resUploadType = ResUploadType.valueOf(resourceStorageType);
if (resUploadType == ResUploadType.HDFS) {
if (CommonUtils.loadKerberosConf(configuration)) {
hdfsUser = "";
}
String defaultFS = configuration.get(Constants.FS_DEFAULTFS);
// first read fs.defaultFS from core-site.xml/hdfs-site.xml; if it still points to
// the local file system (Hadoop's default), fall back to the properties file
if (defaultFS.startsWith("file")) {
String defaultFSProp = PropertyUtils.getString(Constants.FS_DEFAULTFS);
if (StringUtils.isNotBlank(defaultFSProp)) {
Map<String, String> fsRelatedProps = PropertyUtils.getPrefixedProperties("fs.");
configuration.set(Constants.FS_DEFAULTFS, defaultFSProp);
fsRelatedProps.forEach((key, value) -> configuration.set(key, value));
} else {
logger.error("property:{} can not to be empty, please set!", Constants.FS_DEFAULTFS);
throw new RuntimeException(
String.format("property: %s can not to be empty, please set!", Constants.FS_DEFAULTFS)
);
}
} else {
logger.info("get property:{} -> {}, from core-site.xml hdfs-site.xml ", Constants.FS_DEFAULTFS, defaultFS);
}
if (StringUtils.isNotEmpty(hdfsUser)) {
UserGroupInformation ugi = UserGroupInformation.createRemoteUser(hdfsUser);
ugi.doAs((PrivilegedExceptionAction<Boolean>) () -> {
fs = FileSystem.get(configuration);
return true;
});
} else {
logger.warn("hdfs.root.user is not set value!");
fs = FileSystem.get(configuration);
}
} else if (resUploadType == ResUploadType.S3) {
System.setProperty(Constants.AWS_S3_V4, Constants.STRING_TRUE);
configuration.set(Constants.FS_DEFAULTFS, PropertyUtils.getString(Constants.FS_DEFAULTFS));
configuration.set(Constants.FS_S3A_ENDPOINT, PropertyUtils.getString(Constants.FS_S3A_ENDPOINT));
configuration.set(Constants.FS_S3A_ACCESS_KEY, PropertyUtils.getString(Constants.FS_S3A_ACCESS_KEY));
configuration.set(Constants.FS_S3A_SECRET_KEY, PropertyUtils.getString(Constants.FS_S3A_SECRET_KEY));
fs = FileSystem.get(configuration);
}
} catch (Exception e) {
logger.error(e.getMessage(), e);
}
}
/**
* @return Configuration
*/
public Configuration getConfiguration() {
return configuration;
}
/**
* @return DefaultFS
*/
public String getDefaultFS() {
return getConfiguration().get(Constants.FS_DEFAULTFS);
}
/**
* get application url
*
* @param applicationId application id
* @return url of application
*/
public String getApplicationUrl(String applicationId) throws Exception {
/**
* if rmHaIds contains xx, it means the resourcemanager is not used;
* otherwise:
* if rmHaIds is empty, a single resourcemanager is enabled;
* if rmHaIds is not empty, resourcemanager HA is enabled
*/
yarnEnabled = true;
String appUrl = StringUtils.isEmpty(rmHaIds) ? appAddress : getAppAddress(appAddress, rmHaIds);
if (StringUtils.isBlank(appUrl)) {
throw new BaseException("yarn application url generation failed");
}
if (logger.isDebugEnabled()) {
logger.debug("yarn application url:{}, applicationId:{}", appUrl, applicationId);
}
String activeResourceManagerPort = String.valueOf(PropertyUtils.getInt(Constants.HADOOP_RESOURCE_MANAGER_HTTPADDRESS_PORT, 8088));
return String.format(appUrl, activeResourceManagerPort, applicationId);
}
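// Illustrative example: with yarn.application.status.address = "http://ds1:%s/ws/v1/cluster/apps/%s"
// and resource.manager.httpaddress.port = 8088, getApplicationUrl("application_123_0001")
// returns "http://ds1:8088/ws/v1/cluster/apps/application_123_0001".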
public String getJobHistoryUrl(String applicationId) {
//eg:application_1587475402360_712719 -> job_1587475402360_712719
String jobId = applicationId.replace("application", "job");
return String.format(jobHistoryAddress, jobId);
}
/**
* cat file on hdfs
*
* @param hdfsFilePath hdfs file path
* @return byte[] byte array
* @throws IOException errors
*/
public byte[] catFile(String hdfsFilePath) throws IOException {
if (StringUtils.isBlank(hdfsFilePath)) {
logger.error("hdfs file path:{} is blank", hdfsFilePath);
return new byte[0];
}
try (FSDataInputStream fsDataInputStream = fs.open(new Path(hdfsFilePath))) {
return IOUtils.toByteArray(fsDataInputStream);
}
}
/**
* cat file on hdfs
*
* @param hdfsFilePath hdfs file path
* @param skipLineNums skip line numbers
* @param limit read how many lines
* @return content of file
* @throws IOException errors
*/
public List<String> catFile(String hdfsFilePath, int skipLineNums, int limit) throws IOException {
if (StringUtils.isBlank(hdfsFilePath)) {
logger.error("hdfs file path:{} is blank", hdfsFilePath);
return Collections.emptyList();
}
try (FSDataInputStream in = fs.open(new Path(hdfsFilePath))) {
BufferedReader br = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8));
Stream<String> stream = br.lines().skip(skipLineNums).limit(limit);
return stream.collect(Collectors.toList());
}
}
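// Illustrative example: catFile("/dolphinscheduler/default/resources/demo.sh", 0, 100)
// skips no lines and returns at most the first 100 lines of the HDFS file.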
/**
* make the given file and all non-existent parents into
* directories. Has the semantics of Unix 'mkdir -p'.
* Existence of the directory hierarchy is not an error.
*
* @param hdfsPath path to create
* @return mkdir result
* @throws IOException errors
*/
public boolean mkdir(String hdfsPath) throws IOException {
return fs.mkdirs(new Path(hdfsPath));
}
/**
* copy files between FileSystems
*
* @param srcPath source hdfs path
* @param dstPath destination hdfs path
* @param deleteSource whether to delete the src
* @param overwrite whether to overwrite an existing file
* @return if success or not
* @throws IOException errors
*/
public boolean copy(String srcPath, String dstPath, boolean deleteSource, boolean overwrite) throws IOException {
return FileUtil.copy(fs, new Path(srcPath), fs, new Path(dstPath), deleteSource, overwrite, fs.getConf());
}
/**
* the src file is on the local disk. Add it to FS at
* the given dst name.
*
* @param srcFile local file
* @param dstHdfsPath destination hdfs path
* @param deleteSource whether to delete the src
* @param overwrite whether to overwrite an existing file
* @return if success or not
* @throws IOException errors
*/
public boolean copyLocalToHdfs(String srcFile, String dstHdfsPath, boolean deleteSource, boolean overwrite) throws IOException {
Path srcPath = new Path(srcFile);
Path dstPath = new Path(dstHdfsPath);
fs.copyFromLocalFile(deleteSource, overwrite, srcPath, dstPath);
return true;
}
/**
* copy hdfs file to local
*
* @param srcHdfsFilePath source hdfs file path
* @param dstFile destination file
* @param deleteSource delete source
* @param overwrite overwrite
* @return result of copy hdfs file to local
* @throws IOException errors
*/
public boolean copyHdfsToLocal(String srcHdfsFilePath, String dstFile, boolean deleteSource, boolean overwrite) throws IOException {
Path srcPath = new Path(srcHdfsFilePath);
File dstPath = new File(dstFile);
if (dstPath.exists()) {
if (dstPath.isFile()) {
if (overwrite) {
Files.delete(dstPath.toPath());
}
} else {
logger.error("destination file must be a file");
}
}
if (!dstPath.getParentFile().exists()) {
dstPath.getParentFile().mkdirs();
}
return FileUtil.copy(fs, srcPath, dstPath, deleteSource, fs.getConf());
}
/**
* delete a file
*
* @param hdfsFilePath the path to delete.
* @param recursive if path is a directory and set to
* true, the directory is deleted else throws an exception. In
* case of a file the recursive can be set to either true or false.
* @return true if delete is successful else false.
* @throws IOException errors
*/
public boolean delete(String hdfsFilePath, boolean recursive) throws IOException {
return fs.delete(new Path(hdfsFilePath), recursive);
}
/**
* check if exists
*
* @param hdfsFilePath source file path
* @return result of exists or not
* @throws IOException errors
*/
public boolean exists(String hdfsFilePath) throws IOException {
return fs.exists(new Path(hdfsFilePath));
}
/**
* Gets a list of files in the directory
*
* @param filePath file path
* @return {@link FileStatus} file status
* @throws Exception errors
*/
public FileStatus[] listFileStatus(String filePath) throws Exception {
try {
return fs.listStatus(new Path(filePath));
} catch (IOException e) {
logger.error("Get file list exception", e);
throw new Exception("Get file list exception", e);
}
}
/**
* Renames Path src to Path dst. Can take place on local fs
* or remote DFS.
*
* @param src path to be renamed
* @param dst new path after rename
* @return true if rename is successful
* @throws IOException on failure
*/
public boolean rename(String src, String dst) throws IOException {
return fs.rename(new Path(src), new Path(dst));
}
/**
* hadoop resourcemanager enabled or not
*
* @return result
*/
public boolean isYarnEnabled() {
return yarnEnabled;
}
/**
* get the state of an application
*
* @param applicationId application id
* @return the application status, or null when the applicationId is empty
*/
public ExecutionStatus getApplicationStatus(String applicationId) throws Exception {
if (StringUtils.isEmpty(applicationId)) {
return null;
}
String result = Constants.FAILED;
String applicationUrl = getApplicationUrl(applicationId);
if (logger.isDebugEnabled()) {
logger.debug("generate yarn application url, applicationUrl={}", applicationUrl);
}
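// Query the active ResourceManager REST API first; if it yields no response,
// fall back to the job history server below.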
String responseContent = PropertyUtils.getBoolean(Constants.HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE, false) ? KerberosHttpClient.get(applicationUrl) : HttpUtils.get(applicationUrl);
if (responseContent != null) {
ObjectNode jsonObject = JSONUtils.parseObject(responseContent);
if (!jsonObject.has("app")) {
return ExecutionStatus.FAILURE;
}
result = jsonObject.path("app").path("finalStatus").asText();
} else {
//may be in job history
String jobHistoryUrl = getJobHistoryUrl(applicationId);
if (logger.isDebugEnabled()) {
logger.debug("generate yarn job history application url, jobHistoryUrl={}", jobHistoryUrl);
}
responseContent = PropertyUtils.getBoolean(Constants.HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE, false) ? KerberosHttpClient.get(jobHistoryUrl) : HttpUtils.get(jobHistoryUrl);
if (null != responseContent) {
ObjectNode jsonObject = JSONUtils.parseObject(responseContent);
if (!jsonObject.has("job")) {
return ExecutionStatus.FAILURE;
}
result = jsonObject.path("job").path("state").asText();
} else {
return ExecutionStatus.FAILURE;
}
}
switch (result) {
case Constants.ACCEPTED:
return ExecutionStatus.SUBMITTED_SUCCESS;
case Constants.SUCCEEDED:
return ExecutionStatus.SUCCESS;
case Constants.NEW:
case Constants.NEW_SAVING:
case Constants.SUBMITTED:
case Constants.FAILED:
return ExecutionStatus.FAILURE;
case Constants.KILLED:
return ExecutionStatus.KILL;
case Constants.RUNNING:
default:
return ExecutionStatus.RUNNING_EXECUTION;
}
}
/**
* get data hdfs path
*
* @return data hdfs path
*/
public static String getHdfsDataBasePath() {
if ("/".equals(resourceUploadPath)) {
// if basepath is configured to /, the generated url may be //default/resources (with extra leading /)
return "";
} else {
return resourceUploadPath;
}
}
/**
* hdfs resource dir
*
* @param tenantCode tenant code
* @param resourceType resource type
* @return hdfs resource dir
*/
public static String getHdfsDir(ResourceType resourceType, String tenantCode) {
String hdfsDir = "";
if (resourceType.equals(ResourceType.FILE)) {
hdfsDir = getHdfsResDir(tenantCode);
} else if (resourceType.equals(ResourceType.UDF)) {
hdfsDir = getHdfsUdfDir(tenantCode);
}
return hdfsDir;
}
/**
* hdfs resource dir
*
* @param tenantCode tenant code
* @return hdfs resource dir
*/
public static String getHdfsResDir(String tenantCode) {
return String.format("%s/resources", getHdfsTenantDir(tenantCode));
}
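// Illustrative example: with resource.upload.path=/dolphinscheduler,
// getHdfsResDir("default") returns "/dolphinscheduler/default/resources".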
/**
* hdfs user dir
*
* @param tenantCode tenant code
* @param userId user id
* @return hdfs resource dir
*/
public static String getHdfsUserDir(String tenantCode, int userId) {
return String.format("%s/home/%d", getHdfsTenantDir(tenantCode), userId);
}
/**
* hdfs udf dir
*
* @param tenantCode tenant code
* @return get udf dir on hdfs
*/
public static String getHdfsUdfDir(String tenantCode) {
return String.format("%s/udfs", getHdfsTenantDir(tenantCode));
}
/**
* get hdfs file name
*
* @param resourceType resource type
* @param tenantCode tenant code
* @param fileName file name
* @return hdfs file name
*/
public static String getHdfsFileName(ResourceType resourceType, String tenantCode, String fileName) {
if (fileName.startsWith("/")) {
fileName = fileName.replaceFirst("/", "");
}
return String.format("%s/%s", getHdfsDir(resourceType, tenantCode), fileName);
}
/**
* get absolute path and name for resource file on hdfs
*
* @param tenantCode tenant code
* @param fileName file name
* @return get absolute path and name for file on hdfs
*/
public static String getHdfsResourceFileName(String tenantCode, String fileName) {
if (fileName.startsWith("/")) {
fileName = fileName.replaceFirst("/", "");
}
return String.format("%s/%s", getHdfsResDir(tenantCode), fileName);
}
/**
* get absolute path and name for udf file on hdfs
*
* @param tenantCode tenant code
* @param fileName file name
* @return get absolute path and name for udf file on hdfs
*/
public static String getHdfsUdfFileName(String tenantCode, String fileName) {
if (fileName.startsWith("/")) {
fileName = fileName.replaceFirst("/", "");
}
return String.format("%s/%s", getHdfsUdfDir(tenantCode), fileName);
}
/**
* @param tenantCode tenant code
* @return file directory of tenants on hdfs
*/
public static String getHdfsTenantDir(String tenantCode) {
return String.format("%s/%s", getHdfsDataBasePath(), tenantCode);
}
/**
* getAppAddress
*
* @param appAddress app address
* @param rmHa resource manager ha
* @return app address
*/
public static String getAppAddress(String appAddress, String rmHa) {
//get active ResourceManager
String activeRM = YarnHAAdminUtils.getActiveRMName(rmHa);
if (StringUtils.isEmpty(activeRM)) {
return null;
}
String[] split1 = appAddress.split(Constants.DOUBLE_SLASH);
if (split1.length != 2) {
return null;
}
String start = split1[0] + Constants.DOUBLE_SLASH;
String[] split2 = split1[1].split(Constants.COLON);
if (split2.length != 2) {
return null;
}
String end = Constants.COLON + split2[1];
return start + activeRM + end;
}
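// Illustrative example: getAppAddress("http://ds1:%s/ws/v1/cluster/apps/%s", "rm1,rm2")
// probes each resourcemanager and, if rm2 reports HA state ACTIVE, returns
// "http://rm2:%s/ws/v1/cluster/apps/%s".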
@Override
public void close() throws IOException {
if (fs != null) {
try {
fs.close();
} catch (IOException e) {
logger.error("Close HadoopUtils instance failed", e);
throw new IOException("Close HadoopUtils instance failed", e);
}
}
}
/**
* yarn ha admin utils
*/
private static final class YarnHAAdminUtils extends RMAdminCLI {
/**
* get active resourcemanager
*/
public static String getActiveRMName(String rmIds) {
String[] rmIdArr = rmIds.split(Constants.COMMA);
int activeResourceManagerPort = PropertyUtils.getInt(Constants.HADOOP_RESOURCE_MANAGER_HTTPADDRESS_PORT, 8088);
String yarnUrl = "http://%s:" + activeResourceManagerPort + "/ws/v1/cluster/info";
try {
/**
* send http get request to rm
*/
for (String rmId : rmIdArr) {
String state = getRMState(String.format(yarnUrl, rmId));
if (Constants.HADOOP_RM_STATE_ACTIVE.equals(state)) {
return rmId;
}
}
} catch (Exception e) {
logger.error("yarn ha application url generation failed, message:{}", e.getMessage());
}
return null;
}
/**
* get ResourceManager state
*/
public static String getRMState(String url) {
String retStr = PropertyUtils.getBoolean(Constants.HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE, false) ? KerberosHttpClient.get(url) : HttpUtils.get(url);
if (StringUtils.isEmpty(retStr)) {
return null;
}
//to json
ObjectNode jsonObject = JSONUtils.parseObject(retStr);
//get ResourceManager state
if (!jsonObject.has("clusterInfo")) {
return null;
}
return jsonObject.get("clusterInfo").path("haState").asText();
}
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed | ### Search before asking
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/utils/PropertyUtils.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.common.utils;
import static org.apache.dolphinscheduler.common.Constants.COMMON_PROPERTIES_PATH;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.spi.enums.ResUploadType;
import org.apache.directory.api.util.Strings;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import java.util.Set;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class PropertyUtils {
private static final Logger logger = LoggerFactory.getLogger(PropertyUtils.class);
private static final Properties properties = new Properties();
private PropertyUtils() {
throw new UnsupportedOperationException("Construct PropertyUtils");
}
static {
loadPropertyFile(COMMON_PROPERTIES_PATH);
}
public static synchronized void loadPropertyFile(String... propertyFiles) {
for (String fileName : propertyFiles) {
try (InputStream fis = PropertyUtils.class.getResourceAsStream(fileName);) {
properties.load(fis);
} catch (IOException e) {
logger.error(e.getMessage(), e);
System.exit(1);
}
}
// Override from system properties
System.getProperties().forEach((k, v) -> {
final String key = String.valueOf(k);
logger.info("Overriding property from system property: {}", key);
PropertyUtils.setValue(key, String.valueOf(v));
});
}
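// NOTE: system properties are applied last, so a JVM flag such as
// -Dresource.manager.httpaddress.port=8088 (illustrative) overrides the value
// loaded from the properties files.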
/**
* @return whether resource upload is enabled at startup
*/
public static boolean getResUploadStartupState() {
String resUploadStartupType = PropertyUtils.getUpperCaseString(Constants.RESOURCE_STORAGE_TYPE);
ResUploadType resUploadType = ResUploadType.valueOf(resUploadStartupType);
return resUploadType == ResUploadType.HDFS || resUploadType == ResUploadType.S3;
}
/**
* get property value
*
* @param key property name
* @return property value
*/
public static String getString(String key) {
return properties.getProperty(key.trim());
}
/**
* get property value with upper case
*
* @param key property name
* @return property value with upper case
*/
public static String getUpperCaseString(String key) {
return properties.getProperty(key.trim()).toUpperCase();
}
/**
* get property value
*
* @param key property name
* @param defaultVal default value
* @return property value
*/
public static String getString(String key, String defaultVal) {
String val = properties.getProperty(key.trim());
return val == null ? defaultVal : val;
}
/**
* get property value
*
* @param key property name
* @return property int value; returns -1 when the key is missing or the value cannot be parsed
*/
public static int getInt(String key) {
return getInt(key, -1);
}
/**
* @param key key
* @param defaultValue default value
* @return property value
*/
public static int getInt(String key, int defaultValue) {
String value = getString(key);
if (value == null) {
return defaultValue;
}
try {
return Integer.parseInt(value);
} catch (NumberFormatException e) {
logger.info(e.getMessage(), e);
}
return defaultValue;
}
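// NOTE: an empty (non-null) value such as "resource.manager.httpaddress.port="
// is not treated as missing; Integer.parseInt("") throws NumberFormatException,
// which is logged, and only then is the defaultValue returned.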
/**
* get property value
*
* @param key property name
* @return property value
*/
public static boolean getBoolean(String key) {
String value = properties.getProperty(key.trim());
if (null != value) {
return Boolean.parseBoolean(value);
}
return false;
}
/**
* get property value
*
* @param key property name
* @param defaultValue default value
* @return property value
*/
public static Boolean getBoolean(String key, boolean defaultValue) {
String value = properties.getProperty(key.trim());
if (null != value) {
return Boolean.parseBoolean(value);
}
return defaultValue;
}
/**
* get property long value
*
* @param key key
* @param defaultVal default value
* @return property value
*/
public static long getLong(String key, long defaultVal) {
String val = getString(key);
return val == null ? defaultVal : Long.parseLong(val);
}
/**
* get all properties with specified prefix, like: fs.
*
* @param prefix prefix to search
* @return all properties with specified prefix
*/
public static Map<String, String> getPrefixedProperties(String prefix) {
Map<String, String> matchedProperties = new HashMap<>();
for (String propName : properties.stringPropertyNames()) {
if (propName.startsWith(prefix)) {
matchedProperties.put(propName, properties.getProperty(propName));
}
}
return matchedProperties;
}
public static void setValue(String key, String value) {
properties.setProperty(key, value);
}
public static Map<String, String> getPropertiesByPrefix(String prefix) {
if (Strings.isEmpty(prefix)) {
return null;
}
Set<Object> keys = properties.keySet();
if (keys.isEmpty()) {
return null;
}
Map<String, String> propertiesMap = new HashMap<>();
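// NOTE: keys are matched with contains(prefix), not startsWith(prefix), and the
// first occurrence of "prefix." is stripped from the returned key.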
keys.forEach(k -> {
if (k.toString().contains(prefix)) {
propertiesMap.put(k.toString().replaceFirst(prefix + ".", ""), properties.getProperty((String) k));
}
});
return propertiesMap;
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,605 | yarn applications: application_1634958933716_0113 , query status failed | ### Search before asking
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
    File System Counters
        FILE: Number of bytes read=226
        FILE: Number of bytes written=2205654
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=2630
        HDFS: Number of bytes written=215
        HDFS: Number of read operations=43
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=3
    Job Counters
        Launched map tasks=10
        Launched reduce tasks=1
        Data-local map tasks=10
        Total time spent by all maps in occupied slots (ms)=149819
        Total time spent by all reduces in occupied slots (ms)=3113
        Total time spent by all map tasks (ms)=149819
        Total time spent by all reduce tasks (ms)=3113
        Total vcore-milliseconds taken by all map tasks=149819
        Total vcore-milliseconds taken by all reduce tasks=3113
        Total megabyte-milliseconds taken by all map tasks=153414656
        Total megabyte-milliseconds taken by all reduce tasks=3187712
    Map-Reduce Framework
        Map input records=10
        Map output records=20
        Map output bytes=180
        Map output materialized bytes=280
        Input split bytes=1450
        Combine input records=0
        Combine output records=0
        Reduce input groups=2
        Reduce shuffle bytes=280
        Reduce input records=20
        Reduce output records=0
        Spilled Records=40
        Shuffled Maps =10
        Failed Shuffles=0
        Merged Map outputs=10
        GC time elapsed (ms)=6825
        CPU time spent (ms)=4980
        Physical memory (bytes) snapshot=3529900032
        Virtual memory (bytes) snapshot=22377988096
        Total committed heap usage (bytes)=2413297664
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=1180
    File Output Format Counters
        Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### worker debug log
[DEBUG] 2021-10-26 10:34:56.708 org.apache.zookeeper.ClientCnxn:[846] - Reading reply sessionid:0x20015bfe8a400c9, packet:: clientPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 serverPath:/dolphinscheduler/nodes/worker/default/192.168.80.49:1234 finished:false header:: 2933,4 replyHeader:: 2933,17180717039,0 request:: '/dolphinscheduler/nodes/worker/default/192.168.80.49:1234,T response:: #302e332c302e39312c302e35392c312e33372c382e302c302e332c323032312d31302d32362030393a32373a30362c323032312d31302d32362031303a33343a35362c302c34303937,s{17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701}
[DEBUG] 2021-10-26 10:34:56.708 org.apache.dolphinscheduler.service.zk.ZookeeperCachedOperator:[62] - zookeeperListener:org.apache.dolphinscheduler.server.master.registry.ServerNodeManager$WorkerGroupNodeListener triggered
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[396] - processResult: CuratorEventImpl{type=GET_DATA, resultCode=0, path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', name='null', children=null, context=null, stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55], watchedEvent=null, aclList=null, opResults=null}
[DEBUG] 2021-10-26 10:34:56.709 org.apache.curator.framework.recipes.cache.TreeCache:[857] - publishEvent: TreeCacheEvent{type=NODE_UPDATED, data=ChildData{path='/dolphinscheduler/nodes/worker/default/192.168.80.49:1234', stat=17180707701,17180717039,1635211626683,1635215696700,407,0,0,144139102061854920,73,0,17180707701
, data=[48, 46, 51, 44, 48, 46, 57, 49, 44, 48, 46, 53, 57, 44, 49, 46, 51, 55, 44, 56, 46, 48, 44, 48, 46, 51, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 48, 57, 58, 50, 55, 58, 48, 54, 44, 50, 48, 50, 49, 45, 49, 48, 45, 50, 54, 32, 49, 48, 58, 51, 52, 58, 53, 54, 44, 48, 44, 52, 48, 57, 55]}}
[INFO] 2021-10-26 10:34:56.789 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:56 INFO mapreduce.Job: map 30% reduce 0%
[INFO] 2021-10-26 10:34:57.790 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:34:57 INFO mapreduce.Job: map 100% reduce 0%
[DEBUG] 2021-10-26 10:34:58.313 org.apache.zookeeper.ClientCnxn:[745] - Got ping response for sessionid: 0x30015c0a38d009d after 0ms
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[445] - find app id: application_1634958933716_0113
[INFO] 2021-10-26 10:35:02.715 - [taskAppId=TASK-1-6-89]:[402] - check yarn application status, appId:application_1634958933716_0113
[DEBUG] 2021-10-26 10:35:02.715 org.apache.dolphinscheduler.common.utils.HadoopUtils:[211] - yarn application url:http://hadoop47:%s/ws/v1/cluster/apps/%s, applicationId:application_1634958933716_0113
[ERROR] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.common.utils.HttpUtils:[73] - Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
org.apache.http.conn.HttpHostConnectException: Connect to hadoop47:80 [hadoop47/192.168.80.47] failed: Connection refused (Connection refused)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:151)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:353)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:380)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at org.apache.dolphinscheduler.common.utils.HttpUtils.get(HttpUtils.java:60)
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:420)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:476)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:218)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:200)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:394)
at java.net.Socket.connect(Socket.java:606)
at org.apache.http.conn.socket.PlainConnectionSocketFactory.connectSocket(PlainConnectionSocketFactory.java:74)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:134)
... 20 common frames omitted
[ERROR] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[418] - yarn applications: application_1634958933716_0113 , query status failed, exception:{}
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.common.utils.HadoopUtils.getApplicationStatus(HadoopUtils.java:423)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.isSuccessOfYarnState(AbstractCommandExecutor.java:404)
at org.apache.dolphinscheduler.server.worker.task.AbstractCommandExecutor.run(AbstractCommandExecutor.java:230)
at org.apache.dolphinscheduler.server.worker.task.shell.ShellTask.handle(ShellTask.java:101)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[INFO] 2021-10-26 10:35:02.720 - [taskAppId=TASK-1-6-89]:[238] - process has exited, execute path:/exec/process/1/1/6/89, processId:19627 ,exitStatusCode:-1 ,processWaitForStatus:true ,processExitValue:0
[INFO] 2021-10-26 10:35:02.720 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[147] - task instance id : 89,task final status : FAILURE
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[185] - develop mode is: false
[INFO] 2021-10-26 10:35:02.721 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[203] - exec local path: /exec/process/1/1/6/89 cleared.
[INFO] 2021-10-26 10:35:02.791 - [taskAppId=TASK-1-6-89]:[138] - -> 21/10/26 10:35:02 INFO mapreduce.Job: map 100% reduce 100%
21/10/26 10:35:02 INFO mapreduce.Job: Job job_1634958933716_0113 completed successfully
21/10/26 10:35:02 INFO mapreduce.Job: Counters: 49
    File System Counters
        FILE: Number of bytes read=226
        FILE: Number of bytes written=2205654
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=2630
        HDFS: Number of bytes written=215
        HDFS: Number of read operations=43
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=3
    Job Counters
        Launched map tasks=10
        Launched reduce tasks=1
        Data-local map tasks=10
        Total time spent by all maps in occupied slots (ms)=149819
        Total time spent by all reduces in occupied slots (ms)=3113
        Total time spent by all map tasks (ms)=149819
        Total time spent by all reduce tasks (ms)=3113
        Total vcore-milliseconds taken by all map tasks=149819
        Total vcore-milliseconds taken by all reduce tasks=3113
        Total megabyte-milliseconds taken by all map tasks=153414656
        Total megabyte-milliseconds taken by all reduce tasks=3187712
    Map-Reduce Framework
        Map input records=10
        Map output records=20
        Map output bytes=180
        Map output materialized bytes=280
        Input split bytes=1450
        Combine input records=0
        Combine output records=0
        Reduce input groups=2
        Reduce shuffle bytes=280
        Reduce input records=20
        Reduce output records=0
        Spilled Records=40
        Shuffled Maps =10
        Failed Shuffles=0
        Merged Map outputs=10
        GC time elapsed (ms)=6825
        CPU time spent (ms)=4980
        Physical memory (bytes) snapshot=3529900032
        Virtual memory (bytes) snapshot=22377988096
        Total committed heap usage (bytes)=2413297664
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=1180
    File Output Format Counters
        Bytes Written=97
Job Finished in 30.695 seconds
Estimated value of Pi is 3.20000000000000000000
### What you expected to happen
Yarn application_1634958933716_0113 status can always be retrieved;
![360截图16380508205068](https://user-images.githubusercontent.com/20452924/138833771-460534f6-a830-44d1-86aa-5970217eb712.png)
### How to reproduce
Server: KunPeng
OS: CentOS 7
DS release: 1.3.9
Hadoop version: 2.9.2
Yarn HA: False
conf/common.properties
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; If resourcemanager is single, you only need to replace ds1 to actual resourcemanager hostname
yarn.application.status.address=http://hadoop47:%s/ws/v1/cluster/apps/%s
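Note: with `resource.manager.httpaddress.port=` left blank, the first `%s` placeholder in `yarn.application.status.address` is never filled in (the worker debug log above shows `yarn application url:http://hadoop47:%s/ws/v1/cluster/apps/%s`), so the request falls back to port 80 and is refused. A minimal sketch of defaulting the port to the documented 8088; the class and method names are illustrative, not the actual fix from the linked PR:
```java
// Hedged sketch, not the merged fix: fill the %s port placeholder with the
// documented default (8088) whenever the port property is blank.
public class YarnUrlBuilder {

    static String applicationUrl(String template, String port, String appId) {
        String effectivePort = (port == null || port.trim().isEmpty()) ? "8088" : port.trim();
        return String.format(template, effectivePort, appId);
    }

    public static void main(String[] args) {
        String template = "http://hadoop47:%s/ws/v1/cluster/apps/%s";
        // prints http://hadoop47:8088/ws/v1/cluster/apps/application_1634958933716_0113
        System.out.println(applicationUrl(template, "", "application_1634958933716_0113"));
    }
}
```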
### Anything else
It sometimes fails; there is a high probability of this error.
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6605 | https://github.com/apache/dolphinscheduler/pull/6661 | 00813b0a696bcd50d484670cf191efcb8921648f | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | "2021-10-26T08:09:42Z" | java | "2021-11-19T02:58:17Z" | dolphinscheduler-spi/src/main/java/org/apache/dolphinscheduler/spi/utils/PropertyUtils.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.spi.utils;
import static org.apache.dolphinscheduler.spi.utils.Constants.COMMON_PROPERTIES_PATH;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class PropertyUtils {
private static final Logger logger = LoggerFactory.getLogger(PropertyUtils.class);
private static final Properties properties = new Properties();
private PropertyUtils() {
throw new UnsupportedOperationException("Construct PropertyUtils");
}
static {
loadPropertyFile(COMMON_PROPERTIES_PATH);
}
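/**
* load the given property files into the shared Properties instance;
* JVM system properties are applied afterwards and override values from the files
*/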
public static synchronized void loadPropertyFile(String... propertyFiles) {
for (String fileName : propertyFiles) {
try (InputStream fis = PropertyUtils.class.getResourceAsStream(fileName)) {
if (fis == null) {
logger.error("property file {} not found in classpath", fileName);
System.exit(1);
}
properties.load(fis);
} catch (IOException e) {
logger.error(e.getMessage(), e);
System.exit(1);
}
}
// Override from system properties
System.getProperties().forEach((k, v) -> {
final String key = String.valueOf(k);
logger.info("Overriding property from system property: {}", key);
PropertyUtils.setValue(key, String.valueOf(v));
});
}
/**
* get property value
*
* @param key property name
* @return property value
*/
public static String getString(String key) {
return properties.getProperty(key.trim());
}
/**
* get property value with upper case
*
* @param key property name
* @return property value with upper case
*/
public static String getUpperCaseString(String key) {
return properties.getProperty(key.trim()).toUpperCase();
}
/**
* get property value
*
* @param key property name
* @param defaultVal default value
* @return property value
*/
public static String getString(String key, String defaultVal) {
String val = properties.getProperty(key.trim());
return val == null ? defaultVal : val;
}
/**
* get property value
*
* @param key property name
* @return get property int value , if key == null, then return -1
*/
public static int getInt(String key) {
return getInt(key, -1);
}
/**
* @param key key
* @param defaultValue default value
* @return property value
*/
public static int getInt(String key, int defaultValue) {
String value = getString(key);
if (value == null) {
return defaultValue;
}
try {
return Integer.parseInt(value);
} catch (NumberFormatException e) {
logger.info(e.getMessage(), e);
}
return defaultValue;
}
/**
* get property value
*
* @param key property name
* @return property value
*/
public static boolean getBoolean(String key) {
String value = properties.getProperty(key.trim());
if (null != value) {
return Boolean.parseBoolean(value);
}
return false;
}
/**
* get property value
*
* @param key property name
* @param defaultValue default value
* @return property value
*/
public static Boolean getBoolean(String key, boolean defaultValue) {
String value = properties.getProperty(key.trim());
if (null != value) {
return Boolean.parseBoolean(value);
}
return defaultValue;
}
public static void setValue(String key, String value) {
properties.setProperty(key, value);
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,898 | [Feature][Process definition] The `select` component of batch copy and batch move adds search function. | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
To facilitate quick selection when there are many workflows, add a search box to the `select` component so that its options can be filtered.
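A minimal sketch against the `el-select` used in `relatedItems.vue` below, assuming Element UI's built-in `filterable` prop is an acceptable way to provide the search box:
```html
<!-- Hedged sketch: `filterable` turns the select input into a search box
     that filters the options by their labels. -->
<el-select v-model="selected" size="small" filterable>
  <el-option
    v-for="item in itemList"
    :key="item.code"
    :value="item.code"
    :label="item.name">
  </el-option>
</el-select>
```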
### Use case
You can quickly locate a workflow through the search form.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6898 | https://github.com/apache/dolphinscheduler/pull/6915 | 802fc498b533f855a19ceebb6a3cf0e9d6c57fea | eb01f2e6170520c5daa032f85aa782b248ffb7d4 | "2021-11-18T06:17:33Z" | java | "2021-11-19T03:17:03Z" | dolphinscheduler-ui/src/js/conf/home/pages/projects/pages/definition/pages/list/_source/relatedItems.vue | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
<template>
<m-popup
ref="popup"
:ok-text="$t('Confirm')"
:nameText="$t('Related items')"
@close="_close"
@ok="_ok">
<template slot="content">
<div class="create-tenement-model">
<m-list-box-f>
<template slot="name"><strong>*</strong>{{$t('Project Name')}}</template>
<template slot="content">
<el-select v-model="selected" size="small">
<el-option
v-for="item in itemList"
:key="item.code"
:value="item.code"
:label="item.name">
</el-option>
</el-select>
</template>
</m-list-box-f>
</div>
</template>
</m-popup>
</template>
<script>
import i18n from '@/module/i18n'
import store from '@/conf/home/store'
import mPopup from '@/module/components/popup/popup'
import mListBoxF from '@/module/components/listBoxF/listBoxF'
export default {
name: 'create-tenement',
data () {
return {
store,
itemList: [],
selected: ''
}
},
props: {
tmp: Boolean
},
methods: {
_close () {
this.$emit('closeRelatedItems')
},
_ok () {
if (this._verification()) {
if (this.tmp) {
this.$emit('onBatchMove', this.selected)
} else {
this.$emit('onBatchCopy', this.selected)
}
}
},
_verification () {
if (!this.selected) {
this.$message.warning(`${i18n.$t('Project name is required')}`)
return false
}
return true
}
},
watch: {
},
created () {
this.store.dispatch('dag/getAllItems', {}).then(res => {
if (res.data.length > 0) {
this.itemList = res.data
}
})
},
mounted () {
},
components: { mPopup, mListBoxF }
}
</script>
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,896 | [Bug] [Task Instance] A new line appears in the name search, making it unavailable. | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
When the viewable area of the browser is narrower than 1370 pixels, the query conditions wrap onto a new line and the name search box is not fully displayed.
![image](https://user-images.githubusercontent.com/19239641/142401188-24eb6bac-882d-49bf-9209-ab3f5b065504.png)
### What you expected to happen
The height of the element with the `conditions-box` class should not be stretched when the conditions wrap.
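A minimal SCSS sketch, assuming a wrapping flex row is an acceptable replacement for the fixed-height absolute layout in `index.scss` below; the selector mirrors the existing stylesheet and this is not necessarily the merged fix:
```scss
// Hedged sketch: allow the conditions row to wrap in narrow viewports
// instead of overflowing a fixed 50px height.
.conditions-model {
  display: flex;
  flex-wrap: wrap;
  align-items: center;
  justify-content: space-between;
  min-height: 50px;
  .left,
  .right {
    position: static; // drop the absolute offsets so wrapped rows stack
  }
}
```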
### How to reproduce
Reduce the width of the visible area of the browser to less than 1370 pixels.
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6896 | https://github.com/apache/dolphinscheduler/pull/6911 | f9a130d73e79e8f4cfd17e077c4bf48137a64b38 | 1be080237bad025651247bd24dc5ad2b24520f8d | "2021-11-18T04:19:52Z" | java | "2021-11-19T03:46:45Z" | dolphinscheduler-ui/src/sass/common/index.scss | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
@import "scrollbar";
@import "table";
.ans-input {
textarea,input {
font-weight: 400;
}
}
.ans-radio {
.ans-radio-inner {
border: 1px solid #B3B3B3;
}
}
.ans-poptip {
min-width: 158px;
}
.ans-checkbox-wrapper .checkbox-label,
.ans-radio-wrapper {
font-weight: normal;
}
a:focus {
text-decoration: none !important;
}
a:hover{
color:#0097e0;
text-decoration: none !important;
}
.CodeMirror-hints {
z-index: 999 !important;
}
.cm-s-mdn-like .CodeMirror-gutters {
border-left:0 !important;
}
.cm-s-mdn-like .CodeMirror-linenumber {
padding-left: 0;
}
body{
background: #EDEEED;
.home-main {
min-height: calc(100vh - 100px);
background: #fff;
margin: 20px;
border-radius: 3px;
>.content-title {
height: 48px;
background: #f8fbfe;
border-radius: 3px 3px 0 0;
span {
font-size: 22px;
padding-left: 18px;
padding-top: 10px;
display: inline-block;
color: #2A455B;
}
}
>.conditions-box {
//background: #f8fbfe;
}
.conditions-model {
height: 50px;
position: relative;
.left {
position: absolute;
left: 12px;
top: 13px;
}
.right {
position: absolute;
right: 8px;
top: 13px;
.form-box {
.list {
float: right;
margin-right: 4px;
}
}
}
}
}
.main-layout-model {
&.dag-screen {
.m-top {
position: unset;
}
.m-bottom {
z-index: 2;
}
.index-model {
width: 100%;
height: 100%;
position: fixed;
top: -20px;
left: -20px;
.dag-model {
height: 100% !important;
}
}
}
}
font:13px/1.5 PingFangSC-Light,Helvetica Neue,Helvetica,Microsoft Yahei,Arial,Hiragino Sans GB,tahoma,SimSun,sans-serif;
-webkit-backface-visibility:hidden;
-webkit-tap-highlight-color:transparent;
-webkit-text-size-adjust:none;
-webkit-touch-callout:none;
-webkit-font-smoothing:antialiased;
-moz-osx-font-smoothing:grayscale
}
// icon disabled state
.iconfont.icon-disabled {
color: #999 !important;
}
.main-layout-box {
padding-left: 200px;
&.no {
padding-left: 0;
}
}
.tooltip-cont-model {
width: 200px;
height: 300px;
background: #fff;
border-radius: 3px;
margin-left: -4px;
margin-top: -4px;
}
.global-loading {
background: #fff;
width: 100%;
height: 100%;
position: fixed;
left: 0;
top: 0;
z-index: 999;
.svg-box {
width: 100px;
height: 66px;
position: absolute;
left:50%;
top: 50%;
margin-left: -50px;
margin-top: -100px;
text-align: center;
.sp1 {
display: block;
font-size: 14px;
color: #999;
padding-top: 4px;
}
}
}
.page-box {
text-align: right;
padding: 20px;
.ans-page {
display: inline-block;
}
}
button[disabled='disabled'] {
color: #bbbec4 !important;
background-color: #f7f7f7 !important;
border-color: #dddee1 !important;
}
.ans-icon-spinner.loading {
text-rendering: auto;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
-webkit-animation: fa-spin .6s infinite linear;
animation: fa-spin .6s infinite linear;
}
.fa-spin {
font-size: 16px;
}
.CodeMirror {
border: 1px solid #DDDDDD !important;
border-radius: 3px;
}
.el-dialog__wrapper .el-dialog {
display: table;
width: auto;
}
article,aside,blockquote,body,button,code,dd,details,div,dl,dt,fieldset,figcaption,figure,footer,form,h1,h2,h3,h4,h5,h6,header,hgroup,hr,input,legend,li,menu,nav,ol,p,pre,section,td,textarea,th,ul {
margin:0;
padding:0
}
h1,h2,h3,p {
font-style:normal
}
table {
border-collapse:collapse;
border-spacing:0
}
button,input,select,textarea {
font-family:inherit;
font-size:inherit
}
ol,ul {
list-style:none
}
img {
border:0
}
input::-ms-clear,input::-ms-reveal {
display:none
}
a,body {
color:#333
}
:focus {
outline:0
}
*,:after,:before {
-webkit-box-sizing:border-box;
box-sizing:border-box
}
[hidden],[v-cloak] {
display:none
}
body::-webkit-scrollbar {
width:4px;
height:4px
}
body::-webkit-scrollbar-track {
background:hsla(0,0%,100%,0);
border-radius:2px;
margin:4px 0
}
body::-webkit-scrollbar-thumb {
background:hsla(0,0%,100%,0);
border-radius:2px;
-webkit-transition:background .2s ease-in-out;
transition:background .2s ease-in-out
}
[class*=scrollbar]:hover::-webkit-scrollbar-thumb {
background:#70bdf7
}
.clearfix,.outer {
zoom:1
}
.clearfix:after,.outer:after {
clear:both;
content:" ";
display:block;
width:0;
height:0;
visibility:hidden
}
.f-show {
display:block
}
.f-hide {
display:none
}
.f-pr {
position:relative
}
.f-fl {
float:left
}
.f-fr {
float:right
}
.float-left {
float:left
}
.float-right {
float:right
}
.f-no-select {
-webkit-user-select:none;
-moz-user-select:none;
-ms-user-select:none;
-o-user-select:none;
user-select:none
}
.f-word-break {
white-space:normal;
word-wrap:break-word;
word-break:break-all
}
.f-text-ellipis {
overflow:hidden;
word-wrap:normal;
white-space:nowrap;
text-overflow:ellipsis
}
.f-link {
display:inline-block;
text-decoration:none;
width:100%;
height:100%
}
.f-wide {
margin:0 auto
}
.f-pre,.f-wide {
text-align:left
}
.f-pre {
overflow:hidden;
white-space:pre-wrap;
word-wrap:break-word;
word-break:break-all
}
.f-cursor-p {
cursor:pointer
}
.f-text-overflow {
white-space:nowrap;
overflow:hidden;
text-overflow:ellipsis
}
.f-f12 {
font-size:12px
}
.f-f14 {
font-size:14px
}
.f-f16 {
font-size:16px
}
.f-f18 {
font-size:18px
}
.vue-treeselect__multi-value {
max-height: 200px;
overflow: auto;
}
.el-dialog__body {
padding: 10px;
}
.el-dialog__header {
.el-dialog__headerbtn {
right: 10px;
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Currently the Master scans one command from the DB and converts it to a process instance at a time; this loop runs on a single thread, which limits the overall speed.
So I think it can be changed to fetch multiple commands each time and handle them in parallel.
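A minimal sketch of that direction, assuming a paged query and a fixed-size pool; the interfaces and names are illustrative, not the merged implementation:
```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;

// Hedged sketch, not the merged implementation: fetch a page of commands in
// one query, then convert them to process instances in parallel.
public class BatchCommandLoop {

    interface CommandSource { List<Integer> fetchCommandIds(int limit); }

    interface CommandHandler { void handle(int commandId); }

    static void runOnce(CommandSource source, CommandHandler handler,
                        ExecutorService pool, int batchSize) throws InterruptedException {
        List<Integer> batch = source.fetchCommandIds(batchSize);
        if (batch.isEmpty()) {
            Thread.sleep(1000L); // no command yet, back off briefly
            return;
        }
        CountDownLatch latch = new CountDownLatch(batch.size());
        for (Integer id : batch) {
            pool.execute(() -> {
                try {
                    handler.handle(id); // build and submit one process instance
                } finally {
                    latch.countDown();
                }
            });
        }
        latch.await(); // finish this batch before fetching the next page
    }
}
```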
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/config/MasterConfig.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.master.config;
import org.apache.dolphinscheduler.server.master.dispatch.host.assign.HostSelector;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.stereotype.Component;
@Component
@EnableConfigurationProperties
@ConfigurationProperties("master")
public class MasterConfig {
private int listenPort;
private int execThreads;
private int execTaskNum;
private int dispatchTaskNumber;
private HostSelector hostSelector;
private int heartbeatInterval;
private int taskCommitRetryTimes;
private int taskCommitInterval;
private int stateWheelInterval;
private double maxCpuLoadAvg;
private double reservedMemory;
private boolean cacheProcessDefinition;
public int getListenPort() {
return listenPort;
}
public void setListenPort(int listenPort) {
this.listenPort = listenPort;
}
public int getExecThreads() {
return execThreads;
}
public void setExecThreads(int execThreads) {
this.execThreads = execThreads;
}
public int getExecTaskNum() {
return execTaskNum;
}
public void setExecTaskNum(int execTaskNum) {
this.execTaskNum = execTaskNum;
}
public int getDispatchTaskNumber() {
return dispatchTaskNumber;
}
public void setDispatchTaskNumber(int dispatchTaskNumber) {
this.dispatchTaskNumber = dispatchTaskNumber;
}
public HostSelector getHostSelector() {
return hostSelector;
}
public void setHostSelector(HostSelector hostSelector) {
this.hostSelector = hostSelector;
}
public int getHeartbeatInterval() {
return heartbeatInterval;
}
public void setHeartbeatInterval(int heartbeatInterval) {
this.heartbeatInterval = heartbeatInterval;
}
public int getTaskCommitRetryTimes() {
return taskCommitRetryTimes;
}
public void setTaskCommitRetryTimes(int taskCommitRetryTimes) {
this.taskCommitRetryTimes = taskCommitRetryTimes;
}
public int getTaskCommitInterval() {
return taskCommitInterval;
}
public void setTaskCommitInterval(int taskCommitInterval) {
this.taskCommitInterval = taskCommitInterval;
}
public int getStateWheelInterval() {
return stateWheelInterval;
}
public void setStateWheelInterval(int stateWheelInterval) {
this.stateWheelInterval = stateWheelInterval;
}
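/**
* a non-positive value defaults to the number of CPU cores * 2
*/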
public double getMaxCpuLoadAvg() {
return maxCpuLoadAvg > 0 ? maxCpuLoadAvg : Runtime.getRuntime().availableProcessors() * 2;
}
public void setMaxCpuLoadAvg(double maxCpuLoadAvg) {
this.maxCpuLoadAvg = maxCpuLoadAvg;
}
public double getReservedMemory() {
return reservedMemory;
}
public void setReservedMemory(double reservedMemory) {
this.reservedMemory = reservedMemory;
}
public boolean isCacheProcessDefinition() {
return cacheProcessDefinition;
}
public void setCacheProcessDefinition(boolean cacheProcessDefinition) {
this.cacheProcessDefinition = cacheProcessDefinition;
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Currently the Master scans one command from the DB and converts it to a process instance at a time; this loop runs on a single thread, which limits the overall speed.
So I think it can be changed to fetch multiple commands each time and handle them in parallel.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/MasterSchedulerService.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.master.runner;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.thread.Stopper;
import org.apache.dolphinscheduler.common.thread.ThreadUtils;
import org.apache.dolphinscheduler.common.utils.NetUtils;
import org.apache.dolphinscheduler.common.utils.OSUtils;
import org.apache.dolphinscheduler.dao.entity.Command;
import org.apache.dolphinscheduler.dao.entity.ProcessDefinition;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.remote.NettyRemotingClient;
import org.apache.dolphinscheduler.remote.config.NettyClientConfig;
import org.apache.dolphinscheduler.server.master.cache.ProcessInstanceExecCacheManager;
import org.apache.dolphinscheduler.server.master.config.MasterConfig;
import org.apache.dolphinscheduler.server.master.dispatch.executor.NettyExecutorManager;
import org.apache.dolphinscheduler.server.master.registry.MasterRegistryClient;
import org.apache.dolphinscheduler.server.master.registry.ServerNodeManager;
import org.apache.dolphinscheduler.service.alert.ProcessAlertManager;
import org.apache.dolphinscheduler.service.process.ProcessService;
import java.util.HashMap;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
/**
* master scheduler thread
*/
@Service
public class MasterSchedulerService extends Thread {
/**
* logger of MasterSchedulerService
*/
private static final Logger logger = LoggerFactory.getLogger(MasterSchedulerService.class);
/**
* dolphinscheduler database interface
*/
@Autowired
private ProcessService processService;
/**
* zookeeper master client
*/
@Autowired
private MasterRegistryClient masterRegistryClient;
/**
* master config
*/
@Autowired
private MasterConfig masterConfig;
/**
* alert manager
*/
@Autowired
private ProcessAlertManager processAlertManager;
/**
* netty remoting client
*/
private NettyRemotingClient nettyRemotingClient;
@Autowired
NettyExecutorManager nettyExecutorManager;
/**
* master exec service
*/
private ThreadPoolExecutor masterExecService;
@Autowired
private ProcessInstanceExecCacheManager processInstanceExecCacheManager;
/**
* process timeout check list
*/
ConcurrentHashMap<Integer, ProcessInstance> processTimeoutCheckList = new ConcurrentHashMap<>();
/**
* task time out checkout list
*/
ConcurrentHashMap<Integer, TaskInstance> taskTimeoutCheckList = new ConcurrentHashMap<>();
/**
* key:code-version
* value: processDefinition
*/
HashMap<String, ProcessDefinition> processDefinitionCacheMaps = new HashMap<>();
private StateWheelExecuteThread stateWheelExecuteThread;
/**
* constructor of MasterSchedulerService
*/
public void init() {
this.masterExecService = (ThreadPoolExecutor) ThreadUtils.newDaemonFixedThreadExecutor("Master-Exec-Thread", masterConfig.getExecThreads());
NettyClientConfig clientConfig = new NettyClientConfig();
this.nettyRemotingClient = new NettyRemotingClient(clientConfig);
stateWheelExecuteThread = new StateWheelExecuteThread(processTimeoutCheckList,
taskTimeoutCheckList,
this.processInstanceExecCacheManager,
masterConfig.getStateWheelInterval() * Constants.SLEEP_TIME_MILLIS);
}
@Override
public synchronized void start() {
super.setName("MasterSchedulerService");
super.start();
this.stateWheelExecuteThread.start();
}
public void close() {
masterExecService.shutdown();
boolean terminated = false;
try {
terminated = masterExecService.awaitTermination(5, TimeUnit.SECONDS);
} catch (InterruptedException ignore) {
Thread.currentThread().interrupt();
}
if (!terminated) {
logger.warn("masterExecService shutdown without terminated, increase await time");
}
nettyRemotingClient.close();
logger.info("master schedule service stopped...");
}
/**
* run of MasterSchedulerService
*/
@Override
public void run() {
logger.info("master scheduler started");
while (Stopper.isRunning()) {
try {
boolean runCheckFlag = OSUtils.checkResource(masterConfig.getMaxCpuLoadAvg(), masterConfig.getReservedMemory());
if (!runCheckFlag) {
Thread.sleep(Constants.SLEEP_TIME_MILLIS);
continue;
}
scheduleProcess();
} catch (Exception e) {
logger.error("master scheduler thread error", e);
}
}
}
/**
* 1. get command by slot
* 2. do not handle the command if the slot is empty
*
* @throws Exception
*/
private void scheduleProcess() throws Exception {
// make sure to scan and delete command table in one transaction
Command command = findOneCommand();
if (command != null) {
logger.info("find one command: id: {}, type: {}", command.getId(), command.getCommandType());
try {
ProcessInstance processInstance = processService.handleCommand(logger,
getLocalAddress(),
command,
processDefinitionCacheMaps);
if (!masterConfig.isCacheProcessDefinition()
&& processDefinitionCacheMaps.size() > 0) {
processDefinitionCacheMaps.clear();
}
if (processInstance != null) {
WorkflowExecuteThread workflowExecuteThread = new WorkflowExecuteThread(
processInstance
, processService
, nettyExecutorManager
, processAlertManager
, masterConfig
, taskTimeoutCheckList);
this.processInstanceExecCacheManager.cache(processInstance.getId(), workflowExecuteThread);
if (processInstance.getTimeout() > 0) {
this.processTimeoutCheckList.put(processInstance.getId(), processInstance);
}
logger.info("handle command end, command {} process {} start...",
command.getId(), processInstance.getId());
masterExecService.execute(workflowExecuteThread);
}
} catch (Exception e) {
logger.error("scan command error ", e);
processService.moveToErrorCommand(command, e.toString());
}
} else {
// no command found, sleep for 1s
Thread.sleep(Constants.SLEEP_TIME_MILLIS);
}
}
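/**
* page through the command table and return the first command whose id maps
* to this master's slot (id % MASTER_SIZE == slot), or null if none matches
*/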
private Command findOneCommand() {
int pageNumber = 0;
Command result = null;
while (Stopper.isRunning()) {
if (ServerNodeManager.MASTER_SIZE == 0) {
return null;
}
List<Command> commandList = processService.findCommandPage(ServerNodeManager.MASTER_SIZE, pageNumber);
if (commandList.size() == 0) {
return null;
}
for (Command command : commandList) {
int slot = ServerNodeManager.getSlot();
if (ServerNodeManager.MASTER_SIZE != 0
&& command.getId() % ServerNodeManager.MASTER_SIZE == slot) {
result = command;
break;
}
}
if (result != null) {
logger.info("find command {}, slot:{} :",
result.getId(),
ServerNodeManager.getSlot());
break;
}
pageNumber += 1;
}
return result;
}
private String getLocalAddress() {
return NetUtils.getAddr(masterConfig.getListenPort());
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Currently the Master scans one command from the DB and converts it to a process instance at a time; this loop runs on a single thread, which limits the overall speed.
So I think it can be changed to fetch multiple commands each time and handle them in parallel.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-server/src/main/resources/application-master.yaml | #
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
spring:
application:
name: master-server
master:
listen-port: 5678
# master execute thread number to limit process instances in parallel
exec-threads: 100
# master execute task number in parallel per process instance
exec-task-num: 20
# master dispatch task number per batch
dispatch-task-number: 3
# master host selector to select a suitable worker, default value: LowerWeight. Optional values include random, round_robin, lower_weight
host-selector: lower_weight
# master heartbeat interval, the unit is second
heartbeat-interval: 10
# master commit task retry times
task-commit-retry-times: 5
# master commit task interval, the unit is millisecond
task-commit-interval: 1000
state-wheel-interval: 5
# master max cpuload avg, only higher than the system cpu load average, master server can schedule. default value -1: the number of cpu cores * 2
max-cpu-load-avg: -1
# master reserved memory, only lower than system available memory, master server can schedule. default value 0.3, the unit is G
reserved-memory: 0.3
# master cache process definition, default: true
cache-process-definition: true
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Currently the Master scans one command from the DB and converts it to a process instance at a time; this loop runs on a single thread, which limits the overall speed.
So I think it can be changed to fetch multiple commands each time and handle them in parallel.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.service.process;
import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_END_DATE;
import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_START_DATE;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_EMPTY_SUB_PROCESS;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_FATHER_PARAMS;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_SUB_PROCESS;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_SUB_PROCESS_DEFINE_CODE;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_SUB_PROCESS_PARENT_INSTANCE_ID;
import static org.apache.dolphinscheduler.common.Constants.LOCAL_PARAMS;
import static java.util.stream.Collectors.toSet;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.AuthorizationType;
import org.apache.dolphinscheduler.common.enums.CommandType;
import org.apache.dolphinscheduler.common.enums.Direct;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.enums.FailureStrategy;
import org.apache.dolphinscheduler.common.enums.Flag;
import org.apache.dolphinscheduler.common.enums.ReleaseState;
import org.apache.dolphinscheduler.spi.enums.ResourceType;
import org.apache.dolphinscheduler.common.enums.TaskDependType;
import org.apache.dolphinscheduler.common.enums.TimeoutFlag;
import org.apache.dolphinscheduler.common.enums.WarningType;
import org.apache.dolphinscheduler.common.graph.DAG;
import org.apache.dolphinscheduler.common.model.DateInterval;
import org.apache.dolphinscheduler.common.model.TaskNode;
import org.apache.dolphinscheduler.common.model.TaskNodeRelation;
import org.apache.dolphinscheduler.common.process.ProcessDag;
import org.apache.dolphinscheduler.common.process.Property;
import org.apache.dolphinscheduler.common.process.ResourceInfo;
import org.apache.dolphinscheduler.common.task.AbstractParameters;
import org.apache.dolphinscheduler.common.task.TaskTimeoutParameter;
import org.apache.dolphinscheduler.common.task.subprocess.SubProcessParameters;
import org.apache.dolphinscheduler.common.utils.DateUtils;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.common.utils.ParameterUtils;
import org.apache.dolphinscheduler.common.utils.SnowFlakeUtils;
import org.apache.dolphinscheduler.common.utils.SnowFlakeUtils.SnowFlakeException;
import org.apache.dolphinscheduler.common.utils.TaskParametersUtils;
import org.apache.dolphinscheduler.dao.entity.Command;
import org.apache.dolphinscheduler.dao.entity.DagData;
import org.apache.dolphinscheduler.dao.entity.DataSource;
import org.apache.dolphinscheduler.dao.entity.Environment;
import org.apache.dolphinscheduler.dao.entity.ErrorCommand;
import org.apache.dolphinscheduler.dao.entity.ProcessDefinition;
import org.apache.dolphinscheduler.dao.entity.ProcessDefinitionLog;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.dao.entity.ProcessInstanceMap;
import org.apache.dolphinscheduler.dao.entity.ProcessTaskRelation;
import org.apache.dolphinscheduler.dao.entity.ProcessTaskRelationLog;
import org.apache.dolphinscheduler.dao.entity.Project;
import org.apache.dolphinscheduler.dao.entity.ProjectUser;
import org.apache.dolphinscheduler.dao.entity.Resource;
import org.apache.dolphinscheduler.dao.entity.Schedule;
import org.apache.dolphinscheduler.dao.entity.TaskDefinition;
import org.apache.dolphinscheduler.dao.entity.TaskDefinitionLog;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.dao.entity.Tenant;
import org.apache.dolphinscheduler.dao.entity.UdfFunc;
import org.apache.dolphinscheduler.dao.entity.User;
import org.apache.dolphinscheduler.dao.mapper.CommandMapper;
import org.apache.dolphinscheduler.dao.mapper.DataSourceMapper;
import org.apache.dolphinscheduler.dao.mapper.EnvironmentMapper;
import org.apache.dolphinscheduler.dao.mapper.ErrorCommandMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionLogMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessInstanceMapMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessInstanceMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessTaskRelationLogMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessTaskRelationMapper;
import org.apache.dolphinscheduler.dao.mapper.ProjectMapper;
import org.apache.dolphinscheduler.dao.mapper.ResourceMapper;
import org.apache.dolphinscheduler.dao.mapper.ResourceUserMapper;
import org.apache.dolphinscheduler.dao.mapper.ScheduleMapper;
import org.apache.dolphinscheduler.dao.mapper.TaskDefinitionLogMapper;
import org.apache.dolphinscheduler.dao.mapper.TaskDefinitionMapper;
import org.apache.dolphinscheduler.dao.mapper.TaskInstanceMapper;
import org.apache.dolphinscheduler.dao.mapper.TenantMapper;
import org.apache.dolphinscheduler.dao.mapper.UdfFuncMapper;
import org.apache.dolphinscheduler.dao.mapper.UserMapper;
import org.apache.dolphinscheduler.dao.utils.DagHelper;
import org.apache.dolphinscheduler.remote.command.StateEventChangeCommand;
import org.apache.dolphinscheduler.remote.processor.StateEventCallbackService;
import org.apache.dolphinscheduler.remote.utils.Host;
import org.apache.dolphinscheduler.service.log.LogClientService;
import org.apache.dolphinscheduler.service.quartz.cron.CronUtils;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang.StringUtils;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Date;
import java.util.EnumMap;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Objects;
import java.util.Set;
import java.util.stream.Collectors;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;
import com.facebook.presto.jdbc.internal.guava.collect.Lists;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.node.ObjectNode;
/**
* process relative dao that some mappers in this.
*/
@Component
public class ProcessService {
private final Logger logger = LoggerFactory.getLogger(getClass());
private final int[] stateArray = new int[]{ExecutionStatus.SUBMITTED_SUCCESS.ordinal(),
ExecutionStatus.RUNNING_EXECUTION.ordinal(),
ExecutionStatus.DELAY_EXECUTION.ordinal(),
ExecutionStatus.READY_PAUSE.ordinal(),
ExecutionStatus.READY_STOP.ordinal()};
@Autowired
private UserMapper userMapper;
@Autowired
private ProcessDefinitionMapper processDefineMapper;
@Autowired
private ProcessDefinitionLogMapper processDefineLogMapper;
@Autowired
private ProcessInstanceMapper processInstanceMapper;
@Autowired
private DataSourceMapper dataSourceMapper;
@Autowired
private ProcessInstanceMapMapper processInstanceMapMapper;
@Autowired
private TaskInstanceMapper taskInstanceMapper;
@Autowired
private CommandMapper commandMapper;
@Autowired
private ScheduleMapper scheduleMapper;
@Autowired
private UdfFuncMapper udfFuncMapper;
@Autowired
private ResourceMapper resourceMapper;
@Autowired
private ResourceUserMapper resourceUserMapper;
@Autowired
private ErrorCommandMapper errorCommandMapper;
@Autowired
private TenantMapper tenantMapper;
@Autowired
private ProjectMapper projectMapper;
@Autowired
private TaskDefinitionMapper taskDefinitionMapper;
@Autowired
private TaskDefinitionLogMapper taskDefinitionLogMapper;
@Autowired
private ProcessTaskRelationMapper processTaskRelationMapper;
@Autowired
private ProcessTaskRelationLogMapper processTaskRelationLogMapper;
@Autowired
StateEventCallbackService stateEventCallbackService;
@Autowired
private EnvironmentMapper environmentMapper;
/**
* handle Command (construct ProcessInstance from Command) , wrapped in transaction
*
* @param logger logger
* @param host host
* @param command found command
* @param processDefinitionCacheMaps process definition cache, key: code-version
* @return process instance
*/
public ProcessInstance handleCommand(Logger logger, String host, Command command, HashMap<String, ProcessDefinition> processDefinitionCacheMaps) {
ProcessInstance processInstance = constructProcessInstance(command, host, processDefinitionCacheMaps);
// cannot construct process instance, return null
if (processInstance == null) {
logger.error("scan command, command parameter is error: {}", command);
moveToErrorCommand(command, "process instance is null");
return null;
}
processInstance.setCommandType(command.getCommandType());
processInstance.addHistoryCmd(command.getCommandType());
// if the processDefinition is serial
ProcessDefinition processDefinition = this.findProcessDefinition(processInstance.getProcessDefinitionCode(), processInstance.getProcessDefinitionVersion());
if (processDefinition.getExecutionType().typeIsSerial()) {
saveSerialProcess(processInstance,processDefinition);
if (processInstance.getState() != ExecutionStatus.SUBMITTED_SUCCESS) {
this.setSubProcessParam(processInstance);
this.commandMapper.deleteById(command.getId());
return null;
}
} else {
saveProcessInstance(processInstance);
}
this.setSubProcessParam(processInstance);
this.commandMapper.deleteById(command.getId());
return processInstance;
}
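/**
* persist a process instance whose definition has a serial execution type
* (serial_wait / serial_discard / serial_priority) and decide, based on the
* currently running instances, whether this instance may run now
*/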
private void saveSerialProcess(ProcessInstance processInstance, ProcessDefinition processDefinition) {
processInstance.setState(ExecutionStatus.SERIAL_WAIT);
saveProcessInstance(processInstance);
// serial wait:
// when querying the running (or waiting) instances, only the highest-priority instance (by id) is taken
if (processDefinition.getExecutionType().typeIsSerialWait()) {
while (true) {
List<ProcessInstance> runningProcessInstances = this.processInstanceMapper.queryByProcessDefineCodeAndStatusAndNextId(processInstance.getProcessDefinitionCode(),
Constants.RUNNING_PROCESS_STATE, processInstance.getId());
if (CollectionUtils.isEmpty(runningProcessInstances)) {
processInstance.setState(ExecutionStatus.SUBMITTED_SUCCESS);
saveProcessInstance(processInstance);
return;
}
ProcessInstance runningProcess = runningProcessInstances.get(0);
if (this.processInstanceMapper.updateNextProcessIdById(processInstance.getId(), runningProcess.getId())) {
return;
}
}
} else if (processDefinition.getExecutionType().typeIsSerialDiscard()) {
List<ProcessInstance> runningProcessInstances = this.processInstanceMapper.queryByProcessDefineCodeAndStatusAndNextId(processInstance.getProcessDefinitionCode(),
Constants.RUNNING_PROCESS_STATE, processInstance.getId());
if (CollectionUtils.isEmpty(runningProcessInstances)) {
// nothing is running: submit the new instance directly
processInstance.setState(ExecutionStatus.SUBMITTED_SUCCESS);
saveProcessInstance(processInstance);
} else {
// another instance is already running: discard the new one
processInstance.setState(ExecutionStatus.STOP);
saveProcessInstance(processInstance);
}
} else if (processDefinition.getExecutionType().typeIsSerialPriority()) {
List<ProcessInstance> runningProcessInstances = this.processInstanceMapper.queryByProcessDefineCodeAndStatusAndNextId(processInstance.getProcessDefinitionCode(),
Constants.RUNNING_PROCESS_STATE, processInstance.getId());
if (CollectionUtils.isNotEmpty(runningProcessInstances)) {
for (ProcessInstance info : runningProcessInstances) {
info.setCommandType(CommandType.STOP);
info.addHistoryCmd(CommandType.STOP);
info.setState(ExecutionStatus.READY_STOP);
int update = updateProcessInstance(info);
// determine whether the process is normal
if (update > 0) {
String[] hostParts = info.getHost().split(":");
String address = hostParts[0];
int port = Integer.parseInt(hostParts[1]);
StateEventChangeCommand stateEventChangeCommand = new StateEventChangeCommand(
info.getId(), 0, info.getState(), info.getId(), 0
);
try {
stateEventCallbackService.sendResult(address, port, stateEventChangeCommand.convert2Command());
} catch (Exception e) {
logger.error("send state event change callback error, host: {}", info.getHost(), e);
}
}
}
}
}
}
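// Hedged summary of the three serial execution types, derived from the branches above:
//   serial wait     -> the new instance queues behind the running one via next_process_id
//   serial discard  -> the new instance is discarded while another instance is running
//   serial priority -> running instances are set READY_STOP (with a state event callback),
//                      letting the new instance take over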
/**
* save error command, and delete original command
*
* @param command command
* @param message message
*/
public void moveToErrorCommand(Command command, String message) {
ErrorCommand errorCommand = new ErrorCommand(command, message);
this.errorCommandMapper.insert(errorCommand);
this.commandMapper.deleteById(command.getId());
}
/**
* set process waiting thread
*
* @param command command
* @param processInstance processInstance
* @return process instance
*/
private ProcessInstance setWaitingThreadProcess(Command command, ProcessInstance processInstance) {
processInstance.setState(ExecutionStatus.WAITING_THREAD);
if (command.getCommandType() != CommandType.RECOVER_WAITING_THREAD) {
processInstance.addHistoryCmd(command.getCommandType());
}
saveProcessInstance(processInstance);
this.setSubProcessParam(processInstance);
createRecoveryWaitingThreadCommand(command, processInstance);
return null;
}
/**
* insert one command
*
* @param command command
* @return create result
*/
public int createCommand(Command command) {
int result = 0;
if (command != null) {
result = commandMapper.insert(command);
}
return result;
}
/**
* get command page
*
* @param pageSize page size
* @param pageNumber page number (0-based)
* @return commands of that page
*/
public List<Command> findCommandPage(int pageSize, int pageNumber) {
return commandMapper.queryCommandPage(pageSize, pageNumber * pageSize);
}
/**
* check the input command exists in queue list
*
* @param command command
* @return create command result
*/
public boolean verifyIsNeedCreateCommand(Command command) {
boolean isNeedCreate = true;
EnumMap<CommandType, Integer> cmdTypeMap = new EnumMap<>(CommandType.class);
cmdTypeMap.put(CommandType.REPEAT_RUNNING, 1);
cmdTypeMap.put(CommandType.RECOVER_SUSPENDED_PROCESS, 1);
cmdTypeMap.put(CommandType.START_FAILURE_TASK_PROCESS, 1);
CommandType commandType = command.getCommandType();
if (cmdTypeMap.containsKey(commandType)) {
ObjectNode cmdParamObj = JSONUtils.parseObject(command.getCommandParam());
int processInstanceId = cmdParamObj.path(CMD_PARAM_RECOVER_PROCESS_ID_STRING).asInt();
List<Command> commands = commandMapper.selectList(null);
// for all commands
for (Command tmpCommand : commands) {
if (cmdTypeMap.containsKey(tmpCommand.getCommandType())) {
ObjectNode tempObj = JSONUtils.parseObject(tmpCommand.getCommandParam());
if (tempObj != null && processInstanceId == tempObj.path(CMD_PARAM_RECOVER_PROCESS_ID_STRING).asInt()) {
isNeedCreate = false;
break;
}
}
}
}
return isNeedCreate;
}
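// Minimal usage sketch (the process instance id 123 is invented for illustration):
//
//   Command recover = new Command();
//   recover.setCommandType(CommandType.REPEAT_RUNNING);
//   recover.setCommandParam("{\"" + CMD_PARAM_RECOVER_PROCESS_ID_STRING + "\":123}");
//   if (processService.verifyIsNeedCreateCommand(recover)) {
//       processService.createCommand(recover);
//   }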
/**
* find process instance detail by id
*
* @param processId processId
* @return process instance
*/
public ProcessInstance findProcessInstanceDetailById(int processId) {
return processInstanceMapper.queryDetailById(processId);
}
/**
* get task node list by definitionId
*/
public List<TaskDefinition> getTaskNodeListByDefinition(long defineCode) {
ProcessDefinition processDefinition = processDefineMapper.queryByCode(defineCode);
if (processDefinition == null) {
logger.error("process define not exists");
return new ArrayList<>();
}
List<ProcessTaskRelationLog> processTaskRelations = processTaskRelationLogMapper.queryByProcessCodeAndVersion(processDefinition.getCode(), processDefinition.getVersion());
Set<TaskDefinition> taskDefinitionSet = new HashSet<>();
for (ProcessTaskRelationLog processTaskRelation : processTaskRelations) {
if (processTaskRelation.getPostTaskCode() > 0) {
taskDefinitionSet.add(new TaskDefinition(processTaskRelation.getPostTaskCode(), processTaskRelation.getPostTaskVersion()));
}
}
List<TaskDefinitionLog> taskDefinitionLogs = taskDefinitionLogMapper.queryByTaskDefinitions(taskDefinitionSet);
return new ArrayList<>(taskDefinitionLogs);
}
/**
* find process instance by id
*
* @param processId processId
* @return process instance
*/
public ProcessInstance findProcessInstanceById(int processId) {
return processInstanceMapper.selectById(processId);
}
/**
* find process define by id.
*
* @param processDefinitionId processDefinitionId
* @return process definition
*/
public ProcessDefinition findProcessDefineById(int processDefinitionId) {
return processDefineMapper.selectById(processDefinitionId);
}
/**
* find process define by code and version.
*
* @param processDefinitionCode processDefinitionCode
* @return process definition
*/
public ProcessDefinition findProcessDefinition(Long processDefinitionCode, int version) {
ProcessDefinition processDefinition = processDefineMapper.queryByCode(processDefinitionCode);
if (processDefinition == null || processDefinition.getVersion() != version) {
processDefinition = processDefineLogMapper.queryByDefinitionCodeAndVersion(processDefinitionCode, version);
if (processDefinition != null) {
processDefinition.setId(0);
}
}
return processDefinition;
}
/**
* find process define by code.
*
* @param processDefinitionCode processDefinitionCode
* @return process definition
*/
public ProcessDefinition findProcessDefinitionByCode(Long processDefinitionCode) {
return processDefineMapper.queryByCode(processDefinitionCode);
}
/**
* delete work process instance by id
*
* @param processInstanceId processInstanceId
* @return delete process instance result
*/
public int deleteWorkProcessInstanceById(int processInstanceId) {
return processInstanceMapper.deleteById(processInstanceId);
}
/**
* delete all sub process by parent instance id
*
* @param processInstanceId processInstanceId
* @return delete all sub process instance result
*/
public int deleteAllSubWorkProcessByParentId(int processInstanceId) {
List<Integer> subProcessIdList = processInstanceMapMapper.querySubIdListByParentId(processInstanceId);
for (Integer subId : subProcessIdList) {
deleteAllSubWorkProcessByParentId(subId);
deleteWorkProcessMapByParentId(subId);
removeTaskLogFile(subId);
deleteWorkProcessInstanceById(subId);
}
return 1;
}
/**
* remove task log file
*
* @param processInstanceId processInstanceId
*/
public void removeTaskLogFile(Integer processInstanceId) {
List<TaskInstance> taskInstanceList = findValidTaskListByProcessId(processInstanceId);
if (CollectionUtils.isEmpty(taskInstanceList)) {
return;
}
try (LogClientService logClient = new LogClientService()) {
for (TaskInstance taskInstance : taskInstanceList) {
String taskLogPath = taskInstance.getLogPath();
if (StringUtils.isEmpty(taskInstance.getHost())) {
continue;
}
int port = Constants.RPC_PORT;
String ip = "";
try {
ip = Host.of(taskInstance.getHost()).getIp();
} catch (Exception e) {
// compatible old version
ip = taskInstance.getHost();
}
// remove task log from loggerserver
logClient.removeTaskLog(ip, port, taskLogPath);
}
}
}
/**
* recursive query sub process definition id by parent id.
*
* @param parentCode parentCode
* @param ids ids
*/
public void recurseFindSubProcess(long parentCode, List<Long> ids) {
List<TaskDefinition> taskNodeList = this.getTaskNodeListByDefinition(parentCode);
if (taskNodeList != null && !taskNodeList.isEmpty()) {
for (TaskDefinition taskNode : taskNodeList) {
String parameter = taskNode.getTaskParams();
ObjectNode parameterJson = JSONUtils.parseObject(parameter);
if (parameterJson.get(CMD_PARAM_SUB_PROCESS_DEFINE_CODE) != null) {
SubProcessParameters subProcessParam = JSONUtils.parseObject(parameter, SubProcessParameters.class);
ids.add(subProcessParam.getProcessDefinitionCode());
recurseFindSubProcess(subProcessParam.getProcessDefinitionCode(), ids);
}
}
}
}
/**
* create a recovery waiting thread command when the thread pool is not enough for the process instance.
* sub work process instances do not need to create a recovery command.
* create the recovery waiting thread command and delete the origin command at the same time.
* if the recovery command already exists, only update the update_time field
*
* @param originCommand originCommand
* @param processInstance processInstance
*/
public void createRecoveryWaitingThreadCommand(Command originCommand, ProcessInstance processInstance) {
// sub process does not need to create a wait command
if (processInstance.getIsSubProcess() == Flag.YES) {
if (originCommand != null) {
commandMapper.deleteById(originCommand.getId());
}
return;
}
Map<String, String> cmdParam = new HashMap<>();
cmdParam.put(Constants.CMD_PARAM_RECOVERY_WAITING_THREAD, String.valueOf(processInstance.getId()));
// process instance quit by "waiting thread" state
if (originCommand == null) {
Command command = new Command(
CommandType.RECOVER_WAITING_THREAD,
processInstance.getTaskDependType(),
processInstance.getFailureStrategy(),
processInstance.getExecutorId(),
processInstance.getProcessDefinition().getCode(),
JSONUtils.toJsonString(cmdParam),
processInstance.getWarningType(),
processInstance.getWarningGroupId(),
processInstance.getScheduleTime(),
processInstance.getWorkerGroup(),
processInstance.getEnvironmentCode(),
processInstance.getProcessInstancePriority(),
processInstance.getDryRun(),
processInstance.getId(),
processInstance.getProcessDefinitionVersion()
);
saveCommand(command);
return;
}
// only update the command time if the current command is already a recover-from-waiting command
if (originCommand.getCommandType() == CommandType.RECOVER_WAITING_THREAD) {
originCommand.setUpdateTime(new Date());
saveCommand(originCommand);
} else {
// delete old command and create new waiting thread command
commandMapper.deleteById(originCommand.getId());
originCommand.setId(0);
originCommand.setCommandType(CommandType.RECOVER_WAITING_THREAD);
originCommand.setUpdateTime(new Date());
originCommand.setCommandParam(JSONUtils.toJsonString(cmdParam));
originCommand.setProcessInstancePriority(processInstance.getProcessInstancePriority());
saveCommand(originCommand);
}
}
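// Behavior summary of the branches above: a sub process only deletes the origin command;
// a null origin creates a fresh RECOVER_WAITING_THREAD command; an existing
// RECOVER_WAITING_THREAD command only gets its update_time refreshed; any other command type
// is replaced by a new RECOVER_WAITING_THREAD command that records the instance id.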
/**
* get schedule time from command
*
* @param command command
* @param cmdParam cmdParam map
* @return date
*/
private Date getScheduleTime(Command command, Map<String, String> cmdParam) {
Date scheduleTime = command.getScheduleTime();
if (scheduleTime == null
&& cmdParam != null
&& cmdParam.containsKey(CMDPARAM_COMPLEMENT_DATA_START_DATE)) {
Date start = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_START_DATE));
Date end = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_END_DATE));
List<Schedule> schedules = queryReleaseSchedulerListByProcessDefinitionCode(command.getProcessDefinitionCode());
List<Date> complementDateList = CronUtils.getSelfFireDateList(start, end, schedules);
if (complementDateList.size() > 0) {
scheduleTime = complementDateList.get(0);
} else {
logger.error("set scheduler time error: complement date list is empty, command: {}",
command.toString());
}
}
return scheduleTime;
}
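// Example (dates invented): a complement command whose param map contains
//   CMDPARAM_COMPLEMENT_DATA_START_DATE -> "2021-11-01 00:00:00"
//   CMDPARAM_COMPLEMENT_DATA_END_DATE   -> "2021-11-03 00:00:00"
// resolves its schedule time to the first self-fire date of the released schedules inside
// that window.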
/**
* generate a new work process instance from command.
*
* @param processDefinition processDefinition
* @param command command
* @param cmdParam cmdParam map
* @return process instance
*/
private ProcessInstance generateNewProcessInstance(ProcessDefinition processDefinition,
Command command,
Map<String, String> cmdParam) {
ProcessInstance processInstance = new ProcessInstance(processDefinition);
processInstance.setProcessDefinitionCode(processDefinition.getCode());
processInstance.setProcessDefinitionVersion(processDefinition.getVersion());
processInstance.setState(ExecutionStatus.RUNNING_EXECUTION);
processInstance.setRecovery(Flag.NO);
processInstance.setStartTime(new Date());
processInstance.setRunTimes(1);
processInstance.setMaxTryTimes(0);
processInstance.setCommandParam(command.getCommandParam());
processInstance.setCommandType(command.getCommandType());
processInstance.setIsSubProcess(Flag.NO);
processInstance.setTaskDependType(command.getTaskDependType());
processInstance.setFailureStrategy(command.getFailureStrategy());
processInstance.setExecutorId(command.getExecutorId());
WarningType warningType = command.getWarningType() == null ? WarningType.NONE : command.getWarningType();
processInstance.setWarningType(warningType);
Integer warningGroupId = command.getWarningGroupId() == null ? 0 : command.getWarningGroupId();
processInstance.setWarningGroupId(warningGroupId);
processInstance.setDryRun(command.getDryRun());
if (command.getScheduleTime() != null) {
processInstance.setScheduleTime(command.getScheduleTime());
}
processInstance.setCommandStartTime(command.getStartTime());
processInstance.setLocations(processDefinition.getLocations());
// reset global params while there are start parameters
setGlobalParamIfCommanded(processDefinition, cmdParam);
// curing global params
processInstance.setGlobalParams(ParameterUtils.curingGlobalParams(
processDefinition.getGlobalParamMap(),
processDefinition.getGlobalParamList(),
getCommandTypeIfComplement(processInstance, command),
processInstance.getScheduleTime()));
// set process instance priority
processInstance.setProcessInstancePriority(command.getProcessInstancePriority());
String workerGroup = StringUtils.isBlank(command.getWorkerGroup()) ? Constants.DEFAULT_WORKER_GROUP : command.getWorkerGroup();
processInstance.setWorkerGroup(workerGroup);
processInstance.setEnvironmentCode(Objects.isNull(command.getEnvironmentCode()) ? -1 : command.getEnvironmentCode());
processInstance.setTimeout(processDefinition.getTimeout());
processInstance.setTenantId(processDefinition.getTenantId());
return processInstance;
}
private void setGlobalParamIfCommanded(ProcessDefinition processDefinition, Map<String, String> cmdParam) {
// get start params from command param
Map<String, String> startParamMap = new HashMap<>();
if (cmdParam != null && cmdParam.containsKey(Constants.CMD_PARAM_START_PARAMS)) {
String startParamJson = cmdParam.get(Constants.CMD_PARAM_START_PARAMS);
startParamMap = JSONUtils.toMap(startParamJson);
}
Map<String, String> fatherParamMap = new HashMap<>();
if (cmdParam != null && cmdParam.containsKey(Constants.CMD_PARAM_FATHER_PARAMS)) {
String fatherParamJson = cmdParam.get(Constants.CMD_PARAM_FATHER_PARAMS);
fatherParamMap = JSONUtils.toMap(fatherParamJson);
}
startParamMap.putAll(fatherParamMap);
// set start param into global params
if (startParamMap.size() > 0
&& processDefinition.getGlobalParamMap() != null) {
for (Map.Entry<String, String> param : processDefinition.getGlobalParamMap().entrySet()) {
String val = startParamMap.get(param.getKey());
if (val != null) {
param.setValue(val);
}
}
}
}
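// Worked example (values invented): if the definition declares the global param dt=today and
// the command carries startParams {"dt":"2021-11-08"}, the loop above rewrites dt to
// "2021-11-08". Note that father params are putAll-ed over start params, so a parent
// instance's value wins when both are present.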
/**
* get process tenant
* if there is a tenant id in the definition, use the tenant of the definition;
* if there is no tenant id in the definition, or the tenant does not exist,
* use the definition creator's tenant.
*
* @param tenantId tenantId
* @param userId userId
* @return tenant
*/
public Tenant getTenantForProcess(int tenantId, int userId) {
Tenant tenant = null;
if (tenantId >= 0) {
tenant = tenantMapper.queryById(tenantId);
}
if (userId == 0) {
return null;
}
if (tenant == null) {
User user = userMapper.selectById(userId);
tenant = tenantMapper.queryById(user.getTenantId());
}
return tenant;
}
/**
* get an environment
* use the environment code to find an environment.
*
* @param environmentCode environmentCode
* @return Environment
*/
public Environment findEnvironmentByCode(Long environmentCode) {
Environment environment = null;
if (environmentCode >= 0) {
environment = environmentMapper.queryByEnvironmentCode(environmentCode);
}
return environment;
}
/**
* check command parameters is valid
*
* @param command command
* @param cmdParam cmdParam map
* @return whether command param is valid
*/
private Boolean checkCmdParam(Command command, Map<String, String> cmdParam) {
if (command.getTaskDependType() == TaskDependType.TASK_ONLY || command.getTaskDependType() == TaskDependType.TASK_PRE) {
if (cmdParam == null
|| !cmdParam.containsKey(Constants.CMD_PARAM_START_NODES)
|| cmdParam.get(Constants.CMD_PARAM_START_NODES).isEmpty()) {
logger.error("command node depend type is {}, but start nodes is null ", command.getTaskDependType());
return false;
}
}
return true;
}
/**
* construct process instance according to one command.
*
* @param command command
* @param host host
* @param processDefinitionCacheMaps cache of process definitions, keyed by "code-version"
* @return process instance
*/
private ProcessInstance constructProcessInstance(Command command, String host, HashMap<String, ProcessDefinition> processDefinitionCacheMaps) {
ProcessInstance processInstance;
ProcessDefinition processDefinition;
CommandType commandType = command.getCommandType();
String key = String.format("%d-%d", command.getProcessDefinitionCode(), command.getProcessDefinitionVersion());
if (processDefinitionCacheMaps.containsKey(key)) {
processDefinition = processDefinitionCacheMaps.get(key);
} else {
processDefinition = this.findProcessDefinition(command.getProcessDefinitionCode(), command.getProcessDefinitionVersion());
if (processDefinition != null) {
processDefinitionCacheMaps.put(key, processDefinition);
}
}
if (processDefinition == null) {
logger.error("cannot find the work process define! define code : {}", command.getProcessDefinitionCode());
return null;
}
Map<String, String> cmdParam = JSONUtils.toMap(command.getCommandParam());
int processInstanceId = command.getProcessInstanceId();
if (processInstanceId == 0) {
processInstance = generateNewProcessInstance(processDefinition, command, cmdParam);
} else {
processInstance = this.findProcessInstanceDetailById(processInstanceId);
if (processInstance == null) {
return null;
}
}
if (cmdParam != null) {
CommandType commandTypeIfComplement = getCommandTypeIfComplement(processInstance, command);
// reset global params while repeat running is needed by cmdParam
if (commandTypeIfComplement == CommandType.REPEAT_RUNNING) {
setGlobalParamIfCommanded(processDefinition, cmdParam);
}
// Recalculate global parameters after rerun.
processInstance.setGlobalParams(ParameterUtils.curingGlobalParams(
processDefinition.getGlobalParamMap(),
processDefinition.getGlobalParamList(),
commandTypeIfComplement,
processInstance.getScheduleTime()));
processInstance.setProcessDefinition(processDefinition);
}
// reset command parameter (guard against a null cmdParam to avoid an NPE)
if (cmdParam != null && processInstance.getCommandParam() != null) {
Map<String, String> processCmdParam = JSONUtils.toMap(processInstance.getCommandParam());
for (Map.Entry<String, String> entry : processCmdParam.entrySet()) {
if (!cmdParam.containsKey(entry.getKey())) {
cmdParam.put(entry.getKey(), entry.getValue());
}
}
}
// reset command parameter if sub process
if (cmdParam != null && cmdParam.containsKey(Constants.CMD_PARAM_SUB_PROCESS)) {
processInstance.setCommandParam(command.getCommandParam());
}
if (Boolean.FALSE.equals(checkCmdParam(command, cmdParam))) {
logger.error("command parameter check failed!");
return null;
}
if (command.getScheduleTime() != null) {
processInstance.setScheduleTime(command.getScheduleTime());
}
processInstance.setHost(host);
ExecutionStatus runStatus = ExecutionStatus.RUNNING_EXECUTION;
int runTime = processInstance.getRunTimes();
switch (commandType) {
case START_PROCESS:
break;
case START_FAILURE_TASK_PROCESS:
// find failed tasks and init these tasks
List<Integer> failedList = this.findTaskIdByInstanceState(processInstance.getId(), ExecutionStatus.FAILURE);
List<Integer> toleranceList = this.findTaskIdByInstanceState(processInstance.getId(), ExecutionStatus.NEED_FAULT_TOLERANCE);
List<Integer> killedList = this.findTaskIdByInstanceState(processInstance.getId(), ExecutionStatus.KILL);
cmdParam.remove(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING);
failedList.addAll(killedList);
failedList.addAll(toleranceList);
for (Integer taskId : failedList) {
initTaskInstance(this.findTaskInstanceById(taskId));
}
cmdParam.put(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING,
String.join(Constants.COMMA, convertIntListToString(failedList)));
processInstance.setCommandParam(JSONUtils.toJsonString(cmdParam));
processInstance.setRunTimes(runTime + 1);
break;
case START_CURRENT_TASK_PROCESS:
break;
case RECOVER_WAITING_THREAD:
break;
case RECOVER_SUSPENDED_PROCESS:
// find pause tasks and init task's state
cmdParam.remove(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING);
List<Integer> suspendedNodeList = this.findTaskIdByInstanceState(processInstance.getId(), ExecutionStatus.PAUSE);
List<Integer> stopNodeList = findTaskIdByInstanceState(processInstance.getId(),
ExecutionStatus.KILL);
suspendedNodeList.addAll(stopNodeList);
for (Integer taskId : suspendedNodeList) {
// initialize the pause state
initTaskInstance(this.findTaskInstanceById(taskId));
}
cmdParam.put(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING, String.join(",", convertIntListToString(suspendedNodeList)));
processInstance.setCommandParam(JSONUtils.toJsonString(cmdParam));
processInstance.setRunTimes(runTime + 1);
break;
case RECOVER_TOLERANCE_FAULT_PROCESS:
// recover tolerance fault process
processInstance.setRecovery(Flag.YES);
runStatus = processInstance.getState();
break;
case COMPLEMENT_DATA:
// when complementing data, delete all the valid tasks if the instance id is not 0
if (processInstance.getId() != 0) {
List<TaskInstance> taskInstanceList = this.findValidTaskListByProcessId(processInstance.getId());
for (TaskInstance taskInstance : taskInstanceList) {
taskInstance.setFlag(Flag.NO);
this.updateTaskInstance(taskInstance);
}
}
break;
case REPEAT_RUNNING:
// delete the recover task names from command parameter
if (cmdParam.containsKey(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING)) {
cmdParam.remove(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING);
processInstance.setCommandParam(JSONUtils.toJsonString(cmdParam));
}
// delete all the valid tasks when repeat running
List<TaskInstance> validTaskList = findValidTaskListByProcessId(processInstance.getId());
for (TaskInstance taskInstance : validTaskList) {
taskInstance.setFlag(Flag.NO);
updateTaskInstance(taskInstance);
}
processInstance.setStartTime(new Date());
processInstance.setEndTime(null);
processInstance.setRunTimes(runTime + 1);
initComplementDataParam(processDefinition, processInstance, cmdParam);
break;
case SCHEDULER:
break;
default:
break;
}
processInstance.setState(runStatus);
return processInstance;
}
/**
* get process definition by command
* If it is a fault-tolerant command, get the specified version of ProcessDefinition through ProcessInstance
* Otherwise, get the latest version of ProcessDefinition
*
* @return ProcessDefinition
*/
private ProcessDefinition getProcessDefinitionByCommand(long processDefinitionCode, Map<String, String> cmdParam) {
if (cmdParam != null) {
int processInstanceId = 0;
if (cmdParam.containsKey(Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING)) {
processInstanceId = Integer.parseInt(cmdParam.get(Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING));
} else if (cmdParam.containsKey(Constants.CMD_PARAM_SUB_PROCESS)) {
processInstanceId = Integer.parseInt(cmdParam.get(Constants.CMD_PARAM_SUB_PROCESS));
} else if (cmdParam.containsKey(Constants.CMD_PARAM_RECOVERY_WAITING_THREAD)) {
processInstanceId = Integer.parseInt(cmdParam.get(Constants.CMD_PARAM_RECOVERY_WAITING_THREAD));
}
if (processInstanceId != 0) {
ProcessInstance processInstance = this.findProcessInstanceDetailById(processInstanceId);
if (processInstance == null) {
return null;
}
return processDefineLogMapper.queryByDefinitionCodeAndVersion(
processInstance.getProcessDefinitionCode(), processInstance.getProcessDefinitionVersion());
}
}
return processDefineMapper.queryByCode(processDefinitionCode);
}
/**
* return complement data if the process start with complement data
*
* @param processInstance processInstance
* @param command command
* @return command type
*/
private CommandType getCommandTypeIfComplement(ProcessInstance processInstance, Command command) {
if (CommandType.COMPLEMENT_DATA == processInstance.getCmdTypeIfComplement()) {
return CommandType.COMPLEMENT_DATA;
} else {
return command.getCommandType();
}
}
/**
* initialize complement data parameters
*
* @param processDefinition processDefinition
* @param processInstance processInstance
* @param cmdParam cmdParam
*/
private void initComplementDataParam(ProcessDefinition processDefinition,
ProcessInstance processInstance,
Map<String, String> cmdParam) {
if (!processInstance.isComplementData()) {
return;
}
Date start = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_START_DATE));
Date end = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_END_DATE));
List<Schedule> listSchedules = queryReleaseSchedulerListByProcessDefinitionCode(processInstance.getProcessDefinitionCode());
List<Date> complementDate = CronUtils.getSelfFireDateList(start, end, listSchedules);
if (complementDate.size() > 0
&& Flag.NO == processInstance.getIsSubProcess()) {
processInstance.setScheduleTime(complementDate.get(0));
}
processInstance.setGlobalParams(ParameterUtils.curingGlobalParams(
processDefinition.getGlobalParamMap(),
processDefinition.getGlobalParamList(),
CommandType.COMPLEMENT_DATA, processInstance.getScheduleTime()));
}
/**
* set sub work process parameters.
* handle sub work process instance, update relation table and command parameters
* set sub work process flag, extends parent work process command parameters
*
* @param subProcessInstance subProcessInstance
*/
public void setSubProcessParam(ProcessInstance subProcessInstance) {
String cmdParam = subProcessInstance.getCommandParam();
if (StringUtils.isEmpty(cmdParam)) {
return;
}
Map<String, String> paramMap = JSONUtils.toMap(cmdParam);
// write sub process id into cmd param.
if (paramMap.containsKey(CMD_PARAM_SUB_PROCESS)
&& CMD_PARAM_EMPTY_SUB_PROCESS.equals(paramMap.get(CMD_PARAM_SUB_PROCESS))) {
paramMap.remove(CMD_PARAM_SUB_PROCESS);
paramMap.put(CMD_PARAM_SUB_PROCESS, String.valueOf(subProcessInstance.getId()));
subProcessInstance.setCommandParam(JSONUtils.toJsonString(paramMap));
subProcessInstance.setIsSubProcess(Flag.YES);
this.saveProcessInstance(subProcessInstance);
}
// copy parent instance user def params to sub process..
String parentInstanceId = paramMap.get(CMD_PARAM_SUB_PROCESS_PARENT_INSTANCE_ID);
if (StringUtils.isNotEmpty(parentInstanceId)) {
ProcessInstance parentInstance = findProcessInstanceDetailById(Integer.parseInt(parentInstanceId));
if (parentInstance != null) {
subProcessInstance.setGlobalParams(
joinGlobalParams(parentInstance.getGlobalParams(), subProcessInstance.getGlobalParams()));
this.saveProcessInstance(subProcessInstance);
} else {
logger.error("sub process command params error, cannot find parent instance: {} ", cmdParam);
}
}
ProcessInstanceMap processInstanceMap = JSONUtils.parseObject(cmdParam, ProcessInstanceMap.class);
if (processInstanceMap == null || processInstanceMap.getParentProcessInstanceId() == 0) {
return;
}
// update sub process id to process map table
processInstanceMap.setProcessInstanceId(subProcessInstance.getId());
this.updateWorkProcessInstanceMap(processInstanceMap);
}
/**
* join parent global params into sub process.
* only keys that do not exist in the sub process global params are joined.
*
* @param parentGlobalParams parentGlobalParams
* @param subGlobalParams subGlobalParams
* @return joined global params
*/
private String joinGlobalParams(String parentGlobalParams, String subGlobalParams) {
List<Property> parentPropertyList = JSONUtils.toList(parentGlobalParams, Property.class);
List<Property> subPropertyList = JSONUtils.toList(subGlobalParams, Property.class);
Map<String, String> subMap = subPropertyList.stream().collect(Collectors.toMap(Property::getProp, Property::getValue));
for (Property parent : parentPropertyList) {
if (!subMap.containsKey(parent.getProp())) {
subPropertyList.add(parent);
}
}
return JSONUtils.toJsonString(subPropertyList);
}
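// Worked example (invented values): with
//   parentGlobalParams = [{"prop":"dt","value":"2021-11-08"},{"prop":"biz","value":"a"}]
//   subGlobalParams    = [{"prop":"biz","value":"b"}]
// the sub keeps biz=b and the parent's dt=2021-11-08 is appended.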
/**
* initialize task instance
*
* @param taskInstance taskInstance
*/
private void initTaskInstance(TaskInstance taskInstance) {
if (!taskInstance.isSubProcess()
&& (taskInstance.getState().typeIsCancel() || taskInstance.getState().typeIsFailure())) {
taskInstance.setFlag(Flag.NO);
updateTaskInstance(taskInstance);
return;
}
taskInstance.setState(ExecutionStatus.SUBMITTED_SUCCESS);
updateTaskInstance(taskInstance);
}
/**
* retry submitting the task to the db
*
* @param taskInstance task instance to submit
* @param commitRetryTimes max retry times
* @param commitInterval retry interval in milliseconds
* @return the submitted task instance, or null if every retry failed
*/
public TaskInstance submitTask(TaskInstance taskInstance, int commitRetryTimes, int commitInterval) {
int retryTimes = 1;
boolean submitDB = false;
TaskInstance task = null;
while (retryTimes <= commitRetryTimes) {
try {
if (!submitDB) {
// submit task to db
task = submitTask(taskInstance);
if (task != null && task.getId() != 0) {
submitDB = true;
break;
}
}
if (!submitDB) {
logger.error("task commit to db failed, taskId {} has already retried {} times, please check the database", taskInstance.getId(), retryTimes);
}
Thread.sleep(commitInterval);
} catch (Exception e) {
logger.error("task commit to db failed", e);
}
retryTimes += 1;
}
return task;
}
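// Usage sketch (retry settings invented; the real master reads them from its configuration):
//
//   TaskInstance submitted = processService.submitTask(taskInstance, 5, 1000);
//   if (submitted == null) {
//       // all attempts failed; the caller decides how to handle the workflow
//   }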
/**
* submit task to db
* submit sub process to command
*
* @param taskInstance taskInstance
* @return task instance
*/
@Transactional(rollbackFor = Exception.class)
public TaskInstance submitTask(TaskInstance taskInstance) {
ProcessInstance processInstance = this.findProcessInstanceDetailById(taskInstance.getProcessInstanceId());
logger.info("start submit task : {}, instance id:{}, state: {}",
taskInstance.getName(), taskInstance.getProcessInstanceId(), processInstance.getState());
//submit to db
TaskInstance task = submitTaskInstanceToDB(taskInstance, processInstance);
if (task == null) {
logger.error("end submit task to db error, task name:{}, process id:{} state: {} ",
taskInstance.getName(), taskInstance.getProcessInstance(), processInstance.getState());
return task;
}
if (!task.getState().typeIsFinished()) {
createSubWorkProcess(processInstance, task);
}
logger.info("end submit task to db successfully:{} {} state:{} complete, instance id:{} state: {} ",
taskInstance.getId(), taskInstance.getName(), task.getState(), processInstance.getId(), processInstance.getState());
return task;
}
/**
* set work process instance map;
* repeat running does not generate a new sub process instance;
* set map {parent instance id, task instance id, 0 (child instance id)}
*
* @param parentInstance parentInstance
* @param parentTask parentTask
* @return process instance map
*/
private ProcessInstanceMap setProcessInstanceMap(ProcessInstance parentInstance, TaskInstance parentTask) {
ProcessInstanceMap processMap = findWorkProcessMapByParent(parentInstance.getId(), parentTask.getId());
if (processMap != null) {
return processMap;
}
if (parentInstance.getCommandType() == CommandType.REPEAT_RUNNING) {
// update current task id to map
processMap = findPreviousTaskProcessMap(parentInstance, parentTask);
if (processMap != null) {
processMap.setParentTaskInstanceId(parentTask.getId());
updateWorkProcessInstanceMap(processMap);
return processMap;
}
}
// new task
processMap = new ProcessInstanceMap();
processMap.setParentProcessInstanceId(parentInstance.getId());
processMap.setParentTaskInstanceId(parentTask.getId());
createWorkProcessInstanceMap(processMap);
return processMap;
}
/**
* find previous task work process map.
*
* @param parentProcessInstance parentProcessInstance
* @param parentTask parentTask
* @return process instance map
*/
private ProcessInstanceMap findPreviousTaskProcessMap(ProcessInstance parentProcessInstance,
TaskInstance parentTask) {
Integer preTaskId = 0;
List<TaskInstance> preTaskList = this.findPreviousTaskListByWorkProcessId(parentProcessInstance.getId());
for (TaskInstance task : preTaskList) {
if (task.getName().equals(parentTask.getName())) {
preTaskId = task.getId();
ProcessInstanceMap map = findWorkProcessMapByParent(parentProcessInstance.getId(), preTaskId);
if (map != null) {
return map;
}
}
}
logger.info("sub process instance is not found,parent task:{},parent instance:{}",
parentTask.getId(), parentProcessInstance.getId());
return null;
}
/**
* create sub work process command
*
* @param parentProcessInstance parentProcessInstance
* @param task task
*/
public void createSubWorkProcess(ProcessInstance parentProcessInstance, TaskInstance task) {
if (!task.isSubProcess()) {
return;
}
//check create sub work flow firstly
ProcessInstanceMap instanceMap = findWorkProcessMapByParent(parentProcessInstance.getId(), task.getId());
if (null != instanceMap && CommandType.RECOVER_TOLERANCE_FAULT_PROCESS == parentProcessInstance.getCommandType()) {
// fault-tolerance recovery does not create a new command when the sub command has already been created
return;
}
instanceMap = setProcessInstanceMap(parentProcessInstance, task);
ProcessInstance childInstance = null;
if (instanceMap.getProcessInstanceId() != 0) {
childInstance = findProcessInstanceById(instanceMap.getProcessInstanceId());
}
Command subProcessCommand = createSubProcessCommand(parentProcessInstance, childInstance, instanceMap, task);
updateSubProcessDefinitionByParent(parentProcessInstance, subProcessCommand.getProcessDefinitionCode());
initSubInstanceState(childInstance);
createCommand(subProcessCommand);
logger.info("sub process command created: {} ", subProcessCommand);
}
/**
* complement data needs to transfer the parent's parameters to the child.
*/
private String getSubWorkFlowParam(ProcessInstanceMap instanceMap, ProcessInstance parentProcessInstance, Map<String, String> fatherParams) {
// set sub work process command
String processMapStr = JSONUtils.toJsonString(instanceMap);
Map<String, String> cmdParam = JSONUtils.toMap(processMapStr);
if (parentProcessInstance.isComplementData()) {
Map<String, String> parentParam = JSONUtils.toMap(parentProcessInstance.getCommandParam());
String endTime = parentParam.get(CMDPARAM_COMPLEMENT_DATA_END_DATE);
String startTime = parentParam.get(CMDPARAM_COMPLEMENT_DATA_START_DATE);
cmdParam.put(CMDPARAM_COMPLEMENT_DATA_END_DATE, endTime);
cmdParam.put(CMDPARAM_COMPLEMENT_DATA_START_DATE, startTime);
processMapStr = JSONUtils.toJsonString(cmdParam);
}
if (fatherParams.size() != 0) {
cmdParam.put(CMD_PARAM_FATHER_PARAMS, JSONUtils.toJsonString(fatherParams));
processMapStr = JSONUtils.toJsonString(cmdParam);
}
return processMapStr;
}
public Map<String, String> getGlobalParamMap(String globalParams) {
List<Property> propList;
Map<String, String> globalParamMap = new HashMap<>();
if (StringUtils.isNotEmpty(globalParams)) {
propList = JSONUtils.toList(globalParams, Property.class);
globalParamMap = propList.stream().collect(Collectors.toMap(Property::getProp, Property::getValue));
}
return globalParamMap;
}
/**
* create sub work process command
*/
public Command createSubProcessCommand(ProcessInstance parentProcessInstance,
ProcessInstance childInstance,
ProcessInstanceMap instanceMap,
TaskInstance task) {
CommandType commandType = getSubCommandType(parentProcessInstance, childInstance);
Map<String, String> subProcessParam = JSONUtils.toMap(task.getTaskParams());
long childDefineCode = 0L;
if (subProcessParam.containsKey(Constants.CMD_PARAM_SUB_PROCESS_DEFINE_CODE)) {
childDefineCode = Long.parseLong(subProcessParam.get(Constants.CMD_PARAM_SUB_PROCESS_DEFINE_CODE));
}
ProcessDefinition subProcessDefinition = processDefineMapper.queryByCode(childDefineCode);
Object localParams = subProcessParam.get(Constants.LOCAL_PARAMS);
List<Property> allParam = JSONUtils.toList(JSONUtils.toJsonString(localParams), Property.class);
Map<String, String> globalMap = this.getGlobalParamMap(parentProcessInstance.getGlobalParams());
Map<String, String> fatherParams = new HashMap<>();
if (CollectionUtils.isNotEmpty(allParam)) {
for (Property info : allParam) {
fatherParams.put(info.getProp(), globalMap.get(info.getProp()));
}
}
String processParam = getSubWorkFlowParam(instanceMap, parentProcessInstance, fatherParams);
int subProcessInstanceId = childInstance == null ? 0 : childInstance.getId();
return new Command(
commandType,
TaskDependType.TASK_POST,
parentProcessInstance.getFailureStrategy(),
parentProcessInstance.getExecutorId(),
subProcessDefinition.getCode(),
processParam,
parentProcessInstance.getWarningType(),
parentProcessInstance.getWarningGroupId(),
parentProcessInstance.getScheduleTime(),
task.getWorkerGroup(),
task.getEnvironmentCode(),
parentProcessInstance.getProcessInstancePriority(),
parentProcessInstance.getDryRun(),
subProcessInstanceId,
subProcessDefinition.getVersion()
);
}
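// Sketch of the command this method can produce for a complement-data parent (values
// invented): commandType = COMPLEMENT_DATA, processDefinitionCode = the child definition
// code, and a commandParam carrying the instance map plus the parent's complement start/end
// dates and, if present, the fatherParams entry.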
/**
* initialize sub work flow state;
* the child instance state is initialized when recovering from pause/stop/failure
*/
private void initSubInstanceState(ProcessInstance childInstance) {
if (childInstance != null) {
childInstance.setState(ExecutionStatus.RUNNING_EXECUTION);
updateProcessInstance(childInstance);
}
}
/**
* get sub work flow command type:
* if the child instance exists: child command = father command;
* if the child instance does not exist: child command = the first command in the father's history commands
*/
private CommandType getSubCommandType(ProcessInstance parentProcessInstance, ProcessInstance childInstance) {
CommandType commandType = parentProcessInstance.getCommandType();
if (childInstance == null) {
String fatherHistoryCommand = parentProcessInstance.getHistoryCmd();
commandType = CommandType.valueOf(fatherHistoryCommand.split(Constants.COMMA)[0]);
}
return commandType;
}
/**
* update sub process definition
*
* @param parentProcessInstance parentProcessInstance
* @param childDefinitionCode childDefinitionId
*/
private void updateSubProcessDefinitionByParent(ProcessInstance parentProcessInstance, long childDefinitionCode) {
ProcessDefinition fatherDefinition = this.findProcessDefinition(parentProcessInstance.getProcessDefinitionCode(),
parentProcessInstance.getProcessDefinitionVersion());
ProcessDefinition childDefinition = this.findProcessDefinitionByCode(childDefinitionCode);
if (childDefinition != null && fatherDefinition != null) {
childDefinition.setWarningGroupId(fatherDefinition.getWarningGroupId());
processDefineMapper.updateById(childDefinition);
}
}
/**
* submit task to db
*
* @param taskInstance taskInstance
* @param processInstance processInstance
* @return task instance
*/
public TaskInstance submitTaskInstanceToDB(TaskInstance taskInstance, ProcessInstance processInstance) {
ExecutionStatus processInstanceState = processInstance.getState();
if (taskInstance.getState().typeIsFailure()) {
if (taskInstance.isSubProcess()) {
taskInstance.setRetryTimes(taskInstance.getRetryTimes() + 1);
} else {
if (processInstanceState != ExecutionStatus.READY_STOP
&& processInstanceState != ExecutionStatus.READY_PAUSE) {
// failure task set invalid
taskInstance.setFlag(Flag.NO);
updateTaskInstance(taskInstance);
// create a new task instance
if (taskInstance.getState() != ExecutionStatus.NEED_FAULT_TOLERANCE) {
taskInstance.setRetryTimes(taskInstance.getRetryTimes() + 1);
}
taskInstance.setSubmitTime(null);
taskInstance.setLogPath(null);
taskInstance.setExecutePath(null);
taskInstance.setStartTime(null);
taskInstance.setEndTime(null);
taskInstance.setFlag(Flag.YES);
taskInstance.setHost(null);
taskInstance.setId(0);
}
}
}
taskInstance.setExecutorId(processInstance.getExecutorId());
taskInstance.setProcessInstancePriority(processInstance.getProcessInstancePriority());
taskInstance.setState(getSubmitTaskState(taskInstance, processInstanceState));
if (taskInstance.getSubmitTime() == null) {
taskInstance.setSubmitTime(new Date());
}
if (taskInstance.getFirstSubmitTime() == null) {
taskInstance.setFirstSubmitTime(taskInstance.getSubmitTime());
}
boolean saveResult = saveTaskInstance(taskInstance);
if (!saveResult) {
return null;
}
return taskInstance;
}
/**
* get the submit task instance state by the work process state.
* cannot modify the task state when it is running/killed/submitted successfully, because the
* task instance already exists in the task queue.
* return pause if the work process state is ready pause;
* return stop if the work process state is ready stop;
* if none of the above is satisfied, return submit success
*
* @param taskInstance taskInstance
* @param processInstanceState processInstanceState
* @return the state the task should be submitted with
*/
public ExecutionStatus getSubmitTaskState(TaskInstance taskInstance, ExecutionStatus processInstanceState) {
ExecutionStatus state = taskInstance.getState();
// running, delayed or killed
// the task already exists in task queue
// return state
if (
state == ExecutionStatus.RUNNING_EXECUTION
|| state == ExecutionStatus.DELAY_EXECUTION
|| state == ExecutionStatus.KILL
) {
return state;
}
// return pause/stop if the process instance state is ready pause/stop,
// otherwise return submit success
if (processInstanceState == ExecutionStatus.READY_PAUSE) {
state = ExecutionStatus.PAUSE;
} else if (processInstanceState == ExecutionStatus.READY_STOP
|| !checkProcessStrategy(taskInstance)) {
state = ExecutionStatus.KILL;
} else {
state = ExecutionStatus.SUBMITTED_SUCCESS;
}
return state;
}
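// Decision summary, derived from the branches above:
//   task state RUNNING/DELAY/KILL             -> keep the task state (already in the queue)
//   process state READY_PAUSE                 -> PAUSE
//   process state READY_STOP or strategy hit  -> KILL
//   otherwise                                 -> SUBMITTED_SUCCESS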
/**
* check process instance strategy
*
* @param taskInstance taskInstance
* @return check strategy result
*/
private boolean checkProcessStrategy(TaskInstance taskInstance) {
ProcessInstance processInstance = this.findProcessInstanceById(taskInstance.getProcessInstanceId());
FailureStrategy failureStrategy = processInstance.getFailureStrategy();
if (failureStrategy == FailureStrategy.CONTINUE) {
return true;
}
List<TaskInstance> taskInstances = this.findValidTaskListByProcessId(taskInstance.getProcessInstanceId());
for (TaskInstance task : taskInstances) {
if (task.getState() == ExecutionStatus.FAILURE
&& task.getRetryTimes() >= task.getMaxRetryTimes()) {
return false;
}
}
return true;
}
/**
* insert or update work process instance to data base
*
* @param processInstance processInstance
*/
public void saveProcessInstance(ProcessInstance processInstance) {
if (processInstance == null) {
logger.error("save error, process instance is null!");
return;
}
if (processInstance.getId() != 0) {
processInstanceMapper.updateById(processInstance);
} else {
processInstanceMapper.insert(processInstance);
}
}
/**
* insert or update command
*
* @param command command
* @return save command result
*/
public int saveCommand(Command command) {
if (command.getId() != 0) {
return commandMapper.updateById(command);
} else {
return commandMapper.insert(command);
}
}
/**
* insert or update task instance
*
* @param taskInstance taskInstance
* @return save task instance result
*/
public boolean saveTaskInstance(TaskInstance taskInstance) {
if (taskInstance.getId() != 0) {
return updateTaskInstance(taskInstance);
} else {
return createTaskInstance(taskInstance);
}
}
/**
* insert task instance
*
* @param taskInstance taskInstance
* @return create task instance result
*/
public boolean createTaskInstance(TaskInstance taskInstance) {
int count = taskInstanceMapper.insert(taskInstance);
return count > 0;
}
/**
* update task instance
*
* @param taskInstance taskInstance
* @return update task instance result
*/
public boolean updateTaskInstance(TaskInstance taskInstance) {
int count = taskInstanceMapper.updateById(taskInstance);
return count > 0;
}
/**
* find task instance by id
*
* @param taskId task id
* @return task instance
*/
public TaskInstance findTaskInstanceById(Integer taskId) {
return taskInstanceMapper.selectById(taskId);
}
/**
* package task instance, associate processInstance and processDefine
*
* @param taskInstId taskInstId
* @return task instance
*/
public TaskInstance getTaskInstanceDetailByTaskId(int taskInstId) {
// get task instance
TaskInstance taskInstance = findTaskInstanceById(taskInstId);
if (taskInstance == null) {
return null;
}
setTaskInstanceDetail(taskInstance);
return taskInstance;
}
/**
* package task instance, associate processInstance and processDefine
*
* @param taskInstance taskInstance
*/
public void setTaskInstanceDetail(TaskInstance taskInstance) {
// get process instance
ProcessInstance processInstance = findProcessInstanceDetailById(taskInstance.getProcessInstanceId());
// get process define
ProcessDefinition processDefine = findProcessDefinition(processInstance.getProcessDefinitionCode(),
processInstance.getProcessDefinitionVersion());
taskInstance.setProcessInstance(processInstance);
taskInstance.setProcessDefine(processDefine);
TaskDefinition taskDefinition = taskDefinitionLogMapper.queryByDefinitionCodeAndVersion(
taskInstance.getTaskCode(),
taskInstance.getTaskDefinitionVersion());
updateTaskDefinitionResources(taskDefinition);
taskInstance.setTaskDefine(taskDefinition);
}
/**
* Update {@link ResourceInfo} information in {@link TaskDefinition}
*
* @param taskDefinition the given {@link TaskDefinition}
*/
private void updateTaskDefinitionResources(TaskDefinition taskDefinition) {
Map<String, Object> taskParameters = JSONUtils.parseObject(
taskDefinition.getTaskParams(),
new TypeReference<Map<String, Object>>() { });
if (taskParameters != null) {
// if contains mainJar field, query resource from database
// Flink, Spark, MR
if (taskParameters.containsKey("mainJar")) {
Object mainJarObj = taskParameters.get("mainJar");
ResourceInfo mainJar = JSONUtils.parseObject(
JSONUtils.toJsonString(mainJarObj),
ResourceInfo.class);
ResourceInfo resourceInfo = updateResourceInfo(mainJar);
if (resourceInfo != null) {
taskParameters.put("mainJar", resourceInfo);
}
}
// update resourceList information
if (taskParameters.containsKey("resourceList")) {
String resourceListStr = JSONUtils.toJsonString(taskParameters.get("resourceList"));
List<ResourceInfo> resourceInfos = JSONUtils.toList(resourceListStr, ResourceInfo.class);
List<ResourceInfo> updatedResourceInfos = resourceInfos
.stream()
.map(this::updateResourceInfo)
.filter(Objects::nonNull)
.collect(Collectors.toList());
taskParameters.put("resourceList", updatedResourceInfos);
}
// set task parameters
taskDefinition.setTaskParams(JSONUtils.toJsonString(taskParameters));
}
}
/**
* update {@link ResourceInfo} by given original ResourceInfo
*
* @param res origin resource info
* @return {@link ResourceInfo}
*/
private ResourceInfo updateResourceInfo(ResourceInfo res) {
ResourceInfo resourceInfo = null;
// only if the resource info is not null and carries a valid resource id
if (res != null) {
int resourceId = res.getId();
if (resourceId <= 0) {
logger.error("invalid resourceId, {}", resourceId);
return null;
}
resourceInfo = new ResourceInfo();
// get resource from database, only one resource should be returned
Resource resource = getResourceById(resourceId);
resourceInfo.setId(resourceId);
resourceInfo.setRes(resource.getFileName());
resourceInfo.setResourceName(resource.getFullName());
if (logger.isInfoEnabled()) {
logger.info("updated resource info {}",
JSONUtils.toJsonString(resourceInfo));
}
}
return resourceInfo;
}
/**
* get id list by task state
*
* @param instanceId instanceId
* @param state state
* @return task instance states
*/
public List<Integer> findTaskIdByInstanceState(int instanceId, ExecutionStatus state) {
return taskInstanceMapper.queryTaskByProcessIdAndState(instanceId, state.ordinal());
}
/**
* find valid task list by process definition id
*
* @param processInstanceId processInstanceId
* @return task instance list
*/
public List<TaskInstance> findValidTaskListByProcessId(Integer processInstanceId) {
return taskInstanceMapper.findValidTaskListByProcessId(processInstanceId, Flag.YES);
}
/**
* find previous task list by work process id
*
* @param processInstanceId processInstanceId
* @return task instance list
*/
public List<TaskInstance> findPreviousTaskListByWorkProcessId(Integer processInstanceId) {
return taskInstanceMapper.findValidTaskListByProcessId(processInstanceId, Flag.NO);
}
/**
* update work process instance map
*
* @param processInstanceMap processInstanceMap
* @return update process instance result
*/
public int updateWorkProcessInstanceMap(ProcessInstanceMap processInstanceMap) {
return processInstanceMapMapper.updateById(processInstanceMap);
}
/**
* create work process instance map
*
* @param processInstanceMap processInstanceMap
* @return create process instance result
*/
public int createWorkProcessInstanceMap(ProcessInstanceMap processInstanceMap) {
if (processInstanceMap == null) {
return 0;
}
return processInstanceMapMapper.insert(processInstanceMap);
}
/**
* find work process map by parent process id and parent task id.
*
* @param parentWorkProcessId parentWorkProcessId
* @param parentTaskId parentTaskId
* @return process instance map
*/
public ProcessInstanceMap findWorkProcessMapByParent(Integer parentWorkProcessId, Integer parentTaskId) {
return processInstanceMapMapper.queryByParentId(parentWorkProcessId, parentTaskId);
}
/**
* delete work process map by parent process id
*
* @param parentWorkProcessId parentWorkProcessId
* @return delete process map result
*/
public int deleteWorkProcessMapByParentId(int parentWorkProcessId) {
return processInstanceMapMapper.deleteByParentProcessId(parentWorkProcessId);
}
/**
* find sub process instance
*
* @param parentProcessId parentProcessId
* @param parentTaskId parentTaskId
* @return process instance
*/
public ProcessInstance findSubProcessInstance(Integer parentProcessId, Integer parentTaskId) {
ProcessInstance processInstance = null;
ProcessInstanceMap processInstanceMap = processInstanceMapMapper.queryByParentId(parentProcessId, parentTaskId);
if (processInstanceMap == null || processInstanceMap.getProcessInstanceId() == 0) {
return processInstance;
}
processInstance = findProcessInstanceById(processInstanceMap.getProcessInstanceId());
return processInstance;
}
/**
* find parent process instance
*
* @param subProcessId subProcessId
* @return process instance
*/
public ProcessInstance findParentProcessInstance(Integer subProcessId) {
ProcessInstance processInstance = null;
ProcessInstanceMap processInstanceMap = processInstanceMapMapper.queryBySubProcessId(subProcessId);
if (processInstanceMap == null || processInstanceMap.getProcessInstanceId() == 0) {
return processInstance;
}
processInstance = findProcessInstanceById(processInstanceMap.getParentProcessInstanceId());
return processInstance;
}
/**
* change task state
*
* @param taskInstance taskInstance
* @param state state
* @param startTime startTime
* @param host host
* @param executePath executePath
* @param logPath logPath
* @param taskInstId taskInstId
*/
public void changeTaskState(TaskInstance taskInstance, ExecutionStatus state, Date startTime, String host,
String executePath,
String logPath,
int taskInstId) {
taskInstance.setState(state);
taskInstance.setStartTime(startTime);
taskInstance.setHost(host);
taskInstance.setExecutePath(executePath);
taskInstance.setLogPath(logPath);
saveTaskInstance(taskInstance);
}
/**
* update process instance
*
* @param processInstance processInstance
* @return update process instance result
*/
public int updateProcessInstance(ProcessInstance processInstance) {
return processInstanceMapper.updateById(processInstance);
}
/**
* change task state
*
* @param taskInstance taskInstance
* @param state state
* @param endTime endTime
* @param processId processId
* @param appIds appIds
* @param taskInstId taskInstId
* @param varPool varPool
*/
public void changeTaskState(TaskInstance taskInstance, ExecutionStatus state,
Date endTime,
int processId,
String appIds,
int taskInstId,
String varPool) {
taskInstance.setPid(processId);
taskInstance.setAppLink(appIds);
taskInstance.setState(state);
taskInstance.setEndTime(endTime);
taskInstance.setVarPool(varPool);
changeOutParam(taskInstance);
saveTaskInstance(taskInstance);
}
/**
* copy OUT params from the var pool into the task's local params, for display on the task
* instance page
*
* @param taskInstance taskInstance
*/
public void changeOutParam(TaskInstance taskInstance) {
if (StringUtils.isEmpty(taskInstance.getVarPool())) {
return;
}
List<Property> properties = JSONUtils.toList(taskInstance.getVarPool(), Property.class);
if (CollectionUtils.isEmpty(properties)) {
return;
}
// if the result has more than one line, just get the first one
Map<String, Object> taskParams = JSONUtils.parseObject(taskInstance.getTaskParams(), new TypeReference<Map<String, Object>>() {});
Object localParams = taskParams.get(LOCAL_PARAMS);
if (localParams == null) {
return;
}
List<Property> allParam = JSONUtils.toList(JSONUtils.toJsonString(localParams), Property.class);
Map<String, String> outProperty = new HashMap<>();
for (Property info : properties) {
if (info.getDirect() == Direct.OUT) {
outProperty.put(info.getProp(), info.getValue());
}
}
for (Property info : allParam) {
if (info.getDirect() == Direct.OUT) {
String paramName = info.getProp();
info.setValue(outProperty.get(paramName));
}
}
taskParams.put(LOCAL_PARAMS, allParam);
taskInstance.setTaskParams(JSONUtils.toJsonString(taskParams));
}
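// Worked example (values invented): with varPool
//   [{"prop":"result","direct":"OUT","type":"VARCHAR","value":"42"}]
// and a matching OUT entry in localParams, the loops above copy "42" into the local param's
// value so the task instance page can display it.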
/**
* convert integer list to string list
*
* @param intList intList
* @return string list
*/
public List<String> convertIntListToString(List<Integer> intList) {
if (intList == null) {
return new ArrayList<>();
}
List<String> result = new ArrayList<>(intList.size());
for (Integer intVar : intList) {
result.add(String.valueOf(intVar));
}
return result;
}
/**
* query schedule by id
*
* @param id id
* @return schedule
*/
public Schedule querySchedule(int id) {
return scheduleMapper.selectById(id);
}
/**
* query Schedule by processDefinitionCode
*
* @param processDefinitionCode processDefinitionCode
* @see Schedule
*/
public List<Schedule> queryReleaseSchedulerListByProcessDefinitionCode(long processDefinitionCode) {
return scheduleMapper.queryReleaseSchedulerListByProcessDefinitionCode(processDefinitionCode);
}
/**
* query need failover process instance
*
* @param host host
* @return process instance list
*/
public List<ProcessInstance> queryNeedFailoverProcessInstances(String host) {
return processInstanceMapper.queryByHostAndStatus(host, stateArray);
}
/**
* process need failover process instance
*
* @param processInstance processInstance
*/
@Transactional(rollbackFor = RuntimeException.class)
public void processNeedFailoverProcessInstances(ProcessInstance processInstance) {
// 1. clear the processInstance host (set it to null)
processInstance.setHost(Constants.NULL);
processInstanceMapper.updateById(processInstance);
ProcessDefinition processDefinition = findProcessDefinition(processInstance.getProcessDefinitionCode(), processInstance.getProcessDefinitionVersion());
// 2. insert a recover command
Command cmd = new Command();
cmd.setProcessDefinitionCode(processDefinition.getCode());
cmd.setProcessDefinitionVersion(processDefinition.getVersion());
cmd.setProcessInstanceId(processInstance.getId());
cmd.setCommandParam(String.format("{\"%s\":%d}", Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING, processInstance.getId()));
cmd.setExecutorId(processInstance.getExecutorId());
cmd.setCommandType(CommandType.RECOVER_TOLERANCE_FAULT_PROCESS);
createCommand(cmd);
}
/**
* query all need failover task instances by host
*
* @param host host
* @return task instance list
*/
public List<TaskInstance> queryNeedFailoverTaskInstances(String host) {
return taskInstanceMapper.queryByHostAndStatus(host,
stateArray);
}
/**
* find data source by id
*
* @param id id
* @return datasource
*/
public DataSource findDataSourceById(int id) {
return dataSourceMapper.selectById(id);
}
/**
* update process instance state by id
*
* @param processInstanceId processInstanceId
* @param executionStatus executionStatus
* @return update process result
*/
public int updateProcessInstanceState(Integer processInstanceId, ExecutionStatus executionStatus) {
ProcessInstance instance = processInstanceMapper.selectById(processInstanceId);
instance.setState(executionStatus);
return processInstanceMapper.updateById(instance);
}
/**
* find process instance by the task id
*
* @param taskId taskId
* @return process instance
*/
public ProcessInstance findProcessInstanceByTaskId(int taskId) {
TaskInstance taskInstance = taskInstanceMapper.selectById(taskId);
if (taskInstance != null) {
return processInstanceMapper.selectById(taskInstance.getProcessInstanceId());
}
return null;
}
/**
* find udf function list by id list string
*
* @param ids ids
* @return udf function list
*/
public List<UdfFunc> queryUdfFunListByIds(int[] ids) {
return udfFuncMapper.queryUdfByIdStr(ids, null);
}
/**
* find tenant code by resource name
*
* @param resName resource name
* @param resourceType resource type
* @return tenant code
*/
public String queryTenantCodeByResName(String resName, ResourceType resourceType) {
// prepend "/" if it is missing, so that resources created by older versions can still be resolved
String fullName = resName.startsWith("/") ? resName : String.format("/%s", resName);
List<Resource> resourceList = resourceMapper.queryResource(fullName, resourceType.ordinal());
if (CollectionUtils.isEmpty(resourceList)) {
return StringUtils.EMPTY;
}
int userId = resourceList.get(0).getUserId();
User user = userMapper.selectById(userId);
if (Objects.isNull(user)) {
return StringUtils.EMPTY;
}
Tenant tenant = tenantMapper.selectById(user.getTenantId());
if (Objects.isNull(tenant)) {
return StringUtils.EMPTY;
}
return tenant.getTenantCode();
}
/**
* find schedule list by process define codes.
*
* @param codes codes
* @return schedule list
*/
public List<Schedule> selectAllByProcessDefineCode(long[] codes) {
return scheduleMapper.selectAllByProcessDefineArray(codes);
}
/**
* find last scheduler process instance in the date interval
*
* @param definitionCode definitionCode
* @param dateInterval dateInterval
* @return process instance
*/
public ProcessInstance findLastSchedulerProcessInterval(Long definitionCode, DateInterval dateInterval) {
return processInstanceMapper.queryLastSchedulerProcess(definitionCode,
dateInterval.getStartTime(),
dateInterval.getEndTime());
}
/**
* find last manual process instance interval
*
* @param definitionCode process definition code
* @param dateInterval dateInterval
* @return process instance
*/
public ProcessInstance findLastManualProcessInterval(Long definitionCode, DateInterval dateInterval) {
return processInstanceMapper.queryLastManualProcess(definitionCode,
dateInterval.getStartTime(),
dateInterval.getEndTime());
}
/**
* find last running process instance
*
* @param definitionCode process definition code
* @param startTime start time
* @param endTime end time
* @return process instance
*/
public ProcessInstance findLastRunningProcess(Long definitionCode, Date startTime, Date endTime) {
return processInstanceMapper.queryLastRunningProcess(definitionCode,
startTime,
endTime,
stateArray);
}
/**
* query user queue by process instance id
*
* @param processInstanceId processInstanceId
* @return queue
*/
public String queryUserQueueByProcessInstanceId(int processInstanceId) {
String queue = "";
ProcessInstance processInstance = processInstanceMapper.selectById(processInstanceId);
if (processInstance == null) {
return queue;
}
User executor = userMapper.selectById(processInstance.getExecutorId());
if (executor != null) {
queue = executor.getQueue();
}
return queue;
}
/**
* query project name and user name by processInstanceId.
*
* @param processInstanceId processInstanceId
* @return projectName and userName
*/
public ProjectUser queryProjectWithUserByProcessInstanceId(int processInstanceId) {
return projectMapper.queryProjectWithUserByProcessInstanceId(processInstanceId);
}
/**
* get task worker group
*
* @param taskInstance taskInstance
* @return workerGroupId
*/
public String getTaskWorkerGroup(TaskInstance taskInstance) {
String workerGroup = taskInstance.getWorkerGroup();
if (StringUtils.isNotBlank(workerGroup)) {
return workerGroup;
}
int processInstanceId = taskInstance.getProcessInstanceId();
ProcessInstance processInstance = findProcessInstanceById(processInstanceId);
if (processInstance != null) {
return processInstance.getWorkerGroup();
}
logger.info("task : {} will use default worker group", taskInstance.getId());
return Constants.DEFAULT_WORKER_GROUP;
}
/**
* get have perm project list
*
* @param userId userId
* @return project list
*/
public List<Project> getProjectListHavePerm(int userId) {
List<Project> createProjects = projectMapper.queryProjectCreatedByUser(userId);
List<Project> authedProjects = projectMapper.queryAuthedProjectListByUserId(userId);
if (createProjects == null) {
createProjects = new ArrayList<>();
}
if (authedProjects != null) {
createProjects.addAll(authedProjects);
}
return createProjects;
}
/**
* list unauthorized udf function
*
* @param userId user id
* @param needChecks data source id array
* @return unauthorized udf function list
*/
public <T> List<T> listUnauthorized(int userId, T[] needChecks, AuthorizationType authorizationType) {
List<T> resultList = new ArrayList<>();
if (Objects.nonNull(needChecks) && needChecks.length > 0) {
Set<T> originResSet = new HashSet<>(Arrays.asList(needChecks));
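// start from everything that needs checking, then remove what the user is authorized for; whatever remains is unauthorized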
switch (authorizationType) {
case RESOURCE_FILE_ID:
case UDF_FILE:
List<Resource> ownUdfResources = resourceMapper.listAuthorizedResourceById(userId, needChecks);
addAuthorizedResources(ownUdfResources, userId);
Set<Integer> authorizedResourceFiles = ownUdfResources.stream().map(Resource::getId).collect(toSet());
originResSet.removeAll(authorizedResourceFiles);
break;
case RESOURCE_FILE_NAME:
List<Resource> ownResources = resourceMapper.listAuthorizedResource(userId, needChecks);
addAuthorizedResources(ownResources, userId);
Set<String> authorizedResources = ownResources.stream().map(Resource::getFullName).collect(toSet());
originResSet.removeAll(authorizedResources);
break;
case DATASOURCE:
Set<Integer> authorizedDatasources = dataSourceMapper.listAuthorizedDataSource(userId, needChecks).stream().map(DataSource::getId).collect(toSet());
originResSet.removeAll(authorizedDatasources);
break;
case UDF:
Set<Integer> authorizedUdfs = udfFuncMapper.listAuthorizedUdfFunc(userId, needChecks).stream().map(UdfFunc::getId).collect(toSet());
originResSet.removeAll(authorizedUdfs);
break;
default:
break;
}
resultList.addAll(originResSet);
}
return resultList;
}
/**
* get user by user id
*
* @param userId user id
* @return User
*/
public User getUserById(int userId) {
return userMapper.selectById(userId);
}
/**
* get resource by resource id
*
* @param resourceId resource id
* @return Resource
*/
public Resource getResourceById(int resourceId) {
return resourceMapper.selectById(resourceId);
}
/**
* list resources by ids
*
* @param resIds resIds
* @return resource list
*/
public List<Resource> listResourceByIds(Integer[] resIds) {
return resourceMapper.listResourceByIds(resIds);
}
/**
* format task app id in task instance
*/
public String formatTaskAppId(TaskInstance taskInstance) {
ProcessInstance processInstance = findProcessInstanceById(taskInstance.getProcessInstanceId());
if (processInstance == null) {
return "";
}
ProcessDefinition definition = findProcessDefinition(processInstance.getProcessDefinitionCode(), processInstance.getProcessDefinitionVersion());
if (definition == null) {
return "";
}
return String.format("%s_%s_%s", definition.getId(), processInstance.getId(), taskInstance.getId());
}
/**
* switch process definition version to process definition log version
*/
public int switchVersion(ProcessDefinition processDefinition, ProcessDefinitionLog processDefinitionLog) {
if (null == processDefinition || null == processDefinitionLog) {
return Constants.DEFINITION_FAILURE;
}
processDefinitionLog.setId(processDefinition.getId());
processDefinitionLog.setReleaseState(ReleaseState.OFFLINE);
processDefinitionLog.setFlag(Flag.YES);
int result = processDefineMapper.updateById(processDefinitionLog);
if (result > 0) {
result = switchProcessTaskRelationVersion(processDefinitionLog);
if (result <= 0) {
return Constants.DEFINITION_FAILURE;
}
}
return result;
}
public int switchProcessTaskRelationVersion(ProcessDefinition processDefinition) {
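// replace the current relations of this definition with the ones logged for its current version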
List<ProcessTaskRelation> processTaskRelationList = processTaskRelationMapper.queryByProcessCode(processDefinition.getProjectCode(), processDefinition.getCode());
if (!processTaskRelationList.isEmpty()) {
processTaskRelationMapper.deleteByCode(processDefinition.getProjectCode(), processDefinition.getCode());
}
List<ProcessTaskRelationLog> processTaskRelationLogList = processTaskRelationLogMapper.queryByProcessCodeAndVersion(processDefinition.getCode(), processDefinition.getVersion());
return processTaskRelationMapper.batchInsert(processTaskRelationLogList);
}
/**
* get resource ids
*
* @param taskDefinition taskDefinition
* @return resource ids
*/
public String getResourceIds(TaskDefinition taskDefinition) {
Set<Integer> resourceIds = null;
AbstractParameters params = TaskParametersUtils.getParameters(taskDefinition.getTaskType(), taskDefinition.getTaskParams());
if (params != null && CollectionUtils.isNotEmpty(params.getResourceFilesList())) {
resourceIds = params.getResourceFilesList().stream()
    .filter(t -> t.getId() != 0)
    .map(ResourceInfo::getId)
    .collect(Collectors.toSet());
}
if (CollectionUtils.isEmpty(resourceIds)) {
return StringUtils.EMPTY;
}
return StringUtils.join(resourceIds, ",");
}
public int saveTaskDefine(User operator, long projectCode, List<TaskDefinitionLog> taskDefinitionLogs) {
Date now = new Date();
List<TaskDefinitionLog> newTaskDefinitionLogs = new ArrayList<>();
List<TaskDefinitionLog> updateTaskDefinitionLogs = new ArrayList<>();
for (TaskDefinitionLog taskDefinitionLog : taskDefinitionLogs) {
taskDefinitionLog.setProjectCode(projectCode);
taskDefinitionLog.setUpdateTime(now);
taskDefinitionLog.setOperateTime(now);
taskDefinitionLog.setOperator(operator.getId());
taskDefinitionLog.setResourceIds(getResourceIds(taskDefinitionLog));
if (taskDefinitionLog.getCode() > 0 && taskDefinitionLog.getVersion() > 0) {
TaskDefinitionLog definitionCodeAndVersion = taskDefinitionLogMapper
.queryByDefinitionCodeAndVersion(taskDefinitionLog.getCode(), taskDefinitionLog.getVersion());
if (definitionCodeAndVersion != null) {
if (!taskDefinitionLog.equals(definitionCodeAndVersion)) {
taskDefinitionLog.setUserId(definitionCodeAndVersion.getUserId());
Integer version = taskDefinitionLogMapper.queryMaxVersionForDefinition(taskDefinitionLog.getCode());
taskDefinitionLog.setVersion(version + 1);
taskDefinitionLog.setCreateTime(definitionCodeAndVersion.getCreateTime());
updateTaskDefinitionLogs.add(taskDefinitionLog);
}
continue;
}
}
taskDefinitionLog.setUserId(operator.getId());
taskDefinitionLog.setVersion(Constants.VERSION_FIRST);
taskDefinitionLog.setCreateTime(now);
if (taskDefinitionLog.getCode() == 0) {
try {
taskDefinitionLog.setCode(SnowFlakeUtils.getInstance().nextId());
} catch (SnowFlakeException e) {
logger.error("Task code get error, ", e);
return Constants.DEFINITION_FAILURE;
}
}
newTaskDefinitionLogs.add(taskDefinitionLog);
}
int insertResult = 0;
int updateResult = 0;
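// apply pending updates first; if the task definition row has disappeared, fall back to inserting it as new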
for (TaskDefinitionLog taskDefinitionToUpdate : updateTaskDefinitionLogs) {
TaskDefinition task = taskDefinitionMapper.queryByCode(taskDefinitionToUpdate.getCode());
if (task == null) {
newTaskDefinitionLogs.add(taskDefinitionToUpdate);
} else {
insertResult += taskDefinitionLogMapper.insert(taskDefinitionToUpdate);
taskDefinitionToUpdate.setId(task.getId());
updateResult += taskDefinitionMapper.updateById(taskDefinitionToUpdate);
}
}
if (!newTaskDefinitionLogs.isEmpty()) {
updateResult += taskDefinitionMapper.batchInsert(newTaskDefinitionLogs);
insertResult += taskDefinitionLogMapper.batchInsert(newTaskDefinitionLogs);
}
return (insertResult & updateResult) > 0 ? 1 : Constants.EXIT_CODE_SUCCESS;
}
/**
* save processDefinition (including create or update processDefinition)
*/
public int saveProcessDefine(User operator, ProcessDefinition processDefinition, Boolean isFromProcessDefine) {
ProcessDefinitionLog processDefinitionLog = new ProcessDefinitionLog(processDefinition);
Integer version = processDefineLogMapper.queryMaxVersionForDefinition(processDefinition.getCode());
int insertVersion = version == null || version == 0 ? Constants.VERSION_FIRST : version + 1;
processDefinitionLog.setVersion(insertVersion);
processDefinitionLog.setReleaseState(isFromProcessDefine ? ReleaseState.OFFLINE : ReleaseState.ONLINE);
processDefinitionLog.setOperator(operator.getId());
processDefinitionLog.setOperateTime(processDefinition.getUpdateTime());
int insertLog = processDefineLogMapper.insert(processDefinitionLog);
int result;
if (0 == processDefinition.getId()) {
result = processDefineMapper.insert(processDefinitionLog);
} else {
processDefinitionLog.setId(processDefinition.getId());
result = processDefineMapper.updateById(processDefinitionLog);
}
return (insertLog & result) > 0 ? insertVersion : 0;
}
/**
* save task relations
*/
public int saveTaskRelation(User operator, long projectCode, long processDefinitionCode, int processDefinitionVersion,
List<ProcessTaskRelationLog> taskRelationList, List<TaskDefinitionLog> taskDefinitionLogs) {
Map<Long, TaskDefinitionLog> taskDefinitionLogMap = null;
if (CollectionUtils.isNotEmpty(taskDefinitionLogs)) {
taskDefinitionLogMap = taskDefinitionLogs.stream()
.collect(Collectors.toMap(TaskDefinition::getCode, taskDefinitionLog -> taskDefinitionLog));
}
Date now = new Date();
for (ProcessTaskRelationLog processTaskRelationLog : taskRelationList) {
processTaskRelationLog.setProjectCode(projectCode);
processTaskRelationLog.setProcessDefinitionCode(processDefinitionCode);
processTaskRelationLog.setProcessDefinitionVersion(processDefinitionVersion);
if (taskDefinitionLogMap != null) {
TaskDefinitionLog taskDefinitionLog = taskDefinitionLogMap.get(processTaskRelationLog.getPreTaskCode());
if (taskDefinitionLog != null) {
processTaskRelationLog.setPreTaskVersion(taskDefinitionLog.getVersion());
}
processTaskRelationLog.setPostTaskVersion(taskDefinitionLogMap.get(processTaskRelationLog.getPostTaskCode()).getVersion());
}
processTaskRelationLog.setCreateTime(now);
processTaskRelationLog.setUpdateTime(now);
processTaskRelationLog.setOperator(operator.getId());
processTaskRelationLog.setOperateTime(now);
}
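// skip the rewrite when the relations to save are identical (compared as hashCode sets) to what is already stored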
List<ProcessTaskRelation> processTaskRelationList = processTaskRelationMapper.queryByProcessCode(projectCode, processDefinitionCode);
if (!processTaskRelationList.isEmpty()) {
Set<Integer> processTaskRelationSet = processTaskRelationList.stream().map(ProcessTaskRelation::hashCode).collect(toSet());
Set<Integer> taskRelationSet = taskRelationList.stream().map(ProcessTaskRelationLog::hashCode).collect(toSet());
boolean result = CollectionUtils.isEqualCollection(processTaskRelationSet, taskRelationSet);
if (result) {
return Constants.EXIT_CODE_SUCCESS;
}
processTaskRelationMapper.deleteByCode(projectCode, processDefinitionCode);
}
int result = processTaskRelationMapper.batchInsert(taskRelationList);
int resultLog = processTaskRelationLogMapper.batchInsert(taskRelationList);
return (result & resultLog) > 0 ? Constants.EXIT_CODE_SUCCESS : Constants.EXIT_CODE_FAILURE;
}
public boolean isTaskOnline(long taskCode) {
List<ProcessTaskRelation> processTaskRelationList = processTaskRelationMapper.queryByTaskCode(taskCode);
if (!processTaskRelationList.isEmpty()) {
Set<Long> processDefinitionCodes = processTaskRelationList
.stream()
.map(ProcessTaskRelation::getProcessDefinitionCode)
.collect(Collectors.toSet());
List<ProcessDefinition> processDefinitionList = processDefineMapper.queryByCodes(processDefinitionCodes);
// check whether any of these process definitions is already online
for (ProcessDefinition processDefinition : processDefinitionList) {
if (processDefinition.getReleaseState() == ReleaseState.ONLINE) {
return true;
}
}
}
return false;
}
/**
* Generate the DAG Graph based on the process definition id
*
* @param processDefinition process definition
* @return dag graph
*/
public DAG<String, TaskNode, TaskNodeRelation> genDagGraph(ProcessDefinition processDefinition) {
List<ProcessTaskRelation> processTaskRelations = processTaskRelationMapper.queryByProcessCode(processDefinition.getProjectCode(), processDefinition.getCode());
List<TaskNode> taskNodeList = transformTask(processTaskRelations, Lists.newArrayList());
ProcessDag processDag = DagHelper.getProcessDag(taskNodeList, new ArrayList<>(processTaskRelations));
// Generate concrete Dag to be executed
return DagHelper.buildDagGraph(processDag);
}
/**
* generate DagData
*/
public DagData genDagData(ProcessDefinition processDefinition) {
List<ProcessTaskRelation> processTaskRelations = processTaskRelationMapper.queryByProcessCode(processDefinition.getProjectCode(), processDefinition.getCode());
List<TaskDefinitionLog> taskDefinitionLogList = genTaskDefineList(processTaskRelations);
List<TaskDefinition> taskDefinitions = taskDefinitionLogList.stream()
.map(taskDefinitionLog -> JSONUtils.parseObject(JSONUtils.toJsonString(taskDefinitionLog), TaskDefinition.class))
.collect(Collectors.toList());
return new DagData(processDefinition, processTaskRelations, taskDefinitions);
}
public List<TaskDefinitionLog> genTaskDefineList(List<ProcessTaskRelation> processTaskRelations) {
Set<TaskDefinition> taskDefinitionSet = new HashSet<>();
for (ProcessTaskRelation processTaskRelation : processTaskRelations) {
if (processTaskRelation.getPreTaskCode() > 0) {
taskDefinitionSet.add(new TaskDefinition(processTaskRelation.getPreTaskCode(), processTaskRelation.getPreTaskVersion()));
}
if (processTaskRelation.getPostTaskCode() > 0) {
taskDefinitionSet.add(new TaskDefinition(processTaskRelation.getPostTaskCode(), processTaskRelation.getPostTaskVersion()));
}
}
return taskDefinitionLogMapper.queryByTaskDefinitions(taskDefinitionSet);
}
/**
* find task definition by code and version
*/
public TaskDefinition findTaskDefinition(long taskCode, int taskDefinitionVersion) {
return taskDefinitionLogMapper.queryByDefinitionCodeAndVersion(taskCode, taskDefinitionVersion);
}
/**
* find process task relation list by projectCode and processDefinitionCode
*/
public List<ProcessTaskRelation> findRelationByCode(long projectCode, long processDefinitionCode) {
return processTaskRelationMapper.queryByProcessCode(projectCode, processDefinitionCode);
}
/**
* add authorized resources
*
* @param ownResources own resources
* @param userId userId
*/
private void addAuthorizedResources(List<Resource> ownResources, int userId) {
List<Integer> relationResourceIds = resourceUserMapper.queryResourcesIdListByUserIdAndPerm(userId, 7);
List<Resource> relationResources = CollectionUtils.isNotEmpty(relationResourceIds) ? resourceMapper.queryResourceListById(relationResourceIds) : new ArrayList<>();
ownResources.addAll(relationResources);
}
/**
* Used temporarily until TaskNode is refactored
*/
public List<TaskNode> transformTask(List<ProcessTaskRelation> taskRelationList, List<TaskDefinitionLog> taskDefinitionLogs) {
Map<Long, List<Long>> taskCodeMap = new HashMap<>();
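// map each post-task code to the codes of its pre-tasks (the reverse edges of the DAG)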
for (ProcessTaskRelation processTaskRelation : taskRelationList) {
taskCodeMap.compute(processTaskRelation.getPostTaskCode(), (k, v) -> {
if (v == null) {
v = new ArrayList<>();
}
if (processTaskRelation.getPreTaskCode() != 0L) {
v.add(processTaskRelation.getPreTaskCode());
}
return v;
});
}
if (CollectionUtils.isEmpty(taskDefinitionLogs)) {
taskDefinitionLogs = genTaskDefineList(taskRelationList);
}
Map<Long, TaskDefinitionLog> taskDefinitionLogMap = taskDefinitionLogs.stream()
.collect(Collectors.toMap(TaskDefinitionLog::getCode, taskDefinitionLog -> taskDefinitionLog));
List<TaskNode> taskNodeList = new ArrayList<>();
for (Entry<Long, List<Long>> code : taskCodeMap.entrySet()) {
TaskDefinitionLog taskDefinitionLog = taskDefinitionLogMap.get(code.getKey());
if (taskDefinitionLog != null) {
TaskNode taskNode = new TaskNode();
taskNode.setCode(taskDefinitionLog.getCode());
taskNode.setVersion(taskDefinitionLog.getVersion());
taskNode.setName(taskDefinitionLog.getName());
taskNode.setDesc(taskDefinitionLog.getDescription());
taskNode.setType(taskDefinitionLog.getTaskType().toUpperCase());
taskNode.setRunFlag(taskDefinitionLog.getFlag() == Flag.YES ? Constants.FLOWNODE_RUN_FLAG_NORMAL : Constants.FLOWNODE_RUN_FLAG_FORBIDDEN);
taskNode.setMaxRetryTimes(taskDefinitionLog.getFailRetryTimes());
taskNode.setRetryInterval(taskDefinitionLog.getFailRetryInterval());
Map<String, Object> taskParamsMap = taskNode.taskParamsToJsonObj(taskDefinitionLog.getTaskParams());
taskNode.setConditionResult(JSONUtils.toJsonString(taskParamsMap.get(Constants.CONDITION_RESULT)));
taskNode.setSwitchResult(JSONUtils.toJsonString(taskParamsMap.get(Constants.SWITCH_RESULT)));
taskNode.setDependence(JSONUtils.toJsonString(taskParamsMap.get(Constants.DEPENDENCE)));
taskParamsMap.remove(Constants.CONDITION_RESULT);
taskParamsMap.remove(Constants.DEPENDENCE);
taskNode.setParams(JSONUtils.toJsonString(taskParamsMap));
taskNode.setTaskInstancePriority(taskDefinitionLog.getTaskPriority());
taskNode.setWorkerGroup(taskDefinitionLog.getWorkerGroup());
taskNode.setEnvironmentCode(taskDefinitionLog.getEnvironmentCode());
taskNode.setTimeout(JSONUtils.toJsonString(new TaskTimeoutParameter(taskDefinitionLog.getTimeoutFlag() == TimeoutFlag.OPEN,
taskDefinitionLog.getTimeoutNotifyStrategy(),
taskDefinitionLog.getTimeout())));
taskNode.setDelayTime(taskDefinitionLog.getDelayTime());
taskNode.setPreTasks(JSONUtils.toJsonString(code.getValue().stream().map(taskDefinitionLogMap::get).map(TaskDefinition::getCode).collect(Collectors.toList())));
taskNodeList.add(taskNode);
}
}
return taskNodeList;
}
public Map<ProcessInstance, TaskInstance> notifyProcessList(int processId, int taskId) {
HashMap<ProcessInstance, TaskInstance> processTaskMap = new HashMap<>();
// find the parent process instance and parent task of this sub-process
ProcessInstanceMap processInstanceMap = processInstanceMapMapper.queryBySubProcessId(processId);
if (processInstanceMap == null) {
return processTaskMap;
}
ProcessInstance fatherProcess = this.findProcessInstanceById(processInstanceMap.getParentProcessInstanceId());
TaskInstance fatherTask = this.findTaskInstanceById(processInstanceMap.getParentTaskInstanceId());
if (fatherProcess != null) {
processTaskMap.put(fatherProcess, fatherTask);
}
return processTaskMap;
}
public ProcessInstance loadNextProcess4Serial(long code, int state) {
return this.processInstanceMapper.loadNextProcess4Serial(code, state);
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,849 | [Improvement][MasterServer] improve master scan and handle commands | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Currently the Master scans one command at a time from the DB and converts it into a process instance; this loop runs on a single thread, which limits overall throughput.
It could instead fetch multiple commands per scan and handle them in parallel, as sketched below.
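A minimal sketch of the idea, assuming a batch query and a worker pool exist; `CommandSource`, `fetchCommandBatch` and `handle` below are illustrative stand-ins, not the real `ProcessService` API:

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BatchCommandScanner {

    // Hypothetical abstraction over the real ProcessService.
    public interface CommandSource {
        List<String> fetchCommandBatch(int fetchSize); // e.g. "select ... limit ?" instead of one row

        void handle(String command); // command -> process instance conversion
    }

    private final ExecutorService pool = Executors.newFixedThreadPool(8);

    // One scan cycle: fetch a batch, convert every command in parallel,
    // then wait for the whole batch before starting the next scan.
    public void scanOnce(CommandSource source, int fetchSize) throws InterruptedException {
        List<String> commands = source.fetchCommandBatch(fetchSize);
        CountDownLatch latch = new CountDownLatch(commands.size());
        for (String command : commands) {
            pool.submit(() -> {
                try {
                    source.handle(command);
                } finally {
                    latch.countDown();
                }
            });
        }
        latch.await();
    }
}
```

Waiting on the latch keeps only one batch in flight at a time, so ordering across scans is preserved while the conversions within a batch run concurrently.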
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6849 | https://github.com/apache/dolphinscheduler/pull/6850 | 1be080237bad025651247bd24dc5ad2b24520f8d | 595e4843d02addf9bc4c11a8c556c354109d802f | "2021-11-15T02:20:30Z" | java | "2021-11-19T04:03:49Z" | dolphinscheduler-service/src/test/java/org/apache/dolphinscheduler/service/process/ProcessServiceTest.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.service.process;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_START_PARAMS;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_SUB_PROCESS_DEFINE_CODE;
import static org.mockito.ArgumentMatchers.any;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.CommandType;
import org.apache.dolphinscheduler.common.enums.Flag;
import org.apache.dolphinscheduler.common.enums.ProcessExecutionTypeEnum;
import org.apache.dolphinscheduler.common.enums.TaskType;
import org.apache.dolphinscheduler.common.enums.UserType;
import org.apache.dolphinscheduler.common.enums.WarningType;
import org.apache.dolphinscheduler.common.graph.DAG;
import org.apache.dolphinscheduler.common.model.TaskNode;
import org.apache.dolphinscheduler.common.model.TaskNodeRelation;
import org.apache.dolphinscheduler.common.process.ResourceInfo;
import org.apache.dolphinscheduler.common.task.spark.SparkParameters;
import org.apache.dolphinscheduler.common.utils.DateUtils;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.dao.entity.Command;
import org.apache.dolphinscheduler.dao.entity.ProcessDefinition;
import org.apache.dolphinscheduler.dao.entity.ProcessDefinitionLog;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.dao.entity.ProcessInstanceMap;
import org.apache.dolphinscheduler.dao.entity.ProcessTaskRelation;
import org.apache.dolphinscheduler.dao.entity.ProcessTaskRelationLog;
import org.apache.dolphinscheduler.dao.entity.Resource;
import org.apache.dolphinscheduler.dao.entity.TaskDefinition;
import org.apache.dolphinscheduler.dao.entity.TaskDefinitionLog;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.dao.entity.User;
import org.apache.dolphinscheduler.dao.mapper.CommandMapper;
import org.apache.dolphinscheduler.dao.mapper.ErrorCommandMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionLogMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessInstanceMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessTaskRelationLogMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessTaskRelationMapper;
import org.apache.dolphinscheduler.dao.mapper.ResourceMapper;
import org.apache.dolphinscheduler.dao.mapper.TaskDefinitionLogMapper;
import org.apache.dolphinscheduler.dao.mapper.TaskDefinitionMapper;
import org.apache.dolphinscheduler.dao.mapper.TaskInstanceMapper;
import org.apache.dolphinscheduler.dao.mapper.UserMapper;
import org.apache.dolphinscheduler.service.quartz.cron.CronUtilsTest;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.mockito.junit.MockitoJUnitRunner;
import org.powermock.reflect.Whitebox;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.fasterxml.jackson.databind.JsonNode;
/**
* process service test
*/
@RunWith(MockitoJUnitRunner.Silent.class)
public class ProcessServiceTest {
private static final Logger logger = LoggerFactory.getLogger(CronUtilsTest.class);
@InjectMocks
private ProcessService processService;
@Mock
private CommandMapper commandMapper;
@Mock
private ProcessTaskRelationLogMapper processTaskRelationLogMapper;
@Mock
private ErrorCommandMapper errorCommandMapper;
@Mock
private ProcessDefinitionMapper processDefineMapper;
@Mock
private ProcessInstanceMapper processInstanceMapper;
@Mock
private UserMapper userMapper;
@Mock
private TaskInstanceMapper taskInstanceMapper;
@Mock
private TaskDefinitionLogMapper taskDefinitionLogMapper;
@Mock
private TaskDefinitionMapper taskDefinitionMapper;
@Mock
private ProcessTaskRelationMapper processTaskRelationMapper;
@Mock
private ProcessDefinitionLogMapper processDefineLogMapper;
@Mock
private ResourceMapper resourceMapper;
private HashMap<String, ProcessDefinition> processDefinitionCacheMaps = new HashMap<>();
@Test
public void testCreateSubCommand() {
ProcessInstance parentInstance = new ProcessInstance();
parentInstance.setWarningType(WarningType.SUCCESS);
parentInstance.setWarningGroupId(0);
TaskInstance task = new TaskInstance();
task.setTaskParams("{\"processDefinitionCode\":10}");
task.setId(10);
task.setTaskCode(1L);
task.setTaskDefinitionVersion(1);
ProcessInstance childInstance = null;
ProcessInstanceMap instanceMap = new ProcessInstanceMap();
instanceMap.setParentProcessInstanceId(1);
instanceMap.setParentTaskInstanceId(10);
Command command;
//father history: start; child null == command type: start
parentInstance.setHistoryCmd("START_PROCESS");
parentInstance.setCommandType(CommandType.START_PROCESS);
ProcessDefinition processDefinition = new ProcessDefinition();
processDefinition.setCode(10L);
Mockito.when(processDefineMapper.queryByDefineId(100)).thenReturn(processDefinition);
Mockito.when(processDefineMapper.queryByCode(10L)).thenReturn(processDefinition);
command = processService.createSubProcessCommand(parentInstance, childInstance, instanceMap, task);
Assert.assertEquals(CommandType.START_PROCESS, command.getCommandType());
//father history: start,start failure; child null == command type: start
parentInstance.setCommandType(CommandType.START_FAILURE_TASK_PROCESS);
parentInstance.setHistoryCmd("START_PROCESS,START_FAILURE_TASK_PROCESS");
command = processService.createSubProcessCommand(parentInstance, childInstance, instanceMap, task);
Assert.assertEquals(CommandType.START_PROCESS, command.getCommandType());
//father history: scheduler,start failure; child null == command type: scheduler
parentInstance.setCommandType(CommandType.START_FAILURE_TASK_PROCESS);
parentInstance.setHistoryCmd("SCHEDULER,START_FAILURE_TASK_PROCESS");
command = processService.createSubProcessCommand(parentInstance, childInstance, instanceMap, task);
Assert.assertEquals(CommandType.SCHEDULER, command.getCommandType());
//father history: complement,start failure; child null == command type: complement
String startString = "2020-01-01 00:00:00";
String endString = "2020-01-10 00:00:00";
parentInstance.setCommandType(CommandType.START_FAILURE_TASK_PROCESS);
parentInstance.setHistoryCmd("COMPLEMENT_DATA,START_FAILURE_TASK_PROCESS");
Map<String, String> complementMap = new HashMap<>();
complementMap.put(Constants.CMDPARAM_COMPLEMENT_DATA_START_DATE, startString);
complementMap.put(Constants.CMDPARAM_COMPLEMENT_DATA_END_DATE, endString);
parentInstance.setCommandParam(JSONUtils.toJsonString(complementMap));
command = processService.createSubProcessCommand(parentInstance, childInstance, instanceMap, task);
Assert.assertEquals(CommandType.COMPLEMENT_DATA, command.getCommandType());
JsonNode complementDate = JSONUtils.parseObject(command.getCommandParam());
Date start = DateUtils.stringToDate(complementDate.get(Constants.CMDPARAM_COMPLEMENT_DATA_START_DATE).asText());
Date end = DateUtils.stringToDate(complementDate.get(Constants.CMDPARAM_COMPLEMENT_DATA_END_DATE).asText());
Assert.assertEquals(startString, DateUtils.dateToString(start));
Assert.assertEquals(endString, DateUtils.dateToString(end));
//father history: start,failure,start failure; child not null == command type: start failure
childInstance = new ProcessInstance();
parentInstance.setCommandType(CommandType.START_FAILURE_TASK_PROCESS);
parentInstance.setHistoryCmd("START_PROCESS,START_FAILURE_TASK_PROCESS");
command = processService.createSubProcessCommand(parentInstance, childInstance, instanceMap, task);
Assert.assertEquals(CommandType.START_FAILURE_TASK_PROCESS, command.getCommandType());
}
@Test
public void testVerifyIsNeedCreateCommand() {
List<Command> commands = new ArrayList<>();
Command command = new Command();
command.setCommandType(CommandType.REPEAT_RUNNING);
command.setCommandParam("{\"" + CMD_PARAM_RECOVER_PROCESS_ID_STRING + "\":\"111\"}");
commands.add(command);
Mockito.when(commandMapper.selectList(null)).thenReturn(commands);
Assert.assertFalse(processService.verifyIsNeedCreateCommand(command));
Command command1 = new Command();
command1.setCommandType(CommandType.REPEAT_RUNNING);
command1.setCommandParam("{\"" + CMD_PARAM_RECOVER_PROCESS_ID_STRING + "\":\"222\"}");
Assert.assertTrue(processService.verifyIsNeedCreateCommand(command1));
Command command2 = new Command();
command2.setCommandType(CommandType.PAUSE);
Assert.assertTrue(processService.verifyIsNeedCreateCommand(command2));
}
@Test
public void testCreateRecoveryWaitingThreadCommand() {
int id = 123;
Mockito.when(commandMapper.deleteById(id)).thenReturn(1);
ProcessInstance subProcessInstance = new ProcessInstance();
subProcessInstance.setIsSubProcess(Flag.YES);
Command originCommand = new Command();
originCommand.setId(id);
processService.createRecoveryWaitingThreadCommand(originCommand, subProcessInstance);
ProcessInstance processInstance = new ProcessInstance();
processInstance.setId(111);
processService.createRecoveryWaitingThreadCommand(null, subProcessInstance);
Command recoverCommand = new Command();
recoverCommand.setCommandType(CommandType.RECOVER_WAITING_THREAD);
processService.createRecoveryWaitingThreadCommand(recoverCommand, subProcessInstance);
Command repeatRunningCommand = new Command();
recoverCommand.setCommandType(CommandType.REPEAT_RUNNING);
processService.createRecoveryWaitingThreadCommand(repeatRunningCommand, subProcessInstance);
ProcessInstance subProcessInstance2 = new ProcessInstance();
subProcessInstance2.setId(111);
subProcessInstance2.setIsSubProcess(Flag.NO);
processService.createRecoveryWaitingThreadCommand(repeatRunningCommand, subProcessInstance2);
}
@Test
public void testHandleCommand() {
//cannot construct process instance, return null;
String host = "127.0.0.1";
Command command = new Command();
command.setProcessDefinitionCode(222);
command.setCommandType(CommandType.REPEAT_RUNNING);
command.setCommandParam("{\"" + CMD_PARAM_RECOVER_PROCESS_ID_STRING + "\":\"111\",\""
+ CMD_PARAM_SUB_PROCESS_DEFINE_CODE + "\":\"222\"}");
Assert.assertNull(processService.handleCommand(logger, host, command, processDefinitionCacheMaps));
int definitionVersion = 1;
long definitionCode = 123;
int processInstanceId = 222;
//there is not enough thread for this command
Command command1 = new Command();
command1.setProcessDefinitionCode(definitionCode);
command1.setProcessDefinitionVersion(definitionVersion);
command1.setCommandParam("{\"ProcessInstanceId\":222}");
command1.setCommandType(CommandType.START_PROCESS);
ProcessDefinition processDefinition = new ProcessDefinition();
processDefinition.setId(123);
processDefinition.setName("test");
processDefinition.setVersion(definitionVersion);
processDefinition.setCode(definitionCode);
processDefinition.setGlobalParams("[{\"prop\":\"startParam1\",\"direct\":\"IN\",\"type\":\"VARCHAR\",\"value\":\"\"}]");
processDefinition.setExecutionType(ProcessExecutionTypeEnum.PARALLEL);
ProcessInstance processInstance = new ProcessInstance();
processInstance.setId(222);
processInstance.setProcessDefinitionCode(11L);
processInstance.setHost("127.0.0.1:5678");
processInstance.setProcessDefinitionVersion(1);
processInstance.setId(processInstanceId);
processInstance.setProcessDefinitionCode(definitionCode);
processInstance.setProcessDefinitionVersion(definitionVersion);
Mockito.when(processDefineMapper.queryByCode(command1.getProcessDefinitionCode())).thenReturn(processDefinition);
Mockito.when(processDefineLogMapper.queryByDefinitionCodeAndVersion(processInstance.getProcessDefinitionCode(),
processInstance.getProcessDefinitionVersion())).thenReturn(new ProcessDefinitionLog(processDefinition));
Mockito.when(processInstanceMapper.queryDetailById(222)).thenReturn(processInstance);
Assert.assertNotNull(processService.handleCommand(logger, host, command1, processDefinitionCacheMaps));
Command command2 = new Command();
command2.setCommandParam("{\"ProcessInstanceId\":222,\"StartNodeIdList\":\"n1,n2\"}");
command2.setProcessDefinitionCode(definitionCode);
command2.setProcessDefinitionVersion(definitionVersion);
command2.setCommandType(CommandType.RECOVER_SUSPENDED_PROCESS);
command2.setProcessInstanceId(processInstanceId);
Assert.assertNotNull(processService.handleCommand(logger, host, command2, processDefinitionCacheMaps));
Command command3 = new Command();
command3.setProcessDefinitionCode(definitionCode);
command3.setProcessDefinitionVersion(definitionVersion);
command3.setProcessInstanceId(processInstanceId);
command3.setCommandParam("{\"WaitingThreadInstanceId\":222}");
command3.setCommandType(CommandType.START_FAILURE_TASK_PROCESS);
Assert.assertNotNull(processService.handleCommand(logger, host, command3, processDefinitionCacheMaps));
Command command4 = new Command();
command4.setProcessDefinitionCode(definitionCode);
command4.setProcessDefinitionVersion(definitionVersion);
command4.setCommandParam("{\"WaitingThreadInstanceId\":222,\"StartNodeIdList\":\"n1,n2\"}");
command4.setCommandType(CommandType.REPEAT_RUNNING);
command4.setProcessInstanceId(processInstanceId);
Assert.assertNotNull(processService.handleCommand(logger, host, command4, processDefinitionCacheMaps));
Command command5 = new Command();
command5.setProcessDefinitionCode(definitionCode);
command5.setProcessDefinitionVersion(definitionVersion);
HashMap<String, String> startParams = new HashMap<>();
startParams.put("startParam1", "testStartParam1");
HashMap<String, String> commandParams = new HashMap<>();
commandParams.put(CMD_PARAM_START_PARAMS, JSONUtils.toJsonString(startParams));
command5.setCommandParam(JSONUtils.toJsonString(commandParams));
command5.setCommandType(CommandType.START_PROCESS);
command5.setDryRun(Constants.DRY_RUN_FLAG_NO);
ProcessInstance processInstance1 = processService.handleCommand(logger, host, command5, processDefinitionCacheMaps);
Assert.assertTrue(processInstance1.getGlobalParams().contains("\"testStartParam1\""));
ProcessDefinition processDefinition1 = new ProcessDefinition();
processDefinition1.setId(123);
processDefinition1.setName("test");
processDefinition1.setVersion(1);
processDefinition1.setCode(11L);
processDefinition1.setVersion(1);
processDefinition1.setExecutionType(ProcessExecutionTypeEnum.SERIAL_WAIT);
List<ProcessInstance> lists = new ArrayList<>();
ProcessInstance processInstance11 = new ProcessInstance();
processInstance11.setId(222);
processInstance11.setProcessDefinitionCode(11L);
processInstance11.setProcessDefinitionVersion(1);
processInstance11.setHost("127.0.0.1:5678");
lists.add(processInstance11);
ProcessInstance processInstance2 = new ProcessInstance();
processInstance2.setId(223);
processInstance2.setProcessDefinitionCode(11L);
processInstance2.setProcessDefinitionVersion(1);
Mockito.when(processInstanceMapper.queryDetailById(223)).thenReturn(processInstance2);
Mockito.when(processDefineMapper.queryByCode(11L)).thenReturn(processDefinition1);
Assert.assertNotNull(processService.handleCommand(logger, host, command1, processDefinitionCacheMaps));
Command command6 = new Command();
command6.setProcessDefinitionCode(11L);
command6.setCommandParam("{\"ProcessInstanceId\":223}");
command6.setCommandType(CommandType.RECOVER_SERIAL_WAIT);
command6.setProcessDefinitionVersion(1);
Mockito.when(processInstanceMapper.queryByProcessDefineCodeAndStatusAndNextId(11L,Constants.RUNNING_PROCESS_STATE,223)).thenReturn(lists);
Mockito.when(processInstanceMapper.updateNextProcessIdById(223, 222)).thenReturn(true);
ProcessInstance processInstance6 = processService.handleCommand(logger, host, command6, processDefinitionCacheMaps);
Assert.assertNotNull(processInstance6);
processDefinition1.setExecutionType(ProcessExecutionTypeEnum.SERIAL_DISCARD);
Mockito.when(processDefineMapper.queryByCode(11L)).thenReturn(processDefinition1);
ProcessInstance processInstance7 = new ProcessInstance();
processInstance7.setId(224);
processInstance7.setProcessDefinitionCode(11L);
processInstance7.setProcessDefinitionVersion(1);
Mockito.when(processInstanceMapper.queryDetailById(224)).thenReturn(processInstance7);
Command command7 = new Command();
command7.setProcessDefinitionCode(11L);
command7.setCommandParam("{\"ProcessInstanceId\":224}");
command7.setCommandType(CommandType.RECOVER_SERIAL_WAIT);
command7.setProcessDefinitionVersion(1);
Mockito.when(processInstanceMapper.queryByProcessDefineCodeAndStatusAndNextId(11L,Constants.RUNNING_PROCESS_STATE,224)).thenReturn(null);
ProcessInstance processInstance8 = processService.handleCommand(logger, host, command7, processDefinitionCacheMaps);
Assert.assertNull(processInstance8);
ProcessDefinition processDefinition2 = new ProcessDefinition();
processDefinition2.setId(123);
processDefinition2.setName("test");
processDefinition2.setVersion(1);
processDefinition2.setCode(12L);
processDefinition2.setExecutionType(ProcessExecutionTypeEnum.SERIAL_PRIORITY);
Mockito.when(processDefineMapper.queryByCode(12L)).thenReturn(processDefinition2);
ProcessInstance processInstance9 = new ProcessInstance();
processInstance9.setId(225);
processInstance9.setProcessDefinitionCode(11L);
processInstance9.setProcessDefinitionVersion(1);
Command command9 = new Command();
command9.setProcessDefinitionCode(12L);
command9.setCommandParam("{\"ProcessInstanceId\":225}");
command9.setCommandType(CommandType.RECOVER_SERIAL_WAIT);
command9.setProcessDefinitionVersion(1);
Mockito.when(processInstanceMapper.queryDetailById(225)).thenReturn(processInstance9);
Mockito.when(processInstanceMapper.queryByProcessDefineCodeAndStatusAndNextId(12L,Constants.RUNNING_PROCESS_STATE,0)).thenReturn(lists);
Mockito.when(processInstanceMapper.updateById(processInstance)).thenReturn(1);
ProcessInstance processInstance10 = processService.handleCommand(logger, host, command9, processDefinitionCacheMaps);
Assert.assertNull(processInstance10);
}
@Test
public void testGetUserById() {
User user = new User();
user.setId(123);
Mockito.when(userMapper.selectById(123)).thenReturn(user);
Assert.assertEquals(user, processService.getUserById(123));
}
@Test
public void testFormatTaskAppId() {
TaskInstance taskInstance = new TaskInstance();
taskInstance.setId(333);
taskInstance.setProcessInstanceId(222);
Mockito.when(processService.findProcessInstanceById(taskInstance.getProcessInstanceId())).thenReturn(null);
Assert.assertEquals("", processService.formatTaskAppId(taskInstance));
ProcessDefinition processDefinition = new ProcessDefinition();
processDefinition.setId(111);
ProcessInstance processInstance = new ProcessInstance();
processInstance.setId(222);
processInstance.setProcessDefinitionVersion(1);
processInstance.setProcessDefinitionCode(1L);
Mockito.when(processService.findProcessInstanceById(taskInstance.getProcessInstanceId())).thenReturn(processInstance);
Assert.assertEquals("", processService.formatTaskAppId(taskInstance));
}
@Test
public void testRecurseFindSubProcessId() {
int parentProcessDefineId = 1;
long parentProcessDefineCode = 1L;
int parentProcessDefineVersion = 1;
ProcessDefinition processDefinition = new ProcessDefinition();
processDefinition.setCode(parentProcessDefineCode);
processDefinition.setVersion(parentProcessDefineVersion);
Mockito.when(processDefineMapper.selectById(parentProcessDefineId)).thenReturn(processDefinition);
long postTaskCode = 2L;
int postTaskVersion = 2;
List<ProcessTaskRelationLog> relationLogList = new ArrayList<>();
ProcessTaskRelationLog processTaskRelationLog = new ProcessTaskRelationLog();
processTaskRelationLog.setPostTaskCode(postTaskCode);
processTaskRelationLog.setPostTaskVersion(postTaskVersion);
relationLogList.add(processTaskRelationLog);
Mockito.when(processTaskRelationLogMapper.queryByProcessCodeAndVersion(parentProcessDefineCode
, parentProcessDefineVersion)).thenReturn(relationLogList);
List<TaskDefinitionLog> taskDefinitionLogs = new ArrayList<>();
TaskDefinitionLog taskDefinitionLog1 = new TaskDefinitionLog();
taskDefinitionLog1.setTaskParams("{\"processDefinitionCode\": 123L}");
taskDefinitionLogs.add(taskDefinitionLog1);
Mockito.when(taskDefinitionLogMapper.queryByTaskDefinitions(Mockito.anySet())).thenReturn(taskDefinitionLogs);
List<Long> ids = new ArrayList<>();
processService.recurseFindSubProcess(parentProcessDefineCode, ids);
Assert.assertEquals(0, ids.size());
}
@Test
public void testSwitchVersion() {
ProcessDefinition processDefinition = new ProcessDefinition();
processDefinition.setCode(1L);
processDefinition.setProjectCode(1L);
processDefinition.setId(123);
processDefinition.setName("test");
processDefinition.setVersion(1);
ProcessDefinitionLog processDefinitionLog = new ProcessDefinitionLog();
processDefinitionLog.setCode(1L);
processDefinitionLog.setVersion(2);
Assert.assertEquals(0, processService.switchVersion(processDefinition, processDefinitionLog));
}
@Test
public void testSaveTaskDefine() {
User operator = new User();
operator.setId(-1);
operator.setUserType(UserType.GENERAL_USER);
long projectCode = 751485690568704L;
String taskJson = "[{\"code\":751500437479424,\"name\":\"aa\",\"version\":1,\"description\":\"\",\"delayTime\":0,"
+ "\"taskType\":\"SHELL\",\"taskParams\":{\"resourceList\":[],\"localParams\":[],\"rawScript\":\"sleep 1s\\necho 11\","
+ "\"dependence\":{},\"conditionResult\":{\"successNode\":[\"\"],\"failedNode\":[\"\"]},\"waitStartTimeout\":{}},"
+ "\"flag\":\"YES\",\"taskPriority\":\"MEDIUM\",\"workerGroup\":\"yarn\",\"failRetryTimes\":0,\"failRetryInterval\":1,"
+ "\"timeoutFlag\":\"OPEN\",\"timeoutNotifyStrategy\":\"FAILED\",\"timeout\":1,\"environmentCode\":751496815697920},"
+ "{\"code\":751516889636864,\"name\":\"bb\",\"description\":\"\",\"taskType\":\"SHELL\",\"taskParams\":{\"resourceList\":[],"
+ "\"localParams\":[],\"rawScript\":\"echo 22\",\"dependence\":{},\"conditionResult\":{\"successNode\":[\"\"],\"failedNode\":[\"\"]},"
+ "\"waitStartTimeout\":{}},\"flag\":\"YES\",\"taskPriority\":\"MEDIUM\",\"workerGroup\":\"default\",\"failRetryTimes\":\"0\","
+ "\"failRetryInterval\":\"1\",\"timeoutFlag\":\"CLOSE\",\"timeoutNotifyStrategy\":\"\",\"timeout\":0,\"delayTime\":\"0\",\"environmentCode\":-1}]";
List<TaskDefinitionLog> taskDefinitionLogs = JSONUtils.toList(taskJson, TaskDefinitionLog.class);
TaskDefinitionLog taskDefinition = new TaskDefinitionLog();
taskDefinition.setCode(751500437479424L);
taskDefinition.setName("aa");
taskDefinition.setProjectCode(751485690568704L);
taskDefinition.setTaskType(TaskType.SHELL.getDesc());
taskDefinition.setUserId(-1);
taskDefinition.setVersion(1);
taskDefinition.setCreateTime(new Date());
taskDefinition.setUpdateTime(new Date());
Mockito.when(taskDefinitionLogMapper.queryByDefinitionCodeAndVersion(taskDefinition.getCode(), taskDefinition.getVersion())).thenReturn(taskDefinition);
Mockito.when(taskDefinitionLogMapper.queryMaxVersionForDefinition(taskDefinition.getCode())).thenReturn(1);
Mockito.when(taskDefinitionMapper.queryByCode(taskDefinition.getCode())).thenReturn(taskDefinition);
int result = processService.saveTaskDefine(operator, projectCode, taskDefinitionLogs);
Assert.assertEquals(0, result);
}
@Test
public void testGenDagGraph() {
ProcessDefinition processDefinition = new ProcessDefinition();
processDefinition.setCode(1L);
processDefinition.setId(123);
processDefinition.setName("test");
processDefinition.setVersion(1);
processDefinition.setCode(11L);
ProcessTaskRelation processTaskRelation = new ProcessTaskRelation();
processTaskRelation.setName("def 1");
processTaskRelation.setProcessDefinitionVersion(1);
processTaskRelation.setProjectCode(1L);
processTaskRelation.setProcessDefinitionCode(1L);
processTaskRelation.setPostTaskCode(3L);
processTaskRelation.setPreTaskCode(2L);
processTaskRelation.setUpdateTime(new Date());
processTaskRelation.setCreateTime(new Date());
List<ProcessTaskRelation> list = new ArrayList<>();
list.add(processTaskRelation);
TaskDefinitionLog taskDefinition = new TaskDefinitionLog();
taskDefinition.setCode(3L);
taskDefinition.setName("1-test");
taskDefinition.setProjectCode(1L);
taskDefinition.setTaskType(TaskType.SHELL.getDesc());
taskDefinition.setUserId(1);
taskDefinition.setVersion(2);
taskDefinition.setCreateTime(new Date());
taskDefinition.setUpdateTime(new Date());
TaskDefinitionLog td2 = new TaskDefinitionLog();
td2.setCode(2L);
td2.setName("unit-test");
td2.setProjectCode(1L);
td2.setTaskType(TaskType.SHELL.getDesc());
td2.setUserId(1);
td2.setVersion(1);
td2.setCreateTime(new Date());
td2.setUpdateTime(new Date());
List<TaskDefinitionLog> taskDefinitionLogs = new ArrayList<>();
taskDefinitionLogs.add(taskDefinition);
taskDefinitionLogs.add(td2);
Mockito.when(taskDefinitionLogMapper.queryByTaskDefinitions(any())).thenReturn(taskDefinitionLogs);
Mockito.when(processTaskRelationMapper.queryByProcessCode(Mockito.anyLong(), Mockito.anyLong())).thenReturn(list);
DAG<String, TaskNode, TaskNodeRelation> stringTaskNodeTaskNodeRelationDAG = processService.genDagGraph(processDefinition);
Assert.assertEquals(1, stringTaskNodeTaskNodeRelationDAG.getNodesCount());
}
@Test
public void testCreateCommand() {
Command command = new Command();
command.setProcessDefinitionCode(123);
command.setCommandParam("{\"ProcessInstanceId\":222}");
command.setCommandType(CommandType.START_PROCESS);
int mockResult = 1;
Mockito.when(commandMapper.insert(command)).thenReturn(mockResult);
int exeMethodResult = processService.createCommand(command);
Assert.assertEquals(mockResult, exeMethodResult);
Mockito.verify(commandMapper, Mockito.times(1)).insert(command);
}
@Test
public void testChangeOutParam() {
TaskInstance taskInstance = new TaskInstance();
taskInstance.setProcessInstanceId(62);
ProcessInstance processInstance = new ProcessInstance();
processInstance.setId(62);
taskInstance.setVarPool("[{\"direct\":\"OUT\",\"prop\":\"test1\",\"type\":\"VARCHAR\",\"value\":\"\"}]");
taskInstance.setTaskParams("{\"type\":\"MYSQL\",\"datasource\":1,\"sql\":\"select id from tb_test limit 1\","
+ "\"udfs\":\"\",\"sqlType\":\"0\",\"sendEmail\":false,\"displayRows\":10,\"title\":\"\","
+ "\"groupId\":null,\"localParams\":[{\"prop\":\"test1\",\"direct\":\"OUT\",\"type\":\"VARCHAR\",\"value\":\"12\"}],"
+ "\"connParams\":\"\",\"preStatements\":[],\"postStatements\":[],\"conditionResult\":\"{\\\"successNode\\\":[\\\"\\\"],"
+ "\\\"failedNode\\\":[\\\"\\\"]}\",\"dependence\":\"{}\"}");
processService.changeOutParam(taskInstance);
}
@Test
public void testUpdateTaskDefinitionResources() throws Exception {
TaskDefinition taskDefinition = new TaskDefinition();
String taskParameters = "{\n"
+ " \"mainClass\": \"org.apache.dolphinscheduler.SparkTest\",\n"
+ " \"mainJar\": {\n"
+ " \"id\": 1\n"
+ " },\n"
+ " \"deployMode\": \"cluster\",\n"
+ " \"resourceList\": [\n"
+ " {\n"
+ " \"id\": 3\n"
+ " },\n"
+ " {\n"
+ " \"id\": 4\n"
+ " }\n"
+ " ],\n"
+ " \"localParams\": [],\n"
+ " \"driverCores\": 1,\n"
+ " \"driverMemory\": \"512M\",\n"
+ " \"numExecutors\": 2,\n"
+ " \"executorMemory\": \"2G\",\n"
+ " \"executorCores\": 2,\n"
+ " \"appName\": \"\",\n"
+ " \"mainArgs\": \"\",\n"
+ " \"others\": \"\",\n"
+ " \"programType\": \"JAVA\",\n"
+ " \"sparkVersion\": \"SPARK2\",\n"
+ " \"dependence\": {},\n"
+ " \"conditionResult\": {\n"
+ " \"successNode\": [\n"
+ " \"\"\n"
+ " ],\n"
+ " \"failedNode\": [\n"
+ " \"\"\n"
+ " ]\n"
+ " },\n"
+ " \"waitStartTimeout\": {}\n"
+ "}";
taskDefinition.setTaskParams(taskParameters);
Map<Integer, Resource> resourceMap =
Stream.of(1, 3, 4)
.map(i -> {
Resource resource = new Resource();
resource.setId(i);
resource.setFileName("file" + i);
resource.setFullName("/file" + i);
return resource;
})
.collect(
Collectors.toMap(
Resource::getId,
resource -> resource)
);
for (Integer integer : Arrays.asList(1, 3, 4)) {
Mockito.when(resourceMapper.selectById(integer))
.thenReturn(resourceMap.get(integer));
}
Whitebox.invokeMethod(processService,
"updateTaskDefinitionResources",
taskDefinition);
String taskParams = taskDefinition.getTaskParams();
SparkParameters sparkParameters = JSONUtils.parseObject(taskParams, SparkParameters.class);
ResourceInfo mainJar = sparkParameters.getMainJar();
Assert.assertEquals(1, mainJar.getId());
Assert.assertEquals("file1", mainJar.getRes());
Assert.assertEquals("/file1", mainJar.getResourceName());
Assert.assertEquals(2, sparkParameters.getResourceList().size());
ResourceInfo res1 = sparkParameters.getResourceList().get(0);
ResourceInfo res2 = sparkParameters.getResourceList().get(1);
Assert.assertEquals(3, res1.getId());
Assert.assertEquals("file3", res1.getRes());
Assert.assertEquals("/file3", res1.getResourceName());
Assert.assertEquals(4, res2.getId());
Assert.assertEquals("file4", res2.getRes());
Assert.assertEquals("/file4", res2.getResourceName());
}
@Test
public void testUpdateResourceInfo() throws Exception {
// test if input is null
ResourceInfo resourceInfoNull = null;
ResourceInfo updatedResourceInfo1 = Whitebox.invokeMethod(processService,
"updateResourceInfo",
resourceInfoNull);
Assert.assertNull(updatedResourceInfo1);
// test if resource id less than 1
ResourceInfo resourceInfoVoid = new ResourceInfo();
ResourceInfo updatedResourceInfo2 = Whitebox.invokeMethod(processService,
"updateResourceInfo",
resourceInfoVoid);
Assert.assertNull(updatedResourceInfo2);
// test normal situation
ResourceInfo resourceInfoNormal = new ResourceInfo();
resourceInfoNormal.setId(1);
Resource resource = new Resource();
resource.setId(1);
resource.setFileName("test.txt");
resource.setFullName("/test.txt");
Mockito.when(resourceMapper.selectById(1)).thenReturn(resource);
ResourceInfo updatedResourceInfo3 = Whitebox.invokeMethod(processService,
"updateResourceInfo",
resourceInfoNormal);
Assert.assertEquals(1, updatedResourceInfo3.getId());
Assert.assertEquals("test.txt", updatedResourceInfo3.getRes());
Assert.assertEquals("/test.txt", updatedResourceInfo3.getResourceName());
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,922 | [Feature][Workflow relationship] The configuration of `i18n` does not fully support English. | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
English support is incomplete: some labels on the workflow relationship page are still rendered in Chinese (see the screenshot below).
![image](https://user-images.githubusercontent.com/19239641/142577832-73952053-731d-4bf3-90ed-5e22f2e16016.png)
### Use case
Improve English support.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6922 | https://github.com/apache/dolphinscheduler/pull/6936 | 1fdbc3270c010d31485b2dfb79243a2300e58759 | 024aacacb305c7145e73c5f3d8d1209a88f814d7 | "2021-11-19T06:49:35Z" | java | "2021-11-20T14:58:30Z" | dolphinscheduler-ui/src/js/conf/home/pages/projects/pages/kinship/_source/graphGridOption.js | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import _ from 'lodash'
import i18n from '@/module/i18n/index.js'
const getCategory = (categoryDic, { workFlowPublishStatus, schedulePublishStatus, code }, sourceWorkFlowCode) => {
if (code === sourceWorkFlowCode) return categoryDic.active
switch (true) {
case workFlowPublishStatus === '0':
return categoryDic['0']
case workFlowPublishStatus === '1' && schedulePublishStatus === '0':
return categoryDic['10']
case workFlowPublishStatus === '1' && schedulePublishStatus === '1':
default:
return categoryDic['1']
}
}
export default function (locations, links, sourceWorkFlowCode, isShowLabel) {
const categoryDic = {
active: { color: '#2D8DF0', category: i18n.$t('KinshipStateActive') },
1: { color: '#00C800', category: i18n.$t('KinshipState1') },
0: { color: '#999999', category: i18n.$t('KinshipState0') },
10: { color: '#FF8F05', category: i18n.$t('KinshipState10') }
}
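// Map each workflow location to an ECharts graph node: the node id is the
// workflow code, while the hover (emphasis) color and legend category are
// derived from its publish status.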
const newData = _.map(locations, (item) => {
const { color, category } = getCategory(categoryDic, item, sourceWorkFlowCode)
return {
...item,
id: item.code,
emphasis: {
itemStyle: {
color
}
},
category
}
})
const categories = [
{ name: categoryDic.active.category },
{ name: categoryDic['1'].category },
{ name: categoryDic['0'].category },
{ name: categoryDic['10'].category }
]
const option = {
tooltip: {
trigger: 'item',
triggerOn: 'mousemove',
backgroundColor: '#2D303A',
padding: [8, 12],
formatter: (params) => {
if (!params.data.name) return ''
const { name, scheduleStartTime, scheduleEndTime, crontab, workFlowPublishStatus, schedulePublishStatus } = params.data
const str = `
工作流名字:${name}<br/>
调度开始时间:${scheduleStartTime}<br/>
调度结束时间:${scheduleEndTime}<br/>
crontab表达式:${crontab}<br/>
工作流发布状态:${workFlowPublishStatus}<br/>
调度发布状态:${schedulePublishStatus}<br/>
`
return str
},
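// NOTE: the tooltip labels above are hardcoded Chinese strings (workflow
// name, schedule start/end time, crontab expression, and the two publish
// statuses), which is the kind of i18n gap this issue tracks. A minimal
// sketch of a locale-aware formatter: 'Process Name', 'Start Time',
// 'End Time' and 'crontab' already exist in the locale files, while the
// two publish-status keys below are hypothetical and would need adding:
//
// formatter: (params) => {
//   if (!params.data.name) return ''
//   const { name, scheduleStartTime, scheduleEndTime, crontab,
//     workFlowPublishStatus, schedulePublishStatus } = params.data
//   return `
//     ${i18n.$t('Process Name')}:${name}<br/>
//     ${i18n.$t('Start Time')}:${scheduleStartTime}<br/>
//     ${i18n.$t('End Time')}:${scheduleEndTime}<br/>
//     ${i18n.$t('crontab')}:${crontab}<br/>
//     ${i18n.$t('Workflow publish status')}:${workFlowPublishStatus}<br/>
//     ${i18n.$t('Schedule publish status')}:${schedulePublishStatus}<br/>
//   `
// },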
color: '#2D303A',
textStyle: {
rich: {
a: {
fontSize: 12,
color: '#2D303A',
lineHeight: 12,
align: 'left',
padding: [4, 4, 4, 4]
}
}
}
},
color: [categoryDic.active.color, categoryDic['1'].color, categoryDic['0'].color, categoryDic['10'].color],
legend: [{
orient: 'horizontal',
top: 6,
left: 6,
data: categories
}],
series: [{
type: 'graph',
layout: 'force',
nodeScaleRatio: 1.2,
draggable: true,
animation: false,
data: newData,
roam: true,
symbol: 'roundRect',
symbolSize: 70,
categories,
label: {
show: isShowLabel,
position: 'inside',
formatter: (params) => {
if (!params.data.name) return ''
const str = params.data.name.split('_').map(item => `{a|${item}\n}`).join('')
return str
},
color: '#222222',
textStyle: {
rich: {
a: {
fontSize: 12,
color: '#222222',
lineHeight: 12,
align: 'left',
padding: [4, 4, 4, 4]
}
}
}
},
edgeSymbol: ['circle', 'arrow'],
edgeSymbolSize: [4, 12],
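// Force-directed layout tuning (ECharts): `repulsion` controls how strongly
// nodes push apart and `edgeLength` sets the preferred link length, so
// larger values spread the graph out.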
force: {
repulsion: 1000,
edgeLength: 300
},
links: links,
lineStyle: {
color: '#999999'
}
}]
}
return option
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,922 | [Feature][Workflow relationship] The configuration of `i18n` does not fully support English. | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
English support is not perfect.
![image](https://user-images.githubusercontent.com/19239641/142577832-73952053-731d-4bf3-90ed-5e22f2e16016.png)
### Use case
Improve English support.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6922 | https://github.com/apache/dolphinscheduler/pull/6936 | 1fdbc3270c010d31485b2dfb79243a2300e58759 | 024aacacb305c7145e73c5f3d8d1209a88f814d7 | "2021-11-19T06:49:35Z" | java | "2021-11-20T14:58:30Z" | dolphinscheduler-ui/src/js/module/i18n/locale/en_US.js | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
export default {
'User Name': 'User Name',
'Please enter user name': 'Please enter user name',
Password: 'Password',
'Please enter your password': 'Please enter your password',
'Password consists of at least two combinations of numbers, letters, and characters, and the length is between 6-22': 'Password consists of at least two combinations of numbers, letters, and characters, and the length is between 6-22',
Login: 'Login',
Home: 'Home',
'Failed to create node to save': 'Failed to create node to save',
'Global parameters': 'Global parameters',
'Local parameters': 'Local parameters',
'Copy success': 'Copy success',
'The browser does not support automatic copying': 'The browser does not support automatic copying',
'Whether to save the DAG graph': 'Whether to save the DAG graph',
'Current node settings': 'Current node settings',
'View history': 'View history',
'View log': 'View log',
'Force success': 'Force success',
'Enter this child node': 'Enter this child node',
'Node name': 'Node name',
'Please enter name (required)': 'Please enter name (required)',
'Run flag': 'Run flag',
Normal: 'Normal',
'Prohibition execution': 'Prohibition execution',
'Please enter description': 'Please enter description',
'Number of failed retries': 'Number of failed retries',
Times: 'Times',
'Failed retry interval': 'Failed retry interval',
Minute: 'Minute',
'Delay execution time': 'Delay execution time',
'Delay execution': 'Delay execution',
'Forced success': 'Forced success',
Cancel: 'Cancel',
'Confirm add': 'Confirm add',
'The newly created sub-Process has not yet been executed and cannot enter the sub-Process': 'The newly created sub-Process has not yet been executed and cannot enter the sub-Process',
'The task has not been executed and cannot enter the sub-Process': 'The task has not been executed and cannot enter the sub-Process',
'Name already exists': 'Name already exists',
'Download Log': 'Download Log',
'Refresh Log': 'Refresh Log',
'Enter full screen': 'Enter full screen',
'Cancel full screen': 'Cancel full screen',
Close: 'Close',
'Update log success': 'Update log success',
'No more logs': 'No more logs',
'No log': 'No log',
'Loading Log...': 'Loading Log...',
'Set the DAG diagram name': 'Set the DAG diagram name',
'Please enter description(optional)': 'Please enter description(optional)',
'Set global': 'Set global',
'Whether to go online the process definition': 'Whether to go online the process definition',
'Whether to update the process definition': 'Whether to update the process definition',
Add: 'Add',
'DAG graph name cannot be empty': 'DAG graph name cannot be empty',
'Create Datasource': 'Create Datasource',
'Project Home': 'Workflow Monitor',
'Project Manage': 'Project',
'Create Project': 'Create Project',
'Cron Manage': 'Cron Manage',
'Copy Workflow': 'Copy Workflow',
'Tenant Manage': 'Tenant Manage',
'Create Tenant': 'Create Tenant',
'User Manage': 'User Manage',
'Create User': 'Create User',
'User Information': 'User Information',
'Edit Password': 'Edit Password',
Success: 'Success',
Failed: 'Failed',
Delete: 'Delete',
'Please choose': 'Please choose',
'Please enter a positive integer': 'Please enter a positive integer',
'Program Type': 'Program Type',
'Main Class': 'Main Class',
'Main Jar Package': 'Main Jar Package',
'Please enter main jar package': 'Please enter main jar package',
'Please enter main class': 'Please enter main class',
'Main Arguments': 'Main Arguments',
'Please enter main arguments': 'Please enter main arguments',
'Option Parameters': 'Option Parameters',
'Please enter option parameters': 'Please enter option parameters',
Resources: 'Resources',
'Custom Parameters': 'Custom Parameters',
'Custom template': 'Custom template',
Datasource: 'Datasource',
methods: 'methods',
'Please enter the procedure method': 'Please enter the procedure script \n\ncall procedure:{call <procedure-name>[(<arg1>,<arg2>, ...)]}\n\ncall function:{?= call <procedure-name>[(<arg1>,<arg2>, ...)]} ',
'The procedure method script example': 'example:{call <procedure-name>[(?,?, ...)]} or {?= call <procedure-name>[(?,?, ...)]}',
Script: 'Script',
'Please enter script(required)': 'Please enter script(required)',
'Deploy Mode': 'Deploy Mode',
'Driver Cores': 'Driver Cores',
'Please enter Driver cores': 'Please enter Driver cores',
'Driver Memory': 'Driver Memory',
'Please enter Driver memory': 'Please enter Driver memory',
'Executor Number': 'Executor Number',
'Please enter Executor number': 'Please enter Executor number',
'The Executor number should be a positive integer': 'The Executor number should be a positive integer',
'Executor Memory': 'Executor Memory',
'Please enter Executor memory': 'Please enter Executor memory',
'Executor Cores': 'Executor Cores',
'Please enter Executor cores': 'Please enter Executor cores',
'Memory should be a positive integer': 'Memory should be a positive integer',
'Core number should be positive integer': 'Core number should be positive integer',
'Flink Version': 'Flink Version',
'JobManager Memory': 'JobManager Memory',
'Please enter JobManager memory': 'Please enter JobManager memory',
'TaskManager Memory': 'TaskManager Memory',
'Please enter TaskManager memory': 'Please enter TaskManager memory',
'Slot Number': 'Slot Number',
'Please enter Slot number': 'Please enter Slot number',
Parallelism: 'Parallelism',
'Custom Parallelism': 'Configure parallelism',
'Please enter Parallelism': 'Please enter Parallelism',
'Parallelism tip': 'If a large number of tasks require complement data, you can use custom parallelism to ' +
'set the number of complement task threads to a reasonable value and avoid an excessive impact on the server.',
'Parallelism number should be positive integer': 'Parallelism number should be positive integer',
'TaskManager Number': 'TaskManager Number',
'Please enter TaskManager number': 'Please enter TaskManager number',
'App Name': 'App Name',
'Please enter app name(optional)': 'Please enter app name(optional)',
'SQL Type': 'SQL Type',
'Send Email': 'Send Email',
'Log display': 'Log display',
'rows of result': 'rows of result',
Title: 'Title',
'Please enter the title of email': 'Please enter the title of email',
Table: 'Table',
TableMode: 'Table',
Attachment: 'Attachment',
'SQL Parameter': 'SQL Parameter',
'SQL Statement': 'SQL Statement',
'UDF Function': 'UDF Function',
'Please enter a SQL Statement(required)': 'Please enter a SQL Statement(required)',
'Please enter a JSON Statement(required)': 'Please enter a JSON Statement(required)',
'One form or attachment must be selected': 'One form or attachment must be selected',
'Mail subject required': 'Mail subject required',
'Child Node': 'Child Node',
'Please select a sub-Process': 'Please select a sub-Process',
Edit: 'Edit',
'Switch To This Version': 'Switch To This Version',
'Datasource Name': 'Datasource Name',
'Please enter datasource name': 'Please enter datasource name',
IP: 'IP',
'Please enter IP': 'Please enter IP',
Port: 'Port',
'Please enter port': 'Please enter port',
'Database Name': 'Database Name',
'Please enter database name': 'Please enter database name',
'Oracle Connect Type': 'ServiceName or SID',
'Oracle Service Name': 'ServiceName',
'Oracle SID': 'SID',
'jdbc connect parameters': 'jdbc connect parameters',
'Test Connect': 'Test Connect',
'Please enter resource name': 'Please enter resource name',
'Please enter resource folder name': 'Please enter resource folder name',
'Please enter a non-query SQL statement': 'Please enter a non-query SQL statement',
'Please enter IP/hostname': 'Please enter IP/hostname',
'jdbc connection parameters is not a correct JSON format': 'jdbc connection parameters is not a correct JSON format',
'#': '#',
'Datasource Type': 'Datasource Type',
'Datasource Parameter': 'Datasource Parameter',
'Create Time': 'Create Time',
'Update Time': 'Update Time',
Operation: 'Operation',
'Current Version': 'Current Version',
'Click to view': 'Click to view',
'Delete?': 'Delete?',
'Switch Version Successfully': 'Switch Version Successfully',
'Confirm Switch To This Version?': 'Confirm Switch To This Version?',
Confirm: 'Confirm',
'Task status statistics': 'Task Status Statistics',
Number: 'Number',
State: 'State',
'Dry-run flag': 'Dry-run flag',
'Process Status Statistics': 'Process Status Statistics',
'Process Definition Statistics': 'Process Definition Statistics',
'Project Name': 'Project Name',
'Please enter name': 'Please enter name',
'Owned Users': 'Owned Users',
'Process Pid': 'Process Pid',
'Zk registration directory': 'Zk registration directory',
cpuUsage: 'cpuUsage',
memoryUsage: 'memoryUsage',
'Last heartbeat time': 'Last heartbeat time',
'Edit Tenant': 'Edit Tenant',
'OS Tenant Code': 'OS Tenant Code',
'Tenant Name': 'Tenant Name',
Queue: 'Yarn Queue',
'Please select a queue': 'default is tenant association queue',
'Please enter the os tenant code in English': 'Please enter the os tenant code in English',
'Please enter os tenant code in English': 'Please enter os tenant code in English',
'Please enter os tenant code': 'Please enter os tenant code',
'Please enter tenant Name': 'Please enter tenant Name',
'The os tenant code. Only letters or a combination of letters and numbers are allowed': 'The os tenant code. Only letters or a combination of letters and numbers are allowed',
'Edit User': 'Edit User',
Tenant: 'Tenant',
Email: 'Email',
Phone: 'Phone',
'User Type': 'User Type',
'Please enter phone number': 'Please enter phone number',
'Please enter email': 'Please enter email',
'Please enter the correct email format': 'Please enter the correct email format',
'Please enter the correct mobile phone format': 'Please enter the correct mobile phone format',
Project: 'Project',
Authorize: 'Authorize',
'File resources': 'File resources',
'UDF resources': 'UDF resources',
'UDF resources directory': 'UDF resources directory',
'Please select UDF resources directory': 'Please select UDF resources directory',
'Alarm group': 'Alarm group',
'Alarm group required': 'Alarm group required',
'Edit alarm group': 'Edit alarm group',
'Create alarm group': 'Create alarm group',
'Create Alarm Instance': 'Create Alarm Instance',
'Edit Alarm Instance': 'Edit Alarm Instance',
'Group Name': 'Group Name',
'Alarm instance name': 'Alarm instance name',
'Alarm plugin name': 'Alarm plugin name',
'Select plugin': 'Select plugin',
'Select Alarm plugin': 'Please select an Alarm plugin',
'Please enter group name': 'Please enter group name',
'Instance parameter exception': 'Instance parameter exception',
'Group Type': 'Group Type',
'Alarm plugin instance': 'Alarm plugin instance',
'Select Alarm plugin instance': 'Please select an Alarm plugin instance',
Remarks: 'Remarks',
SMS: 'SMS',
'Managing Users': 'Managing Users',
Permission: 'Permission',
Administrator: 'Administrator',
'Confirm Password': 'Confirm Password',
'Please enter confirm password': 'Please enter confirm password',
'Password cannot be in Chinese': 'Password cannot be in Chinese',
'Please enter a password (6-22) character password': 'Please enter a password (6-22) character password',
'Confirmation password cannot be in Chinese': 'Confirmation password cannot be in Chinese',
'Please enter a confirmation password (6-22) character password': 'Please enter a confirmation password (6-22) character password',
'The password is inconsistent with the confirmation password': 'The password is inconsistent with the confirmation password',
'Please select the datasource': 'Please select the datasource',
'Please select resources': 'Please select resources',
Query: 'Query',
'Non Query': 'Non Query',
'prop(required)': 'prop(required)',
'value(optional)': 'value(optional)',
'value(required)': 'value(required)',
'prop is empty': 'prop is empty',
'value is empty': 'value is empty',
'prop is repeat': 'prop is repeat',
'Start Time': 'Start Time',
'End Time': 'End Time',
crontab: 'crontab',
'Failure Strategy': 'Failure Strategy',
online: 'online',
offline: 'offline',
'Task Status': 'Task Status',
'Process Instance': 'Process Instance',
'Task Instance': 'Task Instance',
'Select date range': 'Select date range',
startDate: 'startDate',
endDate: 'endDate',
Date: 'Date',
Waiting: 'Waiting',
Execution: 'Execution',
Finish: 'Finish',
'Create File': 'Create File',
'Create folder': 'Create folder',
'File Name': 'File Name',
'Folder Name': 'Folder Name',
'File Format': 'File Format',
'Folder Format': 'Folder Format',
'File Content': 'File Content',
'Upload File Size': 'Upload file size cannot exceed 1G',
Create: 'Create',
'Please enter the resource content': 'Please enter the resource content',
'Resource content cannot exceed 3000 lines': 'Resource content cannot exceed 3000 lines',
'File Details': 'File Details',
'Download Details': 'Download Details',
Return: 'Return',
Save: 'Save',
'File Manage': 'File Manage',
'Upload Files': 'Upload Files',
'Create UDF Function': 'Create UDF Function',
'Upload UDF Resources': 'Upload UDF Resources',
'Service-Master': 'Service-Master',
'Service-Worker': 'Service-Worker',
'Process Name': 'Process Name',
Executor: 'Executor',
'Run Type': 'Run Type',
'Scheduling Time': 'Scheduling Time',
'Run Times': 'Run Times',
host: 'host',
'fault-tolerant sign': 'fault-tolerant sign',
Rerun: 'Rerun',
'Recovery Failed': 'Recovery Failed',
Stop: 'Stop',
Pause: 'Pause',
'Recovery Suspend': 'Recovery Suspend',
Gantt: 'Gantt',
'Node Type': 'Node Type',
'Submit Time': 'Submit Time',
Duration: 'Duration',
'Retry Count': 'Retry Count',
'Task Name': 'Task Name',
'Task Date': 'Task Date',
'Source Table': 'Source Table',
'Record Number': 'Record Number',
'Target Table': 'Target Table',
'Online viewing type is not supported': 'Online viewing type is not supported',
Size: 'Size',
Rename: 'Rename',
Download: 'Download',
Export: 'Export',
'Version Info': 'Version Info',
Submit: 'Submit',
'Edit UDF Function': 'Edit UDF Function',
type: 'type',
'UDF Function Name': 'UDF Function Name',
FILE: 'FILE',
UDF: 'UDF',
'File Subdirectory': 'File Subdirectory',
'Please enter a function name': 'Please enter a function name',
'Package Name': 'Package Name',
'Please enter a Package name': 'Please enter a Package name',
Parameter: 'Parameter',
'Please enter a parameter': 'Please enter a parameter',
'UDF Resources': 'UDF Resources',
'Upload Resources': 'Upload Resources',
Instructions: 'Instructions',
'Please enter a instructions': 'Please enter an instruction',
'Please enter a UDF function name': 'Please enter a UDF function name',
'Select UDF Resources': 'Select UDF Resources',
'Class Name': 'Class Name',
'Jar Package': 'Jar Package',
'Library Name': 'Library Name',
'UDF Resource Name': 'UDF Resource Name',
'File Size': 'File Size',
Description: 'Description',
'Drag Nodes and Selected Items': 'Drag Nodes and Selected Items',
'Select Line Connection': 'Select Line Connection',
'Delete selected lines or nodes': 'Delete selected lines or nodes',
'Full Screen': 'Full Screen',
Unpublished: 'Unpublished',
'Start Process': 'Start Process',
'Execute from the current node': 'Execute from the current node',
'Recover tolerance fault process': 'Recover tolerance fault process',
'Resume the suspension process': 'Resume the suspension process',
'Execute from the failed nodes': 'Execute from the failed nodes',
'Complement Data': 'Complement Data',
'Scheduling execution': 'Scheduling execution',
'Recovery waiting thread': 'Recovery waiting thread',
'Submitted successfully': 'Submitted successfully',
Executing: 'Executing',
'Ready to pause': 'Ready to pause',
'Ready to stop': 'Ready to stop',
'Need fault tolerance': 'Need fault tolerance',
Kill: 'Kill',
'Waiting for thread': 'Waiting for thread',
'Waiting for dependence': 'Waiting for dependence',
Start: 'Start',
Copy: 'Copy',
'Copy name': 'Copy name',
'Copy path': 'Copy path',
'Please enter keyword': 'Please enter keyword',
'File Upload': 'File Upload',
'Drag the file into the current upload window': 'Drag the file into the current upload window',
'Drag area upload': 'Drag area upload',
Upload: 'Upload',
'ReUpload File': 'ReUpload File',
'Please enter file name': 'Please enter file name',
'Please select the file to upload': 'Please select the file to upload',
'Resources manage': 'Resources',
Security: 'Security',
Logout: 'Logout',
'No data': 'No data',
'Uploading...': 'Uploading...',
'Loading...': 'Loading...',
List: 'List',
'Unable to download without proper url': 'Unable to download without proper url',
Process: 'Process',
'Process definition': 'Process definition',
'Task record': 'Task record',
'Warning group manage': 'Warning group manage',
'Warning instance manage': 'Warning instance manage',
'Servers manage': 'Servers manage',
'UDF manage': 'UDF manage',
'Resource manage': 'Resource manage',
'Function manage': 'Function manage',
'Edit password': 'Edit password',
'Ordinary users': 'Ordinary users',
'Create process': 'Create process',
'Import process': 'Import process',
'Timing state': 'Timing state',
Timing: 'Timing',
Timezone: 'Timezone',
TreeView: 'TreeView',
'Mailbox already exists! Recipients and copyers cannot repeat': 'The email address already exists! Recipients and CC recipients must not repeat',
'Mailbox input is illegal': 'The email address entered is invalid',
'Please set the parameters before starting': 'Please set the parameters before starting',
Continue: 'Continue',
End: 'End',
'Node execution': 'Node execution',
'Backward execution': 'Backward execution',
'Forward execution': 'Forward execution',
'Execute only the current node': 'Execute only the current node',
'Notification strategy': 'Notification strategy',
'Notification group': 'Notification group',
'Please select a notification group': 'Please select a notification group',
'Whether it is a complement process?': 'Whether it is a complement process?',
'Schedule date': 'Schedule date',
'Mode of execution': 'Mode of execution',
'Serial execution': 'Serial execution',
'Parallel execution': 'Parallel execution',
'Set parameters before timing': 'Set parameters before timing',
'Start and stop time': 'Start and stop time',
'Please select time': 'Please select time',
'Please enter crontab': 'Please enter crontab',
none_1: 'none',
success_1: 'success',
failure_1: 'failure',
All_1: 'All',
Toolbar: 'Toolbar',
'View variables': 'View variables',
'Format DAG': 'Format DAG',
'Refresh DAG status': 'Refresh DAG status',
Return_1: 'Return',
'Please enter format': 'Please enter format',
'connection parameter': 'connection parameter',
'Process definition details': 'Process definition details',
'Create process definition': 'Create process definition',
'Scheduled task list': 'Scheduled task list',
'Process instance details': 'Process instance details',
'Create Resource': 'Create Resource',
'User Center': 'User Center',
AllStatus: 'All',
None: 'None',
Name: 'Name',
'Process priority': 'Process priority',
'Task priority': 'Task priority',
'Task timeout alarm': 'Task timeout alarm',
'Timeout strategy': 'Timeout strategy',
'Timeout alarm': 'Timeout alarm',
'Timeout failure': 'Timeout failure',
'Timeout period': 'Timeout period',
'Waiting Dependent complete': 'Waiting Dependent complete',
'Waiting Dependent start': 'Waiting Dependent start',
'Check interval': 'Check interval',
'Timeout must be longer than check interval': 'Timeout must be longer than check interval',
'Timeout strategy must be selected': 'Timeout strategy must be selected',
'Timeout must be a positive integer': 'Timeout must be a positive integer',
'Add dependency': 'Add dependency',
'Whether dry-run': 'Whether dry-run',
and: 'and',
or: 'or',
month: 'month',
week: 'week',
day: 'day',
hour: 'hour',
Running: 'Running',
'Waiting for dependency to complete': 'Waiting for dependency to complete',
Selected: 'Selected',
CurrentHour: 'CurrentHour',
Last1Hour: 'Last1Hour',
Last2Hours: 'Last2Hours',
Last3Hours: 'Last3Hours',
Last24Hours: 'Last24Hours',
today: 'today',
Last1Days: 'Last1Days',
Last2Days: 'Last2Days',
Last3Days: 'Last3Days',
Last7Days: 'Last7Days',
ThisWeek: 'ThisWeek',
LastWeek: 'LastWeek',
LastMonday: 'LastMonday',
LastTuesday: 'LastTuesday',
LastWednesday: 'LastWednesday',
LastThursday: 'LastThursday',
LastFriday: 'LastFriday',
LastSaturday: 'LastSaturday',
LastSunday: 'LastSunday',
ThisMonth: 'ThisMonth',
LastMonth: 'LastMonth',
LastMonthBegin: 'LastMonthBegin',
LastMonthEnd: 'LastMonthEnd',
'Refresh status succeeded': 'Refresh status succeeded',
'Queue manage': 'Yarn Queue manage',
'Create queue': 'Create queue',
'Edit queue': 'Edit queue',
'Datasource manage': 'Datasource',
'History task record': 'History task record',
'Please go online': 'Please go online',
'Queue value': 'Queue value',
'Please enter queue value': 'Please enter queue value',
'Worker group manage': 'Worker group manage',
'Create worker group': 'Create worker group',
'Edit worker group': 'Edit worker group',
'Token manage': 'Token manage',
'Create token': 'Create token',
'Edit token': 'Edit token',
Addresses: 'Addresses',
'Worker Addresses': 'Worker Addresses',
'Please select the worker addresses': 'Please select the worker addresses',
'Failure time': 'Failure time',
'Expiration time': 'Expiration time',
User: 'User',
'Please enter token': 'Please enter token',
'Generate token': 'Generate token',
Monitor: 'Monitor',
Group: 'Group',
'Queue statistics': 'Queue statistics',
'Command status statistics': 'Command status statistics',
'Task kill': 'Task Kill',
'Task queue': 'Task queue',
'Error command count': 'Error command count',
'Normal command count': 'Normal command count',
Manage: ' Manage',
'Number of connections': 'Number of connections',
Sent: 'Sent',
Received: 'Received',
'Min latency': 'Min latency',
'Avg latency': 'Avg latency',
'Max latency': 'Max latency',
'Node count': 'Node count',
'Query time': 'Query time',
'Node self-test status': 'Node self-test status',
'Health status': 'Health status',
'Max connections': 'Max connections',
'Threads connections': 'Threads connections',
'Max used connections': 'Max used connections',
'Threads running connections': 'Threads running connections',
'Worker group': 'Worker group',
'Please enter a positive integer greater than 0': 'Please enter a positive integer greater than 0',
'Pre Statement': 'Pre Statement',
'Post Statement': 'Post Statement',
'Statement cannot be empty': 'Statement cannot be empty',
'Process Define Count': 'Work flow Define Count',
'Process Instance Running Count': 'Process Instance Running Count',
'command number of waiting for running': 'command number of waiting for running',
'failure command number': 'failure command number',
'tasks number of waiting running': 'tasks number of waiting running',
'task number of ready to kill': 'task number of ready to kill',
'Statistics manage': 'Statistics Manage',
statistics: 'Statistics',
'select tenant': 'select tenant',
'Please enter Principal': 'Please enter Principal',
'Please enter the kerberos authentication parameter java.security.krb5.conf': 'Please enter the kerberos authentication parameter java.security.krb5.conf',
'Please enter the kerberos authentication parameter login.user.keytab.username': 'Please enter the kerberos authentication parameter login.user.keytab.username',
'Please enter the kerberos authentication parameter login.user.keytab.path': 'Please enter the kerberos authentication parameter login.user.keytab.path',
'The start time must not be the same as the end': 'The start time must not be the same as the end',
'Startup parameter': 'Startup parameter',
'Startup type': 'Startup type',
'warning of timeout': 'warning of timeout',
'Next five execution times': 'Next five execution times',
'Execute time': 'Execute time',
'Complement range': 'Complement range',
'Http Url': 'Http Url',
'Http Method': 'Http Method',
'Http Parameters': 'Http Parameters',
'Http Parameters Key': 'Http Parameters Key',
'Http Parameters Position': 'Http Parameters Position',
'Http Parameters Value': 'Http Parameters Value',
'Http Check Condition': 'Http Check Condition',
'Http Condition': 'Http Condition',
'Please Enter Http Url': 'Please Enter Http Url(required)',
'Please Enter Http Condition': 'Please Enter Http Condition',
'There is no data for this period of time': 'There is no data for this period of time',
'Worker addresses cannot be empty': 'Worker addresses cannot be empty',
'Please generate token': 'Please generate token',
'Please Select token': 'Please select the expiration time of token',
'Spark Version': 'Spark Version',
TargetDataBase: 'target database',
TargetTable: 'target table',
TargetJobName: 'target job name',
'Please enter Pigeon job name': 'Please enter Pigeon job name',
'Please enter the table of target': 'Please enter the table of target',
'Please enter a Target Table(required)': 'Please enter a Target Table(required)',
SpeedByte: 'speed(byte count)',
SpeedRecord: 'speed(record count)',
'0 means unlimited by byte': '0 means unlimited',
'0 means unlimited by count': '0 means unlimited',
'Modify User': 'Modify User',
'Whether directory': 'Whether directory',
Yes: 'Yes',
No: 'No',
'Hadoop Custom Params': 'Hadoop Params',
'Sqoop Advanced Parameters': 'Sqoop Params',
'Sqoop Job Name': 'Job Name',
'Please enter Mysql Database(required)': 'Please enter Mysql Database(required)',
'Please enter Mysql Table(required)': 'Please enter Mysql Table(required)',
'Please enter Columns (Comma separated)': 'Please enter Columns (Comma separated)',
'Please enter Target Dir(required)': 'Please enter Target Dir(required)',
'Please enter Export Dir(required)': 'Please enter Export Dir(required)',
'Please enter Hive Database(required)': 'Please enter Hive Database(required)',
'Please enter Hive Table(required)': 'Please enter Hive Table(required)',
'Please enter Hive Partition Keys': 'Please enter Hive Partition Key',
'Please enter Hive Partition Values': 'Please enter Partition Value',
'Please enter Replace Delimiter': 'Please enter Replace Delimiter',
'Please enter Fields Terminated': 'Please enter Fields Terminated',
'Please enter Lines Terminated': 'Please enter Lines Terminated',
'Please enter Concurrency': 'Please enter Concurrency',
'Please enter Update Key': 'Please enter Update Key',
'Please enter Job Name(required)': 'Please enter Job Name(required)',
'Please enter Custom Shell(required)': 'Please enter Custom Shell(required)',
Direct: 'Direct',
Type: 'Type',
ModelType: 'ModelType',
ColumnType: 'ColumnType',
Database: 'Database',
Column: 'Column',
'Map Column Hive': 'Map Column Hive',
'Map Column Java': 'Map Column Java',
'Export Dir': 'Export Dir',
'Hive partition Keys': 'Hive partition Keys',
'Hive partition Values': 'Hive partition Values',
FieldsTerminated: 'FieldsTerminated',
LinesTerminated: 'LinesTerminated',
IsUpdate: 'IsUpdate',
UpdateKey: 'UpdateKey',
UpdateMode: 'UpdateMode',
'Target Dir': 'Target Dir',
DeleteTargetDir: 'DeleteTargetDir',
FileType: 'FileType',
CompressionCodec: 'CompressionCodec',
CreateHiveTable: 'CreateHiveTable',
DropDelimiter: 'DropDelimiter',
OverWriteSrc: 'OverWriteSrc',
ReplaceDelimiter: 'ReplaceDelimiter',
Concurrency: 'Concurrency',
Form: 'Form',
OnlyUpdate: 'OnlyUpdate',
AllowInsert: 'AllowInsert',
'Data Source': 'Data Source',
'Data Target': 'Data Target',
'All Columns': 'All Columns',
'Some Columns': 'Some Columns',
'Branch flow': 'Branch flow',
'Custom Job': 'Custom Job',
'Custom Script': 'Custom Script',
'Cannot select the same node for successful branch flow and failed branch flow': 'Cannot select the same node for successful branch flow and failed branch flow',
'Successful branch flow and failed branch flow are required': 'Successful and failed branch flows are required for the conditions node',
'No resources exist': 'No resources exist',
'Please delete all non-existing resources': 'Please delete all non-existing resources',
'Unauthorized or deleted resources': 'Unauthorized or deleted resources',
'Please delete all non-existent resources': 'Please delete all non-existent resources',
Kinship: 'Workflow relationship',
Reset: 'Reset',
KinshipStateActive: 'Active',
KinshipState1: 'Online',
KinshipState0: 'Workflow is not online',
KinshipState10: 'Scheduling is not online',
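// The Kinship* keys above are consumed by graphGridOption.js via
// i18n.$t(...) to label the workflow-relationship legend.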
'Dag label display control': 'Dag label display control',
Enable: 'Enable',
Disable: 'Disable',
'The Worker group no longer exists, please select the correct Worker group!': 'The Worker group no longer exists, please select the correct Worker group!',
'Please confirm whether the workflow has been saved before downloading': 'Please confirm whether the workflow has been saved before downloading',
'User name length is between 3 and 39': 'User name length is between 3 and 39',
'Timeout Settings': 'Timeout Settings',
'Connect Timeout': 'Connect Timeout',
'Socket Timeout': 'Socket Timeout',
'Connect timeout be a positive integer': 'Connect timeout must be a positive integer',
'Socket Timeout be a positive integer': 'Socket timeout must be a positive integer',
ms: 'ms',
'Please Enter Url': 'Please Enter Url eg. 127.0.0.1:7077',
Master: 'Master',
'Please select the waterdrop resources': 'Please select the waterdrop resources',
zkDirectory: 'zkDirectory',
'Directory detail': 'Directory detail',
'Connection name': 'Connection name',
'Current connection settings': 'Current connection settings',
'Please save the DAG before formatting': 'Please save the DAG before formatting',
'Batch copy': 'Batch copy',
'Related items': 'Related items',
'Project name is required': 'Project name is required',
'Batch move': 'Batch move',
Version: 'Version',
'Pre tasks': 'Pre tasks',
'Running Memory': 'Running Memory',
'Max Memory': 'Max Memory',
'Min Memory': 'Min Memory',
'The workflow canvas is abnormal and cannot be saved, please recreate': 'The workflow canvas is abnormal and cannot be saved, please recreate',
Info: 'Info',
'Datasource userName': 'owner',
'Resource userName': 'owner',
'Environment manage': 'Environment manage',
'Create environment': 'Create environment',
'Edit environment': 'Edit environment',
'Environment value': 'Environment value',
'Environment Name': 'Environment Name',
'Environment Code': 'Environment Code',
'Environment Config': 'Environment Config',
'Environment Desc': 'Environment Desc',
'Environment Worker Group': 'Worker Groups',
'Please enter environment config': 'Please enter environment config',
'Please enter environment desc': 'Please enter environment desc',
'Please select worker groups': 'Please select worker groups',
condition: 'condition',
'The condition content cannot be empty': 'The condition content cannot be empty',
'Reference from': 'Reference from',
'No more...': 'No more...',
'Task Definition': 'Task Definition',
'Create task': 'Create task',
'Task Type': 'Task Type',
'Process execute type': 'Process execute type',
parallel: 'parallel',
'Serial wait': 'Serial wait',
'Serial discard': 'Serial discard',
'Serial priority': 'Serial priority',
'Recover serial wait': 'Recover serial wait',
IsEnableProxy: 'Enable Proxy',
WebHook: 'WebHook',
Keyword: 'Keyword',
Proxy: 'Proxy',
receivers: 'Receivers',
receiverCcs: 'ReceiverCcs',
transportProtocol: 'Transport Protocol',
serverHost: 'SMTP Host',
serverPort: 'SMTP Port',
sender: 'Sender',
enableSmtpAuth: 'SMTP Auth',
starttlsEnable: 'SMTP STARTTLS Enable',
sslEnable: 'SMTP SSL Enable',
smtpSslTrust: 'SMTP SSL Trust',
url: 'URL',
requestType: 'Request Type',
headerParams: 'Headers',
bodyParams: 'Body',
contentField: 'Content Field',
path: 'Script Path',
userParams: 'User Params',
corpId: 'CorpId',
secret: 'Secret',
userSendMsg: 'UserSendMsg',
agentId: 'AgentId',
users: 'Users',
Username: 'Username',
showType: 'Show Type',
'Please select a task type (required)': 'Please select a task type (required)',
layoutType: 'Layout Type',
gridLayout: 'Grid',
dagreLayout: 'Dagre',
rows: 'Rows',
cols: 'Cols',
processOnline: 'Online',
searchNode: 'Search Node',
dagScale: 'Scale'
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,922 | [Feature][Workflow relationship] The configuration of `i18n` does not fully support English. | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
English support is not perfect.
![image](https://user-images.githubusercontent.com/19239641/142577832-73952053-731d-4bf3-90ed-5e22f2e16016.png)
### Use case
Improve English support.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6922 | https://github.com/apache/dolphinscheduler/pull/6936 | 1fdbc3270c010d31485b2dfb79243a2300e58759 | 024aacacb305c7145e73c5f3d8d1209a88f814d7 | "2021-11-19T06:49:35Z" | java | "2021-11-20T14:58:30Z" | dolphinscheduler-ui/src/js/module/i18n/locale/zh_CN.js | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
export default {
'User Name': '用户名',
'Please enter user name': '请输入用户名',
Password: '密码',
'Please enter your password': '请输入密码',
'Password consists of at least two combinations of numbers, letters, and characters, and the length is between 6-22': '密码至少包含数字,字母和字符的两种组合,长度在6-22之间',
Login: '登录',
Home: '首页',
'Failed to create node to save': '未创建节点保存失败',
'Global parameters': '全局参数',
'Local parameters': '局部参数',
'Copy success': '复制成功',
'The browser does not support automatic copying': '该浏览器不支持自动复制',
'Whether to save the DAG graph': '是否保存DAG图',
'Current node settings': '当前节点设置',
'View history': '查看历史',
'View log': '查看日志',
'Force success': '强制成功',
'Enter this child node': '进入该子节点',
'Node name': '节点名称',
'Please enter name (required)': '请输入名称(必填)',
'Run flag': '运行标志',
Normal: '正常',
'Prohibition execution': '禁止执行',
'Please enter description': '请输入描述',
'Number of failed retries': '失败重试次数',
Times: '次',
'Failed retry interval': '失败重试间隔',
Minute: '分',
'Delay execution time': '延时执行时间',
'Delay execution': '延时执行',
'Forced success': '强制成功',
Cancel: '取消',
'Confirm add': '确认添加',
'The newly created sub-Process has not yet been executed and cannot enter the sub-Process': '新创建子工作流还未执行,不能进入子工作流',
'The task has not been executed and cannot enter the sub-Process': '该任务还未执行,不能进入子工作流',
'Name already exists': '名称已存在请重新输入',
'Download Log': '下载日志',
'Refresh Log': '刷新日志',
'Enter full screen': '进入全屏',
'Cancel full screen': '取消全屏',
Close: '关闭',
'Update log success': '更新日志成功',
'No more logs': '暂无更多日志',
'No log': '暂无日志',
'Loading Log...': '正在努力请求日志中...',
'Set the DAG diagram name': '设置DAG图名称',
'Please enter description(optional)': '请输入描述(选填)',
'Set global': '设置全局',
'Whether to go online the process definition': '是否上线流程定义',
'Whether to update the process definition': '是否更新流程定义',
Add: '添加',
'DAG graph name cannot be empty': 'DAG图名称不能为空',
'Create Datasource': '创建数据源',
'Project Home': '工作流监控',
'Project Manage': '项目管理',
'Create Project': '创建项目',
'Cron Manage': '定时管理',
'Copy Workflow': '复制工作流',
'Tenant Manage': '租户管理',
'Create Tenant': '创建租户',
'User Manage': '用户管理',
'Create User': '创建用户',
'User Information': '用户信息',
'Edit Password': '密码修改',
Success: '成功',
Failed: '失败',
Delete: '删除',
'Please choose': '请选择',
'Please enter a positive integer': '请输入正整数',
'Program Type': '程序类型',
'Main Class': '主函数的Class',
'Main Jar Package': '主Jar包',
'Please enter main jar package': '请选择主Jar包',
'Please enter main class': '请填写主函数的Class',
'Main Arguments': '主程序参数',
'Please enter main arguments': '请输入主程序参数',
'Option Parameters': '选项参数',
'Please enter option parameters': '请输入选项参数',
Resources: '资源',
'Custom Parameters': '自定义参数',
'Custom template': '自定义模版',
Datasource: '数据源',
methods: '方法',
'Please enter the procedure method': '请输入存储脚本 \n\n调用存储过程:{call <procedure-name>[(<arg1>,<arg2>, ...)]}\n\n调用存储函数:{?= call <procedure-name>[(<arg1>,<arg2>, ...)]} ',
'The procedure method script example': '示例:{call <procedure-name>[(?,?, ...)]} 或 {?= call <procedure-name>[(?,?, ...)]}',
Script: '脚本',
'Please enter script(required)': '请输入脚本(必填)',
'Deploy Mode': '部署方式',
'Driver Cores': 'Driver核心数',
'Please enter Driver cores': '请输入Driver核心数',
'Driver Memory': 'Driver内存数',
'Please enter Driver memory': '请输入Driver内存数',
'Executor Number': 'Executor数量',
'Please enter Executor number': '请输入Executor数量',
'The Executor number should be a positive integer': 'Executor数量为正整数',
'Executor Memory': 'Executor内存数',
'Please enter Executor memory': '请输入Executor内存数',
'Executor Cores': 'Executor核心数',
'Please enter Executor cores': '请输入Executor核心数',
'Memory should be a positive integer': '内存数为数字',
'Core number should be positive integer': '核心数为正整数',
'Flink Version': 'Flink版本',
'JobManager Memory': 'JobManager内存数',
'Please enter JobManager memory': '请输入JobManager内存数',
'TaskManager Memory': 'TaskManager内存数',
'Please enter TaskManager memory': '请输入TaskManager内存数',
'Slot Number': 'Slot数量',
'Please enter Slot number': '请输入Slot数量',
Parallelism: '并行度',
'Custom Parallelism': '自定义并行度',
'Please enter Parallelism': '请输入并行度',
'Parallelism number should be positive integer': '并行度必须为正整数',
'Parallelism tip': '如果存在大量任务需要补数时,可以利用自定义并行度将补数的任务线程设置成合理的数值,避免对服务器造成过大的影响',
'TaskManager Number': 'TaskManager数量',
'Please enter TaskManager number': '请输入TaskManager数量',
'App Name': '任务名称',
'Please enter app name(optional)': '请输入任务名称(选填)',
'SQL Type': 'sql类型',
'Send Email': '发送邮件',
'Log display': '日志显示',
'rows of result': '行查询结果',
Title: '主题',
'Please enter the title of email': '请输入邮件主题',
Table: '表名',
TableMode: '表格',
Attachment: '附件',
'SQL Parameter': 'sql参数',
'SQL Statement': 'sql语句',
'UDF Function': 'UDF函数',
'Please enter a SQL Statement(required)': '请输入sql语句(必填)',
'Please enter a JSON Statement(required)': '请输入json语句(必填)',
'One form or attachment must be selected': '表格、附件必须勾选一个',
'Mail subject required': '邮件主题必填',
'Child Node': '子节点',
'Please select a sub-Process': '请选择子工作流',
Edit: '编辑',
'Switch To This Version': '切换到该版本',
'Datasource Name': '数据源名称',
'Please enter datasource name': '请输入数据源名称',
IP: 'IP主机名',
'Please enter IP': '请输入IP主机名',
Port: '端口',
'Please enter port': '请输入端口',
'Database Name': '数据库名',
'Please enter database name': '请输入数据库名',
'Oracle Connect Type': '服务名或SID',
'Oracle Service Name': '服务名',
'Oracle SID': 'SID',
'jdbc connect parameters': 'jdbc连接参数',
'Test Connect': '测试连接',
'Please enter resource name': '请输入数据源名称',
'Please enter resource folder name': '请输入资源文件夹名称',
'Please enter a non-query SQL statement': '请输入非查询sql语句',
'Please enter IP/hostname': '请输入IP/主机名',
'jdbc connection parameters is not a correct JSON format': 'jdbc连接参数不是一个正确的JSON格式',
'#': '编号',
'Datasource Type': '数据源类型',
'Datasource Parameter': '数据源参数',
'Create Time': '创建时间',
'Update Time': '更新时间',
Operation: '操作',
'Current Version': '当前版本',
'Click to view': '点击查看',
'Delete?': '确定删除吗?',
'Switch Version Successfully': '切换版本成功',
'Confirm Switch To This Version?': '确定切换到该版本吗?',
Confirm: '确定',
'Task status statistics': '任务状态统计',
Number: '数量',
State: '状态',
'Dry-run flag': '空跑标识',
'Process Status Statistics': '流程状态统计',
'Process Definition Statistics': '流程定义统计',
'Project Name': '项目名称',
'Please enter name': '请输入名称',
'Owned Users': '所属用户',
'Process Pid': '进程Pid',
'Zk registration directory': 'zk注册目录',
cpuUsage: 'cpuUsage',
memoryUsage: 'memoryUsage',
'Last heartbeat time': '最后心跳时间',
'Edit Tenant': '编辑租户',
'OS Tenant Code': '操作系统租户',
'Tenant Name': '租户名称',
Queue: '队列',
'Please select a queue': '默认为租户关联队列',
'Please enter the os tenant code in English': '请输入操作系统租户只允许英文',
'Please enter os tenant code in English': '请输入英文操作系统租户',
'Please enter os tenant code': '请输入操作系统租户',
'Please enter tenant Name': '请输入租户名称',
'The os tenant code. Only letters or a combination of letters and numbers are allowed': '操作系统租户只允许字母或字母与数字组合',
'Edit User': '编辑用户',
Tenant: '租户',
Email: '邮件',
Phone: '手机',
'User Type': '用户类型',
'Please enter phone number': '请输入手机',
'Please enter email': '请输入邮箱',
'Please enter the correct email format': '请输入正确的邮箱格式',
'Please enter the correct mobile phone format': '请输入正确的手机格式',
Project: '项目',
Authorize: '授权',
'File resources': '文件资源',
'UDF resources': 'UDF资源',
'UDF resources directory': 'UDF资源目录',
'Please select UDF resources directory': '请选择UDF资源目录',
'Alarm group': '告警组',
'Alarm group required': '告警组必填',
'Edit alarm group': '编辑告警组',
'Create alarm group': '创建告警组',
'Create Alarm Instance': '创建告警实例',
'Edit Alarm Instance': '编辑告警实例',
'Group Name': '组名称',
'Alarm instance name': '告警实例名称',
'Alarm plugin name': '告警插件名称',
'Select plugin': '选择插件',
'Select Alarm plugin': '请选择告警插件',
'Please enter group name': '请输入组名称',
'Instance parameter exception': '实例参数异常',
'Group Type': '组类型',
'Alarm plugin instance': '告警插件实例',
'Select Alarm plugin instance': '请选择告警插件实例',
Remarks: '备注',
SMS: '短信',
'Managing Users': '管理用户',
Permission: '权限',
Administrator: '管理员',
'Confirm Password': '确认密码',
'Please enter confirm password': '请输入确认密码',
'Password cannot be in Chinese': '密码不能为中文',
'Please enter a password (6-22) character password': '请输入密码(6-22)字符密码',
'Confirmation password cannot be in Chinese': '确认密码不能为中文',
'Please enter a confirmation password (6-22) character password': '请输入确认密码(6-22)字符密码',
'The password is inconsistent with the confirmation password': '密码与确认密码不一致,请重新确认',
'Please select the datasource': '请选择数据源',
'Please select resources': '请选择资源',
Query: '查询',
'Non Query': '非查询',
'prop(required)': 'prop(必填)',
'value(optional)': 'value(选填)',
'value(required)': 'value(必填)',
'prop is empty': 'prop不能为空',
'value is empty': 'value不能为空',
'prop is repeat': 'prop中有重复',
'Start Time': '开始时间',
'End Time': '结束时间',
crontab: 'crontab',
'Failure Strategy': '失败策略',
online: '上线',
offline: '下线',
'Task Status': '任务状态',
'Process Instance': '工作流实例',
'Task Instance': '任务实例',
'Select date range': '选择日期区间',
startDate: '开始日期',
endDate: '结束日期',
Date: '日期',
Waiting: '等待',
Execution: '执行中',
Finish: '完成',
'Create File': '创建文件',
'Create folder': '创建文件夹',
'File Name': '文件名称',
'Folder Name': '文件夹名称',
'File Format': '文件格式',
'Folder Format': '文件夹格式',
'File Content': '文件内容',
'Upload File Size': '文件大小不能超过1G',
Create: '创建',
'Please enter the resource content': '请输入资源内容',
'Resource content cannot exceed 3000 lines': '资源内容不能超过3000行',
'File Details': '文件详情',
'Download Details': '下载详情',
Return: '返回',
Save: '保存',
'File Manage': '文件管理',
'Upload Files': '上传文件',
'Create UDF Function': '创建UDF函数',
'Upload UDF Resources': '上传UDF资源',
'Service-Master': '服务管理-Master',
'Service-Worker': '服务管理-Worker',
'Process Name': '工作流名称',
Executor: '执行用户',
'Run Type': '运行类型',
'Scheduling Time': '调度时间',
'Run Times': '运行次数',
host: 'host',
'fault-tolerant sign': '容错标识',
Rerun: '重跑',
'Recovery Failed': '恢复失败',
Stop: '停止',
Pause: '暂停',
'Recovery Suspend': '恢复运行',
Gantt: '甘特图',
'Node Type': '节点类型',
'Submit Time': '提交时间',
Duration: '运行时长',
'Retry Count': '重试次数',
'Task Name': '任务名称',
'Task Date': '任务日期',
'Source Table': '源表',
'Record Number': '记录数',
'Target Table': '目标表',
'Online viewing type is not supported': '不支持在线查看类型',
Size: '大小',
Rename: '重命名',
Download: '下载',
Export: '导出',
'Version Info': '版本信息',
Submit: '提交',
'Edit UDF Function': '编辑UDF函数',
type: '类型',
'UDF Function Name': 'UDF函数名称',
FILE: '文件',
UDF: 'UDF',
'File Subdirectory': '文件子目录',
'Please enter a function name': '请输入函数名',
'Package Name': '包名类名',
'Please enter a Package name': '请输入包名类名',
Parameter: '参数',
'Please enter a parameter': '请输入参数',
'UDF Resources': 'UDF资源',
'Upload Resources': '上传资源',
Instructions: '使用说明',
'Please enter a instructions': '请输入使用说明',
'Please enter a UDF function name': '请输入UDF函数名称',
'Select UDF Resources': '请选择UDF资源',
'Class Name': '类名',
'Jar Package': 'jar包',
'Library Name': '库名',
'UDF Resource Name': 'UDF资源名称',
'File Size': '文件大小',
Description: '描述',
'Drag Nodes and Selected Items': '拖动节点和选中项',
'Select Line Connection': '选择线条连接',
'Delete selected lines or nodes': '删除选中的线或节点',
'Full Screen': '全屏',
Unpublished: '未发布',
'Start Process': '启动工作流',
'Execute from the current node': '从当前节点开始执行',
'Recover tolerance fault process': '恢复被容错的工作流',
'Resume the suspension process': '恢复运行流程',
'Execute from the failed nodes': '从失败节点开始执行',
'Complement Data': '补数',
'Scheduling execution': '调度执行',
'Recovery waiting thread': '恢复等待线程',
'Submitted successfully': '提交成功',
Executing: '正在执行',
'Ready to pause': '准备暂停',
'Ready to stop': '准备停止',
'Need fault tolerance': '需要容错',
Kill: 'Kill',
'Waiting for thread': '等待线程',
'Waiting for dependence': '等待依赖',
Start: '运行',
Copy: '复制节点',
'Copy name': '复制名称',
'Copy path': '复制路径',
'Please enter keyword': '请输入关键词',
'File Upload': '文件上传',
'Drag the file into the current upload window': '请将文件拖拽到当前上传窗口内!',
'Drag area upload': '拖动区域上传',
Upload: '上传',
'ReUpload File': '重新上传文件',
'Please enter file name': '请输入文件名',
'Please select the file to upload': '请选择要上传的文件',
'Resources manage': '资源中心',
Security: '安全中心',
Logout: '退出',
'No data': '查询无数据',
'Uploading...': '文件上传中',
'Loading...': '正在努力加载中...',
List: '列表',
'Unable to download without proper url': '无下载url无法下载',
Process: '工作流',
'Process definition': '工作流定义',
'Task record': '任务记录',
'Warning group manage': '告警组管理',
'Warning instance manage': '告警实例管理',
'Servers manage': '服务管理',
'UDF manage': 'UDF管理',
'Resource manage': '资源管理',
'Function manage': '函数管理',
'Edit password': '修改密码',
'Ordinary users': '普通用户',
'Create process': '创建工作流',
'Import process': '导入工作流',
'Timing state': '定时状态',
Timing: '定时',
Timezone: '时区',
TreeView: '树形图',
'Mailbox already exists! Recipients and copyers cannot repeat': '邮箱已存在!收件人和抄送人不能重复',
'Mailbox input is illegal': '邮箱输入不合法',
'Please set the parameters before starting': '启动前请先设置参数',
Continue: '继续',
End: '结束',
'Node execution': '节点执行',
'Backward execution': '向后执行',
'Forward execution': '向前执行',
'Execute only the current node': '仅执行当前节点',
'Notification strategy': '通知策略',
'Notification group': '通知组',
'Please select a notification group': '请选择通知组',
'Whether it is a complement process?': '是否补数',
'Schedule date': '调度日期',
'Mode of execution': '执行方式',
'Serial execution': '串行执行',
'Parallel execution': '并行执行',
'Set parameters before timing': '定时前请先设置参数',
'Start and stop time': '起止时间',
'Please select time': '请选择时间',
'Please enter crontab': '请输入crontab',
none_1: '都不发',
success_1: '成功发',
failure_1: '失败发',
All_1: '成功或失败都发',
Toolbar: '工具栏',
'View variables': '查看变量',
'Format DAG': '格式化DAG',
'Refresh DAG status': '刷新DAG状态',
Return_1: '返回上一节点',
'Please enter format': '请输入格式为',
'connection parameter': '连接参数',
'Process definition details': '流程定义详情',
'Create process definition': '创建流程定义',
'Scheduled task list': '定时任务列表',
'Process instance details': '流程实例详情',
'Create Resource': '创建资源',
'User Center': '用户中心',
AllStatus: '全部状态',
None: '无',
Name: '名称',
'Process priority': '流程优先级',
'Task priority': '任务优先级',
'Task timeout alarm': '任务超时告警',
'Timeout strategy': '超时策略',
'Timeout alarm': '超时告警',
'Timeout failure': '超时失败',
'Timeout period': '超时时长',
'Waiting Dependent complete': '等待依赖完成',
'Waiting Dependent start': '等待依赖启动',
'Check interval': '检查间隔',
'Timeout must be longer than check interval': '超时时间必须比检查间隔长',
'Timeout strategy must be selected': '超时策略必须选一个',
'Timeout must be a positive integer': '超时时长必须为正整数',
'Add dependency': '添加依赖',
'Whether dry-run': '是否空跑',
and: '且',
or: '或',
month: '月',
week: '周',
day: '日',
hour: '时',
Running: '正在运行',
'Waiting for dependency to complete': '等待依赖完成',
Selected: '已选',
CurrentHour: '当前小时',
Last1Hour: '前1小时',
Last2Hours: '前2小时',
Last3Hours: '前3小时',
Last24Hours: '前24小时',
today: '今天',
Last1Days: '昨天',
Last2Days: '前两天',
Last3Days: '前三天',
Last7Days: '前七天',
ThisWeek: '本周',
LastWeek: '上周',
LastMonday: '上周一',
LastTuesday: '上周二',
LastWednesday: '上周三',
LastThursday: '上周四',
LastFriday: '上周五',
LastSaturday: '上周六',
LastSunday: '上周日',
ThisMonth: '本月',
LastMonth: '上月',
LastMonthBegin: '上月初',
LastMonthEnd: '上月末',
'Refresh status succeeded': '刷新状态成功',
'Queue manage': 'Yarn 队列管理',
'Create queue': '创建队列',
'Edit queue': '编辑队列',
'Datasource manage': '数据源中心',
'History task record': '历史任务记录',
'Please go online': '不要忘记上线',
'Queue value': '队列值',
'Please enter queue value': '请输入队列值',
'Worker group manage': 'Worker分组管理',
'Create worker group': '创建Worker分组',
'Edit worker group': '编辑Worker分组',
'Token manage': '令牌管理',
'Create token': '创建令牌',
'Edit token': '编辑令牌',
Addresses: '地址',
'Worker Addresses': 'Worker地址',
'Please select the worker addresses': '请选择Worker地址',
'Failure time': '失效时间',
'Expiration time': '失效时间',
User: '用户',
'Please enter token': '请输入令牌',
'Generate token': '生成令牌',
Monitor: '监控中心',
Group: '分组',
'Queue statistics': '队列统计',
'Command status statistics': '命令状态统计',
'Task kill': '等待kill任务',
'Task queue': '等待执行任务',
'Error command count': '错误指令数',
'Normal command count': '正确指令数',
Manage: '管理',
'Number of connections': '连接数',
Sent: '发送量',
Received: '接收量',
'Min latency': '最低延时',
'Avg latency': '平均延时',
'Max latency': '最大延时',
'Node count': '节点数',
'Query time': '当前查询时间',
'Node self-test status': '节点自检状态',
'Health status': '健康状态',
'Max connections': '最大连接数',
'Threads connections': '当前连接数',
'Max used connections': '同时使用连接最大数',
'Threads running connections': '数据库当前活跃连接数',
'Worker group': 'Worker分组',
'Please enter a positive integer greater than 0': '请输入大于 0 的正整数',
'Pre Statement': '前置sql',
'Post Statement': '后置sql',
'Statement cannot be empty': '语句不能为空',
'Process Define Count': '工作流定义数',
'Process Instance Running Count': '正在运行的流程数',
'command number of waiting for running': '待执行的命令数',
'failure command number': '执行失败的命令数',
'tasks number of waiting running': '待运行任务数',
'task number of ready to kill': '待杀死任务数',
'Statistics manage': '统计管理',
statistics: '统计',
'select tenant': '选择租户',
'Please enter Principal': '请输入Principal',
'Please enter the kerberos authentication parameter java.security.krb5.conf': '请输入kerberos认证参数 java.security.krb5.conf',
'Please enter the kerberos authentication parameter login.user.keytab.username': '请输入kerberos认证参数 login.user.keytab.username',
'Please enter the kerberos authentication parameter login.user.keytab.path': '请输入kerberos认证参数 login.user.keytab.path',
'The start time must not be the same as the end': '开始时间和结束时间不能相同',
'Startup parameter': '启动参数',
'Startup type': '启动类型',
'warning of timeout': '超时告警',
'Next five execution times': '接下来五次执行时间',
'Execute time': '执行时间',
'Complement range': '补数范围',
'Http Url': '请求地址',
'Http Method': '请求类型',
'Http Parameters': '请求参数',
'Http Parameters Key': '参数名',
'Http Parameters Position': '参数位置',
'Http Parameters Value': '参数值',
'Http Check Condition': '校验条件',
'Http Condition': '校验内容',
'Please Enter Http Url': '请填写请求地址(必填)',
'Please Enter Http Condition': '请填写校验内容',
'There is no data for this period of time': '该时间段无数据',
'Worker addresses cannot be empty': 'Worker地址不能为空',
'Please generate token': '请生成Token',
'Please Select token': '请选择Token失效时间',
'Spark Version': 'Spark版本',
TargetDataBase: '目标库',
TargetTable: '目标表',
TargetJobName: '目标任务名',
'Please enter Pigeon job name': '请输入Pigeon任务名',
'Please enter the table of target': '请输入目标表名',
'Please enter a Target Table(required)': '请输入目标表(必填)',
SpeedByte: '限流(字节数)',
SpeedRecord: '限流(记录数)',
'0 means unlimited by byte': 'KB,0代表不限制',
'0 means unlimited by count': '0代表不限制',
'Modify User': '修改用户',
'Whether directory': '是否文件夹',
Yes: '是',
No: '否',
'Hadoop Custom Params': 'Hadoop参数',
'Sqoop Advanced Parameters': 'Sqoop参数',
'Sqoop Job Name': '任务名称',
'Please enter Mysql Database(required)': '请输入Mysql数据库(必填)',
'Please enter Mysql Table(required)': '请输入Mysql表名(必填)',
'Please enter Columns (Comma separated)': '请输入列名,用 , 隔开',
'Please enter Target Dir(required)': '请输入目标路径(必填)',
'Please enter Export Dir(required)': '请输入数据源路径(必填)',
'Please enter Hive Database(required)': '请输入Hive数据库(必填)',
'Please enter Hive Table(required)': '请输入Hive表名(必填)',
'Please enter Hive Partition Keys': '请输入分区键',
'Please enter Hive Partition Values': '请输入分区值',
'Please enter Replace Delimiter': '请输入替换分隔符',
'Please enter Fields Terminated': '请输入列分隔符',
'Please enter Lines Terminated': '请输入行分隔符',
'Please enter Concurrency': '请输入并发度',
'Please enter Update Key': '请输入更新列',
'Please enter Job Name(required)': '请输入任务名称(必填)',
'Please enter Custom Shell(required)': '请输入自定义脚本',
Direct: '流向',
Type: '类型',
ModelType: '模式',
ColumnType: '列类型',
Database: '数据库',
Column: '列',
'Map Column Hive': 'Hive类型映射',
'Map Column Java': 'Java类型映射',
'Export Dir': '数据源路径',
'Hive partition Keys': 'Hive 分区键',
'Hive partition Values': 'Hive 分区值',
FieldsTerminated: '列分隔符',
LinesTerminated: '行分隔符',
IsUpdate: '是否更新',
UpdateKey: '更新列',
UpdateMode: '更新类型',
'Target Dir': '目标路径',
DeleteTargetDir: '是否删除目录',
FileType: '保存格式',
CompressionCodec: '压缩类型',
CreateHiveTable: '是否创建新表',
DropDelimiter: '是否删除分隔符',
OverWriteSrc: '是否覆盖数据源',
ReplaceDelimiter: '替换分隔符',
Concurrency: '并发度',
Form: '表单',
OnlyUpdate: '只更新',
AllowInsert: '无更新便插入',
'Data Source': '数据来源',
'Data Target': '数据目的',
'All Columns': '全表导入',
'Some Columns': '选择列',
'Branch flow': '分支流转',
'Custom Job': '自定义任务',
'Custom Script': '自定义脚本',
'Cannot select the same node for successful branch flow and failed branch flow': '成功分支流转和失败分支流转不能选择同一个节点',
'Successful branch flow and failed branch flow are required': 'conditions节点成功和失败分支流转必填',
'No resources exist': '不存在资源',
'Please delete all non-existing resources': '请删除所有不存在资源',
'Unauthorized or deleted resources': '未授权或已删除资源',
'Please delete all non-existent resources': '请删除所有未授权或已删除资源',
Kinship: '工作流关系',
Reset: '重置',
KinshipStateActive: '当前选择',
KinshipState1: '已上线',
KinshipState0: '工作流未上线',
KinshipState10: '调度未上线',
'Dag label display control': 'Dag节点名称显隐',
Enable: '启用',
Disable: '停用',
'The Worker group no longer exists, please select the correct Worker group!': '该Worker分组已经不存在,请选择正确的Worker分组!',
'Please confirm whether the workflow has been saved before downloading': '下载前请确定工作流是否已保存',
'User name length is between 3 and 39': '用户名长度在3~39之间',
'Timeout Settings': '超时设置',
'Connect Timeout': '连接超时',
'Socket Timeout': 'Socket超时',
'Connect timeout be a positive integer': '连接超时必须为数字',
'Socket Timeout be a positive integer': 'Socket超时必须为数字',
ms: '毫秒',
'Please Enter Url': '请直接填写地址,例如:127.0.0.1:7077',
Master: 'Master',
'Please select the waterdrop resources': '请选择waterdrop配置文件',
zkDirectory: 'zk注册目录',
'Directory detail': '查看目录详情',
'Connection name': '连线名',
'Current connection settings': '当前连线设置',
'Please save the DAG before formatting': '格式化前请先保存DAG',
'Batch copy': '批量复制',
'Related items': '关联项目',
'Project name is required': '项目名称必填',
'Batch move': '批量移动',
Version: '版本',
'Pre tasks': '前置任务',
'Running Memory': '运行内存',
'Max Memory': '最大内存',
'Min Memory': '最小内存',
'The workflow canvas is abnormal and cannot be saved, please recreate': '该工作流画布异常,无法保存,请重新创建',
Info: '提示',
'Datasource userName': '所属用户',
'Resource userName': '所属用户',
'Environment manage': '环境管理',
'Create environment': '创建环境',
'Edit environment': '编辑',
'Environment value': 'Environment value',
'Environment Name': '环境名称',
'Environment Code': '环境编码',
'Environment Config': '环境配置',
'Environment Desc': '详细描述',
'Environment Worker Group': 'Worker组',
'Please enter environment config': '请输入环境配置信息',
'Please enter environment desc': '请输入详细描述',
'Please select worker groups': '请选择Worker分组',
condition: '条件',
'The condition content cannot be empty': '条件内容不能为空',
'Reference from': '使用已有任务',
'No more...': '没有更多了...',
'Task Definition': '任务定义',
'Create task': '创建任务',
'Task Type': '任务类型',
'Process execute type': '执行策略',
parallel: '并行',
'Serial wait': '串行等待',
'Serial discard': '串行抛弃',
'Serial priority': '串行优先',
'Recover serial wait': '串行恢复',
IsEnableProxy: '启用代理',
WebHook: 'Web钩子',
Keyword: '密钥',
Proxy: '代理',
receivers: '收件人',
receiverCcs: '抄送人',
transportProtocol: '邮件协议',
serverHost: 'SMTP服务器',
serverPort: 'SMTP端口',
sender: '发件人',
enableSmtpAuth: '请求认证',
starttlsEnable: 'STARTTLS连接',
sslEnable: 'SSL连接',
smtpSslTrust: 'SSL证书信任',
url: 'URL',
requestType: '请求方式',
headerParams: '请求头',
bodyParams: '请求体',
contentField: '内容字段',
path: '脚本路径',
userParams: '自定义参数',
corpId: '企业ID',
secret: '密钥',
teamSendMsg: '群发信息',
userSendMsg: '群员信息',
agentId: '应用ID',
users: '群员',
Username: '用户名',
showType: '内容展示类型',
'Please select a task type (required)': '请选择任务类型(必选)',
layoutType: '布局类型',
gridLayout: '网格布局',
dagreLayout: '层次布局',
rows: '行数',
cols: '列数',
processOnline: '已上线',
searchNode: '搜索节点',
dagScale: '缩放'
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,899 | [Feature][Workflow relationship] When the workflow data of the Workflow relationship is `null`, it will be displayed as `-`. | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
At present, when the workflow data is `null`, the front-end page also displays the literal string `null`; instead, the page should check for `null` data and display `-`.
![image](https://user-images.githubusercontent.com/19239641/142576210-0e62daf2-00a3-41d9-94b1-f6831003d0e0.png)
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6899 | https://github.com/apache/dolphinscheduler/pull/6934 | 024aacacb305c7145e73c5f3d8d1209a88f814d7 | d2666d67391d541468bbfa4fe49dcbd8c4fe2300 | "2021-11-18T06:24:55Z" | java | "2021-11-20T14:59:18Z" | dolphinscheduler-ui/src/js/conf/home/pages/projects/pages/kinship/_source/graphGrid.vue | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
<template>
<div ref="graph-grid" class="graph-grid"></div>
</template>
<script>
import echarts from 'echarts'
import { mapState } from 'vuex'
import graphGridOption from './graphGridOption'
export default {
name: 'graphGrid',
data () {
return {}
},
props: {
isShowLabel: Boolean
},
methods: {
init () {
}
},
created () {
},
mounted () {
const graphGrid = echarts.init(this.$refs['graph-grid'])
graphGrid.setOption(graphGridOption(this.locations, this.connects, this.sourceWorkFlowCode, this.isShowLabel), true)
graphGrid.on('click', (params) => {
// Jump to the definition page
this.$router.push({ path: `/projects/${this.projectCode}/definition/list/${params.data.code}` })
})
},
components: {},
computed: {
...mapState('dag', ['projectCode']),
...mapState('kinship', ['locations', 'connects', 'sourceWorkFlowCode'])
}
}
</script>
<style lang="scss" rel="stylesheet/scss">
.graph-grid {
width: 100%;
height: calc(100vh - 100px);
background: url("./img/dag_bg.png");
}
</style>
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,802 | [Bug] [readme] Error link to Docker and k8s in readme | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
The Docker and Kubernetes quick-start links in the README point to the wrong pages.
### What you expected to happen
The README should link to the correct Docker and Kubernetes documentation.
### How to reproduce
Follow the Docker or Kubernetes links in the README; they resolve to the wrong pages.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6802 | https://github.com/apache/dolphinscheduler/pull/6862 | d2666d67391d541468bbfa4fe49dcbd8c4fe2300 | 1b6b5268b6051357f7866a280ab6492d37fe67a8 | "2021-11-11T09:42:52Z" | java | "2021-11-21T09:50:19Z" | README.md | Dolphin Scheduler Official Website
[dolphinscheduler.apache.org](https://dolphinscheduler.apache.org)
============
[![License](https://img.shields.io/badge/license-Apache%202-4EB1BA.svg)](https://www.apache.org/licenses/LICENSE-2.0.html)
[![Total Lines](https://tokei.rs/b1/github/apache/dolphinscheduler?category=lines)](https://github.com/apache/dolphinscheduler)
[![codecov](https://codecov.io/gh/apache/dolphinscheduler/branch/dev/graph/badge.svg)](https://codecov.io/gh/apache/dolphinscheduler/branch/dev)
[![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=apache-dolphinscheduler&metric=alert_status)](https://sonarcloud.io/dashboard?id=apache-dolphinscheduler)
[![Twitter Follow](https://img.shields.io/twitter/follow/dolphinschedule.svg?style=social&label=Follow)](https://twitter.com/dolphinschedule)
[![Slack Status](https://img.shields.io/badge/slack-join_chat-white.svg?logo=slack&style=social)](https://join.slack.com/t/asf-dolphinscheduler/shared_invite/zt-omtdhuio-_JISsxYhiVsltmC5h38yfw)
[![Stargazers over time](https://starchart.cc/apache/dolphinscheduler.svg)](https://starchart.cc/apache/dolphinscheduler)
[![EN doc](https://img.shields.io/badge/document-English-blue.svg)](README.md)
[![CN doc](https://img.shields.io/badge/文档-中文版-blue.svg)](README_zh_CN.md)
## Design Features
DolphinScheduler is a distributed and extensible workflow scheduler platform with powerful DAG visual interfaces, dedicated to solving complex job dependencies in the data pipeline and providing various types of jobs available `out of the box`.
Its main objectives are as follows:
- Associate the tasks according to the dependencies of the tasks in a DAG graph, which can visualize the running state of the task in real-time.
- Support various task types: Shell, MR, Spark, SQL (MySQL, PostgreSQL, hive, spark SQL), Python, Sub_Process, Procedure, etc.
- Support scheduling of workflows and dependencies, manual scheduling to pause/stop/recover task, support failure task retry/alarm, recover specified nodes from failure, kill task, etc.
- Support the priority of workflows & tasks, task failover, and task timeout alarm or failure.
- Support workflow global parameters and node customized parameter settings.
- Support online upload/download/management of resource files, etc. Support online file creation and editing.
- Support task log online viewing and scrolling and downloading, etc.
- Have implemented cluster HA, decentralize Master cluster and Worker cluster through Zookeeper.
- Support the viewing of Master/Worker CPU load, memory, and CPU usage metrics.
- Support displaying workflow history in tree/Gantt chart, as well as statistical analysis on the task status & process status in each workflow.
- Support back-filling data.
- Support multi-tenant.
- Support internationalization.
- More features waiting for partners to explore...
## What's in DolphinScheduler
Stability | Accessibility | Features | Scalability |
-- | -- | -- | --
Decentralized multi-master and multi-worker | Visualization of workflow key information, such as task status, task type, retry times, task operation machine information, visual variables, and so on at a glance. | Support pause, recover operation | Support customized task types
support HA | Visualization of all workflow operations, dragging tasks to draw DAGs, configuring data sources and resources. At the same time, for third-party systems, provide API mode operations. | Users on DolphinScheduler can achieve many-to-one or one-to-one mapping relationship through tenants and Hadoop users, which is very important for scheduling large data jobs. | The scheduler supports distributed scheduling, and the overall scheduling capability will increase linearly with the scale of the cluster. Master and Worker support dynamic adjustment.
Overload processing: By using the task queue mechanism, the number of schedulable tasks on a single machine can be flexibly configured. Machine jam can be avoided with high tolerance to numbers of tasks cached in task queue. | One-click deployment | Support traditional shell tasks, and big data platform task scheduling: MR, Spark, SQL (MySQL, PostgreSQL, hive, spark SQL), Python, Procedure, Sub_Process | |
## User Interface Screenshots
![home page](https://user-images.githubusercontent.com/15833811/75218288-bf286400-57d4-11ea-8263-d639c6511d5f.jpg)
![dag](https://user-images.githubusercontent.com/15833811/75236750-3374fe80-57f9-11ea-857d-62a66a5a559d.png)
![process definition list page](https://user-images.githubusercontent.com/15833811/75216886-6f479e00-57d0-11ea-92dd-66e7640a186f.png)
![view task log online](https://user-images.githubusercontent.com/15833811/75216924-9900c500-57d0-11ea-91dc-3522a76bdbbe.png)
![resource management](https://user-images.githubusercontent.com/15833811/75216984-be8dce80-57d0-11ea-840d-58546edc8788.png)
![monitor](https://user-images.githubusercontent.com/59273635/75625839-c698a480-5bfc-11ea-8bbe-895b561b337f.png)
![security](https://user-images.githubusercontent.com/15833811/75236441-bfd2f180-57f8-11ea-88bd-f24311e01b7e.png)
![treeview](https://user-images.githubusercontent.com/15833811/75217191-3fe56100-57d1-11ea-8856-f19180d9a879.png)
## QuickStart in Docker
Please refer to the official website document: [QuickStart in Docker](https://dolphinscheduler.apache.org/en-us/docs/latest/user_doc/guide/installation/docker.html)
## QuickStart in Kubernetes
Please refer to the official website document: [QuickStart in Kubernetes](https://dolphinscheduler.apache.org/en-us/docs/latest/user_doc/guide/installation/kubernetes.html)
## How to Build
```bash
./mvnw clean install -Prelease
```
Artifact:
```
dolphinscheduler-dist/target/apache-dolphinscheduler-${latest.release.version}-bin.tar.gz: Binary package of DolphinScheduler
dolphinscheduler-dist/target/apache-dolphinscheduler-${latest.release.version}-src.tar.gz: Source code package of DolphinScheduler
```
## Thanks
DolphinScheduler is based on a lot of excellent open-source projects, such as Google guava, guice, grpc, netty, quartz, and many open-source projects of Apache and so on.
We would like to express our deep gratitude to all the open-source projects used in Dolphin Scheduler. We hope that we are not only the beneficiaries of open source, but also give back to the community. Besides, we hope everyone who has the same enthusiasm and passion for open source will join in and contribute to the open-source community!
## Get Help
1. Submit an [issue](https://github.com/apache/dolphinscheduler/issues/new/choose)
1. Subscribe to this mailing list: https://dolphinscheduler.apache.org/en-us/community/development/subscribe.html, then email dev@dolphinscheduler.apache.org
## Community
You are very welcome to communicate with the developers and users of Dolphin Scheduler. There are two ways to find them:
1. Join the Slack channel by [this invitation link](https://join.slack.com/t/asf-dolphinscheduler/shared_invite/zt-omtdhuio-_JISsxYhiVsltmC5h38yfw).
2. Follow the [Twitter account of DolphinScheduler](https://twitter.com/dolphinschedule) and get the latest news on time.
### Contributor over time
[![Contributor over time](https://contributor-graph-api.apiseven.com/contributors-svg?chart=contributorOverTime&repo=apache/dolphinscheduler)](https://www.apiseven.com/en/contributor-graph?chart=contributorOverTime&repo=apache/dolphinscheduler)
## How to Contribute
The community welcomes everyone to contribute, please refer to this page to find out more: [How to contribute](https://dolphinscheduler.apache.org/en-us/community/development/contribute.html).
## License
Please refer to the [LICENSE](https://github.com/apache/dolphinscheduler/blob/dev/LICENSE) file.
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,802 | [Bug] [readme] Error link to Docker and k8s in readme | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
The Docker and Kubernetes quick-start links in the README point to the wrong pages.
### What you expected to happen
The README should link to the correct Docker and Kubernetes documentation.
### How to reproduce
Follow the Docker or Kubernetes links in the README; they resolve to the wrong pages.
### Anything else
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6802 | https://github.com/apache/dolphinscheduler/pull/6862 | d2666d67391d541468bbfa4fe49dcbd8c4fe2300 | 1b6b5268b6051357f7866a280ab6492d37fe67a8 | "2021-11-11T09:42:52Z" | java | "2021-11-21T09:50:19Z" | README_zh_CN.md | Dolphin Scheduler Official Website
[dolphinscheduler.apache.org](https://dolphinscheduler.apache.org)
============
[![License](https://img.shields.io/badge/license-Apache%202-4EB1BA.svg)](https://www.apache.org/licenses/LICENSE-2.0.html)
[![Total Lines](https://tokei.rs/b1/github/apache/dolphinscheduler?category=lines)](https://github.com/apache/dolphinscheduler)
[![codecov](https://codecov.io/gh/apache/dolphinscheduler/branch/dev/graph/badge.svg)](https://codecov.io/gh/apache/dolphinscheduler/branch/dev)
[![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=apache-dolphinscheduler&metric=alert_status)](https://sonarcloud.io/dashboard?id=apache-dolphinscheduler)
[![Stargazers over time](https://starchart.cc/apache/dolphinscheduler.svg)](https://starchart.cc/apache/dolphinscheduler)
[![CN doc](https://img.shields.io/badge/文档-中文版-blue.svg)](README_zh_CN.md)
[![EN doc](https://img.shields.io/badge/document-English-blue.svg)](README.md)
## 设计特点
一个分布式易扩展的可视化DAG工作流任务调度系统。致力于解决数据处理流程中错综复杂的依赖关系,使调度系统在数据处理流程中`开箱即用`。
其主要目标如下:
- 以DAG图的方式将Task按照任务的依赖关系关联起来,可实时可视化监控任务的运行状态
- 支持丰富的任务类型:Shell、MR、Spark、SQL(mysql、postgresql、hive、sparksql),Python,Sub_Process、Procedure等
- 支持工作流定时调度、依赖调度、手动调度、手动暂停/停止/恢复,同时支持失败重试/告警、从指定节点恢复失败、Kill任务等操作
- 支持工作流优先级、任务优先级及任务的故障转移及任务超时告警/失败
- 支持工作流全局参数及节点自定义参数设置
- 支持资源文件的在线上传/下载,管理等,支持在线文件创建、编辑
- 支持任务日志在线查看及滚动、在线下载日志等
- 实现集群HA,通过Zookeeper实现Master集群和Worker集群去中心化
- 支持对`Master/Worker` cpu load,memory,cpu在线查看
- 支持工作流运行历史树形/甘特图展示、支持任务状态统计、流程状态统计
- 支持补数
- 支持多租户
- 支持国际化
- 还有更多等待伙伴们探索
## 系统部分截图
![home page](https://user-images.githubusercontent.com/15833811/75208819-abbad000-57b7-11ea-8d3c-67e7c270671f.jpg)
![dag](https://user-images.githubusercontent.com/15833811/75209584-93e44b80-57b9-11ea-952e-537fb24ec72d.jpg)
![log](https://user-images.githubusercontent.com/15833811/75209645-c55d1700-57b9-11ea-94d4-e3fa91ab5218.jpg)
![gantt](https://user-images.githubusercontent.com/15833811/75209640-c0986300-57b9-11ea-878e-a2098533ad44.jpg)
![resources](https://user-images.githubusercontent.com/15833811/75209403-11f42280-57b9-11ea-9b59-d4be77063553.jpg)
![monitor](https://user-images.githubusercontent.com/15833811/75209631-b5ddce00-57b9-11ea-8d22-cdf15cf0ee25.jpg)
![security](https://user-images.githubusercontent.com/15833811/75209633-baa28200-57b9-11ea-9def-94bef2e212a7.jpg)
## 近期研发计划
DolphinScheduler的工作计划:<a href="https://github.com/apache/dolphinscheduler/projects/1" target="_blank">研发计划</a> ,其中 In Develop卡片下是正在研发的功能,TODO卡片是待做事项(包括 feature ideas)
## 参与贡献
非常欢迎大家来参与贡献,贡献流程请参考:
[[参与贡献](https://dolphinscheduler.apache.org/zh-cn/community/development/contribute.html)]
## 快速试用 Docker
请参考官方文档: [快速试用 Docker 部署](https://dolphinscheduler.apache.org/zh-cn/docs/latest/user_doc/docker-deployment.html)
## 快速试用 Kubernetes
请参考官方文档: [快速试用 Kubernetes 部署](https://dolphinscheduler.apache.org/zh-cn/docs/latest/user_doc/kubernetes-deployment.html)
## 如何构建
```bash
./mvnw clean install -Prelease
```
制品:
```
dolphinscheduler-dist/target/apache-dolphinscheduler-${latest.release.version}-bin.tar.gz: DolphinScheduler 二进制包
dolphinscheduler-dist/target/apache-dolphinscheduler-${latest.release.version}-src.tar.gz: DolphinScheduler 源代码包
```
## 感谢
Dolphin Scheduler使用了很多优秀的开源项目,比如google的guava、guice、grpc,netty,quartz,以及apache的众多开源项目等等,
正是由于站在这些开源项目的肩膀上,才有Dolphin Scheduler的诞生的可能。对此我们对使用的所有开源软件表示非常的感谢!我们也希望自己不仅是开源的受益者,也能成为开源的贡献者,也希望对开源有同样热情和信念的伙伴加入进来,一起为开源献出一份力!
## 获得帮助
1. 提交issue
2. 先订阅邮件开发列表:[订阅邮件列表](https://dolphinscheduler.apache.org/zh-cn/community/development/subscribe.html), 订阅成功后发送邮件到dev@dolphinscheduler.apache.org.
## 社区
1. 通过[该申请链接](https://join.slack.com/t/asf-dolphinscheduler/shared_invite/zt-omtdhuio-_JISsxYhiVsltmC5h38yfw)加入slack channel
2. 关注[Apache Dolphin Scheduler的Twitter账号](https://twitter.com/dolphinschedule)获取实时动态
## 版权
请参考 [LICENSE](https://github.com/apache/dolphinscheduler/blob/dev/LICENSE) 文件.
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,829 | [Bug] [WorkerServer] Too many open files error | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
version: 2.0
```
[ERROR] 2021-11-12 14:13:35.611 org.apache.dolphinscheduler.common.utils.OSUtils:[175] - /etc/passwd (Too many open files)
java.io.FileNotFoundException: /etc/passwd (Too many open files)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:93)
at org.apache.dolphinscheduler.common.utils.OSUtils.getUserListFromLinux(OSUtils.java:189)
at org.apache.dolphinscheduler.common.utils.OSUtils.getUserList(OSUtils.java:172)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[ERROR] 2021-11-12 14:13:35.611 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[141] - tenantCode: root does not exist
[INFO] 2021-11-12 14:13:35.611 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[232] - develop mode is: false
[ERROR] 2021-11-12 14:13:35.611 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[252] - delete exec dir failed : Failed to list contents of /tmp/dolphinscheduler/exec/process/851650632065024/851651851886592_1/3524626/4979296
java.io.IOException: Failed to list contents of /tmp/dolphinscheduler/exec/process/851650632065024/851651851886592_1/3524626/4979296
at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1647)
at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.clearTaskExecPath(TaskExecuteThread.java:249)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:220)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
```
```
[root@ds8 apache-dolphinscheduler-2.0.1-alpha-SNAPSHOT-bin]# ulimit -n
65535
[root@ds8 apache-dolphinscheduler-2.0.1-alpha-SNAPSHOT-bin]# jps
3767833 Jps
3767487 WorkerServer
[root@ds8 apache-dolphinscheduler-2.0.1-alpha-SNAPSHOT-bin]# lsof -p 3767487 | wc -l
66016
```
When I dry-ran more than 60,000 (6w+) tasks, the worker logged many `Too many open files` errors.
It seems the worker does not close files: the number of open files keeps growing even after tasks fail or finish.
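A minimal sketch of one possible direction (my assumption, not the actual patch): if the leak comes from `OSUtils.getUserListFromLinux` opening `/etc/passwd` without closing the stream on error paths, reading the file with try-with-resources would release the descriptor on every exit:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

// hypothetical helper, not the project's actual OSUtils
public class PasswdReaderSketch {

    public static List<String> getUserListFromLinux() throws IOException {
        List<String> userList = new ArrayList<>();
        // the reader and its underlying file descriptor are closed
        // automatically, even if a line fails to parse
        try (BufferedReader reader = Files.newBufferedReader(Paths.get("/etc/passwd"), StandardCharsets.UTF_8)) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (!line.isEmpty() && !line.startsWith("#")) {
                    // the first colon-separated field is the user name
                    userList.add(line.split(":")[0]);
                }
            }
        }
        return userList;
    }
}
```

Every descriptor leaked this way counts against `ulimit -n`, which would match the steadily growing `lsof` count above.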
### What you expected to happen
The worker should close files properly.
### How to reproduce
Run more than 60,000 tasks in dry-run mode.
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6829 | https://github.com/apache/dolphinscheduler/pull/6852 | 1b6b5268b6051357f7866a280ab6492d37fe67a8 | e6239e808b9a508675889fccb7a47dff40e4d172 | "2021-11-12T06:21:47Z" | java | "2021-11-21T09:51:56Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.common;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.commons.lang.StringUtils;
import org.apache.commons.lang.SystemUtils;
import java.util.regex.Pattern;
/**
* Constants
*/
public final class Constants {
private Constants() {
throw new UnsupportedOperationException("Construct Constants");
}
/**
* common properties path
*/
public static final String COMMON_PROPERTIES_PATH = "/common.properties";
/**
* alert properties
*/
public static final int ALERT_RPC_PORT = 50052;
/**
* registry properties
*/
public static final String REGISTRY_DOLPHINSCHEDULER_MASTERS = "/nodes/master";
public static final String REGISTRY_DOLPHINSCHEDULER_WORKERS = "/nodes/worker";
public static final String REGISTRY_DOLPHINSCHEDULER_DEAD_SERVERS = "/dead-servers";
public static final String REGISTRY_DOLPHINSCHEDULER_NODE = "/nodes";
public static final String REGISTRY_DOLPHINSCHEDULER_LOCK_MASTERS = "/lock/masters";
public static final String REGISTRY_DOLPHINSCHEDULER_LOCK_FAILOVER_MASTERS = "/lock/failover/masters";
public static final String REGISTRY_DOLPHINSCHEDULER_LOCK_FAILOVER_WORKERS = "/lock/failover/workers";
public static final String REGISTRY_DOLPHINSCHEDULER_LOCK_FAILOVER_STARTUP_MASTERS = "/lock/failover/startup-masters";
/**
* fs.defaultFS
*/
public static final String FS_DEFAULTFS = "fs.defaultFS";
/**
* fs s3a endpoint
*/
public static final String FS_S3A_ENDPOINT = "fs.s3a.endpoint";
/**
* fs s3a access key
*/
public static final String FS_S3A_ACCESS_KEY = "fs.s3a.access.key";
/**
* fs s3a secret key
*/
public static final String FS_S3A_SECRET_KEY = "fs.s3a.secret.key";
/**
* hadoop configuration
*/
public static final String HADOOP_RM_STATE_ACTIVE = "ACTIVE";
public static final String HADOOP_RESOURCE_MANAGER_HTTPADDRESS_PORT = "resource.manager.httpaddress.port";
/**
* yarn.resourcemanager.ha.rm.ids
*/
public static final String YARN_RESOURCEMANAGER_HA_RM_IDS = "yarn.resourcemanager.ha.rm.ids";
/**
* yarn.application.status.address
*/
public static final String YARN_APPLICATION_STATUS_ADDRESS = "yarn.application.status.address";
/**
* yarn.job.history.status.address
*/
public static final String YARN_JOB_HISTORY_STATUS_ADDRESS = "yarn.job.history.status.address";
/**
* hdfs configuration
* hdfs.root.user
*/
public static final String HDFS_ROOT_USER = "hdfs.root.user";
/**
* hdfs/s3 configuration
* resource.upload.path
*/
public static final String RESOURCE_UPLOAD_PATH = "resource.upload.path";
/**
* data basedir path
*/
public static final String DATA_BASEDIR_PATH = "data.basedir.path";
/**
* dolphinscheduler.env.path
*/
public static final String DOLPHINSCHEDULER_ENV_PATH = "dolphinscheduler.env.path";
/**
* environment properties default path
*/
public static final String ENV_PATH = "env/dolphinscheduler_env.sh";
/**
* resource.view.suffixs
*/
public static final String RESOURCE_VIEW_SUFFIXS = "resource.view.suffixs";
public static final String RESOURCE_VIEW_SUFFIXS_DEFAULT_VALUE = "txt,log,sh,bat,conf,cfg,py,java,sql,xml,hql,properties,json,yml,yaml,ini,js";
/**
* development.state
*/
public static final String DEVELOPMENT_STATE = "development.state";
/**
* sudo enable
*/
public static final String SUDO_ENABLE = "sudo.enable";
/**
* string true
*/
public static final String STRING_TRUE = "true";
/**
* resource storage type
*/
public static final String RESOURCE_STORAGE_TYPE = "resource.storage.type";
/**
* comma ,
*/
public static final String COMMA = ",";
/**
* COLON :
*/
public static final String COLON = ":";
/**
* SINGLE_SLASH /
*/
public static final String SINGLE_SLASH = "/";
/**
* DOUBLE_SLASH //
*/
public static final String DOUBLE_SLASH = "//";
/**
* EQUAL SIGN
*/
public static final String EQUAL_SIGN = "=";
/**
* date format of yyyy-MM-dd HH:mm:ss
*/
public static final String YYYY_MM_DD_HH_MM_SS = "yyyy-MM-dd HH:mm:ss";
/**
* date format of yyyyMMdd
*/
public static final String YYYYMMDD = "yyyyMMdd";
/**
* date format of yyyyMMddHHmmss
*/
public static final String YYYYMMDDHHMMSS = "yyyyMMddHHmmss";
/**
* date format of yyyyMMddHHmmssSSS
*/
public static final String YYYYMMDDHHMMSSSSS = "yyyyMMddHHmmssSSS";
/**
* http connect time out
*/
public static final int HTTP_CONNECT_TIMEOUT = 60 * 1000;
/**
* http connect request time out
*/
public static final int HTTP_CONNECTION_REQUEST_TIMEOUT = 60 * 1000;
/**
* httpclient socket timeout
*/
public static final int SOCKET_TIMEOUT = 60 * 1000;
/**
* http header
*/
public static final String HTTP_HEADER_UNKNOWN = "unKnown";
/**
* http X-Forwarded-For
*/
public static final String HTTP_X_FORWARDED_FOR = "X-Forwarded-For";
/**
* http X-Real-IP
*/
public static final String HTTP_X_REAL_IP = "X-Real-IP";
/**
* UTF-8
*/
public static final String UTF_8 = "UTF-8";
/**
* user name regex
*/
public static final Pattern REGEX_USER_NAME = Pattern.compile("^[a-zA-Z0-9._-]{3,39}$");
/**
* email regex
*/
public static final Pattern REGEX_MAIL_NAME = Pattern.compile("^([a-z0-9A-Z]+[_|\\-|\\.]?)+[a-z0-9A-Z]@([a-z0-9A-Z]+(-[a-z0-9A-Z]+)?\\.)+[a-zA-Z]{2,}$");
/**
* read permission
*/
public static final int READ_PERMISSION = 2;
/**
* write permission
*/
public static final int WRITE_PERMISSION = 2 * 2;
/**
* execute permission
*/
public static final int EXECUTE_PERMISSION = 1;
/**
* default admin permission
*/
public static final int DEFAULT_ADMIN_PERMISSION = 7;
/**
* all permissions
*/
public static final int ALL_PERMISSIONS = READ_PERMISSION | WRITE_PERMISSION | EXECUTE_PERMISSION;
/**
* max task timeout
*/
public static final int MAX_TASK_TIMEOUT = 24 * 3600;
/**
* worker host weight
*/
public static final int DEFAULT_WORKER_HOST_WEIGHT = 100;
/**
* time unit second to minutes
*/
public static final int SEC_2_MINUTES_TIME_UNIT = 60;
/**
* rpc port
*/
public static final int RPC_PORT = 50051;
/**
* forbid running task
*/
public static final String FLOWNODE_RUN_FLAG_FORBIDDEN = "FORBIDDEN";
/**
* normal running task
*/
public static final String FLOWNODE_RUN_FLAG_NORMAL = "NORMAL";
public static final String COMMON_TASK_TYPE = "common";
public static final String DEFAULT = "default";
public static final String PASSWORD = "password";
public static final String XXXXXX = "******";
public static final String NULL = "NULL";
public static final String THREAD_NAME_MASTER_SERVER = "Master-Server";
public static final String THREAD_NAME_WORKER_SERVER = "Worker-Server";
/**
* command parameter keys
*/
public static final String CMD_PARAM_RECOVER_PROCESS_ID_STRING = "ProcessInstanceId";
public static final String CMD_PARAM_RECOVERY_START_NODE_STRING = "StartNodeIdList";
public static final String CMD_PARAM_RECOVERY_WAITING_THREAD = "WaitingThreadInstanceId";
public static final String CMD_PARAM_SUB_PROCESS = "processInstanceId";
public static final String CMD_PARAM_EMPTY_SUB_PROCESS = "0";
public static final String CMD_PARAM_SUB_PROCESS_PARENT_INSTANCE_ID = "parentProcessInstanceId";
public static final String CMD_PARAM_SUB_PROCESS_DEFINE_CODE = "processDefinitionCode";
public static final String CMD_PARAM_START_NODES = "StartNodeList";
public static final String CMD_PARAM_START_PARAMS = "StartParams";
public static final String CMD_PARAM_FATHER_PARAMS = "fatherParams";
/**
* complement data start date
*/
public static final String CMDPARAM_COMPLEMENT_DATA_START_DATE = "complementStartDate";
/**
* complement data end date
*/
public static final String CMDPARAM_COMPLEMENT_DATA_END_DATE = "complementEndDate";
/**
* complement date default cron string
*/
public static final String DEFAULT_CRON_STRING = "0 0 0 * * ? *";
public static final int SLEEP_TIME_MILLIS = 1000;
/**
* one second mils
*/
public static final int SECOND_TIME_MILLIS = 1000;
/**
* master task instance cache-database refresh interval
*/
public static final int CACHE_REFRESH_TIME_MILLIS = 20 * 1000;
/**
* heartbeat for zk info length
*/
public static final int HEARTBEAT_FOR_ZOOKEEPER_INFO_LENGTH = 13;
/**
* jar
*/
public static final String JAR = "jar";
/**
* hadoop
*/
public static final String HADOOP = "hadoop";
/**
* -D <property>=<value>
*/
public static final String D = "-D";
/**
* exit code success
*/
public static final int EXIT_CODE_SUCCESS = 0;
/**
* exit code failure
*/
public static final int EXIT_CODE_FAILURE = -1;
/**
* process or task definition failure
*/
public static final int DEFINITION_FAILURE = -1;
/**
* process or task definition first version
*/
public static final int VERSION_FIRST = 1;
/**
* date format of yyyyMMdd
*/
public static final String PARAMETER_FORMAT_DATE = "yyyyMMdd";
/**
* date format of yyyyMMddHHmmss
*/
public static final String PARAMETER_FORMAT_TIME = "yyyyMMddHHmmss";
/**
* system date(yyyyMMddHHmmss)
*/
public static final String PARAMETER_DATETIME = "system.datetime";
/**
* system date(yyyymmdd) today
*/
public static final String PARAMETER_CURRENT_DATE = "system.biz.curdate";
/**
* system date(yyyymmdd) yesterday
*/
public static final String PARAMETER_BUSINESS_DATE = "system.biz.date";
/**
* ACCEPTED
*/
public static final String ACCEPTED = "ACCEPTED";
/**
* SUCCEEDED
*/
public static final String SUCCEEDED = "SUCCEEDED";
/**
* NEW
*/
public static final String NEW = "NEW";
/**
* NEW_SAVING
*/
public static final String NEW_SAVING = "NEW_SAVING";
/**
* SUBMITTED
*/
public static final String SUBMITTED = "SUBMITTED";
/**
* FAILED
*/
public static final String FAILED = "FAILED";
/**
* KILLED
*/
public static final String KILLED = "KILLED";
/**
* RUNNING
*/
public static final String RUNNING = "RUNNING";
/**
* underline "_"
*/
public static final String UNDERLINE = "_";
/**
* quartz job prefix
*/
public static final String QUARTZ_JOB_PRIFIX = "job";
/**
* quartz job group prefix
*/
public static final String QUARTZ_JOB_GROUP_PRIFIX = "jobgroup";
/**
* projectId
*/
public static final String PROJECT_ID = "projectId";
/**
* processId
*/
public static final String SCHEDULE_ID = "scheduleId";
/**
* schedule
*/
public static final String SCHEDULE = "schedule";
/**
* application regex
*/
public static final String APPLICATION_REGEX = "application_\\d+_\\d+";
public static final String PID = SystemUtils.IS_OS_WINDOWS ? "handle" : "pid";
/**
* month_begin
*/
public static final String MONTH_BEGIN = "month_begin";
/**
* add_months
*/
public static final String ADD_MONTHS = "add_months";
/**
* month_end
*/
public static final String MONTH_END = "month_end";
/**
* week_begin
*/
public static final String WEEK_BEGIN = "week_begin";
/**
* week_end
*/
public static final String WEEK_END = "week_end";
/**
* timestamp
*/
public static final String TIMESTAMP = "timestamp";
public static final char SUBTRACT_CHAR = '-';
public static final char ADD_CHAR = '+';
public static final char MULTIPLY_CHAR = '*';
public static final char DIVISION_CHAR = '/';
public static final char LEFT_BRACE_CHAR = '(';
public static final char RIGHT_BRACE_CHAR = ')';
public static final String ADD_STRING = "+";
public static final String STAR = "*";
public static final String DIVISION_STRING = "/";
public static final String LEFT_BRACE_STRING = "(";
public static final char P = 'P';
public static final char N = 'N';
public static final String SUBTRACT_STRING = "-";
public static final String GLOBAL_PARAMS = "globalParams";
public static final String LOCAL_PARAMS = "localParams";
public static final String SUBPROCESS_INSTANCE_ID = "subProcessInstanceId";
public static final String PROCESS_INSTANCE_STATE = "processInstanceState";
public static final String PARENT_WORKFLOW_INSTANCE = "parentWorkflowInstance";
public static final String CONDITION_RESULT = "conditionResult";
public static final String SWITCH_RESULT = "switchResult";
public static final String WAIT_START_TIMEOUT = "waitStartTimeout";
public static final String DEPENDENCE = "dependence";
public static final String TASK_LIST = "taskList";
public static final String QUEUE = "queue";
public static final String QUEUE_NAME = "queueName";
public static final int LOG_QUERY_SKIP_LINE_NUMBER = 0;
public static final int LOG_QUERY_LIMIT = 4096;
/**
* master/worker server use for zk
*/
public static final String MASTER_TYPE = "master";
public static final String WORKER_TYPE = "worker";
public static final String DELETE_OP = "delete";
public static final String ADD_OP = "add";
public static final String ALIAS = "alias";
public static final String CONTENT = "content";
public static final String DEPENDENT_SPLIT = ":||";
public static final long DEPENDENT_ALL_TASK_CODE = 0;
/**
* preview schedule execute count
*/
public static final int PREVIEW_SCHEDULE_EXECUTE_COUNT = 5;
/**
* kerberos
*/
public static final String KERBEROS = "kerberos";
/**
* kerberos expire time
*/
public static final String KERBEROS_EXPIRE_TIME = "kerberos.expire.time";
/**
* java.security.krb5.conf
*/
public static final String JAVA_SECURITY_KRB5_CONF = "java.security.krb5.conf";
/**
* java.security.krb5.conf.path
*/
public static final String JAVA_SECURITY_KRB5_CONF_PATH = "java.security.krb5.conf.path";
/**
* hadoop.security.authentication
*/
public static final String HADOOP_SECURITY_AUTHENTICATION = "hadoop.security.authentication";
/**
* hadoop.security.authentication
*/
public static final String HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE = "hadoop.security.authentication.startup.state";
/**
* com.amazonaws.services.s3.enableV4
*/
public static final String AWS_S3_V4 = "com.amazonaws.services.s3.enableV4";
/**
* loginUserFromKeytab user
*/
public static final String LOGIN_USER_KEY_TAB_USERNAME = "login.user.keytab.username";
/**
* loginUserFromKeytab path
*/
public static final String LOGIN_USER_KEY_TAB_PATH = "login.user.keytab.path";
/**
* task log info format
*/
public static final String TASK_LOG_INFO_FORMAT = "TaskLogInfo-%s";
public static final int[] NOT_TERMINATED_STATES = new int[] {
ExecutionStatus.SUBMITTED_SUCCESS.ordinal(),
ExecutionStatus.RUNNING_EXECUTION.ordinal(),
ExecutionStatus.DELAY_EXECUTION.ordinal(),
ExecutionStatus.READY_PAUSE.ordinal(),
ExecutionStatus.READY_STOP.ordinal(),
ExecutionStatus.NEED_FAULT_TOLERANCE.ordinal(),
ExecutionStatus.WAITING_THREAD.ordinal(),
ExecutionStatus.WAITING_DEPEND.ordinal()
};
public static final int[] RUNNING_PROCESS_STATE = new int[] {
ExecutionStatus.RUNNING_EXECUTION.ordinal(),
ExecutionStatus.SUBMITTED_SUCCESS.ordinal(),
ExecutionStatus.SERIAL_WAIT.ordinal()
};
/**
* status
*/
public static final String STATUS = "status";
/**
* message
*/
public static final String MSG = "msg";
/**
* data total
*/
public static final String COUNT = "count";
/**
* page size
*/
public static final String PAGE_SIZE = "pageSize";
/**
* current page no
*/
public static final String PAGE_NUMBER = "pageNo";
/**
* data list
*/
public static final String DATA_LIST = "data";
public static final String TOTAL_LIST = "totalList";
public static final String CURRENT_PAGE = "currentPage";
public static final String TOTAL_PAGE = "totalPage";
public static final String TOTAL = "total";
/**
* workflow
*/
public static final String WORKFLOW_LIST = "workFlowList";
public static final String WORKFLOW_RELATION_LIST = "workFlowRelationList";
/**
* session user
*/
public static final String SESSION_USER = "session.user";
public static final String SESSION_ID = "sessionId";
/**
* locale
*/
public static final String LOCALE_LANGUAGE = "language";
/**
* database type
*/
public static final String MYSQL = "MYSQL";
public static final String HIVE = "HIVE";
public static final String ADDRESS = "address";
public static final String DATABASE = "database";
public static final String OTHER = "other";
/**
* session timeout
*/
public static final int SESSION_TIME_OUT = 7200;
public static final int MAX_FILE_SIZE = 1024 * 1024 * 1024;
public static final String UDF = "UDF";
public static final String CLASS = "class";
/**
* dataSource sensitive param: matches the password value inside a serialized datasource definition so it can be masked
*/
public static final String DATASOURCE_PASSWORD_REGEX = "(?<=((?i)password((\\\\\":\\\\\")|(=')))).*?(?=((\\\\\")|(')))";
/**
* default worker group
*/
public static final String DEFAULT_WORKER_GROUP = "default";
/**
* authorize writable perm
*/
public static final int AUTHORIZE_WRITABLE_PERM = 7;
/**
* authorize readable perm
*/
public static final int AUTHORIZE_READABLE_PERM = 4;
public static final int NORMAL_NODE_STATUS = 0;
public static final int ABNORMAL_NODE_STATUS = 1;
public static final int BUSY_NODE_STATUE = 2;
public static final String START_TIME = "start time";
public static final String END_TIME = "end time";
public static final String START_END_DATE = "startDate,endDate";
/**
* system line separator
*/
public static final String SYSTEM_LINE_SEPARATOR = System.getProperty("line.separator");
/**
* datasource encryption salt
*/
public static final String DATASOURCE_ENCRYPTION_SALT_DEFAULT = "!@#$%^&*";
public static final String DATASOURCE_ENCRYPTION_ENABLE = "datasource.encryption.enable";
public static final String DATASOURCE_ENCRYPTION_SALT = "datasource.encryption.salt";
/**
* network interface preferred
*/
public static final String DOLPHIN_SCHEDULER_NETWORK_INTERFACE_PREFERRED = "dolphin.scheduler.network.interface.preferred";
/**
* network IP gets priority, default inner outer
*/
public static final String DOLPHIN_SCHEDULER_NETWORK_PRIORITY_STRATEGY = "dolphin.scheduler.network.priority.strategy";
/**
* exec shell scripts
*/
public static final String SH = "sh";
/**
* pstree, get pid and sub pids
*/
public static final String PSTREE = "pstree";
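// true when both standard Kubernetes service env vars are present, i.e. the process is presumably running inside a pod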
public static final Boolean KUBERNETES_MODE = !StringUtils.isEmpty(System.getenv("KUBERNETES_SERVICE_HOST")) && !StringUtils.isEmpty(System.getenv("KUBERNETES_SERVICE_PORT"));
/**
* dry run flag
*/
public static final int DRY_RUN_FLAG_NO = 0;
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,829 | [Bug] [WorkerServer] Too many open files error | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
version: 2.0
```
[ERROR] 2021-11-12 14:13:35.611 org.apache.dolphinscheduler.common.utils.OSUtils:[175] - /etc/passwd (Too many open files)
java.io.FileNotFoundException: /etc/passwd (Too many open files)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:93)
at org.apache.dolphinscheduler.common.utils.OSUtils.getUserListFromLinux(OSUtils.java:189)
at org.apache.dolphinscheduler.common.utils.OSUtils.getUserList(OSUtils.java:172)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[ERROR] 2021-11-12 14:13:35.611 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[141] - tenantCode: root does not exist
[INFO] 2021-11-12 14:13:35.611 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[232] - develop mode is: false
[ERROR] 2021-11-12 14:13:35.611 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[252] - delete exec dir failed : Failed to list contents of /tmp/dolphinscheduler/exec/process/851650632065024/851651851886592_1/3524626/4979296
java.io.IOException: Failed to list contents of /tmp/dolphinscheduler/exec/process/851650632065024/851651851886592_1/3524626/4979296
at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1647)
at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.clearTaskExecPath(TaskExecuteThread.java:249)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:220)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
```
```
[root@ds8 apache-dolphinscheduler-2.0.1-alpha-SNAPSHOT-bin]# ulimit -n
65535
[root@ds8 apache-dolphinscheduler-2.0.1-alpha-SNAPSHOT-bin]# jps
3767833 Jps
3767487 WorkerServer
[root@ds8 apache-dolphinscheduler-2.0.1-alpha-SNAPSHOT-bin]# lsof -p 3767487 | wc -l
66016
```
When I dry-ran more than 60,000 (6w+) tasks, the worker logged many `Too many open files` errors.
It seems the worker does not close files: the number of open files keeps growing even after tasks fail or finish.
### What you expected to happen
The worker should close files properly.
### How to reproduce
Run more than 60,000 tasks in dry-run mode.
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6829 | https://github.com/apache/dolphinscheduler/pull/6852 | 1b6b5268b6051357f7866a280ab6492d37fe67a8 | e6239e808b9a508675889fccb7a47dff40e4d172 | "2021-11-12T06:21:47Z" | java | "2021-11-21T09:51:56Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/worker/processor/TaskExecuteProcessor.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.worker.processor;
import org.apache.dolphinscheduler.common.enums.Event;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.enums.TaskType;
import org.apache.dolphinscheduler.common.utils.CommonUtils;
import org.apache.dolphinscheduler.common.utils.DateUtils;
import org.apache.dolphinscheduler.common.utils.FileUtils;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.common.utils.NetUtils;
import org.apache.dolphinscheduler.common.utils.OSUtils;
import org.apache.dolphinscheduler.remote.command.Command;
import org.apache.dolphinscheduler.remote.command.CommandType;
import org.apache.dolphinscheduler.remote.command.TaskExecuteAckCommand;
import org.apache.dolphinscheduler.remote.command.TaskExecuteRequestCommand;
import org.apache.dolphinscheduler.remote.processor.NettyRemoteChannel;
import org.apache.dolphinscheduler.remote.processor.NettyRequestProcessor;
import org.apache.dolphinscheduler.server.utils.LogUtils;
import org.apache.dolphinscheduler.server.worker.cache.ResponceCache;
import org.apache.dolphinscheduler.server.worker.config.WorkerConfig;
import org.apache.dolphinscheduler.server.worker.plugin.TaskPluginManager;
import org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread;
import org.apache.dolphinscheduler.server.worker.runner.WorkerManagerThread;
import org.apache.dolphinscheduler.service.alert.AlertClientService;
import org.apache.dolphinscheduler.service.bean.SpringApplicationContext;
import org.apache.dolphinscheduler.service.queue.entity.TaskExecutionContext;
import org.apache.dolphinscheduler.spi.task.TaskExecutionContextCacheManager;
import org.apache.dolphinscheduler.spi.task.request.TaskRequest;
import java.util.Date;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.google.common.base.Preconditions;
import io.netty.channel.Channel;
/**
* worker request processor
*/
public class TaskExecuteProcessor implements NettyRequestProcessor {
private static final Logger logger = LoggerFactory.getLogger(TaskExecuteProcessor.class);
/**
* worker config
*/
private final WorkerConfig workerConfig;
/**
* task callback service
*/
private final TaskCallbackService taskCallbackService;
/**
* alert client service
*/
private AlertClientService alertClientService;
private TaskPluginManager taskPluginManager;
/**
* task execute manager
*/
private final WorkerManagerThread workerManager;
public TaskExecuteProcessor() {
this.taskCallbackService = SpringApplicationContext.getBean(TaskCallbackService.class);
this.workerConfig = SpringApplicationContext.getBean(WorkerConfig.class);
this.workerManager = SpringApplicationContext.getBean(WorkerManagerThread.class);
}
/**
* Pre-cache the task so that a kill request arriving before execution starts can still find the task in the cache.
*
* @param taskExecutionContext task
*/
private void setTaskCache(TaskExecutionContext taskExecutionContext) {
TaskExecutionContext preTaskCache = new TaskExecutionContext();
preTaskCache.setTaskInstanceId(taskExecutionContext.getTaskInstanceId());
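// copy into a TaskRequest via a JSON round-trip (presumably because the two classes share field names but no common supertype)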
TaskRequest taskRequest = JSONUtils.parseObject(JSONUtils.toJsonString(taskExecutionContext), TaskRequest.class);
TaskExecutionContextCacheManager.cacheTaskExecutionContext(taskRequest);
}
public TaskExecuteProcessor(AlertClientService alertClientService, TaskPluginManager taskPluginManager) {
this();
this.alertClientService = alertClientService;
this.taskPluginManager = taskPluginManager;
}
@Override
public void process(Channel channel, Command command) {
Preconditions.checkArgument(CommandType.TASK_EXECUTE_REQUEST == command.getType(),
String.format("invalid command type : %s", command.getType()));
TaskExecuteRequestCommand taskRequestCommand = JSONUtils.parseObject(
command.getBody(), TaskExecuteRequestCommand.class);
logger.info("received command : {}", taskRequestCommand);
if (taskRequestCommand == null) {
logger.error("task execute request command is null");
return;
}
String contextJson = taskRequestCommand.getTaskExecutionContext();
TaskExecutionContext taskExecutionContext = JSONUtils.parseObject(contextJson, TaskExecutionContext.class);
if (taskExecutionContext == null) {
logger.error("task execution context is null");
return;
}
setTaskCache(taskExecutionContext);
// todo custom logger
taskExecutionContext.setHost(NetUtils.getAddr(workerConfig.getListenPort()));
taskExecutionContext.setLogPath(LogUtils.getTaskLogPath(taskExecutionContext));
// local execute path
String execLocalPath = getExecLocalPath(taskExecutionContext);
logger.info("task instance local execute path : {}", execLocalPath);
taskExecutionContext.setExecutePath(execLocalPath);
try {
FileUtils.createWorkDirIfAbsent(execLocalPath);
if (CommonUtils.isSudoEnable() && workerConfig.isTenantAutoCreate()) {
OSUtils.createUserIfAbsent(taskExecutionContext.getTenantCode());
}
} catch (Throwable ex) {
logger.error("create execLocalPath: {}", execLocalPath, ex);
TaskExecutionContextCacheManager.removeByTaskInstanceId(taskExecutionContext.getTaskInstanceId());
}
taskCallbackService.addRemoteChannel(taskExecutionContext.getTaskInstanceId(),
new NettyRemoteChannel(channel, command.getOpaque()));
// delay task process
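// delayTime appears to be configured in minutes, hence the * 60L conversion to seconds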
long remainTime = DateUtils.getRemainTime(taskExecutionContext.getFirstSubmitTime(), taskExecutionContext.getDelayTime() * 60L);
if (remainTime > 0) {
logger.info("delay the execution of task instance {}, delay time: {} s", taskExecutionContext.getTaskInstanceId(), remainTime);
taskExecutionContext.setCurrentExecutionStatus(ExecutionStatus.DELAY_EXECUTION);
taskExecutionContext.setStartTime(null);
} else {
taskExecutionContext.setCurrentExecutionStatus(ExecutionStatus.RUNNING_EXECUTION);
taskExecutionContext.setStartTime(new Date());
}
this.doAck(taskExecutionContext);
// submit task to manager
if (!workerManager.offer(new TaskExecuteThread(taskExecutionContext, taskCallbackService, alertClientService, taskPluginManager))) {
logger.info("submit task to manager error, queue is full, queue size is {}", workerManager.getDelayQueueSize());
}
}
private void doAck(TaskExecutionContext taskExecutionContext) {
// tell master that task is in executing
TaskExecuteAckCommand ackCommand = buildAckCommand(taskExecutionContext);
ResponceCache.get().cache(taskExecutionContext.getTaskInstanceId(), ackCommand.convert2Command(), Event.ACK);
taskCallbackService.sendAck(taskExecutionContext.getTaskInstanceId(), ackCommand.convert2Command());
}
/**
* build ack command
*
* @param taskExecutionContext taskExecutionContext
* @return TaskExecuteAckCommand
*/
private TaskExecuteAckCommand buildAckCommand(TaskExecutionContext taskExecutionContext) {
TaskExecuteAckCommand ackCommand = new TaskExecuteAckCommand();
ackCommand.setTaskInstanceId(taskExecutionContext.getTaskInstanceId());
ackCommand.setStatus(taskExecutionContext.getCurrentExecutionStatus().getCode());
ackCommand.setLogPath(LogUtils.getTaskLogPath(taskExecutionContext));
ackCommand.setHost(taskExecutionContext.getHost());
ackCommand.setStartTime(taskExecutionContext.getStartTime());
if (TaskType.SQL.getDesc().equalsIgnoreCase(taskExecutionContext.getTaskType()) || TaskType.PROCEDURE.getDesc().equalsIgnoreCase(taskExecutionContext.getTaskType())) {
ackCommand.setExecutePath(null);
} else {
ackCommand.setExecutePath(taskExecutionContext.getExecutePath());
}
taskExecutionContext.setLogPath(ackCommand.getLogPath());
ackCommand.setProcessInstanceId(taskExecutionContext.getProcessInstanceId());
return ackCommand;
}
/**
* get execute local path
*
* @param taskExecutionContext taskExecutionContext
* @return execute local path
*/
private String getExecLocalPath(TaskExecutionContext taskExecutionContext) {
return FileUtils.getProcessExecDir(taskExecutionContext.getProjectCode(),
taskExecutionContext.getProcessDefineCode(),
taskExecutionContext.getProcessDefineVersion(),
taskExecutionContext.getProcessInstanceId(),
taskExecutionContext.getTaskInstanceId());
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,829 | [Bug] [WorkerServer] Too many open files error | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
version: 2.0
```
[ERROR] 2021-11-12 14:13:35.611 org.apache.dolphinscheduler.common.utils.OSUtils:[175] - /etc/passwd (Too many open files)
java.io.FileNotFoundException: /etc/passwd (Too many open files)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:93)
at org.apache.dolphinscheduler.common.utils.OSUtils.getUserListFromLinux(OSUtils.java:189)
at org.apache.dolphinscheduler.common.utils.OSUtils.getUserList(OSUtils.java:172)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:139)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[ERROR] 2021-11-12 14:13:35.611 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[141] - tenantCode: root does not exist
[INFO] 2021-11-12 14:13:35.611 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[232] - develop mode is: false
[ERROR] 2021-11-12 14:13:35.611 org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread:[252] - delete exec dir failed : Failed to list contents of /tmp/dolphinscheduler/exec/process/851650632065024/851651851886592_1/3524626/4979296
java.io.IOException: Failed to list contents of /tmp/dolphinscheduler/exec/process/851650632065024/851651851886592_1/3524626/4979296
at org.apache.commons.io.FileUtils.cleanDirectory(FileUtils.java:1647)
at org.apache.commons.io.FileUtils.deleteDirectory(FileUtils.java:1535)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.clearTaskExecPath(TaskExecuteThread.java:249)
at org.apache.dolphinscheduler.server.worker.runner.TaskExecuteThread.run(TaskExecuteThread.java:220)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
```
```
[root@ds8 apache-dolphinscheduler-2.0.1-alpha-SNAPSHOT-bin]# ulimit -n
65535
[root@ds8 apache-dolphinscheduler-2.0.1-alpha-SNAPSHOT-bin]# jps
3767833 Jps
3767487 WorkerServer
[root@ds8 apache-dolphinscheduler-2.0.1-alpha-SNAPSHOT-bin]# lsof -p 3767487 | wc -l
66016
```
When I ran more than 60,000 (6w+) tasks in dry-run mode, I found that the worker logged many `Too many open files` errors.
It seems the worker never closed some files: the number of open files kept growing even as tasks failed and finished.
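A minimal sketch of the kind of fix this points at (an assumption: the leak is the reader opened over `/etc/passwd` in `OSUtils.getUserListFromLinux` not being closed on every path; the method below mirrors the stack trace, but its body is illustrative):
```java
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

// Hypothetical repair: try-with-resources guarantees the /etc/passwd handle
// is released even when reading or parsing throws.
static List<String> getUserListFromLinux() throws IOException {
    List<String> userList = new ArrayList<>();
    try (BufferedReader bufferedReader = new BufferedReader(
            new InputStreamReader(new FileInputStream("/etc/passwd")))) {
        String line;
        while ((line = bufferedReader.readLine()) != null) {
            if (line.contains(":")) {
                userList.add(line.split(":")[0]);
            }
        }
    }
    return userList;
}
```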
### What you expected to happen
The worker closes files normally, so the open-file count stays bounded.
### How to reproduce
Run 60,000+ tasks in dry-run mode.
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6829 | https://github.com/apache/dolphinscheduler/pull/6852 | 1b6b5268b6051357f7866a280ab6492d37fe67a8 | e6239e808b9a508675889fccb7a47dff40e4d172 | "2021-11-12T06:21:47Z" | java | "2021-11-21T09:51:56Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/worker/runner/TaskExecuteThread.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.worker.runner;
import static java.util.Calendar.DAY_OF_MONTH;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.CommandType;
import org.apache.dolphinscheduler.common.enums.Event;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.enums.TaskType;
import org.apache.dolphinscheduler.common.process.Property;
import org.apache.dolphinscheduler.common.utils.CommonUtils;
import org.apache.dolphinscheduler.common.utils.DateUtils;
import org.apache.dolphinscheduler.common.utils.HadoopUtils;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.common.utils.LoggerUtils;
import org.apache.dolphinscheduler.common.utils.OSUtils;
import org.apache.dolphinscheduler.common.utils.RetryerUtils;
import org.apache.dolphinscheduler.remote.command.Command;
import org.apache.dolphinscheduler.remote.command.TaskExecuteAckCommand;
import org.apache.dolphinscheduler.remote.command.TaskExecuteResponseCommand;
import org.apache.dolphinscheduler.server.utils.ProcessUtils;
import org.apache.dolphinscheduler.server.worker.cache.ResponceCache;
import org.apache.dolphinscheduler.server.worker.plugin.TaskPluginManager;
import org.apache.dolphinscheduler.server.worker.processor.TaskCallbackService;
import org.apache.dolphinscheduler.service.alert.AlertClientService;
import org.apache.dolphinscheduler.service.queue.entity.TaskExecutionContext;
import org.apache.dolphinscheduler.spi.task.AbstractTask;
import org.apache.dolphinscheduler.spi.task.TaskAlertInfo;
import org.apache.dolphinscheduler.spi.task.TaskChannel;
import org.apache.dolphinscheduler.spi.task.TaskExecutionContextCacheManager;
import org.apache.dolphinscheduler.spi.task.request.TaskRequest;
import org.apache.commons.collections.MapUtils;
import org.apache.commons.lang.StringUtils;
import java.io.File;
import java.io.IOException;
import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.Delayed;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.github.rholder.retry.RetryException;
/**
* task scheduler thread
*/
public class TaskExecuteThread implements Runnable, Delayed {
/**
* logger
*/
private final Logger logger = LoggerFactory.getLogger(TaskExecuteThread.class);
/**
* task instance
*/
private TaskExecutionContext taskExecutionContext;
/**
* abstract task
*/
private AbstractTask task;
/**
* task callback service
*/
private TaskCallbackService taskCallbackService;
/**
* alert client server
*/
private AlertClientService alertClientService;
private TaskPluginManager taskPluginManager;
/**
* constructor
*
* @param taskExecutionContext taskExecutionContext
* @param taskCallbackService taskCallbackService
*/
public TaskExecuteThread(TaskExecutionContext taskExecutionContext,
TaskCallbackService taskCallbackService,
AlertClientService alertClientService) {
this.taskExecutionContext = taskExecutionContext;
this.taskCallbackService = taskCallbackService;
this.alertClientService = alertClientService;
}
public TaskExecuteThread(TaskExecutionContext taskExecutionContext,
TaskCallbackService taskCallbackService,
AlertClientService alertClientService,
TaskPluginManager taskPluginManager) {
this.taskExecutionContext = taskExecutionContext;
this.taskCallbackService = taskCallbackService;
this.alertClientService = alertClientService;
this.taskPluginManager = taskPluginManager;
}
@Override
public void run() {
TaskExecuteResponseCommand responseCommand = new TaskExecuteResponseCommand(taskExecutionContext.getTaskInstanceId(), taskExecutionContext.getProcessInstanceId());
try {
logger.info("script path : {}", taskExecutionContext.getExecutePath());
// check if the OS user exists
if (!OSUtils.getUserList().contains(taskExecutionContext.getTenantCode())) {
String errorLog = String.format("tenantCode: %s does not exist", taskExecutionContext.getTenantCode());
logger.error(errorLog);
responseCommand.setStatus(ExecutionStatus.FAILURE.getCode());
responseCommand.setEndTime(new Date());
return;
}
if (taskExecutionContext.getStartTime() == null) {
taskExecutionContext.setStartTime(new Date());
}
if (taskExecutionContext.getCurrentExecutionStatus() != ExecutionStatus.RUNNING_EXECUTION) {
changeTaskExecutionStatusToRunning();
}
logger.info("the task begins to execute. task instance id: {}", taskExecutionContext.getTaskInstanceId());
int dryRun = taskExecutionContext.getDryRun();
// copy hdfs/minio file to local
if (dryRun == Constants.DRY_RUN_FLAG_NO) {
downloadResource(taskExecutionContext.getExecutePath(),
taskExecutionContext.getResources(),
logger);
}
taskExecutionContext.setEnvFile(CommonUtils.getSystemEnvPath());
taskExecutionContext.setDefinedParams(getGlobalParamsMap());
taskExecutionContext.setTaskAppId(String.format("%s_%s",
taskExecutionContext.getProcessInstanceId(),
taskExecutionContext.getTaskInstanceId()));
preBuildBusinessParams();
TaskChannel taskChannel = taskPluginManager.getTaskChannelMap().get(taskExecutionContext.getTaskType());
if (null == taskChannel) {
throw new RuntimeException(String.format("%s Task Plugin Not Found,Please Check Config File.", taskExecutionContext.getTaskType()));
}
TaskRequest taskRequest = JSONUtils.parseObject(JSONUtils.toJsonString(taskExecutionContext), TaskRequest.class);
String taskLogName = LoggerUtils.buildTaskId(LoggerUtils.TASK_LOGGER_INFO_PREFIX,
taskExecutionContext.getFirstSubmitTime(),
taskExecutionContext.getProcessDefineCode(),
taskExecutionContext.getProcessDefineVersion(),
taskExecutionContext.getProcessInstanceId(),
taskExecutionContext.getTaskInstanceId());
taskRequest.setTaskLogName(taskLogName);
task = taskChannel.createTask(taskRequest);
// task init
this.task.init();
//init varPool
this.task.getParameters().setVarPool(taskExecutionContext.getVarPool());
if (dryRun == Constants.DRY_RUN_FLAG_NO) {
// task handle
this.task.handle();
// task result process
if (this.task.getNeedAlert()) {
sendAlert(this.task.getTaskAlertInfo());
}
responseCommand.setStatus(this.task.getExitStatus().getCode());
} else {
responseCommand.setStatus(ExecutionStatus.SUCCESS.getCode());
task.setExitStatusCode(Constants.EXIT_CODE_SUCCESS);
}
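// (added note) dry-run skips handle() and marks the task SUCCESS right away;
// issue 6829 reproduces the fd leak this way because every finished task
// still walks the OSUtils.getUserList() and clearTaskExecPath() paths.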
responseCommand.setEndTime(new Date());
responseCommand.setProcessId(this.task.getProcessId());
responseCommand.setAppIds(this.task.getAppIds());
responseCommand.setVarPool(JSONUtils.toJsonString(this.task.getParameters().getVarPool()));
logger.info("task instance id : {},task final status : {}", taskExecutionContext.getTaskInstanceId(), this.task.getExitStatus());
} catch (Throwable e) {
logger.error("task scheduler failure", e);
kill();
responseCommand.setStatus(ExecutionStatus.FAILURE.getCode());
responseCommand.setEndTime(new Date());
responseCommand.setProcessId(task.getProcessId());
responseCommand.setAppIds(task.getAppIds());
} finally {
TaskExecutionContextCacheManager.removeByTaskInstanceId(taskExecutionContext.getTaskInstanceId());
ResponceCache.get().cache(taskExecutionContext.getTaskInstanceId(), responseCommand.convert2Command(), Event.RESULT);
taskCallbackService.sendResult(taskExecutionContext.getTaskInstanceId(), responseCommand.convert2Command());
clearTaskExecPath();
}
}
private void sendAlert(TaskAlertInfo taskAlertInfo) {
alertClientService.sendAlert(taskAlertInfo.getAlertGroupId(), taskAlertInfo.getTitle(), taskAlertInfo.getContent());
}
/**
* when task finish, clear execute path.
*/
private void clearTaskExecPath() {
logger.info("develop mode is: {}", CommonUtils.isDevelopMode());
if (!CommonUtils.isDevelopMode()) {
// get exec dir
String execLocalPath = taskExecutionContext.getExecutePath();
if (StringUtils.isEmpty(execLocalPath)) {
logger.warn("task: {} exec local path is empty.", taskExecutionContext.getTaskName());
return;
}
if ("/".equals(execLocalPath)) {
logger.warn("task: {} exec local path is '/', direct deletion is not allowed", taskExecutionContext.getTaskName());
return;
}
try {
org.apache.commons.io.FileUtils.deleteDirectory(new File(execLocalPath));
logger.info("exec local path: {} cleared.", execLocalPath);
} catch (IOException e) {
logger.error("delete exec dir failed : {}", e.getMessage(), e);
}
}
}
/**
* get global paras map
*
* @return map
*/
private Map<String, String> getGlobalParamsMap() {
Map<String, String> globalParamsMap = new HashMap<>(16);
// global params string
String globalParamsStr = taskExecutionContext.getGlobalParams();
if (globalParamsStr != null) {
List<Property> globalParamsList = JSONUtils.toList(globalParamsStr, Property.class);
globalParamsMap.putAll(globalParamsList.stream().collect(Collectors.toMap(Property::getProp, Property::getValue)));
}
return globalParamsMap;
}
/**
* kill task
*/
public void kill() {
if (task != null) {
try {
task.cancelApplication(true);
ProcessUtils.killYarnJob(taskExecutionContext);
} catch (Exception e) {
logger.error(e.getMessage(), e);
}
}
}
/**
* download resource file
*
* @param execLocalPath execLocalPath
* @param projectRes projectRes
* @param logger logger
*/
private void downloadResource(String execLocalPath, Map<String, String> projectRes, Logger logger) {
if (MapUtils.isEmpty(projectRes)) {
return;
}
Set<Map.Entry<String, String>> resEntries = projectRes.entrySet();
for (Map.Entry<String, String> resource : resEntries) {
String fullName = resource.getKey();
String tenantCode = resource.getValue();
File resFile = new File(execLocalPath, fullName);
if (!resFile.exists()) {
try {
// query the tenant code of the resource according to the name of the resource
String resHdfsPath = HadoopUtils.getHdfsResourceFileName(tenantCode, fullName);
logger.info("get resource file from hdfs :{}", resHdfsPath);
HadoopUtils.getInstance().copyHdfsToLocal(resHdfsPath, execLocalPath + File.separator + fullName, false, true);
} catch (Exception e) {
logger.error(e.getMessage(), e);
throw new RuntimeException(e.getMessage());
}
} else {
logger.info("file : {} exists ", resFile.getName());
}
}
}
/**
* send an ack to change the status of the task.
*/
private void changeTaskExecutionStatusToRunning() {
taskExecutionContext.setCurrentExecutionStatus(ExecutionStatus.RUNNING_EXECUTION);
Command ackCommand = buildAckCommand().convert2Command();
try {
RetryerUtils.retryCall(() -> {
taskCallbackService.sendAck(taskExecutionContext.getTaskInstanceId(), ackCommand);
return Boolean.TRUE;
});
} catch (ExecutionException | RetryException e) {
logger.error(e.getMessage(), e);
}
}
/**
* build ack command.
*
* @return TaskExecuteAckCommand
*/
private TaskExecuteAckCommand buildAckCommand() {
TaskExecuteAckCommand ackCommand = new TaskExecuteAckCommand();
ackCommand.setTaskInstanceId(taskExecutionContext.getTaskInstanceId());
ackCommand.setStatus(taskExecutionContext.getCurrentExecutionStatus().getCode());
ackCommand.setStartTime(taskExecutionContext.getStartTime());
ackCommand.setLogPath(taskExecutionContext.getLogPath());
ackCommand.setHost(taskExecutionContext.getHost());
if (TaskType.SQL.getDesc().equalsIgnoreCase(taskExecutionContext.getTaskType()) || TaskType.PROCEDURE.getDesc().equalsIgnoreCase(taskExecutionContext.getTaskType())) {
ackCommand.setExecutePath(null);
} else {
ackCommand.setExecutePath(taskExecutionContext.getExecutePath());
}
return ackCommand;
}
/**
* get current TaskExecutionContext
*
* @return TaskExecutionContext
*/
public TaskExecutionContext getTaskExecutionContext() {
return this.taskExecutionContext;
}
@Override
public long getDelay(TimeUnit unit) {
return unit.convert(DateUtils.getRemainTime(taskExecutionContext.getFirstSubmitTime(),
taskExecutionContext.getDelayTime() * 60L), TimeUnit.SECONDS);
}
@Override
public int compareTo(Delayed o) {
if (o == null) {
return 1;
}
return Long.compare(this.getDelay(TimeUnit.MILLISECONDS), o.getDelay(TimeUnit.MILLISECONDS));
}
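// (added note) getDelay()/compareTo() satisfy java.util.concurrent.Delayed,
// so this runnable can sit in a delay queue and only becomes eligible once
// firstSubmitTime + delayTime (minutes) has elapsed.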
private void preBuildBusinessParams() {
Map<String, Property> paramsMap = new HashMap<>();
// replace the TIME variable with $[YYYYmmdd...] in the shell file when running history jobs and batch complement jobs
if (taskExecutionContext.getScheduleTime() != null) {
Date date = taskExecutionContext.getScheduleTime();
if (CommandType.COMPLEMENT_DATA.getCode() == taskExecutionContext.getCmdTypeIfComplement()) {
date = DateUtils.add(taskExecutionContext.getScheduleTime(), DAY_OF_MONTH, 1);
}
String dateTime = DateUtils.format(date, Constants.PARAMETER_FORMAT_TIME);
Property p = new Property();
p.setValue(dateTime);
p.setProp(Constants.PARAMETER_DATETIME);
paramsMap.put(Constants.PARAMETER_DATETIME, p);
}
taskExecutionContext.setParamsMap(paramsMap);
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,950 | [Feature][Workflow relationship] The status is currently a number and it should be supported by `i18n`. | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
The status is currently rendered as a raw number; it should be shown as localized text (English or Chinese).
![image](https://user-images.githubusercontent.com/19239641/142795372-aa006f07-1ffd-41d4-a017-c2b4bef54af8.png)
### Use case
Change the status `0` to offline, and `1` to online.
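A minimal sketch of the mapping (assuming the raw numbers surface in the relationship tooltip; the i18n keys here are illustrative, not the project's real keys):
```js
// Hypothetical helper: render publish status through i18n instead of raw 0/1.
const formatPublishStatus = (status) =>
  status === '1' ? i18n.$t('online') : i18n.$t('offline')
```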
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6950 | https://github.com/apache/dolphinscheduler/pull/6954 | 199a84aa059234d76f01fc4ed5daf96abcfeb630 | da35c17cc07ea7fc2895eda4baf4e84b0c67b2fe | "2021-11-22T02:06:46Z" | java | "2021-11-22T05:47:14Z" | dolphinscheduler-ui/src/js/conf/home/pages/projects/pages/kinship/_source/graphGridOption.js | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import _ from 'lodash'
import i18n from '@/module/i18n/index.js'
const getCategory = (categoryDic, { workFlowPublishStatus, schedulePublishStatus, code }, sourceWorkFlowCode) => {
if (code === sourceWorkFlowCode) return categoryDic.active
switch (true) {
case workFlowPublishStatus === '0':
return categoryDic['0']
case workFlowPublishStatus === '1' && schedulePublishStatus === '0':
return categoryDic['10']
case workFlowPublishStatus === '1' && schedulePublishStatus === '1':
default:
return categoryDic['1']
}
}
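// (added note) mapping used above: 'active' marks the workflow being viewed,
// '0' = workflow offline, '10' = workflow online but its schedule offline,
// '1' = workflow and schedule both online.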
export default function (locations, links, sourceWorkFlowCode, isShowLabel) {
const categoryDic = {
active: { color: '#2D8DF0', category: i18n.$t('KinshipStateActive') },
1: { color: '#00C800', category: i18n.$t('KinshipState1') },
0: { color: '#999999', category: i18n.$t('KinshipState0') },
10: { color: '#FF8F05', category: i18n.$t('KinshipState10') }
}
const newData = _.map(locations, (item) => {
const { color, category } = getCategory(categoryDic, item, sourceWorkFlowCode)
return {
...item,
id: item.code,
emphasis: {
itemStyle: {
color
}
},
category
}
})
const categories = [
{ name: categoryDic.active.category },
{ name: categoryDic['1'].category },
{ name: categoryDic['0'].category },
{ name: categoryDic['10'].category }
]
const option = {
tooltip: {
trigger: 'item',
triggerOn: 'mousemove',
backgroundColor: '#2D303A',
padding: [8, 12],
formatter: (params) => {
if (!params.data.name) return ''
const { name, scheduleStartTime, scheduleEndTime, crontab, workFlowPublishStatus, schedulePublishStatus } = params.data
return `
${i18n.$t('workflowName')}:${name}<br/>
${i18n.$t('scheduleStartTime')}:${scheduleStartTime}<br/>
${i18n.$t('scheduleEndTime')}:${scheduleEndTime}<br/>
${i18n.$t('crontabExpression')}:${crontab}<br/>
${i18n.$t('workflowPublishStatus')}:${workFlowPublishStatus}<br/>
${i18n.$t('schedulePublishStatus')}:${schedulePublishStatus}<br/>
`
},
color: '#2D303A',
textStyle: {
rich: {
a: {
fontSize: 12,
color: '#2D303A',
lineHeight: 12,
align: 'left',
padding: [4, 4, 4, 4]
}
}
}
},
color: [categoryDic.active.color, categoryDic['1'].color, categoryDic['0'].color, categoryDic['10'].color],
legend: [{
orient: 'horizontal',
top: 6,
left: 6,
data: categories
}],
series: [{
type: 'graph',
layout: 'force',
nodeScaleRatio: 1.2,
draggable: true,
animation: false,
data: newData,
roam: true,
symbol: 'roundRect',
symbolSize: 70,
categories,
label: {
show: isShowLabel,
position: 'inside',
formatter: (params) => {
if (!params.data.name) return ''
const str = params.data.name.split('_').map(item => `{a|${item}\n}`).join('')
return str
},
color: '#222222',
textStyle: {
rich: {
a: {
fontSize: 12,
color: '#222222',
lineHeight: 12,
align: 'left',
padding: [4, 4, 4, 4]
}
}
}
},
edgeSymbol: ['circle', 'arrow'],
edgeSymbolSize: [4, 12],
force: {
repulsion: 1000,
edgeLength: 300
},
links: links,
lineStyle: {
color: '#999999'
}
}]
}
return option
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,941 | [Bug] [API] edit alarm group page is abnormal | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
The edit alarm group page is abnormal:
```
vue.runtime.esm.js?2b0e:1897 TypeError: Cannot read property 'split' of undefined
at VueComponent.created (createWarning.vue?045c:146)
at invokeWithErrorHandling (vue.runtime.esm.js?2b0e:1863)
at callHook (vue.runtime.esm.js?2b0e:4235)
at VueComponent.Vue._init (vue.runtime.esm.js?2b0e:5022)
at new VueComponent (vue.runtime.esm.js?2b0e:5168)
at createComponentInstanceForVnode (vue.runtime.esm.js?2b0e:3304)
at init (vue.runtime.esm.js?2b0e:3133)
at createComponent (vue.runtime.esm.js?2b0e:6022)
at createElm (vue.runtime.esm.js?2b0e:5969)
at createChildren (vue.runtime.esm.js?2b0e:6097)
```
### What you expected to happen
The page is displayed normally
### How to reproduce
Edit an alarm group on the alarm group form page.
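A hedged frontend guard for the symptom (illustrative only; the underlying fix presumably makes the list API return `alertInstanceIds` so the edit form has a string to split):
```js
// Hypothetical defensive parse for createWarning.vue's created hook:
// tolerate a missing alertInstanceIds instead of crashing on undefined.
const ids = (this.item.alertInstanceIds || '').split(',').filter(Boolean)
```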
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6941 | https://github.com/apache/dolphinscheduler/pull/6942 | ad33d434987863b4ee7b3f8ed89d7e59734687c0 | 38b14410ab622685b961cc8fc6728963b8552779 | "2021-11-21T06:34:50Z" | java | "2021-11-23T01:28:27Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/AlertGroupServiceImpl.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.api.service.impl;
import org.apache.dolphinscheduler.api.enums.Status;
import org.apache.dolphinscheduler.api.service.AlertGroupService;
import org.apache.dolphinscheduler.api.utils.PageInfo;
import org.apache.dolphinscheduler.api.utils.Result;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.dao.entity.AlertGroup;
import org.apache.dolphinscheduler.dao.entity.User;
import org.apache.dolphinscheduler.dao.mapper.AlertGroupMapper;
import org.apache.dolphinscheduler.dao.vo.AlertGroupVo;
import org.apache.commons.lang.StringUtils;
import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.dao.DuplicateKeyException;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import com.baomidou.mybatisplus.core.metadata.IPage;
import com.baomidou.mybatisplus.extension.plugins.pagination.Page;
/**
* alert group service impl
*/
@Service
public class AlertGroupServiceImpl extends BaseServiceImpl implements AlertGroupService {
private Logger logger = LoggerFactory.getLogger(AlertGroupServiceImpl.class);
@Autowired
private AlertGroupMapper alertGroupMapper;
/**
* query alert group list
*
* @return alert group list
*/
@Override
public Map<String, Object> queryAlertgroup() {
HashMap<String, Object> result = new HashMap<>();
List<AlertGroup> alertGroups = alertGroupMapper.queryAllGroupList();
result.put(Constants.DATA_LIST, alertGroups);
putMsg(result, Status.SUCCESS);
return result;
}
/**
* query alert group by id
*
* @param loginUser login user
* @param id alert group id
* @return one alert group
*/
@Override
public Map<String, Object> queryAlertGroupById(User loginUser, Integer id) {
Map<String, Object> result = new HashMap<>();
result.put(Constants.STATUS, false);
//only admin can operate
if (isNotAdmin(loginUser, result)) {
return result;
}
//check if exist
AlertGroup alertGroup = alertGroupMapper.selectById(id);
if (alertGroup == null) {
putMsg(result, Status.ALERT_GROUP_NOT_EXIST);
return result;
}
result.put("data", alertGroup);
putMsg(result, Status.SUCCESS);
return result;
}
/**
* paging query alarm group list
*
* @param loginUser login user
* @param searchVal search value
* @param pageNo page number
* @param pageSize page size
* @return alert group list page
*/
@Override
public Result listPaging(User loginUser, String searchVal, Integer pageNo, Integer pageSize) {
Result result = new Result();
if (!isAdmin(loginUser)) {
putMsg(result,Status.USER_NO_OPERATION_PERM);
return result;
}
Page<AlertGroupVo> page = new Page<>(pageNo, pageSize);
IPage<AlertGroupVo> alertGroupVoIPage = alertGroupMapper.queryAlertGroupVo(page, searchVal);
PageInfo<AlertGroupVo> pageInfo = new PageInfo<>(pageNo, pageSize);
pageInfo.setTotal((int) alertGroupVoIPage.getTotal());
pageInfo.setTotalList(alertGroupVoIPage.getRecords());
result.setData(pageInfo);
putMsg(result, Status.SUCCESS);
return result;
}
/**
* create alert group
*
* @param loginUser login user
* @param groupName group name
* @param desc description
* @param alertInstanceIds alertInstanceIds
* @return create result code
*/
@Override
public Map<String, Object> createAlertgroup(User loginUser, String groupName, String desc, String alertInstanceIds) {
Map<String, Object> result = new HashMap<>();
//only admin can operate
if (isNotAdmin(loginUser, result)) {
return result;
}
AlertGroup alertGroup = new AlertGroup();
Date now = new Date();
alertGroup.setGroupName(groupName);
alertGroup.setAlertInstanceIds(alertInstanceIds);
alertGroup.setDescription(desc);
alertGroup.setCreateTime(now);
alertGroup.setUpdateTime(now);
alertGroup.setCreateUserId(loginUser.getId());
// insert
try {
int insert = alertGroupMapper.insert(alertGroup);
putMsg(result, insert > 0 ? Status.SUCCESS : Status.CREATE_ALERT_GROUP_ERROR);
} catch (DuplicateKeyException ex) {
logger.error("Create alert group error.", ex);
putMsg(result, Status.ALERT_GROUP_EXIST);
}
return result;
}
/**
* update alert group
*
* @param loginUser login user
* @param id alert group id
* @param groupName group name
* @param desc description
* @param alertInstanceIds alertInstanceIds
* @return update result code
*/
@Override
public Map<String, Object> updateAlertgroup(User loginUser, int id, String groupName, String desc, String alertInstanceIds) {
Map<String, Object> result = new HashMap<>();
if (isNotAdmin(loginUser, result)) {
return result;
}
AlertGroup alertGroup = alertGroupMapper.selectById(id);
if (alertGroup == null) {
putMsg(result, Status.ALERT_GROUP_NOT_EXIST);
return result;
}
Date now = new Date();
if (!StringUtils.isEmpty(groupName)) {
alertGroup.setGroupName(groupName);
}
alertGroup.setDescription(desc);
alertGroup.setUpdateTime(now);
alertGroup.setCreateUserId(loginUser.getId());
alertGroup.setAlertInstanceIds(alertInstanceIds);
try {
alertGroupMapper.updateById(alertGroup);
putMsg(result, Status.SUCCESS);
} catch (DuplicateKeyException ex) {
logger.error("Update alert group error.", ex);
putMsg(result, Status.ALERT_GROUP_EXIST);
}
return result;
}
/**
* delete alert group by id
*
* @param loginUser login user
* @param id alert group id
* @return delete result code
*/
@Override
@Transactional(rollbackFor = RuntimeException.class)
public Map<String, Object> delAlertgroupById(User loginUser, int id) {
Map<String, Object> result = new HashMap<>();
result.put(Constants.STATUS, false);
//only admin can operate
if (isNotAdmin(loginUser, result)) {
return result;
}
//check exist
AlertGroup alertGroup = alertGroupMapper.selectById(id);
if (alertGroup == null) {
putMsg(result, Status.ALERT_GROUP_NOT_EXIST);
return result;
}
alertGroupMapper.deleteById(id);
putMsg(result, Status.SUCCESS);
return result;
}
/**
* verify group name exists
*
* @param groupName group name
* @return check result code
*/
@Override
public boolean existGroupName(String groupName) {
return alertGroupMapper.existGroupName(groupName) == Boolean.TRUE;
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,941 | [Bug] [API] edit alarm group page is abnormal | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
The edit alarm group page is abnormal:
```
vue.runtime.esm.js?2b0e:1897 TypeError: Cannot read property 'split' of undefined
at VueComponent.created (createWarning.vue?045c:146)
at invokeWithErrorHandling (vue.runtime.esm.js?2b0e:1863)
at callHook (vue.runtime.esm.js?2b0e:4235)
at VueComponent.Vue._init (vue.runtime.esm.js?2b0e:5022)
at new VueComponent (vue.runtime.esm.js?2b0e:5168)
at createComponentInstanceForVnode (vue.runtime.esm.js?2b0e:3304)
at init (vue.runtime.esm.js?2b0e:3133)
at createComponent (vue.runtime.esm.js?2b0e:6022)
at createElm (vue.runtime.esm.js?2b0e:5969)
at createChildren (vue.runtime.esm.js?2b0e:6097)
```
### What you expected to happen
The page is displayed normally
### How to reproduce
Edit an alarm group on the alarm group form page.
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6941 | https://github.com/apache/dolphinscheduler/pull/6942 | ad33d434987863b4ee7b3f8ed89d7e59734687c0 | 38b14410ab622685b961cc8fc6728963b8552779 | "2021-11-21T06:34:50Z" | java | "2021-11-23T01:28:27Z" | dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/service/AlertGroupServiceTest.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.api.service;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.eq;
import org.apache.dolphinscheduler.api.enums.Status;
import org.apache.dolphinscheduler.api.service.impl.AlertGroupServiceImpl;
import org.apache.dolphinscheduler.api.utils.PageInfo;
import org.apache.dolphinscheduler.api.utils.Result;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.UserType;
import org.apache.dolphinscheduler.dao.entity.AlertGroup;
import org.apache.dolphinscheduler.dao.entity.User;
import org.apache.dolphinscheduler.dao.mapper.AlertGroupMapper;
import org.apache.dolphinscheduler.dao.vo.AlertGroupVo;
import org.apache.commons.collections.CollectionUtils;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.mockito.junit.MockitoJUnitRunner;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.dao.DuplicateKeyException;
import com.baomidou.mybatisplus.core.metadata.IPage;
import com.baomidou.mybatisplus.extension.plugins.pagination.Page;
/**
* alert group service test
*/
@RunWith(MockitoJUnitRunner.class)
public class AlertGroupServiceTest {
private static final Logger logger = LoggerFactory.getLogger(AlertGroupServiceTest.class);
@InjectMocks
private AlertGroupServiceImpl alertGroupService;
@Mock
private AlertGroupMapper alertGroupMapper;
private String groupName = "AlertGroupServiceTest";
@Test
public void testQueryAlertGroup() {
Mockito.when(alertGroupMapper.queryAllGroupList()).thenReturn(getList());
Map<String, Object> result = alertGroupService.queryAlertgroup();
logger.info(result.toString());
List<AlertGroup> alertGroups = (List<AlertGroup>) result.get(Constants.DATA_LIST);
Assert.assertTrue(CollectionUtils.isNotEmpty(alertGroups));
}
@Test
public void testListPaging() {
IPage<AlertGroupVo> page = new Page<>(1, 10);
page.setTotal(1L);
page.setRecords(getAlertGroupVoList());
Mockito.when(alertGroupMapper.queryAlertGroupVo(any(Page.class), eq(groupName))).thenReturn(page);
User user = new User();
// no permission to operate
Result result = alertGroupService.listPaging(user, groupName, 1, 10);
logger.info(result.toString());
Assert.assertEquals(Status.USER_NO_OPERATION_PERM.getCode(), (int) result.getCode());
//success
user.setUserType(UserType.ADMIN_USER);
result = alertGroupService.listPaging(user, groupName, 1, 10);
logger.info(result.toString());
PageInfo<AlertGroupVo> pageInfo = (PageInfo<AlertGroupVo>) result.getData();
Assert.assertTrue(CollectionUtils.isNotEmpty(pageInfo.getTotalList()));
}
@Test
public void testCreateAlertgroup() {
Mockito.when(alertGroupMapper.insert(any(AlertGroup.class))).thenReturn(2);
User user = new User();
// no permission to operate
Map<String, Object> result = alertGroupService.createAlertgroup(user, groupName, groupName, null);
logger.info(result.toString());
Assert.assertEquals(Status.USER_NO_OPERATION_PERM, result.get(Constants.STATUS));
user.setUserType(UserType.ADMIN_USER);
//success
result = alertGroupService.createAlertgroup(user, groupName, groupName, null);
logger.info(result.toString());
Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
}
@Test
public void testCreateAlertgroupDuplicate() {
Mockito.when(alertGroupMapper.insert(any(AlertGroup.class))).thenThrow(new DuplicateKeyException("group name exist"));
User user = new User();
user.setUserType(UserType.ADMIN_USER);
Map<String, Object> result = alertGroupService.createAlertgroup(user, groupName, groupName, null);
logger.info(result.toString());
Assert.assertEquals(Status.ALERT_GROUP_EXIST, result.get(Constants.STATUS));
}
@Test
public void testUpdateAlertgroup() {
User user = new User();
// no permission to operate
Map<String, Object> result = alertGroupService.updateAlertgroup(user, 1, groupName, groupName, null);
logger.info(result.toString());
Assert.assertEquals(Status.USER_NO_OPERATION_PERM, result.get(Constants.STATUS));
user.setUserType(UserType.ADMIN_USER);
// not exist
result = alertGroupService.updateAlertgroup(user, 1, groupName, groupName, null);
logger.info(result.toString());
Assert.assertEquals(Status.ALERT_GROUP_NOT_EXIST, result.get(Constants.STATUS));
//success
Mockito.when(alertGroupMapper.selectById(2)).thenReturn(getEntity());
result = alertGroupService.updateAlertgroup(user, 2, groupName, groupName, null);
logger.info(result.toString());
Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
}
@Test
public void testUpdateAlertgroupDuplicate() {
User user = new User();
user.setUserType(UserType.ADMIN_USER);
Mockito.when(alertGroupMapper.selectById(2)).thenReturn(getEntity());
Mockito.when(alertGroupMapper.updateById(Mockito.any())).thenThrow(new DuplicateKeyException("group name exist"));
Map<String, Object> result = alertGroupService.updateAlertgroup(user, 2, groupName, groupName, null);
Assert.assertEquals(Status.ALERT_GROUP_EXIST, result.get(Constants.STATUS));
}
@Test
public void testDelAlertgroupById() {
User user = new User();
// no permission to operate
Map<String, Object> result = alertGroupService.delAlertgroupById(user, 1);
logger.info(result.toString());
Assert.assertEquals(Status.USER_NO_OPERATION_PERM, result.get(Constants.STATUS));
user.setUserType(UserType.ADMIN_USER);
// not exist
result = alertGroupService.delAlertgroupById(user, 2);
logger.info(result.toString());
Assert.assertEquals(Status.ALERT_GROUP_NOT_EXIST, result.get(Constants.STATUS));
//success
Mockito.when(alertGroupMapper.selectById(2)).thenReturn(getEntity());
result = alertGroupService.delAlertgroupById(user, 2);
logger.info(result.toString());
Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
}
@Test
public void testVerifyGroupName() {
//group name not exist
boolean result = alertGroupService.existGroupName(groupName);
Assert.assertFalse(result);
Mockito.when(alertGroupMapper.existGroupName(groupName)).thenReturn(true);
//group name exist
result = alertGroupService.existGroupName(groupName);
Assert.assertTrue(result);
}
/**
* create admin user
*/
private User getLoginUser() {
User loginUser = new User();
loginUser.setUserType(UserType.ADMIN_USER);
loginUser.setId(99999999);
return loginUser;
}
/**
* get list
*/
private List<AlertGroup> getList() {
List<AlertGroup> alertGroups = new ArrayList<>();
alertGroups.add(getEntity());
return alertGroups;
}
/**
* get entity
*/
private AlertGroup getEntity() {
AlertGroup alertGroup = new AlertGroup();
alertGroup.setId(1);
alertGroup.setGroupName(groupName);
return alertGroup;
}
/**
* get AlertGroupVo list
*/
private List<AlertGroupVo> getAlertGroupVoList() {
List<AlertGroupVo> alertGroupVos = new ArrayList<>();
alertGroupVos.add(getAlertGroupVoEntity());
return alertGroupVos;
}
/**
* get AlertGroupVo entity
*/
private AlertGroupVo getAlertGroupVoEntity() {
AlertGroupVo alertGroupVo = new AlertGroupVo();
alertGroupVo.setId(1);
alertGroupVo.setGroupName(groupName);
return alertGroupVo;
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,941 | [Bug] [API] edit alarm group page is abnormal | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
The edit alarm group page is abnormal:
```
vue.runtime.esm.js?2b0e:1897 TypeError: Cannot read property 'split' of undefined
at VueComponent.created (createWarning.vue?045c:146)
at invokeWithErrorHandling (vue.runtime.esm.js?2b0e:1863)
at callHook (vue.runtime.esm.js?2b0e:4235)
at VueComponent.Vue._init (vue.runtime.esm.js?2b0e:5022)
at new VueComponent (vue.runtime.esm.js?2b0e:5168)
at createComponentInstanceForVnode (vue.runtime.esm.js?2b0e:3304)
at init (vue.runtime.esm.js?2b0e:3133)
at createComponent (vue.runtime.esm.js?2b0e:6022)
at createElm (vue.runtime.esm.js?2b0e:5969)
at createChildren (vue.runtime.esm.js?2b0e:6097)
```
### What you expected to happen
The page is displayed normally
### How to reproduce
Edit an alarm group on the alarm group form page.
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6941 | https://github.com/apache/dolphinscheduler/pull/6942 | ad33d434987863b4ee7b3f8ed89d7e59734687c0 | 38b14410ab622685b961cc8fc6728963b8552779 | "2021-11-21T06:34:50Z" | java | "2021-11-23T01:28:27Z" | dolphinscheduler-dao/src/main/java/org/apache/dolphinscheduler/dao/mapper/AlertGroupMapper.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.dao.mapper;
import org.apache.dolphinscheduler.dao.entity.AlertGroup;
import org.apache.dolphinscheduler.dao.vo.AlertGroupVo;
import org.apache.ibatis.annotations.Param;
import java.util.List;
import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.baomidou.mybatisplus.core.metadata.IPage;
import com.baomidou.mybatisplus.extension.plugins.pagination.Page;
/**
* alertgroup mapper interface
*/
public interface AlertGroupMapper extends BaseMapper<AlertGroup> {
/**
* alertgroup page
* @param page page
* @param groupName groupName
* @return alertgroup Ipage
*/
IPage<AlertGroup> queryAlertGroupPage(Page page,
@Param("groupName") String groupName);
/**
* query by group name
* @param groupName groupName
* @return alertgroup list
*/
List<AlertGroup> queryByGroupName(@Param("groupName") String groupName);
/**
* Judge whether the alert group exist
* @param groupName groupName
* @return if exist return true else return null
*/
Boolean existGroupName(@Param("groupName") String groupName);
/**
* query by userId
* @param userId userId
* @return alertgroup list
*/
List<AlertGroup> queryByUserId(@Param("userId") int userId);
/**
* query all group list
* @return alertgroup list
*/
List<AlertGroup> queryAllGroupList();
/**
* query instance ids All
* @return list
*/
List<String> queryInstanceIdsList();
/**
* query the alert instance ids of an alert group by its id
* @param alertGroupId alert group id
* @return comma-separated alert instance ids
*/
String queryAlertGroupInstanceIdsById(@Param("alertGroupId") int alertGroupId);
/**
* query alertGroupVo page list
* @param page page
* @param groupName groupName
* @return IPage<AlertGroupVo>: include alert group id and group_name
*/
IPage<AlertGroupVo> queryAlertGroupVo(Page<AlertGroupVo> page,
@Param("groupName") String groupName);
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,941 | [Bug] [API] edit alarm group page is abnormal | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
The edit alarm group page is abnormal:
```
vue.runtime.esm.js?2b0e:1897 TypeError: Cannot read property 'split' of undefined
at VueComponent.created (createWarning.vue?045c:146)
at invokeWithErrorHandling (vue.runtime.esm.js?2b0e:1863)
at callHook (vue.runtime.esm.js?2b0e:4235)
at VueComponent.Vue._init (vue.runtime.esm.js?2b0e:5022)
at new VueComponent (vue.runtime.esm.js?2b0e:5168)
at createComponentInstanceForVnode (vue.runtime.esm.js?2b0e:3304)
at init (vue.runtime.esm.js?2b0e:3133)
at createComponent (vue.runtime.esm.js?2b0e:6022)
at createElm (vue.runtime.esm.js?2b0e:5969)
at createChildren (vue.runtime.esm.js?2b0e:6097)
```
### What you expected to happen
The page is displayed normally
### How to reproduce
Edit an alarm group on the alarm group form page.
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6941 | https://github.com/apache/dolphinscheduler/pull/6942 | ad33d434987863b4ee7b3f8ed89d7e59734687c0 | 38b14410ab622685b961cc8fc6728963b8552779 | "2021-11-21T06:34:50Z" | java | "2021-11-23T01:28:27Z" | dolphinscheduler-dao/src/main/java/org/apache/dolphinscheduler/dao/vo/AlertGroupVo.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.dao.vo;
import java.util.Date;
/**
* AlertGroupVo
*/
public class AlertGroupVo {
/**
* primary key
*/
private int id;
/**
* group_name
*/
private String groupName;
/**
* description
*/
private String description;
/**
* create_time
*/
private Date createTime;
/**
* update_time
*/
private Date updateTime;
public int getId() {
return id;
}
public void setId(int id) {
this.id = id;
}
public String getGroupName() {
return groupName;
}
public void setGroupName(String groupName) {
this.groupName = groupName;
}
public String getDescription() {
return description;
}
public void setDescription(String description) {
this.description = description;
}
public Date getCreateTime() {
return createTime;
}
public void setCreateTime(Date createTime) {
this.createTime = createTime;
}
public Date getUpdateTime() {
return updateTime;
}
public void setUpdateTime(Date updateTime) {
this.updateTime = updateTime;
}
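// (added sketch, an assumption tied to issue 6941: the edit form splits
// alertInstanceIds, so the Vo would need to carry it as well)
private String alertInstanceIds;
public String getAlertInstanceIds() {
return alertInstanceIds;
}
public void setAlertInstanceIds(String alertInstanceIds) {
this.alertInstanceIds = alertInstanceIds;
}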
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,941 | [Bug] [API] edit alarm group page is abnormal | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
The edit alarm group page is abnormal:
```
vue.runtime.esm.js?2b0e:1897 TypeError: Cannot read property 'split' of undefined
at VueComponent.created (createWarning.vue?045c:146)
at invokeWithErrorHandling (vue.runtime.esm.js?2b0e:1863)
at callHook (vue.runtime.esm.js?2b0e:4235)
at VueComponent.Vue._init (vue.runtime.esm.js?2b0e:5022)
at new VueComponent (vue.runtime.esm.js?2b0e:5168)
at createComponentInstanceForVnode (vue.runtime.esm.js?2b0e:3304)
at init (vue.runtime.esm.js?2b0e:3133)
at createComponent (vue.runtime.esm.js?2b0e:6022)
at createElm (vue.runtime.esm.js?2b0e:5969)
at createChildren (vue.runtime.esm.js?2b0e:6097)
```
### What you expected to happen
The page is displayed normally
### How to reproduce
Edit an alarm group on the alarm group form page.
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6941 | https://github.com/apache/dolphinscheduler/pull/6942 | ad33d434987863b4ee7b3f8ed89d7e59734687c0 | 38b14410ab622685b961cc8fc6728963b8552779 | "2021-11-21T06:34:50Z" | java | "2021-11-23T01:28:27Z" | dolphinscheduler-dao/src/main/resources/org/apache/dolphinscheduler/dao/mapper/AlertGroupMapper.xml | <?xml version="1.0" encoding="UTF-8" ?>
<!--
~ Licensed to the Apache Software Foundation (ASF) under one or more
~ contributor license agreements. See the NOTICE file distributed with
~ this work for additional information regarding copyright ownership.
~ The ASF licenses this file to You under the Apache License, Version 2.0
~ (the "License"); you may not use this file except in compliance with
~ the License. You may obtain a copy of the License at
~
~ http://www.apache.org/licenses/LICENSE-2.0
~
~ Unless required by applicable law or agreed to in writing, software
~ distributed under the License is distributed on an "AS IS" BASIS,
~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~ See the License for the specific language governing permissions and
~ limitations under the License.
-->
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd" >
<mapper namespace="org.apache.dolphinscheduler.dao.mapper.AlertGroupMapper">
<sql id="baseSql">
id, group_name, description, alert_instance_ids, create_user_id, create_time, update_time
</sql>
<select id="queryAlertGroupPage" resultType="org.apache.dolphinscheduler.dao.entity.AlertGroup">
select
<include refid="baseSql"/>
from t_ds_alertgroup
where 1 = 1
<if test="groupName != null and groupName != ''">
and group_name like concat('%', #{groupName}, '%')
</if>
order by update_time desc
</select>
<select id="queryAlertGroupVo" resultType="org.apache.dolphinscheduler.dao.vo.AlertGroupVo">
select id, group_name, description, create_time, update_time
from t_ds_alertgroup
where 1 = 1
<if test="groupName != null and groupName != ''">
and group_name like concat('%', #{groupName}, '%')
</if>
order by update_time desc
</select>
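<!-- (added note, an assumption per issue 6941): if AlertGroupVo gains
     alertInstanceIds, the select above would also need to return
     alert_instance_ids so the edit page has a value to split. -->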
<select id="queryByGroupName" resultType="org.apache.dolphinscheduler.dao.entity.AlertGroup">
select
<include refid="baseSql"/>
from t_ds_alertgroup
where group_name=#{groupName}
</select>
<select id="existGroupName" resultType="java.lang.Boolean">
select 1
from t_ds_alertgroup
where group_name=#{groupName} limit 1
</select>
<select id="queryByUserId" resultType="org.apache.dolphinscheduler.dao.entity.AlertGroup">
select
<include refid="baseSql"/>
from t_ds_alertgroup
where create_user_id = #{userId}
</select>
<select id="queryAllGroupList" resultType="org.apache.dolphinscheduler.dao.entity.AlertGroup">
select
<include refid="baseSql"/>
from t_ds_alertgroup
order by update_time desc
</select>
<select id="queryInstanceIdsList" resultType="String">
select
alert_instance_ids
from t_ds_alertgroup
order by update_time desc
</select>
<select id="queryAlertGroupInstanceIdsById" resultType="String">
select alert_instance_ids from t_ds_alertgroup
where id = #{alertGroupId}
</select>
</mapper>
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,901 | [Feature][Project Process] The vertical scroll bar is removed from the three sub-menus of the process menu in the project. | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
The tables in Process definition, Process Instance, and Task Instance all have vertical scroll bars.
![image](https://user-images.githubusercontent.com/19239641/142808869-571c0372-a62d-4c6b-9db3-c9f24030ea1e.png)
### Use case
Remove the vertical scroll bars in these three tables.
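A one-line sketch of the likely change (an assumption: the bar comes from `overflow-y: scroll` on `.table-box` in each page's style block):
```scss
.table-box { overflow-y: auto; } /* hypothetical: show a scrollbar only when needed */
```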
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6901 | https://github.com/apache/dolphinscheduler/pull/6958 | 38b14410ab622685b961cc8fc6728963b8552779 | 982030bf3867588317d5f5f056bd8db152d54ddb | "2021-11-18T06:45:55Z" | java | "2021-11-23T02:45:11Z" | dolphinscheduler-ui/src/js/conf/home/pages/projects/pages/definition/pages/list/index.vue | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
<template>
<div class="wrap-definition">
<m-list-construction :title="$t('Process definition')">
<template slot="conditions">
<m-conditions @on-conditions="_onConditions">
<template slot="button-group">
<el-button size="mini" @click="() => this.$router.push({name: 'definition-create'})">{{$t('Create process')}}</el-button>
<el-button size="mini" @click="_uploading">{{$t('Import process')}}</el-button>
</template>
</m-conditions>
</template>
<template slot="content">
<template v-if="processListP.length || total>0">
<m-list :process-list="processListP" @on-update="_onUpdate" :page-no="searchParams.pageNo" :page-size="searchParams.pageSize"></m-list>
<div class="page-box">
<el-pagination
background
@current-change="_page"
@size-change="_pageSize"
:page-size="searchParams.pageSize"
:current-page.sync="searchParams.pageNo"
:page-sizes="[10, 30, 50]"
layout="sizes, prev, pager, next, jumper"
:total="total">
</el-pagination>
</div>
</template>
<template v-if="!processListP.length && total<=0">
<m-no-data></m-no-data>
</template>
<m-spin :is-spin="isLoading" :is-left="isLeft"></m-spin>
</template>
</m-list-construction>
</div>
</template>
<script>
import _ from 'lodash'
import { mapActions } from 'vuex'
import mList from './_source/list'
import mSpin from '@/module/components/spin/spin'
import localStore from '@/module/util/localStorage'
import mNoData from '@/module/components/noData/noData'
import listUrlParamHandle from '@/module/mixin/listUrlParamHandle'
import mConditions from '@/module/components/conditions/conditions'
import mListConstruction from '@/module/components/listConstruction/listConstruction'
import { findComponentDownward } from '@/module/util/'
export default {
name: 'definition-list-index',
data () {
return {
total: null,
processListP: [],
isLoading: true,
searchParams: {
pageSize: 10,
pageNo: 1,
searchVal: '',
userId: ''
},
isLeft: true
}
},
mixins: [listUrlParamHandle],
props: {
},
methods: {
...mapActions('dag', ['getProcessListP']),
/**
* File Upload
*/
_uploading () {
findComponentDownward(this.$root, 'roof-nav')._fileUpdate('DEFINITION')
},
/**
* page
*/
_page (val) {
this.searchParams.pageNo = val
},
_pageSize (val) {
this.searchParams.pageSize = val
},
/**
* conditions
*/
_onConditions (o) {
this.searchParams.searchVal = o.searchVal
this.searchParams.pageNo = 1
},
/**
* get data list
*/
_getList (flag) {
if (sessionStorage.getItem('isLeft') === '0') { // sessionStorage values are strings, so compare against '0'
this.isLeft = false
} else {
this.isLeft = true
}
this.isLoading = !flag
this.getProcessListP(this.searchParams).then(res => {
if (this.searchParams.pageNo > 1 && res.totalList.length === 0) {
this.searchParams.pageNo = this.searchParams.pageNo - 1
} else {
this.processListP = []
this.processListP = res.totalList
this.total = res.total
this.isLoading = false
}
}).catch(e => {
this.isLoading = false
})
},
_onUpdate () {
this._debounceGET('false')
},
_updateList () {
this.searchParams.pageNo = 1
this.searchParams.searchVal = ''
this._debounceGET()
}
},
watch: {
'$route' (a) {
// url no params get instance list
this.searchParams.pageNo = _.isEmpty(a.query) ? 1 : a.query.pageNo
}
},
created () {
localStore.removeItem('subProcessId')
},
mounted () {
},
beforeDestroy () {
sessionStorage.setItem('isLeft', 1)
},
components: { mList, mConditions, mSpin, mListConstruction, mNoData }
}
</script>
<style lang="scss" rel="stylesheet/scss">
.wrap-definition {
.table-box {
overflow-y: scroll;
}
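/* (added note) issue 6901 targets the rule above: the vertical scroll bar
   comes from overflow-y: scroll; switching to auto (an assumption) would
   show it only when the table overflows. */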
.table-box {
.fixed {
table-layout: auto;
tr {
th:last-child,td:last-child {
background: inherit;
width: 300px;
height: 40px;
line-height: 40px;
border-left:1px solid #ecf3ff;
position: absolute;
right: 0;
z-index: 2;
}
td:last-child {
border-bottom:1px solid #ecf3ff;
}
th:nth-last-child(2) {
padding-right: 330px;
}
}
}
}
}
</style>
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,901 | [Feature][Project Process] The vertical scroll bar is removed from the three sub-menus of the process menu in the project. | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
The tables in Process Definition, Process Instance, and Task Instance all have vertical scroll bars.
![image](https://user-images.githubusercontent.com/19239641/142808869-571c0372-a62d-4c6b-9db3-c9f24030ea1e.png)
### Use case
Remove the vertical scroll bars in these three tables.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
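For context, a hedged sketch of the kind of change this implies; the merged patch is PR #6958 and may differ. Each affected page pins a scrollbar with `overflow-y: scroll` on its `.table-box` wrapper (see the `<style>` blocks in the files below), so the minimal fix is dropping or relaxing that rule:

```scss
// Hedged sketch, using .wrap-definition as a representative wrapper;
// .wrap-table and .wrap-taskInstance carry the same rule.
.wrap-definition {
  .table-box {
    // before: overflow-y: scroll;  (always shows a vertical scroll bar)
    overflow-y: auto; // only show a bar when the content actually overflows
  }
}
```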
| https://github.com/apache/dolphinscheduler/issues/6901 | https://github.com/apache/dolphinscheduler/pull/6958 | 38b14410ab622685b961cc8fc6728963b8552779 | 982030bf3867588317d5f5f056bd8db152d54ddb | "2021-11-18T06:45:55Z" | java | "2021-11-23T02:45:11Z" | dolphinscheduler-ui/src/js/conf/home/pages/projects/pages/instance/pages/list/index.vue | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
<template>
<div class="wrap-table">
<m-list-construction :title="$t('Process Instance')">
<template slot="conditions">
<m-instance-conditions class="searchNav" @on-query="_onQuery"></m-instance-conditions>
</template>
<template slot="content">
<template v-if="processInstanceList.length || total>0">
<m-list :process-instance-list="processInstanceList" @on-update="_onUpdate" :page-no="searchParams.pageNo" :page-size="searchParams.pageSize">
</m-list>
<div class="page-box">
<el-pagination
background
@current-change="_page"
@size-change="_pageSize"
:page-size="searchParams.pageSize"
:current-page.sync="searchParams.pageNo"
:page-sizes="[10, 30, 50]"
layout="sizes, prev, pager, next, jumper"
:total="total">
</el-pagination>
</div>
</template>
<template v-if="!processInstanceList.length && total<=0">
<m-no-data></m-no-data>
</template>
<m-spin :is-spin="isLoading" :is-left="isLeft"></m-spin>
</template>
</m-list-construction>
</div>
</template>
<script>
import _ from 'lodash'
import { mapActions } from 'vuex'
import mList from './_source/list'
import mSpin from '@/module/components/spin/spin'
import localStore from '@/module/util/localStorage'
import { setUrlParams } from '@/module/util/routerUtil'
import mNoData from '@/module/components/noData/noData'
import mListConstruction from '@/module/components/listConstruction/listConstruction'
import mInstanceConditions from '@/conf/home/pages/projects/pages/_source/conditions/instance/processInstance'
export default {
name: 'instance-list-index',
data () {
return {
// loading
isLoading: true,
// total
total: null,
// data
processInstanceList: [],
// Parameter
searchParams: {
// Search keywords
searchVal: '',
      // Page size
pageSize: 10,
// Current page
pageNo: 1,
// host
host: '',
// State
stateType: '',
// Start Time
startDate: '',
// End Time
endDate: '',
      // Executor name
executorName: ''
},
isLeft: true
}
},
props: {},
methods: {
...mapActions('dag', ['getProcessInstance']),
/**
* Query
*/
_onQuery (o) {
this.searchParams.pageNo = 1
this.searchParams = _.assign(this.searchParams, o)
setUrlParams(this.searchParams)
this._debounceGET()
},
/**
* Paging event
*/
_page (val) {
this.searchParams.pageNo = val
setUrlParams(this.searchParams)
this._debounceGET()
},
_pageSize (val) {
this.searchParams.pageSize = val
setUrlParams(this.searchParams)
this._debounceGET()
},
/**
* get list data
*/
_getProcessInstanceListP (flag) {
this.isLoading = !flag
this.getProcessInstance(this.searchParams).then(res => {
if (this.searchParams.pageNo > 1 && res.totalList.length === 0) {
this.searchParams.pageNo = this.searchParams.pageNo - 1
} else {
this.processInstanceList = []
this.processInstanceList = res.totalList
this.total = res.total
this.isLoading = false
}
}).catch(e => {
this.isLoading = false
})
},
/**
* update
*/
_onUpdate () {
this._debounceGET()
},
/**
* Routing changes
*/
_routerView () {
return this.$route.name === 'projects-instance-details'
},
/**
     * Debounce the list request
     * @desc prevents the fetch from firing multiple times in quick succession
*/
_debounceGET: _.debounce(function (flag) {
      // getItem returns a string, so compare against '0'; comparing with the number 0 never matches
      this.isLeft = sessionStorage.getItem('isLeft') !== '0'
this._getProcessInstanceListP(flag)
}, 100, {
leading: false,
trailing: true
})
},
watch: {
// Routing changes
'$route' (a, b) {
if (a.name === 'instance' && b.name === 'projects-instance-details') {
this._debounceGET()
} else {
        // fall back to page 1 when the url carries no params
this.searchParams.pageNo = !_.isEmpty(a.query) && a.query.pageNo || 1
}
},
searchParams: {
deep: true,
handler () {
this._debounceGET()
}
}
},
created () {
// Delete process definition ID
localStore.removeItem('subProcessId')
// Route parameter merge
if (!_.isEmpty(this.$route.query)) {
this.searchParams = _.assign(this.searchParams, this.$route.query)
}
    // fetch the list only when the detail route is not active
if (!this._routerView()) {
this._debounceGET()
}
},
mounted () {
    // refresh status periodically
this.setIntervalP = setInterval(() => {
this._debounceGET('false')
}, 90000)
},
beforeDestroy () {
    // clear the polling timer
clearInterval(this.setIntervalP)
sessionStorage.setItem('isLeft', 1)
},
components: { mList, mInstanceConditions, mSpin, mListConstruction, mNoData }
}
</script>
<style lang="scss" rel="stylesheet/scss">
.wrap-table {
.table-box {
overflow-y: scroll;
}
.table-box {
.fixed {
table-layout: auto;
tr {
td:last-child {
.el-button+.el-button {
margin-left: 0;
}
}
}
}
}
}
@media screen and (max-width: 1246px) {
.searchNav {
margin-bottom: 30px;
}
}
</style>
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,901 | [Feature][Project Process] The vertical scroll bar is removed from the three sub-menus of the process menu in the project. | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
The tables in Process Definition, Process Instance, and Task Instance all have vertical scroll bars.
![image](https://user-images.githubusercontent.com/19239641/142808869-571c0372-a62d-4c6b-9db3-c9f24030ea1e.png)
### Use case
Remove the vertical scroll bars in these three tables.
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6901 | https://github.com/apache/dolphinscheduler/pull/6958 | 38b14410ab622685b961cc8fc6728963b8552779 | 982030bf3867588317d5f5f056bd8db152d54ddb | "2021-11-18T06:45:55Z" | java | "2021-11-23T02:45:11Z" | dolphinscheduler-ui/src/js/conf/home/pages/projects/pages/taskInstance/index.vue | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
<template>
<div class="wrap-taskInstance">
<m-list-construction :title="$t('Task Instance')">
<template slot="conditions">
<m-instance-conditions @on-query="_onQuery"></m-instance-conditions>
</template>
<template slot="content">
<template v-if="taskInstanceList.length">
<m-list :task-instance-list="taskInstanceList" @on-update="_onUpdate" :page-no="searchParams.pageNo" :page-size="searchParams.pageSize">
</m-list>
<div class="page-box">
<el-pagination
background
@current-change="_page"
@size-change="_pageSize"
:page-size="searchParams.pageSize"
:current-page.sync="searchParams.pageNo"
:page-sizes="[10, 30, 50]"
layout="sizes, prev, pager, next, jumper"
:total="total">
</el-pagination>
</div>
</template>
<template v-if="!taskInstanceList.length">
<m-no-data></m-no-data>
</template>
<m-spin :is-spin="isLoading" :is-left="isLeft"></m-spin>
</template>
</m-list-construction>
</div>
</template>
<script>
import _ from 'lodash'
import { mapActions, mapState } from 'vuex'
import mList from './_source/list'
import mSpin from '@/module/components/spin/spin'
import mNoData from '@/module/components/noData/noData'
import listUrlParamHandle from '@/module/mixin/listUrlParamHandle'
import mListConstruction from '@/module/components/listConstruction/listConstruction'
import mInstanceConditions from '@/conf/home/pages/projects/pages/_source/conditions/instance/taskInstance'
export default {
name: 'task-instance-list-index',
data () {
return {
isLoading: true,
total: null,
taskInstanceList: [],
searchParams: {
// page size
pageSize: 10,
// page index
pageNo: 1,
// Query name
searchVal: '',
// Process instance id
processInstanceId: '',
// host
host: '',
// state
stateType: '',
// start date
startDate: '',
// end date
endDate: '',
      // Executor name
executorName: '',
processInstanceName: ''
},
isLeft: true
}
},
mixins: [listUrlParamHandle],
props: {},
methods: {
...mapActions('dag', ['getTaskInstanceList']),
/**
* click query
*/
_onQuery (o) {
this.searchParams = _.assign(this.searchParams, o)
this.searchParams.processInstanceId = ''
if (this.searchParams.taskName) {
this.searchParams.taskName = ''
}
this.searchParams.pageNo = 1
},
_page (val) {
this.searchParams.pageNo = val
},
_pageSize (val) {
this.searchParams.pageSize = val
},
/**
* get list data
*/
_getList (flag) {
this.isLoading = !flag
if (this.searchParams.pageNo === undefined) {
this.$router.push({ path: `/projects/${this.projectId}/index` })
return false
}
this.getTaskInstanceList(this.searchParams).then(res => {
this.taskInstanceList = []
this.taskInstanceList = res.totalList
this.total = res.total
this.isLoading = false
}).catch(e => {
this.isLoading = false
})
},
/**
* update
*/
_onUpdate () {
this._debounceGET()
},
/**
     * Debounce the list request
     * @desc prevents the fetch from firing multiple times in quick succession
*/
_debounceGET: _.debounce(function (flag) {
      // getItem returns a string, so compare against '0'; comparing with the number 0 never matches
      this.isLeft = sessionStorage.getItem('isLeft') !== '0'
this._getList(flag)
}, 100, {
leading: false,
trailing: true
})
},
watch: {
// router
'$route' (a) {
      // fall back to page 1 when the url carries no params
if (_.isEmpty(a.query)) {
this.searchParams.processInstanceId = ''
}
this.searchParams.pageNo = _.isEmpty(a.query) ? 1 : a.query.pageNo
}
},
created () {
},
mounted () {
    // refresh status periodically
this.setIntervalP = setInterval(() => {
this._debounceGET('false')
}, 90000)
},
computed: {
...mapState('dag', ['projectId'])
},
beforeDestroy () {
    // clear the polling timer
clearInterval(this.setIntervalP)
sessionStorage.setItem('isLeft', 1)
},
components: { mList, mInstanceConditions, mSpin, mListConstruction, mNoData }
}
</script>
<style lang="scss" rel="stylesheet/scss">
.wrap-taskInstance {
.table-box {
overflow-y: scroll;
}
.table-box {
.fixed {
table-layout: auto;
tr {
th:last-child,td:last-child {
background: inherit;
width: 60px;
height: 40px;
line-height: 40px;
border-left:1px solid #ecf3ff;
position: absolute;
right: 0;
z-index: 2;
}
td:last-child {
border-bottom:1px solid #ecf3ff;
}
th:nth-last-child(2) {
padding-right: 90px;
}
}
}
}
.list-model {
.el-dialog__header, .el-dialog__body {
padding: 0;
}
}
}
</style>
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,924 | [Feature][Python] Add workflow as code task type sql | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Add Python API task type SQL. Sub-task of #6407. We should cover all parameters from the UI side and make them suitable for Python.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6924 | https://github.com/apache/dolphinscheduler/pull/6968 | abde2d7d28a3b070d0eb47f097d4c6e17a7c4ae1 | 41e8836c91f41035e4be806a11640f44438e4dc0 | "2021-11-19T06:59:44Z" | java | "2021-11-23T08:58:00Z" | dolphinscheduler-python/pydolphinscheduler/src/pydolphinscheduler/constants.py | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Constants for pydolphinscheduler."""
class ProcessDefinitionReleaseState:
"""Constants for :class:`pydolphinscheduler.core.process_definition.ProcessDefinition` release state."""
ONLINE: str = "ONLINE"
OFFLINE: str = "OFFLINE"
class ProcessDefinitionDefault:
"""Constants default value for :class:`pydolphinscheduler.core.process_definition.ProcessDefinition`."""
PROJECT: str = "project-pydolphin"
TENANT: str = "tenant_pydolphin"
USER: str = "userPythonGateway"
# TODO simple set password same as username
USER_PWD: str = "userPythonGateway"
USER_EMAIL: str = "userPythonGateway@dolphinscheduler.com"
USER_PHONE: str = "11111111111"
USER_STATE: int = 1
QUEUE: str = "queuePythonGateway"
WORKER_GROUP: str = "default"
TIME_ZONE: str = "Asia/Shanghai"
class TaskPriority(str):
"""Constants for task priority."""
HIGHEST = "HIGHEST"
HIGH = "HIGH"
MEDIUM = "MEDIUM"
LOW = "LOW"
LOWEST = "LOWEST"
class TaskFlag(str):
"""Constants for task flag."""
YES = "YES"
NO = "NO"
class TaskTimeoutFlag(str):
"""Constants for task timeout flag."""
CLOSE = "CLOSE"
class TaskType(str):
"""Constants for task type, it will also show you which kind we support up to now."""
SHELL = "SHELL"
HTTP = "HTTP"
PYTHON = "PYTHON"
class DefaultTaskCodeNum(str):
"""Constants and default value for default task code number."""
DEFAULT = 1
class JavaGatewayDefault(str):
"""Constants and default value for java gateway."""
RESULT_MESSAGE_KEYWORD = "msg"
RESULT_MESSAGE_SUCCESS = "success"
RESULT_STATUS_KEYWORD = "status"
RESULT_STATUS_SUCCESS = "SUCCESS"
RESULT_DATA = "data"
class Delimiter(str):
"""Constants for delimiter."""
BAR = "-"
DASH = "/"
COLON = ":"
UNDERSCORE = "_"
class Time(str):
"""Constants for date."""
FMT_STD_DATE = "%Y-%m-%d"
LEN_STD_DATE = 10
FMT_DASH_DATE = "%Y/%m/%d"
FMT_SHORT_DATE = "%Y%m%d"
LEN_SHORT_DATE = 8
FMT_STD_TIME = "%H:%M:%S"
FMT_NO_COLON_TIME = "%H%M%S"
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,924 | [Feature][Python] Add workflow as code task type sql | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Add Python API task type SQL. Sub-task of #6407. We should cover all parameters from the UI side and make them suitable for Python.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6924 | https://github.com/apache/dolphinscheduler/pull/6968 | abde2d7d28a3b070d0eb47f097d4c6e17a7c4ae1 | 41e8836c91f41035e4be806a11640f44438e4dc0 | "2021-11-19T06:59:44Z" | java | "2021-11-23T08:58:00Z" | dolphinscheduler-python/pydolphinscheduler/src/pydolphinscheduler/tasks/sql.py | |
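The `sql.py` snapshot above is empty, which is exactly the gap this issue describes. A minimal sketch of what such a task class could look like, assuming the `Task` base class used by the other pydolphinscheduler tasks; the parameter names and the SELECT heuristic below are assumptions, not taken from the merged PR (#6968):

```python
# Hedged sketch only: parameter names and the SELECT heuristic are assumptions.
from pydolphinscheduler.core.task import Task


class SqlType:
    """Assumed SQL statement kinds: queries return rows, non-queries do not."""

    SELECT = 0
    NOT_SELECT = 1


class Sql(Task):
    """Run a SQL statement against a named datasource."""

    # Attributes serialized into the task-definition JSON (names assumed).
    _task_custom_attr = {"sql", "sql_type", "datasource_name"}

    def __init__(self, name: str, datasource_name: str, sql: str, **kwargs):
        super().__init__(name, "SQL", **kwargs)
        self.sql = sql
        self.datasource_name = datasource_name
        # Naive heuristic: statements starting with SELECT return a result set.
        upper = sql.strip().upper()
        self.sql_type = SqlType.SELECT if upper.startswith("SELECT") else SqlType.NOT_SELECT
```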
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,924 | [Feature][Python] Add workflow as code task type sql | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Add Python API task type SQL. Sub-task of #6407. We should cover all parameters from the UI side and make them suitable for Python.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6924 | https://github.com/apache/dolphinscheduler/pull/6968 | abde2d7d28a3b070d0eb47f097d4c6e17a7c4ae1 | 41e8836c91f41035e4be806a11640f44438e4dc0 | "2021-11-19T06:59:44Z" | java | "2021-11-23T08:58:00Z" | dolphinscheduler-python/pydolphinscheduler/tests/tasks/test_sql.py | |
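The matching `test_sql.py` snapshot is empty as well; a hedged pytest sketch against the class sketched above. Patching `gen_code_and_version` mirrors how other pydolphinscheduler tests avoid the Java gateway round-trip, and is an assumption here:

```python
# Hedged sketch: exercises the hypothetical Sql task sketched above.
from unittest.mock import patch

import pytest

from pydolphinscheduler.tasks.sql import Sql, SqlType


@pytest.mark.parametrize(
    "sql, expect",
    [
        ("select 1", SqlType.SELECT),
        ("insert into t values (1)", SqlType.NOT_SELECT),
    ],
)
@patch(
    "pydolphinscheduler.core.task.Task.gen_code_and_version",
    return_value=(123, 1),  # avoid the py4j round-trip during unit tests
)
def test_sql_type_detection(mock_code, sql, expect):
    task = Sql(name="test-sql", datasource_name="db", sql=sql)
    assert task.sql_type == expect
```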
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,924 | [Feature][Python] Add workflow as code task type sql | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Add Python API task type SQL. Sub-task of #6407. We should cover all parameters from the UI side and make them suitable for Python.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6924 | https://github.com/apache/dolphinscheduler/pull/6968 | abde2d7d28a3b070d0eb47f097d4c6e17a7c4ae1 | 41e8836c91f41035e4be806a11640f44438e4dc0 | "2021-11-19T06:59:44Z" | java | "2021-11-23T08:58:00Z" | dolphinscheduler-python/src/main/java/org/apache/dolphinscheduler/server/PythonGatewayServer.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server;
import org.apache.dolphinscheduler.api.enums.Status;
import org.apache.dolphinscheduler.api.service.ExecutorService;
import org.apache.dolphinscheduler.api.service.ProcessDefinitionService;
import org.apache.dolphinscheduler.api.service.ProjectService;
import org.apache.dolphinscheduler.api.service.QueueService;
import org.apache.dolphinscheduler.api.service.SchedulerService;
import org.apache.dolphinscheduler.api.service.TaskDefinitionService;
import org.apache.dolphinscheduler.api.service.TenantService;
import org.apache.dolphinscheduler.api.service.UsersService;
import org.apache.dolphinscheduler.api.utils.Result;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.FailureStrategy;
import org.apache.dolphinscheduler.common.enums.Priority;
import org.apache.dolphinscheduler.common.enums.ProcessExecutionTypeEnum;
import org.apache.dolphinscheduler.common.enums.ReleaseState;
import org.apache.dolphinscheduler.common.enums.RunMode;
import org.apache.dolphinscheduler.common.enums.TaskDependType;
import org.apache.dolphinscheduler.common.enums.UserType;
import org.apache.dolphinscheduler.common.enums.WarningType;
import org.apache.dolphinscheduler.common.utils.CodeGenerateUtils;
import org.apache.dolphinscheduler.dao.entity.ProcessDefinition;
import org.apache.dolphinscheduler.dao.entity.Project;
import org.apache.dolphinscheduler.dao.entity.Queue;
import org.apache.dolphinscheduler.dao.entity.Schedule;
import org.apache.dolphinscheduler.dao.entity.TaskDefinition;
import org.apache.dolphinscheduler.dao.entity.Tenant;
import org.apache.dolphinscheduler.dao.entity.User;
import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionMapper;
import org.apache.dolphinscheduler.dao.mapper.ProjectMapper;
import org.apache.dolphinscheduler.dao.mapper.ScheduleMapper;
import org.apache.dolphinscheduler.dao.mapper.TaskDefinitionMapper;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import javax.annotation.PostConstruct;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.web.servlet.support.SpringBootServletInitializer;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.FilterType;
import py4j.GatewayServer;
@ComponentScan(value = "org.apache.dolphinscheduler", excludeFilters = {
@ComponentScan.Filter(type = FilterType.REGEX, pattern = {
"org.apache.dolphinscheduler.server.master.*",
"org.apache.dolphinscheduler.server.worker.*",
"org.apache.dolphinscheduler.server.monitor.*",
"org.apache.dolphinscheduler.server.log.*",
"org.apache.dolphinscheduler.alert.*"
})
})
public class PythonGatewayServer extends SpringBootServletInitializer {
private static final Logger LOGGER = LoggerFactory.getLogger(PythonGatewayServer.class);
private static final WarningType DEFAULT_WARNING_TYPE = WarningType.NONE;
private static final int DEFAULT_WARNING_GROUP_ID = 0;
private static final FailureStrategy DEFAULT_FAILURE_STRATEGY = FailureStrategy.CONTINUE;
private static final Priority DEFAULT_PRIORITY = Priority.MEDIUM;
private static final Long DEFAULT_ENVIRONMENT_CODE = -1L;
private static final TaskDependType DEFAULT_TASK_DEPEND_TYPE = TaskDependType.TASK_POST;
private static final RunMode DEFAULT_RUN_MODE = RunMode.RUN_MODE_SERIAL;
private static final int DEFAULT_DRY_RUN = 0;
@Autowired
private ProcessDefinitionMapper processDefinitionMapper;
@Autowired
private ProjectService projectService;
@Autowired
private TenantService tenantService;
@Autowired
private ExecutorService executorService;
@Autowired
private ProcessDefinitionService processDefinitionService;
@Autowired
private TaskDefinitionService taskDefinitionService;
@Autowired
private UsersService usersService;
@Autowired
private QueueService queueService;
@Autowired
private ProjectMapper projectMapper;
@Autowired
private TaskDefinitionMapper taskDefinitionMapper;
@Autowired
private SchedulerService schedulerService;
@Autowired
private ScheduleMapper scheduleMapper;
    // TODO replace this user with the built-in admin user once we are sure the built-in one cannot be changed
private final User dummyAdminUser = new User() {
{
setId(Integer.MAX_VALUE);
setUserName("dummyUser");
setUserType(UserType.ADMIN_USER);
}
};
private final Queue queuePythonGateway = new Queue() {
{
setId(Integer.MAX_VALUE);
setQueueName("queuePythonGateway");
}
};
public String ping() {
return "PONG";
}
    // TODO Can packages be imported on the Python client side? The utils package can, but the service package cannot; why?
// Core api
public Map<String, Object> genTaskCodeList(Integer genNum) {
return taskDefinitionService.genTaskCodeList(genNum);
}
public Map<String, Long> getCodeAndVersion(String projectName, String taskName) throws CodeGenerateUtils.CodeGenerateException {
Project project = projectMapper.queryByName(projectName);
Map<String, Long> result = new HashMap<>();
// project do not exists, mean task not exists too, so we should directly return init value
if (project == null) {
result.put("code", CodeGenerateUtils.getInstance().genCode());
result.put("version", 0L);
return result;
}
TaskDefinition taskDefinition = taskDefinitionMapper.queryByName(project.getCode(), taskName);
if (taskDefinition == null) {
result.put("code", CodeGenerateUtils.getInstance().genCode());
result.put("version", 0L);
} else {
result.put("code", taskDefinition.getCode());
result.put("version", (long) taskDefinition.getVersion());
}
return result;
}
/**
* create or update process definition.
* If process definition do not exists in Project=`projectCode` would create a new one
* If process definition already exists in Project=`projectCode` would update it
*
* @param userName user name who create or update process definition
* @param projectName project name which process definition belongs to
* @param name process definition name
* @param description description
* @param globalParams global params
* @param schedule schedule for process definition, will not set schedule if null,
* and if would always fresh exists schedule if not null
* @param locations locations json object about all tasks
* @param timeout timeout for process definition working, if running time longer than timeout,
* task will mark as fail
* @param workerGroup run task in which worker group
* @param tenantCode tenantCode
* @param taskRelationJson relation json for nodes
* @param taskDefinitionJson taskDefinitionJson
* @return create result code
*/
public Long createOrUpdateProcessDefinition(String userName,
String projectName,
String name,
String description,
String globalParams,
String schedule,
String locations,
int timeout,
String workerGroup,
String tenantCode,
String taskRelationJson,
String taskDefinitionJson,
ProcessExecutionTypeEnum executionType) {
User user = usersService.queryUser(userName);
Project project = (Project) projectService.queryByName(user, projectName).get(Constants.DATA_LIST);
long projectCode = project.getCode();
Map<String, Object> verifyProcessDefinitionExists = processDefinitionService.verifyProcessDefinitionName(user, projectCode, name);
Status verifyStatus = (Status) verifyProcessDefinitionExists.get(Constants.STATUS);
long processDefinitionCode;
// create or update process definition
if (verifyStatus == Status.PROCESS_DEFINITION_NAME_EXIST) {
ProcessDefinition processDefinition = processDefinitionMapper.queryByDefineName(projectCode, name);
processDefinitionCode = processDefinition.getCode();
// make sure process definition offline which could edit
processDefinitionService.releaseProcessDefinition(user, projectCode, processDefinitionCode, ReleaseState.OFFLINE);
Map<String, Object> result = processDefinitionService.updateProcessDefinition(user, projectCode, name, processDefinitionCode, description, globalParams,
                locations, timeout, tenantCode, taskRelationJson, taskDefinitionJson, executionType);
} else if (verifyStatus == Status.SUCCESS) {
Map<String, Object> result = processDefinitionService.createProcessDefinition(user, projectCode, name, description, globalParams,
                locations, timeout, tenantCode, taskRelationJson, taskDefinitionJson, executionType);
ProcessDefinition processDefinition = (ProcessDefinition) result.get(Constants.DATA_LIST);
processDefinitionCode = processDefinition.getCode();
} else {
String msg = "Verify process definition exists status is invalid, neither SUCCESS or PROCESS_DEFINITION_NAME_EXIST.";
LOGGER.error(msg);
throw new RuntimeException(msg);
}
// Fresh process definition schedule
if (schedule != null) {
createOrUpdateSchedule(user, projectCode, processDefinitionCode, schedule, workerGroup);
}
processDefinitionService.releaseProcessDefinition(user, projectCode, processDefinitionCode, ReleaseState.ONLINE);
return processDefinitionCode;
}
/**
* create or update process definition schedule.
* It would always use latest schedule define in workflow-as-code, and set schedule online when
* it's not null
*
* @param user user who create or update schedule
* @param projectCode project which process definition belongs to
* @param processDefinitionCode process definition code
* @param schedule schedule expression
* @param workerGroup work group
*/
private void createOrUpdateSchedule(User user,
long projectCode,
long processDefinitionCode,
String schedule,
String workerGroup) {
List<Schedule> schedules = scheduleMapper.queryByProcessDefinitionCode(processDefinitionCode);
// create or update schedule
int scheduleId;
if (schedules.isEmpty()) {
processDefinitionService.releaseProcessDefinition(user, projectCode, processDefinitionCode, ReleaseState.ONLINE);
Map<String, Object> result = schedulerService.insertSchedule(user, projectCode, processDefinitionCode, schedule, DEFAULT_WARNING_TYPE,
DEFAULT_WARNING_GROUP_ID, DEFAULT_FAILURE_STRATEGY, DEFAULT_PRIORITY, workerGroup, DEFAULT_ENVIRONMENT_CODE);
scheduleId = (int) result.get("scheduleId");
} else {
scheduleId = schedules.get(0).getId();
processDefinitionService.releaseProcessDefinition(user, projectCode, processDefinitionCode, ReleaseState.OFFLINE);
schedulerService.updateSchedule(user, projectCode, scheduleId, schedule, DEFAULT_WARNING_TYPE,
DEFAULT_WARNING_GROUP_ID, DEFAULT_FAILURE_STRATEGY, DEFAULT_PRIORITY, workerGroup, DEFAULT_ENVIRONMENT_CODE);
}
schedulerService.setScheduleState(user, projectCode, scheduleId, ReleaseState.ONLINE);
}
public void execProcessInstance(String userName,
String projectName,
String processDefinitionName,
String cronTime,
String workerGroup,
Integer timeout
) {
User user = usersService.queryUser(userName);
Project project = projectMapper.queryByName(projectName);
ProcessDefinition processDefinition = processDefinitionMapper.queryByDefineName(project.getCode(), processDefinitionName);
// make sure process definition online
processDefinitionService.releaseProcessDefinition(user, project.getCode(), processDefinition.getCode(), ReleaseState.ONLINE);
executorService.execProcessInstance(user,
project.getCode(),
processDefinition.getCode(),
cronTime,
null,
DEFAULT_FAILURE_STRATEGY,
null,
DEFAULT_TASK_DEPEND_TYPE,
DEFAULT_WARNING_TYPE,
DEFAULT_WARNING_GROUP_ID,
DEFAULT_RUN_MODE,
DEFAULT_PRIORITY,
workerGroup,
DEFAULT_ENVIRONMENT_CODE,
timeout,
null,
null,
DEFAULT_DRY_RUN
);
}
// side object
public Map<String, Object> createProject(String userName, String name, String desc) {
User user = usersService.queryUser(userName);
return projectService.createProject(user, name, desc);
}
public Map<String, Object> createQueue(String name, String queueName) {
Result<Object> verifyQueueExists = queueService.verifyQueue(name, queueName);
if (verifyQueueExists.getCode() == 0) {
return queueService.createQueue(dummyAdminUser, name, queueName);
} else {
Map<String, Object> result = new HashMap<>();
            // TODO the putMsg helper does not work here
result.put(Constants.STATUS, Status.SUCCESS);
result.put(Constants.MSG, Status.SUCCESS.getMsg());
return result;
}
}
public Map<String, Object> createTenant(String tenantCode, String desc, String queueName) throws Exception {
if (tenantService.checkTenantExists(tenantCode)) {
Map<String, Object> result = new HashMap<>();
// TODO function putMsg do not work here
result.put(Constants.STATUS, Status.SUCCESS);
result.put(Constants.MSG, Status.SUCCESS.getMsg());
return result;
} else {
Result<Object> verifyQueueExists = queueService.verifyQueue(queueName, queueName);
if (verifyQueueExists.getCode() == 0) {
            // TODO why does createQueue not return the id?
queueService.createQueue(dummyAdminUser, queueName, queueName);
}
Map<String, Object> result = queueService.queryQueueName(queueName);
List<Queue> queueList = (List<Queue>) result.get(Constants.DATA_LIST);
Queue queue = queueList.get(0);
return tenantService.createTenant(dummyAdminUser, tenantCode, queue.getId(), desc);
}
}
public void createUser(String userName,
String userPassword,
String email,
String phone,
String tenantCode,
String queue,
int state) {
User user = usersService.queryUser(userName);
if (Objects.isNull(user)) {
Map<String, Object> tenantResult = tenantService.queryByTenantCode(tenantCode);
Tenant tenant = (Tenant) tenantResult.get(Constants.DATA_LIST);
usersService.createUser(userName, userPassword, email, tenant.getId(), phone, queue, state);
}
}
@PostConstruct
public void run() {
GatewayServer server = new GatewayServer(this);
GatewayServer.turnLoggingOn();
// Start server to accept python client RPC
server.start();
}
public static void main(String[] args) {
SpringApplication.run(PythonGatewayServer.class, args);
}
}
|
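For orientation, not part of the record: a Python client reaches the entry point above over py4j, assuming the gateway runs with py4j defaults (localhost:25333, no auth token):

```python
# Hedged sketch: assumes py4j defaults and a locally running gateway.
from py4j.java_gateway import JavaGateway

gateway = JavaGateway()       # connects to localhost:25333 by default
app = gateway.entry_point     # the PythonGatewayServer instance passed above
print(app.ping())             # -> "PONG"
```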
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,961 | [Improvement][MasterServer] Use threadpool to dispatch instead of single thread | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Currently the Master uses a single thread, `TaskPriorityQueueConsumer`, to dispatch tasks, which becomes slow when the Master is busy.
So I think it can be improved by using a thread pool instead of a single thread.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
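A minimal sketch of the proposed direction; the field name, pool sizing, and error handling are assumptions, not the merged patch (PR #6962). The consumer thread in `TaskPriorityQueueConsumer` (shown below) keeps polling, but each dispatch is handed off to a pool:

```java
// Hedged sketch: requires java.util.concurrent.{ExecutorService, Executors}.
private final ExecutorService dispatchPool =
        Executors.newFixedThreadPool(masterConfig.getDispatchTaskNumber());

// Inside the consumer loop's existing try block, instead of dispatching inline:
TaskPriority taskPriority = taskPriorityQueue.poll(Constants.SLEEP_TIME_MILLIS, TimeUnit.MILLISECONDS);
if (taskPriority != null) {
    dispatchPool.submit(() -> {
        try {
            if (!dispatch(taskPriority)) {
                taskPriorityQueue.put(taskPriority); // re-queue on failure, as before
            }
        } catch (Exception e) {
            logger.error("dispatch task error", e);
        }
    });
}
```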
| https://github.com/apache/dolphinscheduler/issues/6961 | https://github.com/apache/dolphinscheduler/pull/6962 | 5a04b8d49aa8e30a60a3fdc0cdb3b2f6910aa084 | e6fe39ea47f3168201f5d7f8c8a5340d075d2379 | "2021-11-22T09:25:39Z" | java | "2021-11-25T04:55:37Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/config/MasterConfig.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.master.config;
import org.apache.dolphinscheduler.server.master.dispatch.host.assign.HostSelector;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.stereotype.Component;
@Component
@EnableConfigurationProperties
@ConfigurationProperties("master")
public class MasterConfig {
private int listenPort;
private int fetchCommandNum;
private int preExecThreads;
private int execThreads;
private int execTaskNum;
private int dispatchTaskNumber;
private HostSelector hostSelector;
private int heartbeatInterval;
private int taskCommitRetryTimes;
private int taskCommitInterval;
private int stateWheelInterval;
private double maxCpuLoadAvg;
private double reservedMemory;
private boolean cacheProcessDefinition;
public int getListenPort() {
return listenPort;
}
public void setListenPort(int listenPort) {
this.listenPort = listenPort;
}
public int getFetchCommandNum() {
return fetchCommandNum;
}
public void setFetchCommandNum(int fetchCommandNum) {
this.fetchCommandNum = fetchCommandNum;
}
public int getPreExecThreads() {
return preExecThreads;
}
public void setPreExecThreads(int preExecThreads) {
this.preExecThreads = preExecThreads;
}
public int getExecThreads() {
return execThreads;
}
public void setExecThreads(int execThreads) {
this.execThreads = execThreads;
}
public int getExecTaskNum() {
return execTaskNum;
}
public void setExecTaskNum(int execTaskNum) {
this.execTaskNum = execTaskNum;
}
public int getDispatchTaskNumber() {
return dispatchTaskNumber;
}
public void setDispatchTaskNumber(int dispatchTaskNumber) {
this.dispatchTaskNumber = dispatchTaskNumber;
}
public HostSelector getHostSelector() {
return hostSelector;
}
public void setHostSelector(HostSelector hostSelector) {
this.hostSelector = hostSelector;
}
public int getHeartbeatInterval() {
return heartbeatInterval;
}
public void setHeartbeatInterval(int heartbeatInterval) {
this.heartbeatInterval = heartbeatInterval;
}
public int getTaskCommitRetryTimes() {
return taskCommitRetryTimes;
}
public void setTaskCommitRetryTimes(int taskCommitRetryTimes) {
this.taskCommitRetryTimes = taskCommitRetryTimes;
}
public int getTaskCommitInterval() {
return taskCommitInterval;
}
public void setTaskCommitInterval(int taskCommitInterval) {
this.taskCommitInterval = taskCommitInterval;
}
public int getStateWheelInterval() {
return stateWheelInterval;
}
public void setStateWheelInterval(int stateWheelInterval) {
this.stateWheelInterval = stateWheelInterval;
}
public double getMaxCpuLoadAvg() {
return maxCpuLoadAvg > 0 ? maxCpuLoadAvg : Runtime.getRuntime().availableProcessors() * 2;
}
public void setMaxCpuLoadAvg(double maxCpuLoadAvg) {
this.maxCpuLoadAvg = maxCpuLoadAvg;
}
public double getReservedMemory() {
return reservedMemory;
}
public void setReservedMemory(double reservedMemory) {
this.reservedMemory = reservedMemory;
}
public boolean isCacheProcessDefinition() {
return cacheProcessDefinition;
}
public void setCacheProcessDefinition(boolean cacheProcessDefinition) {
this.cacheProcessDefinition = cacheProcessDefinition;
}
}
|
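The fields above bind from the `master` prefix through Spring Boot relaxed binding; a hedged sketch of the matching configuration follows. Key names mirror the field names, but the concrete file and the defaults shipped with a release may differ:

```yaml
# Hedged sketch: values are illustrative, not shipped defaults.
master:
  listen-port: 5678
  exec-threads: 100
  dispatch-task-number: 3
  host-selector: LOWER_WEIGHT   # assumed enum spelling; see HostSelector
  heartbeat-interval: 10
  task-commit-retry-times: 5
  task-commit-interval: 1000
  max-cpu-load-avg: -1          # <= 0 falls back to CPU cores * 2 (see getter)
  reserved-memory: 0.3
  cache-process-definition: true
```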
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,961 | [Improvement][MasterServer] Use threadpool to dispatch instead of single thread | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Currently the Master uses a single thread, `TaskPriorityQueueConsumer`, to dispatch tasks, which becomes slow when the Master is busy.
So I think it can be improved by using a thread pool instead of a single thread.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6961 | https://github.com/apache/dolphinscheduler/pull/6962 | 5a04b8d49aa8e30a60a3fdc0cdb3b2f6910aa084 | e6fe39ea47f3168201f5d7f8c8a5340d075d2379 | "2021-11-22T09:25:39Z" | java | "2021-11-25T04:55:37Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/consumer/TaskPriorityQueueConsumer.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.master.consumer;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.thread.Stopper;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.server.master.config.MasterConfig;
import org.apache.dolphinscheduler.server.master.dispatch.ExecutorDispatcher;
import org.apache.dolphinscheduler.server.master.dispatch.context.ExecutionContext;
import org.apache.dolphinscheduler.server.master.dispatch.enums.ExecutorType;
import org.apache.dolphinscheduler.server.master.dispatch.exceptions.ExecuteException;
import org.apache.dolphinscheduler.service.process.ProcessService;
import org.apache.dolphinscheduler.service.queue.TaskPriority;
import org.apache.dolphinscheduler.service.queue.TaskPriorityQueue;
import org.apache.dolphinscheduler.service.queue.entity.TaskExecutionContext;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
import java.util.concurrent.TimeUnit;
import javax.annotation.PostConstruct;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
/**
* TaskUpdateQueue consumer
*/
@Component
public class TaskPriorityQueueConsumer extends Thread {
/**
* logger of TaskUpdateQueueConsumer
*/
private static final Logger logger = LoggerFactory.getLogger(TaskPriorityQueueConsumer.class);
/**
* taskUpdateQueue
*/
@Autowired
private TaskPriorityQueue<TaskPriority> taskPriorityQueue;
/**
* processService
*/
@Autowired
private ProcessService processService;
/**
* executor dispatcher
*/
@Autowired
private ExecutorDispatcher dispatcher;
/**
* master config
*/
@Autowired
private MasterConfig masterConfig;
@PostConstruct
public void init() {
super.setName("TaskUpdateQueueConsumerThread");
super.start();
}
@Override
public void run() {
List<TaskPriority> failedDispatchTasks = new ArrayList<>();
while (Stopper.isRunning()) {
try {
int fetchTaskNum = masterConfig.getDispatchTaskNumber();
failedDispatchTasks.clear();
for (int i = 0; i < fetchTaskNum; i++) {
TaskPriority taskPriority = taskPriorityQueue.poll(Constants.SLEEP_TIME_MILLIS, TimeUnit.MILLISECONDS);
if (Objects.isNull(taskPriority)) {
continue;
}
boolean dispatchResult = dispatch(taskPriority);
if (!dispatchResult) {
failedDispatchTasks.add(taskPriority);
}
}
if (!failedDispatchTasks.isEmpty()) {
for (TaskPriority dispatchFailedTask : failedDispatchTasks) {
taskPriorityQueue.put(dispatchFailedTask);
}
// If there are tasks in a cycle that cannot find the worker group,
// sleep for 1 second
if (taskPriorityQueue.size() <= failedDispatchTasks.size()) {
TimeUnit.MILLISECONDS.sleep(Constants.SLEEP_TIME_MILLIS);
}
}
} catch (Exception e) {
logger.error("dispatcher task error", e);
}
}
}
/**
* dispatch task
*
* @param taskPriority taskPriority
* @return result
*/
protected boolean dispatch(TaskPriority taskPriority) {
boolean result = false;
try {
TaskExecutionContext context = taskPriority.getTaskExecutionContext();
ExecutionContext executionContext = new ExecutionContext(context.toCommand(), ExecutorType.WORKER, context.getWorkerGroup());
if (isTaskNeedToCheck(taskPriority)) {
if (taskInstanceIsFinalState(taskPriority.getTaskId())) {
// when task finish, ignore this task, there is no need to dispatch anymore
return true;
}
}
result = dispatcher.dispatch(executionContext);
} catch (ExecuteException e) {
logger.error("dispatch error: {}", e.getMessage(), e);
}
return result;
}
/**
* taskInstance is final state
* success,failure,kill,stop,pause,threadwaiting is final state
*
* @param taskInstanceId taskInstanceId
* @return taskInstance is final state
*/
public Boolean taskInstanceIsFinalState(int taskInstanceId) {
TaskInstance taskInstance = processService.findTaskInstanceById(taskInstanceId);
return taskInstance.getState().typeIsFinished();
}
/**
* check if task need to check state, if true, refresh the checkpoint
* @param taskPriority
* @return
*/
private boolean isTaskNeedToCheck(TaskPriority taskPriority) {
long now = System.currentTimeMillis();
if (now - taskPriority.getCheckpoint() > Constants.SECOND_TIME_MILLIS) {
taskPriority.setCheckpoint(now);
return true;
}
return false;
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,961 | [Improvement][MasterServer] Use threadpool to dispatch instead of single thread | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Currently the Master uses a single thread, `TaskPriorityQueueConsumer`, to dispatch tasks, which becomes slow when the Master is busy.
So I think it can be improved by using a thread pool instead of a single thread.
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6961 | https://github.com/apache/dolphinscheduler/pull/6962 | 5a04b8d49aa8e30a60a3fdc0cdb3b2f6910aa084 | e6fe39ea47f3168201f5d7f8c8a5340d075d2379 | "2021-11-22T09:25:39Z" | java | "2021-11-25T04:55:37Z" | dolphinscheduler-server/src/test/java/org/apache/dolphinscheduler/server/master/WorkflowExecuteThreadTest.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.master;
import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_END_DATE;
import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_START_DATE;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_RECOVERY_START_NODE_STRING;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_START_NODES;
import static org.mockito.Mockito.verify;
import static org.powermock.api.mockito.PowerMockito.mock;
import org.apache.dolphinscheduler.common.enums.CommandType;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.enums.Flag;
import org.apache.dolphinscheduler.common.enums.ProcessExecutionTypeEnum;
import org.apache.dolphinscheduler.common.graph.DAG;
import org.apache.dolphinscheduler.common.utils.DateUtils;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.dao.entity.ProcessDefinition;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.dao.entity.Schedule;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.server.master.config.MasterConfig;
import org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread;
import org.apache.dolphinscheduler.service.process.ProcessService;
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.text.ParseException;
import java.util.Collections;
import java.util.Date;
import java.util.HashMap;
import java.util.HashSet;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mockito;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;
import org.springframework.context.ApplicationContext;
/**
* test for WorkflowExecuteThread
*/
@RunWith(PowerMockRunner.class)
@PrepareForTest({WorkflowExecuteThread.class})
public class WorkflowExecuteThreadTest {
private WorkflowExecuteThread workflowExecuteThread;
private ProcessInstance processInstance;
private ProcessService processService;
private final int processDefinitionId = 1;
private MasterConfig config;
private ApplicationContext applicationContext;
@Before
public void init() throws Exception {
processService = mock(ProcessService.class);
applicationContext = mock(ApplicationContext.class);
config = new MasterConfig();
config.setExecTaskNum(1);
Mockito.when(applicationContext.getBean(MasterConfig.class)).thenReturn(config);
processInstance = mock(ProcessInstance.class);
Mockito.when(processInstance.getState()).thenReturn(ExecutionStatus.SUCCESS);
Mockito.when(processInstance.getHistoryCmd()).thenReturn(CommandType.COMPLEMENT_DATA.toString());
Mockito.when(processInstance.getIsSubProcess()).thenReturn(Flag.NO);
Mockito.when(processInstance.getScheduleTime()).thenReturn(DateUtils.stringToDate("2020-01-01 00:00:00"));
Map<String, String> cmdParam = new HashMap<>();
cmdParam.put(CMDPARAM_COMPLEMENT_DATA_START_DATE, "2020-01-01 00:00:00");
cmdParam.put(CMDPARAM_COMPLEMENT_DATA_END_DATE, "2020-01-20 23:00:00");
Mockito.when(processInstance.getCommandParam()).thenReturn(JSONUtils.toJsonString(cmdParam));
ProcessDefinition processDefinition = new ProcessDefinition();
processDefinition.setGlobalParamMap(Collections.emptyMap());
processDefinition.setGlobalParamList(Collections.emptyList());
Mockito.when(processInstance.getProcessDefinition()).thenReturn(processDefinition);
ConcurrentHashMap<Integer, TaskInstance> taskTimeoutCheckList = new ConcurrentHashMap<>();
workflowExecuteThread = PowerMockito.spy(new WorkflowExecuteThread(processInstance, processService, null, null, config, taskTimeoutCheckList));
// prepareProcess init dag
Field dag = WorkflowExecuteThread.class.getDeclaredField("dag");
dag.setAccessible(true);
dag.set(workflowExecuteThread, new DAG());
PowerMockito.doNothing().when(workflowExecuteThread, "endProcess");
}
@Test
public void testParseStartNodeName() throws ParseException {
try {
Map<String, String> cmdParam = new HashMap<>();
cmdParam.put(CMD_PARAM_START_NODES, "1,2,3");
Mockito.when(processInstance.getCommandParam()).thenReturn(JSONUtils.toJsonString(cmdParam));
Class<WorkflowExecuteThread> masterExecThreadClass = WorkflowExecuteThread.class;
Method method = masterExecThreadClass.getDeclaredMethod("parseStartNodeName", String.class);
method.setAccessible(true);
List<String> nodeNames = (List<String>) method.invoke(workflowExecuteThread, JSONUtils.toJsonString(cmdParam));
Assert.assertEquals(3, nodeNames.size());
} catch (Exception e) {
Assert.fail();
}
}
@Test
public void testGetStartTaskInstanceList() {
try {
TaskInstance taskInstance1 = new TaskInstance();
taskInstance1.setId(1);
TaskInstance taskInstance2 = new TaskInstance();
taskInstance2.setId(2);
TaskInstance taskInstance3 = new TaskInstance();
taskInstance3.setId(3);
TaskInstance taskInstance4 = new TaskInstance();
taskInstance4.setId(4);
Map<String, String> cmdParam = new HashMap<>();
cmdParam.put(CMD_PARAM_RECOVERY_START_NODE_STRING, "1,2,3,4");
Mockito.when(processService.findTaskInstanceById(1)).thenReturn(taskInstance1);
Mockito.when(processService.findTaskInstanceById(2)).thenReturn(taskInstance2);
Mockito.when(processService.findTaskInstanceById(3)).thenReturn(taskInstance3);
Mockito.when(processService.findTaskInstanceById(4)).thenReturn(taskInstance4);
Class<WorkflowExecuteThread> masterExecThreadClass = WorkflowExecuteThread.class;
Method method = masterExecThreadClass.getDeclaredMethod("getStartTaskInstanceList", String.class);
method.setAccessible(true);
List<TaskInstance> taskInstances = (List<TaskInstance>) method.invoke(workflowExecuteThread, JSONUtils.toJsonString(cmdParam));
Assert.assertEquals(4, taskInstances.size());
} catch (Exception e) {
Assert.fail();
}
}
@Test
public void testGetPreVarPool() {
try {
Set<String> preTaskName = new HashSet<>();
preTaskName.add(Long.toString(1));
preTaskName.add(Long.toString(2));
TaskInstance taskInstance = new TaskInstance();
TaskInstance taskInstance1 = new TaskInstance();
taskInstance1.setId(1);
taskInstance1.setTaskCode(1);
taskInstance1.setVarPool("[{\"direct\":\"OUT\",\"prop\":\"test1\",\"type\":\"VARCHAR\",\"value\":\"1\"}]");
taskInstance1.setEndTime(new Date());
TaskInstance taskInstance2 = new TaskInstance();
taskInstance2.setId(2);
taskInstance2.setTaskCode(2);
taskInstance2.setVarPool("[{\"direct\":\"OUT\",\"prop\":\"test2\",\"type\":\"VARCHAR\",\"value\":\"2\"}]");
taskInstance2.setEndTime(new Date());
Map<Integer, TaskInstance> taskInstanceMap = new ConcurrentHashMap<>();
taskInstanceMap.put(taskInstance1.getId(), taskInstance1);
taskInstanceMap.put(taskInstance2.getId(), taskInstance2);
Map<String, Integer> completeTaskList = new ConcurrentHashMap<>();
completeTaskList.put(Long.toString(taskInstance1.getTaskCode()), taskInstance1.getId());
completeTaskList.put(Long.toString(taskInstance1.getTaskCode()), taskInstance2.getId());
Class<WorkflowExecuteThread> masterExecThreadClass = WorkflowExecuteThread.class;
Field completeTaskMapField = masterExecThreadClass.getDeclaredField("completeTaskMap");
completeTaskMapField.setAccessible(true);
completeTaskMapField.set(workflowExecuteThread, completeTaskList);
Field taskInstanceMapField = masterExecThreadClass.getDeclaredField("taskInstanceMap");
taskInstanceMapField.setAccessible(true);
taskInstanceMapField.set(workflowExecuteThread, taskInstanceMap);
workflowExecuteThread.getPreVarPool(taskInstance, preTaskName);
Assert.assertNotNull(taskInstance.getVarPool());
taskInstance2.setVarPool("[{\"direct\":\"OUT\",\"prop\":\"test1\",\"type\":\"VARCHAR\",\"value\":\"2\"}]");
completeTaskList.put(Long.toString(taskInstance2.getTaskCode()), taskInstance2.getId());
completeTaskMapField.setAccessible(true);
completeTaskMapField.set(workflowExecuteThread, completeTaskList);
taskInstanceMapField.setAccessible(true);
taskInstanceMapField.set(workflowExecuteThread, taskInstanceMap);
workflowExecuteThread.getPreVarPool(taskInstance, preTaskName);
Assert.assertNotNull(taskInstance.getVarPool());
} catch (Exception e) {
Assert.fail();
}
}
@Test
public void testCheckSerialProcess() {
try {
ProcessDefinition processDefinition1 = new ProcessDefinition();
processDefinition1.setId(123);
processDefinition1.setName("test");
processDefinition1.setVersion(1);
processDefinition1.setCode(11L);
processDefinition1.setExecutionType(ProcessExecutionTypeEnum.SERIAL_WAIT);
Mockito.when(processInstance.getId()).thenReturn(225);
Mockito.when(processService.findProcessInstanceById(225)).thenReturn(processInstance);
workflowExecuteThread.checkSerialProcess(processDefinition1);
Mockito.when(processInstance.getId()).thenReturn(225);
Mockito.when(processInstance.getNextProcessInstanceId()).thenReturn(222);
ProcessInstance processInstance9 = new ProcessInstance();
processInstance9.setId(222);
processInstance9.setProcessDefinitionCode(11L);
processInstance9.setProcessDefinitionVersion(1);
processInstance9.setState(ExecutionStatus.SERIAL_WAIT);
Mockito.when(processService.findProcessInstanceById(225)).thenReturn(processInstance);
Mockito.when(processService.findProcessInstanceById(222)).thenReturn(processInstance9);
workflowExecuteThread.checkSerialProcess(processDefinition1);
} catch (Exception e) {
Assert.fail();
}
}
private List<Schedule> zeroSchedulerList() {
return Collections.emptyList();
}
private List<Schedule> oneSchedulerList() {
List<Schedule> schedulerList = new LinkedList<>();
Schedule schedule = new Schedule();
schedule.setCrontab("0 0 0 1/2 * ?");
schedulerList.add(schedule);
return schedulerList;
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,961 | [Improvement][MasterServer] Use threadpool to dispatch instead of single thread | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.
### Description
Now the Master uses a single thread, `TaskPriorityQueueConsumer`, to dispatch tasks, so dispatching becomes slow when the Master is busy.
So I think it can be improved by using a thread pool instead of a single thread.
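A minimal sketch of the idea (hypothetical class and field names, not the actual implementation in any PR): one thread still drains the priority queue, so priority order is preserved, while the slow dispatch call itself runs on a fixed pool.
```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.PriorityBlockingQueue;

public class PooledTaskDispatcher implements Runnable {

    // priorities are plain ints here; DolphinScheduler's TaskPriority is richer
    private final PriorityBlockingQueue<Integer> queue = new PriorityBlockingQueue<>();
    private final ExecutorService dispatchPool = Executors.newFixedThreadPool(8);
    private volatile boolean running = true;

    public void submit(int priority) {
        queue.put(priority);
    }

    @Override
    public void run() {
        while (running) {
            try {
                Integer task = queue.take();                 // blocks until a task arrives
                dispatchPool.execute(() -> dispatch(task));  // dispatch concurrently
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                running = false;
            }
        }
        dispatchPool.shutdown();
    }

    private void dispatch(Integer task) {
        // placeholder for the RPC to a worker; this is the slow part
        System.out.println("dispatching task with priority " + task);
    }
}
```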
### Use case
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6961 | https://github.com/apache/dolphinscheduler/pull/6962 | 5a04b8d49aa8e30a60a3fdc0cdb3b2f6910aa084 | e6fe39ea47f3168201f5d7f8c8a5340d075d2379 | "2021-11-22T09:25:39Z" | java | "2021-11-25T04:55:37Z" | dolphinscheduler-server/src/test/java/org/apache/dolphinscheduler/server/master/consumer/TaskPriorityQueueConsumerTest.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.master.consumer;
import org.apache.dolphinscheduler.common.enums.CommandType;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.enums.Priority;
import org.apache.dolphinscheduler.common.enums.TaskType;
import org.apache.dolphinscheduler.common.enums.TimeoutFlag;
import org.apache.dolphinscheduler.common.thread.Stopper;
import org.apache.dolphinscheduler.dao.entity.DataSource;
import org.apache.dolphinscheduler.dao.entity.ProcessDefinition;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.dao.entity.TaskDefinition;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.dao.entity.Tenant;
import org.apache.dolphinscheduler.server.master.dispatch.ExecutorDispatcher;
import org.apache.dolphinscheduler.service.process.ProcessService;
import org.apache.dolphinscheduler.service.queue.TaskPriority;
import org.apache.dolphinscheduler.service.queue.TaskPriorityQueue;
import org.apache.dolphinscheduler.spi.enums.DbType;
import java.util.Date;
import java.util.concurrent.TimeUnit;
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Ignore;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mockito;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
@RunWith(SpringJUnit4ClassRunner.class)
@Ignore
public class TaskPriorityQueueConsumerTest {
@Autowired
private TaskPriorityQueue<TaskPriority> taskPriorityQueue;
@Autowired
private TaskPriorityQueueConsumer taskPriorityQueueConsumer;
@Autowired
private ProcessService processService;
@Autowired
private ExecutorDispatcher dispatcher;
@Before
public void init() {
Tenant tenant = new Tenant();
tenant.setId(1);
tenant.setTenantCode("journey");
tenant.setDescription("journey");
tenant.setQueueId(1);
tenant.setCreateTime(new Date());
tenant.setUpdateTime(new Date());
Mockito.doReturn(tenant).when(processService).getTenantForProcess(1, 2);
}
@Test
public void testSHELLTask() throws Exception {
TaskInstance taskInstance = new TaskInstance();
taskInstance.setId(1);
taskInstance.setTaskType(TaskType.SHELL.getDesc());
taskInstance.setProcessInstanceId(1);
taskInstance.setState(ExecutionStatus.KILL);
taskInstance.setProcessInstancePriority(Priority.MEDIUM);
taskInstance.setWorkerGroup("default");
taskInstance.setExecutorId(2);
ProcessInstance processInstance = new ProcessInstance();
processInstance.setId(1);
processInstance.setTenantId(1);
processInstance.setCommandType(CommandType.START_PROCESS);
taskInstance.setProcessInstance(processInstance);
ProcessDefinition processDefinition = new ProcessDefinition();
processDefinition.setUserId(2);
taskInstance.setProcessDefine(processDefinition);
TaskPriority taskPriority = new TaskPriority(2, 1, 2, 1, "default");
taskPriorityQueue.put(taskPriority);
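// give the background consumer thread time to poll the queue and dispatch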
TimeUnit.SECONDS.sleep(10);
Assert.assertNotNull(taskInstance);
}
@Test
public void testSQLTask() throws Exception {
TaskInstance taskInstance = new TaskInstance();
taskInstance.setId(1);
taskInstance.setTaskType(TaskType.SQL.getDesc());
taskInstance.setProcessInstanceId(1);
taskInstance.setState(ExecutionStatus.KILL);
taskInstance.setProcessInstancePriority(Priority.MEDIUM);
taskInstance.setWorkerGroup("default");
taskInstance.setExecutorId(2);
ProcessInstance processInstance = new ProcessInstance();
processInstance.setTenantId(1);
processInstance.setCommandType(CommandType.START_PROCESS);
taskInstance.setProcessInstance(processInstance);
ProcessDefinition processDefinition = new ProcessDefinition();
processDefinition.setUserId(2);
taskInstance.setProcessDefine(processDefinition);
TaskPriority taskPriority = new TaskPriority(2, 1, 2, 1, "default");
taskPriorityQueue.put(taskPriority);
DataSource dataSource = new DataSource();
dataSource.setId(1);
dataSource.setName("sqlDatasource");
dataSource.setType(DbType.MYSQL);
dataSource.setUserId(2);
dataSource.setConnectionParams("{\"address\":\"jdbc:mysql://192.168.221.185:3306\","
+ "\"database\":\"dolphinscheduler_qiaozhanwei\","
+ "\"jdbcUrl\":\"jdbc:mysql://192.168.221.185:3306/dolphinscheduler_qiaozhanwei\","
+ "\"user\":\"root\","
+ "\"password\":\"root@123\"}");
dataSource.setCreateTime(new Date());
dataSource.setUpdateTime(new Date());
Mockito.doReturn(dataSource).when(processService).findDataSourceById(1);
TimeUnit.SECONDS.sleep(10);
Assert.assertNotNull(taskInstance);
}
@Test
public void testDataxTask() throws Exception {
TaskInstance taskInstance = new TaskInstance();
taskInstance.setId(1);
taskInstance.setTaskType(TaskType.DATAX.getDesc());
taskInstance.setProcessInstanceId(1);
taskInstance.setState(ExecutionStatus.KILL);
taskInstance.setProcessInstancePriority(Priority.MEDIUM);
taskInstance.setWorkerGroup("default");
taskInstance.setExecutorId(2);
ProcessInstance processInstance = new ProcessInstance();
processInstance.setTenantId(1);
processInstance.setCommandType(CommandType.START_PROCESS);
taskInstance.setProcessInstance(processInstance);
ProcessDefinition processDefinition = new ProcessDefinition();
processDefinition.setUserId(2);
taskInstance.setProcessDefine(processDefinition);
TaskPriority taskPriority = new TaskPriority(2, 1, 2, 1, "default");
taskPriorityQueue.put(taskPriority);
DataSource dataSource = new DataSource();
dataSource.setId(80);
dataSource.setName("datax");
dataSource.setType(DbType.MYSQL);
dataSource.setUserId(2);
dataSource.setConnectionParams("{\"address\":\"jdbc:mysql://192.168.221.185:3306\","
+ "\"database\":\"dolphinscheduler_qiaozhanwei\","
+ "\"jdbcUrl\":\"jdbc:mysql://192.168.221.185:3306/dolphinscheduler_qiaozhanwei\","
+ "\"user\":\"root\","
+ "\"password\":\"root@123\"}");
dataSource.setCreateTime(new Date());
dataSource.setUpdateTime(new Date());
Mockito.doReturn(dataSource).when(processService).findDataSourceById(80);
TimeUnit.SECONDS.sleep(10);
Assert.assertNotNull(taskInstance);
}
@Test
public void testSqoopTask() throws Exception {
TaskInstance taskInstance = new TaskInstance();
taskInstance.setId(1);
taskInstance.setTaskType(TaskType.SQOOP.getDesc());
taskInstance.setProcessInstanceId(1);
taskInstance.setState(ExecutionStatus.KILL);
taskInstance.setProcessInstancePriority(Priority.MEDIUM);
taskInstance.setWorkerGroup("default");
taskInstance.setExecutorId(2);
ProcessInstance processInstance = new ProcessInstance();
processInstance.setTenantId(1);
processInstance.setCommandType(CommandType.START_PROCESS);
taskInstance.setProcessInstance(processInstance);
ProcessDefinition processDefinition = new ProcessDefinition();
processDefinition.setUserId(2);
taskInstance.setProcessDefine(processDefinition);
TaskPriority taskPriority = new TaskPriority(2, 1, 2, 1, "default");
taskPriorityQueue.put(taskPriority);
DataSource dataSource = new DataSource();
dataSource.setId(1);
dataSource.setName("datax");
dataSource.setType(DbType.MYSQL);
dataSource.setUserId(2);
dataSource.setConnectionParams("{\"address\":\"jdbc:mysql://192.168.221.185:3306\","
+ "\"database\":\"dolphinscheduler_qiaozhanwei\","
+ "\"jdbcUrl\":\"jdbc:mysql://192.168.221.185:3306/dolphinscheduler_qiaozhanwei\","
+ "\"user\":\"root\","
+ "\"password\":\"root@123\"}");
dataSource.setCreateTime(new Date());
dataSource.setUpdateTime(new Date());
Mockito.doReturn(dataSource).when(processService).findDataSourceById(1);
TimeUnit.SECONDS.sleep(10);
Assert.assertNotNull(taskInstance);
}
@Test
public void testTaskInstanceIsFinalState() {
TaskInstance taskInstance = new TaskInstance();
taskInstance.setId(1);
taskInstance.setTaskType(TaskType.SHELL.getDesc());
taskInstance.setProcessInstanceId(1);
taskInstance.setState(ExecutionStatus.KILL);
taskInstance.setProcessInstancePriority(Priority.MEDIUM);
taskInstance.setWorkerGroup("default");
taskInstance.setExecutorId(2);
Mockito.doReturn(taskInstance).when(processService).findTaskInstanceById(1);
Boolean state = taskPriorityQueueConsumer.taskInstanceIsFinalState(1);
Assert.assertNotNull(state);
}
@Test
public void testNotFoundWorkerGroup() throws Exception {
TaskInstance taskInstance = new TaskInstance();
taskInstance.setId(1);
taskInstance.setTaskType(TaskType.SHELL.getDesc());
taskInstance.setProcessInstanceId(1);
taskInstance.setState(ExecutionStatus.KILL);
taskInstance.setProcessInstancePriority(Priority.MEDIUM);
taskInstance.setWorkerGroup("NoWorkGroup");
taskInstance.setExecutorId(2);
ProcessInstance processInstance = new ProcessInstance();
processInstance.setId(1);
processInstance.setTenantId(1);
processInstance.setCommandType(CommandType.START_PROCESS);
taskInstance.setProcessInstance(processInstance);
taskInstance.setState(ExecutionStatus.DELAY_EXECUTION);
ProcessDefinition processDefinition = new ProcessDefinition();
processDefinition.setUserId(2);
taskInstance.setProcessDefine(processDefinition);
Mockito.doReturn(taskInstance).when(processService).findTaskInstanceById(1);
TaskPriority taskPriority = new TaskPriority(2, 1, 2, 1, "NoWorkGroup");
taskPriorityQueue.put(taskPriority);
TimeUnit.SECONDS.sleep(10);
Assert.assertNotNull(taskInstance);
}
@Test
public void testDispatch() {
TaskInstance taskInstance = new TaskInstance();
taskInstance.setId(1);
taskInstance.setTaskType(TaskType.SHELL.getDesc());
taskInstance.setProcessInstanceId(1);
taskInstance.setState(ExecutionStatus.KILL);
taskInstance.setProcessInstancePriority(Priority.MEDIUM);
taskInstance.setWorkerGroup("NoWorkGroup");
taskInstance.setExecutorId(2);
ProcessInstance processInstance = new ProcessInstance();
processInstance.setId(1);
processInstance.setTenantId(1);
processInstance.setCommandType(CommandType.START_PROCESS);
taskInstance.setProcessInstance(processInstance);
taskInstance.setState(ExecutionStatus.DELAY_EXECUTION);
ProcessDefinition processDefinition = new ProcessDefinition();
processDefinition.setUserId(2);
processDefinition.setProjectCode(1L);
taskInstance.setProcessDefine(processDefinition);
TaskDefinition taskDefinition = new TaskDefinition();
taskDefinition.setTimeoutFlag(TimeoutFlag.OPEN);
taskInstance.setTaskDefine(taskDefinition);
Mockito.doReturn(taskInstance).when(processService).findTaskInstanceById(1);
TaskPriority taskPriority = new TaskPriority();
taskPriority.setTaskId(1);
boolean res = taskPriorityQueueConsumer.dispatch(taskPriority);
Assert.assertFalse(res);
}
@Test
public void testRun() throws Exception {
TaskInstance taskInstance = new TaskInstance();
taskInstance.setId(1);
taskInstance.setTaskType(TaskType.SHELL.getDesc());
taskInstance.setProcessInstanceId(1);
taskInstance.setState(ExecutionStatus.KILL);
taskInstance.setProcessInstancePriority(Priority.MEDIUM);
taskInstance.setWorkerGroup("NoWorkGroup");
taskInstance.setExecutorId(2);
ProcessInstance processInstance = new ProcessInstance();
processInstance.setId(1);
processInstance.setTenantId(1);
processInstance.setCommandType(CommandType.START_PROCESS);
taskInstance.setProcessInstance(processInstance);
taskInstance.setState(ExecutionStatus.DELAY_EXECUTION);
ProcessDefinition processDefinition = new ProcessDefinition();
processDefinition.setUserId(2);
taskInstance.setProcessDefine(processDefinition);
Mockito.doReturn(taskInstance).when(processService).findTaskInstanceById(1);
TaskPriority taskPriority = new TaskPriority(2, 1, 2, 1, "NoWorkGroup");
taskPriorityQueue.put(taskPriority);
taskPriorityQueueConsumer.run();
TimeUnit.SECONDS.sleep(10);
Assert.assertNotEquals(-1, taskPriorityQueue.size());
}
@After
public void close() {
Stopper.stop();
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 7,019 | [Bug] [MasterServer] switch task run error | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
branch: 2.0
```
[ERROR] 2021-11-26 18:46:39.370 org.apache.dolphinscheduler.common.utils.JSONUtils:[129] - parse object exception!
com.fasterxml.jackson.databind.JsonMappingException: java.lang.Long cannot be cast to java.util.ArrayList
at [Source: (String)"{"dependTaskList":[{"condition":"${today}==20211201","nextNode":3645680828192},{"condition":"${today}==20211101","nextNode":3645675551008}],"nextNode":3645675551008}"; line: 1, column: 65] (through reference chain: org.apache.dolphinscheduler.common.task.switchtask.SwitchParameters["dependTaskList"]->java.util.ArrayList[0]->org.apache.dolphinscheduler.common.task.switchtask.SwitchResultVo["nextNode"])
at com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:281)
at com.fasterxml.jackson.databind.deser.SettableBeanProperty._throwAsIOE(SettableBeanProperty.java:611)
at com.fasterxml.jackson.databind.deser.SettableBeanProperty._throwAsIOE(SettableBeanProperty.java:599)
at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:143)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:288)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151)
at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:286)
at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:245)
at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:27)
at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:288)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4218)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3214)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3182)
at org.apache.dolphinscheduler.common.utils.JSONUtils.parseObject(JSONUtils.java:127)
at org.apache.dolphinscheduler.dao.entity.TaskInstance.getSwitchDependency(TaskInstance.java:473)
at org.apache.dolphinscheduler.server.master.runner.task.SwitchTaskProcessor.setSwitchResult(SwitchTaskProcessor.java:138)
at org.apache.dolphinscheduler.server.master.runner.task.SwitchTaskProcessor.run(SwitchTaskProcessor.java:88)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitTaskExec(WorkflowExecuteThread.java:616)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitStandByTask(WorkflowExecuteThread.java:1244)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitPostNode(WorkflowExecuteThread.java:858)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.taskFinished(WorkflowExecuteThread.java:391)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.taskStateChangeHandler(WorkflowExecuteThread.java:348)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.stateEventHandler(WorkflowExecuteThread.java:297)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.handleEvents(WorkflowExecuteThread.java:245)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.run(WorkflowExecuteThread.java:225)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassCastException: java.lang.Long cannot be cast to java.util.ArrayList
at org.apache.dolphinscheduler.common.task.switchtask.SwitchResultVo.setNextNode(SwitchResultVo.java:46)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:141)
... 30 common frames omitted
[ERROR] 2021-11-26 18:46:39.370 org.apache.dolphinscheduler.server.master.runner.task.SwitchTaskProcessor:[92] - update work flow 78 switch task 380 state error:
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.server.master.runner.task.SwitchTaskProcessor.setSwitchResult(SwitchTaskProcessor.java:139)
at org.apache.dolphinscheduler.server.master.runner.task.SwitchTaskProcessor.run(SwitchTaskProcessor.java:88)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitTaskExec(WorkflowExecuteThread.java:616)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitStandByTask(WorkflowExecuteThread.java:1244)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitPostNode(WorkflowExecuteThread.java:858)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.taskFinished(WorkflowExecuteThread.java:391)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.taskStateChangeHandler(WorkflowExecuteThread.java:348)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.stateEventHandler(WorkflowExecuteThread.java:297)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.handleEvents(WorkflowExecuteThread.java:245)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.run(WorkflowExecuteThread.java:225)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
```
It seems the task params are mismatched between the UI and the server, which causes the conversion failure.
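The stack trace above shows the UI serializing `nextNode` as a bare task code (`"nextNode":3645680828192`) while `SwitchResultVo.setNextNode` casts whatever it receives straight to `ArrayList`. A defensive setter that tolerates both a list and a scalar would look roughly like the sketch below (an illustration only, not necessarily the change made in the linked PR):
```java
// hypothetical drop-in for SwitchResultVo.setNextNode
public void setNextNode(Object nextNode) {
    List<String> nodes = new ArrayList<>();
    if (nextNode instanceof List) {
        for (Object node : (List<?>) nextNode) {
            nodes.add(String.valueOf(node)); // tolerates Long task codes in the list
        }
    } else if (nextNode != null) {
        nodes.add(String.valueOf(nextNode)); // single scalar (String or Long)
    }
    this.nextNode = nodes;
}
```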
### What you expected to happen
The switch task should run normally.
### How to reproduce
create a switch task and run it.
### Anything else
_No response_
### Version
2.0.0
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/7019 | https://github.com/apache/dolphinscheduler/pull/7046 | 8e93939121043b828f73ea2898c6102aa858524b | a0955a078a2ca83f3736caf5f05a325545168114 | "2021-11-26T13:44:05Z" | java | "2021-11-30T01:59:48Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/task/switchtask/SwitchResultVo.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.common.task.switchtask;
import java.util.ArrayList;
import java.util.List;
public class SwitchResultVo {
private String condition;
private List<String> nextNode;
public String getCondition() {
return condition;
}
public void setCondition(String condition) {
this.condition = condition;
}
public List<String> getNextNode() {
return nextNode;
}
public void setNextNode(Object nextNode) {
if (nextNode instanceof String) {
List<String> nextNodeList = new ArrayList<>();
nextNodeList.add(String.valueOf(nextNode));
this.nextNode = nextNodeList;
} else {
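            // NOTE: unchecked raw cast; when Jackson supplies a scalar value such as a
            // Long task code, this throws the ClassCastException reported in this issue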
this.nextNode = (ArrayList) nextNode;
}
}
} |
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 7,019 | [Bug] [MasterServer] switch task run error | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
branch: 2.0
```
[ERROR] 2021-11-26 18:46:39.370 org.apache.dolphinscheduler.common.utils.JSONUtils:[129] - parse object exception!
com.fasterxml.jackson.databind.JsonMappingException: java.lang.Long cannot be cast to java.util.ArrayList
at [Source: (String)"{"dependTaskList":[{"condition":"${today}==20211201","nextNode":3645680828192},{"condition":"${today}==20211101","nextNode":3645675551008}],"nextNode":3645675551008}"; line: 1, column: 65] (through reference chain: org.apache.dolphinscheduler.common.task.switchtask.SwitchParameters["dependTaskList"]->java.util.ArrayList[0]->org.apache.dolphinscheduler.common.task.switchtask.SwitchResultVo["nextNode"])
at com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:281)
at com.fasterxml.jackson.databind.deser.SettableBeanProperty._throwAsIOE(SettableBeanProperty.java:611)
at com.fasterxml.jackson.databind.deser.SettableBeanProperty._throwAsIOE(SettableBeanProperty.java:599)
at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:143)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:288)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151)
at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:286)
at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:245)
at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:27)
at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:288)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4218)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3214)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3182)
at org.apache.dolphinscheduler.common.utils.JSONUtils.parseObject(JSONUtils.java:127)
at org.apache.dolphinscheduler.dao.entity.TaskInstance.getSwitchDependency(TaskInstance.java:473)
at org.apache.dolphinscheduler.server.master.runner.task.SwitchTaskProcessor.setSwitchResult(SwitchTaskProcessor.java:138)
at org.apache.dolphinscheduler.server.master.runner.task.SwitchTaskProcessor.run(SwitchTaskProcessor.java:88)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitTaskExec(WorkflowExecuteThread.java:616)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitStandByTask(WorkflowExecuteThread.java:1244)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitPostNode(WorkflowExecuteThread.java:858)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.taskFinished(WorkflowExecuteThread.java:391)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.taskStateChangeHandler(WorkflowExecuteThread.java:348)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.stateEventHandler(WorkflowExecuteThread.java:297)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.handleEvents(WorkflowExecuteThread.java:245)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.run(WorkflowExecuteThread.java:225)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassCastException: java.lang.Long cannot be cast to java.util.ArrayList
at org.apache.dolphinscheduler.common.task.switchtask.SwitchResultVo.setNextNode(SwitchResultVo.java:46)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:141)
... 30 common frames omitted
[ERROR] 2021-11-26 18:46:39.370 org.apache.dolphinscheduler.server.master.runner.task.SwitchTaskProcessor:[92] - update work flow 78 switch task 380 state error:
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.server.master.runner.task.SwitchTaskProcessor.setSwitchResult(SwitchTaskProcessor.java:139)
at org.apache.dolphinscheduler.server.master.runner.task.SwitchTaskProcessor.run(SwitchTaskProcessor.java:88)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitTaskExec(WorkflowExecuteThread.java:616)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitStandByTask(WorkflowExecuteThread.java:1244)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitPostNode(WorkflowExecuteThread.java:858)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.taskFinished(WorkflowExecuteThread.java:391)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.taskStateChangeHandler(WorkflowExecuteThread.java:348)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.stateEventHandler(WorkflowExecuteThread.java:297)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.handleEvents(WorkflowExecuteThread.java:245)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.run(WorkflowExecuteThread.java:225)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
```
It seems the task params are mismatched between the UI and the server, which causes the conversion failure.
### What you expected to happen
The switch task should run normally.
### How to reproduce
create a switch task and run it.
### Anything else
_No response_
### Version
2.0.0
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/7019 | https://github.com/apache/dolphinscheduler/pull/7046 | 8e93939121043b828f73ea2898c6102aa858524b | a0955a078a2ca83f3736caf5f05a325545168114 | "2021-11-26T13:44:05Z" | java | "2021-11-30T01:59:48Z" | dolphinscheduler-dao/src/main/java/org/apache/dolphinscheduler/dao/utils/DagHelper.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.dao.utils;
import org.apache.dolphinscheduler.common.enums.TaskDependType;
import org.apache.dolphinscheduler.common.graph.DAG;
import org.apache.dolphinscheduler.common.model.TaskNode;
import org.apache.dolphinscheduler.common.model.TaskNodeRelation;
import org.apache.dolphinscheduler.common.process.ProcessDag;
import org.apache.dolphinscheduler.common.task.conditions.ConditionsParameters;
import org.apache.dolphinscheduler.common.task.switchtask.SwitchParameters;
import org.apache.dolphinscheduler.common.task.switchtask.SwitchResultVo;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.dao.entity.ProcessTaskRelation;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.commons.collections.CollectionUtils;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
/**
* dag tools
*/
public class DagHelper {
private static final Logger logger = LoggerFactory.getLogger(DagHelper.class);
/**
* generate flow node relation list by task node list;
* Edges that are not in the task Node List will not be added to the result
*
* @param taskNodeList taskNodeList
* @return task node relation list
*/
public static List<TaskNodeRelation> generateRelationListByFlowNodes(List<TaskNode> taskNodeList) {
List<TaskNodeRelation> nodeRelationList = new ArrayList<>();
for (TaskNode taskNode : taskNodeList) {
String preTasks = taskNode.getPreTasks();
List<String> preTaskList = JSONUtils.toList(preTasks, String.class);
if (preTaskList != null) {
for (String depNodeCode : preTaskList) {
if (null != findNodeByCode(taskNodeList, depNodeCode)) {
nodeRelationList.add(new TaskNodeRelation(depNodeCode, Long.toString(taskNode.getCode())));
}
}
}
}
return nodeRelationList;
}
/**
* generate task nodes needed by dag
*
* @param taskNodeList taskNodeList
* @param startNodeNameList startNodeNameList
* @param recoveryNodeCodeList recoveryNodeCodeList
* @param taskDependType taskDependType
* @return task node list
*/
public static List<TaskNode> generateFlowNodeListByStartNode(List<TaskNode> taskNodeList, List<String> startNodeNameList,
List<String> recoveryNodeCodeList, TaskDependType taskDependType) {
List<TaskNode> destFlowNodeList = new ArrayList<>();
List<String> startNodeList = startNodeNameList;
if (taskDependType != TaskDependType.TASK_POST && CollectionUtils.isEmpty(startNodeList)) {
logger.error("start node list is empty! cannot continue run the process ");
return destFlowNodeList;
}
List<TaskNode> destTaskNodeList = new ArrayList<>();
List<TaskNode> tmpTaskNodeList = new ArrayList<>();
if (taskDependType == TaskDependType.TASK_POST
&& CollectionUtils.isNotEmpty(recoveryNodeCodeList)) {
startNodeList = recoveryNodeCodeList;
}
if (CollectionUtils.isEmpty(startNodeList)) {
// no special designation start nodes
tmpTaskNodeList = taskNodeList;
} else {
// specified start nodes or resume execution
for (String startNodeCode : startNodeList) {
TaskNode startNode = findNodeByCode(taskNodeList, startNodeCode);
List<TaskNode> childNodeList = new ArrayList<>();
if (startNode == null) {
logger.error("start node name [{}] is not in task node list [{}] ",
startNodeCode,
taskNodeList
);
continue;
} else if (TaskDependType.TASK_POST == taskDependType) {
List<String> visitedNodeCodeList = new ArrayList<>();
childNodeList = getFlowNodeListPost(startNode, taskNodeList, visitedNodeCodeList);
} else if (TaskDependType.TASK_PRE == taskDependType) {
List<String> visitedNodeCodeList = new ArrayList<>();
childNodeList = getFlowNodeListPre(startNode, recoveryNodeCodeList, taskNodeList, visitedNodeCodeList);
} else {
childNodeList.add(startNode);
}
tmpTaskNodeList.addAll(childNodeList);
}
}
for (TaskNode taskNode : tmpTaskNodeList) {
if (null == findNodeByCode(destTaskNodeList, Long.toString(taskNode.getCode()))) {
destTaskNodeList.add(taskNode);
}
}
return destTaskNodeList;
}
/**
* find all the nodes that depend on the start node
*
* @param startNode startNode
* @param taskNodeList taskNodeList
* @return task node list
*/
private static List<TaskNode> getFlowNodeListPost(TaskNode startNode, List<TaskNode> taskNodeList, List<String> visitedNodeCodeList) {
List<TaskNode> resultList = new ArrayList<>();
for (TaskNode taskNode : taskNodeList) {
List<String> depList = taskNode.getDepList();
if (null != depList && null != startNode && depList.contains(Long.toString(startNode.getCode())) && !visitedNodeCodeList.contains(Long.toString(taskNode.getCode()))) {
resultList.addAll(getFlowNodeListPost(taskNode, taskNodeList, visitedNodeCodeList));
}
}
// keep the (startNode != null) check so the SonarCloud quality gate passes
if (null != startNode) {
visitedNodeCodeList.add(Long.toString(startNode.getCode()));
}
resultList.add(startNode);
return resultList;
}
/**
* find all the nodes that the start node depends on.
*
* @param startNode startNode
* @param recoveryNodeCodeList recoveryNodeCodeList
* @param taskNodeList taskNodeList
* @return task node list
*/
private static List<TaskNode> getFlowNodeListPre(TaskNode startNode, List<String> recoveryNodeCodeList, List<TaskNode> taskNodeList, List<String> visitedNodeCodeList) {
List<TaskNode> resultList = new ArrayList<>();
List<String> depList = new ArrayList<>();
if (null != startNode) {
depList = startNode.getDepList();
resultList.add(startNode);
}
if (CollectionUtils.isEmpty(depList)) {
return resultList;
}
for (String depNodeCode : depList) {
TaskNode start = findNodeByCode(taskNodeList, depNodeCode);
if (recoveryNodeCodeList.contains(depNodeCode)) {
resultList.add(start);
} else if (!visitedNodeCodeList.contains(depNodeCode)) {
resultList.addAll(getFlowNodeListPre(start, recoveryNodeCodeList, taskNodeList, visitedNodeCodeList));
}
}
// keep the (startNode != null) check so the SonarCloud quality gate passes
if (null != startNode) {
visitedNodeCodeList.add(Long.toString(startNode.getCode()));
}
return resultList;
}
/**
* generate dag by start nodes and recovery nodes
*
* @param totalTaskNodeList totalTaskNodeList
* @param startNodeNameList startNodeNameList
* @param recoveryNodeCodeList recoveryNodeCodeList
* @param depNodeType depNodeType
* @return process dag
* @throws Exception if error throws Exception
*/
public static ProcessDag generateFlowDag(List<TaskNode> totalTaskNodeList,
List<String> startNodeNameList,
List<String> recoveryNodeCodeList,
TaskDependType depNodeType) throws Exception {
List<TaskNode> destTaskNodeList = generateFlowNodeListByStartNode(totalTaskNodeList, startNodeNameList, recoveryNodeCodeList, depNodeType);
if (destTaskNodeList.isEmpty()) {
return null;
}
List<TaskNodeRelation> taskNodeRelations = generateRelationListByFlowNodes(destTaskNodeList);
ProcessDag processDag = new ProcessDag();
processDag.setEdges(taskNodeRelations);
processDag.setNodes(destTaskNodeList);
return processDag;
}
/**
* find node by node name
*
* @param nodeDetails nodeDetails
* @param nodeName nodeName
* @return task node
*/
public static TaskNode findNodeByName(List<TaskNode> nodeDetails, String nodeName) {
for (TaskNode taskNode : nodeDetails) {
if (taskNode.getName().equals(nodeName)) {
return taskNode;
}
}
return null;
}
/**
* find node by node code
*
* @param nodeDetails nodeDetails
* @param nodeCode nodeCode
* @return task node
*/
public static TaskNode findNodeByCode(List<TaskNode> nodeDetails, String nodeCode) {
for (TaskNode taskNode : nodeDetails) {
if (Long.toString(taskNode.getCode()).equals(nodeCode)) {
return taskNode;
}
}
return null;
}
/**
* the task can be submitted when all of the nodes it depends on are forbidden or complete
*
* @param taskNode taskNode
* @param dag dag
* @param completeTaskList completeTaskList
* @return can submit
*/
public static boolean allDependsForbiddenOrEnd(TaskNode taskNode,
DAG<String, TaskNode, TaskNodeRelation> dag,
Map<String, TaskNode> skipTaskNodeList,
Map<String, TaskInstance> completeTaskList) {
List<String> dependList = taskNode.getDepList();
if (dependList == null) {
return true;
}
for (String dependNodeCode : dependList) {
TaskNode dependNode = dag.getNode(dependNodeCode);
if (dependNode == null || completeTaskList.containsKey(dependNodeCode)
|| dependNode.isForbidden()
|| skipTaskNodeList.containsKey(dependNodeCode)) {
continue;
} else {
return false;
}
}
return true;
}
/**
* parse the successor nodes of the previous node.
* this function parses a condition node to find the right branch,
* and also checks whether all depended-on nodes are forbidden or complete
*
* @return successor nodes
*/
public static Set<String> parsePostNodes(String preNodeCode,
Map<String, TaskNode> skipTaskNodeList,
DAG<String, TaskNode, TaskNodeRelation> dag,
Map<String, TaskInstance> completeTaskList) {
Set<String> postNodeList = new HashSet<>();
Collection<String> startVertexes = new ArrayList<>();
if (preNodeCode == null) {
startVertexes = dag.getBeginNode();
} else if (dag.getNode(preNodeCode).isConditionsTask()) {
List<String> conditionTaskList = parseConditionTask(preNodeCode, skipTaskNodeList, dag, completeTaskList);
startVertexes.addAll(conditionTaskList);
} else if (dag.getNode(preNodeCode).isSwitchTask()) {
List<String> conditionTaskList = parseSwitchTask(preNodeCode, skipTaskNodeList, dag, completeTaskList);
startVertexes.addAll(conditionTaskList);
} else {
startVertexes = dag.getSubsequentNodes(preNodeCode);
}
for (String subsequent : startVertexes) {
TaskNode taskNode = dag.getNode(subsequent);
if (isTaskNodeNeedSkip(taskNode, skipTaskNodeList)) {
setTaskNodeSkip(subsequent, dag, completeTaskList, skipTaskNodeList);
continue;
}
if (!DagHelper.allDependsForbiddenOrEnd(taskNode, dag, skipTaskNodeList, completeTaskList)) {
continue;
}
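            // forbidden or already-completed nodes are transparent: recurse past them to their successors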
if (taskNode.isForbidden() || completeTaskList.containsKey(subsequent)) {
postNodeList.addAll(parsePostNodes(subsequent, skipTaskNodeList, dag, completeTaskList));
continue;
}
postNodeList.add(subsequent);
}
return postNodeList;
}
/**
* if all of the task's dependencies are skipped, skip it too.
*/
private static boolean isTaskNodeNeedSkip(TaskNode taskNode,
Map<String, TaskNode> skipTaskNodeList
) {
if (CollectionUtils.isEmpty(taskNode.getDepList())) {
return false;
}
for (String depNode : taskNode.getDepList()) {
if (!skipTaskNodeList.containsKey(depNode)) {
return false;
}
}
return true;
}
/**
* parse the condition task to find the branch to follow,
* and set the skip flag for the other branch.
*/
public static List<String> parseConditionTask(String nodeCode,
Map<String, TaskNode> skipTaskNodeList,
DAG<String, TaskNode, TaskNodeRelation> dag,
Map<String, TaskInstance> completeTaskList) {
List<String> conditionTaskList = new ArrayList<>();
TaskNode taskNode = dag.getNode(nodeCode);
if (!taskNode.isConditionsTask()) {
return conditionTaskList;
}
if (!completeTaskList.containsKey(nodeCode)) {
return conditionTaskList;
}
TaskInstance taskInstance = completeTaskList.get(nodeCode);
ConditionsParameters conditionsParameters =
JSONUtils.parseObject(taskNode.getConditionResult(), ConditionsParameters.class);
List<String> skipNodeList = new ArrayList<>();
if (taskInstance.getState().typeIsSuccess()) {
conditionTaskList = conditionsParameters.getSuccessNode();
skipNodeList = conditionsParameters.getFailedNode();
} else if (taskInstance.getState().typeIsFailure()) {
conditionTaskList = conditionsParameters.getFailedNode();
skipNodeList = conditionsParameters.getSuccessNode();
} else {
conditionTaskList.add(nodeCode);
}
for (String failedNode : skipNodeList) {
setTaskNodeSkip(failedNode, dag, completeTaskList, skipTaskNodeList);
}
return conditionTaskList;
}
/**
* parse the switch task to find the branch to follow,
* and set the skip flag for the other branches.
*
* @param nodeCode
* @return
*/
public static List<String> parseSwitchTask(String nodeCode,
Map<String, TaskNode> skipTaskNodeList,
DAG<String, TaskNode, TaskNodeRelation> dag,
Map<String, TaskInstance> completeTaskList) {
List<String> conditionTaskList = new ArrayList<>();
TaskNode taskNode = dag.getNode(nodeCode);
if (!taskNode.isSwitchTask()) {
return conditionTaskList;
}
if (!completeTaskList.containsKey(nodeCode)) {
return conditionTaskList;
}
conditionTaskList = skipTaskNode4Switch(taskNode, skipTaskNodeList, completeTaskList, dag);
return conditionTaskList;
}
private static List<String> skipTaskNode4Switch(TaskNode taskNode, Map<String, TaskNode> skipTaskNodeList,
Map<String, TaskInstance> completeTaskList,
DAG<String, TaskNode, TaskNodeRelation> dag) {
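        // resultConditionLocation is the index of the branch whose condition matched;
        // that branch is followed and the next nodes of every other branch are skipped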
SwitchParameters switchParameters = completeTaskList.get(Long.toString(taskNode.getCode())).getSwitchDependency();
int resultConditionLocation = switchParameters.getResultConditionLocation();
List<SwitchResultVo> conditionResultVoList = switchParameters.getDependTaskList();
List<String> switchTaskList = conditionResultVoList.get(resultConditionLocation).getNextNode();
if (CollectionUtils.isEmpty(switchTaskList)) {
switchTaskList = new ArrayList<>();
}
conditionResultVoList.remove(resultConditionLocation);
for (SwitchResultVo info : conditionResultVoList) {
if (CollectionUtils.isEmpty(info.getNextNode())) {
continue;
}
setTaskNodeSkip(info.getNextNode().get(0), dag, completeTaskList, skipTaskNodeList);
}
return switchTaskList;
}
/**
* set task node and the post nodes skip flag
*/
private static void setTaskNodeSkip(String skipNodeCode,
DAG<String, TaskNode, TaskNodeRelation> dag,
Map<String, TaskInstance> completeTaskList,
Map<String, TaskNode> skipTaskNodeList) {
if (!dag.containsNode(skipNodeCode)) {
return;
}
skipTaskNodeList.putIfAbsent(skipNodeCode, dag.getNode(skipNodeCode));
Collection<String> postNodeList = dag.getSubsequentNodes(skipNodeCode);
for (String post : postNodeList) {
TaskNode postNode = dag.getNode(post);
if (isTaskNodeNeedSkip(postNode, skipTaskNodeList)) {
setTaskNodeSkip(post, dag, completeTaskList, skipTaskNodeList);
}
}
}
/***
* build dag graph
* @param processDag processDag
* @return dag
*/
public static DAG<String, TaskNode, TaskNodeRelation> buildDagGraph(ProcessDag processDag) {
DAG<String, TaskNode, TaskNodeRelation> dag = new DAG<>();
//add vertex
if (CollectionUtils.isNotEmpty(processDag.getNodes())) {
for (TaskNode node : processDag.getNodes()) {
dag.addNode(Long.toString(node.getCode()), node);
}
}
//add edge
if (CollectionUtils.isNotEmpty(processDag.getEdges())) {
for (TaskNodeRelation edge : processDag.getEdges()) {
dag.addEdge(edge.getStartNode(), edge.getEndNode());
}
}
return dag;
}
/**
* get process dag
*
* @param taskNodeList task node list
* @return Process dag
*/
public static ProcessDag getProcessDag(List<TaskNode> taskNodeList) {
List<TaskNodeRelation> taskNodeRelations = new ArrayList<>();
// Traverse node information and build relationships
for (TaskNode taskNode : taskNodeList) {
String preTasks = taskNode.getPreTasks();
List<String> preTasksList = JSONUtils.toList(preTasks, String.class);
// If the dependency is not empty
if (preTasksList != null) {
for (String depNode : preTasksList) {
taskNodeRelations.add(new TaskNodeRelation(depNode, Long.toString(taskNode.getCode())));
}
}
}
ProcessDag processDag = new ProcessDag();
processDag.setEdges(taskNodeRelations);
processDag.setNodes(taskNodeList);
return processDag;
}
/**
* get process dag
*
* @param taskNodeList task node list
* @return Process dag
*/
public static ProcessDag getProcessDag(List<TaskNode> taskNodeList,
List<ProcessTaskRelation> processTaskRelations) {
Map<Long, TaskNode> taskNodeMap = new HashMap<>();
taskNodeList.forEach(taskNode -> {
taskNodeMap.putIfAbsent(taskNode.getCode(), taskNode);
});
List<TaskNodeRelation> taskNodeRelations = new ArrayList<>();
for (ProcessTaskRelation processTaskRelation : processTaskRelations) {
long preTaskCode = processTaskRelation.getPreTaskCode();
long postTaskCode = processTaskRelation.getPostTaskCode();
if (processTaskRelation.getPreTaskCode() != 0
&& taskNodeMap.containsKey(preTaskCode) && taskNodeMap.containsKey(postTaskCode)) {
TaskNode preNode = taskNodeMap.get(preTaskCode);
TaskNode postNode = taskNodeMap.get(postTaskCode);
taskNodeRelations.add(new TaskNodeRelation(Long.toString(preNode.getCode()), Long.toString(postNode.getCode())));
}
}
ProcessDag processDag = new ProcessDag();
processDag.setEdges(taskNodeRelations);
processDag.setNodes(taskNodeList);
return processDag;
}
/**
* whether there are condition tasks after the parent node
*/
public static boolean haveConditionsAfterNode(String parentNodeCode,
DAG<String, TaskNode, TaskNodeRelation> dag
) {
boolean result = false;
Set<String> subsequentNodes = dag.getSubsequentNodes(parentNodeCode);
if (CollectionUtils.isEmpty(subsequentNodes)) {
return result;
}
for (String nodeCode : subsequentNodes) {
TaskNode taskNode = dag.getNode(nodeCode);
List<String> preTasksList = JSONUtils.toList(taskNode.getPreTasks(), String.class);
if (preTasksList.contains(parentNodeCode) && taskNode.isConditionsTask()) {
return true;
}
}
return result;
}
/**
* whether there are condition tasks after the parent node
*/
public static boolean haveConditionsAfterNode(String parentNodeCode, List<TaskNode> taskNodes) {
if (CollectionUtils.isEmpty(taskNodes)) {
return false;
}
for (TaskNode taskNode : taskNodes) {
List<String> preTasksList = JSONUtils.toList(taskNode.getPreTasks(), String.class);
if (preTasksList.contains(parentNodeCode) && taskNode.isConditionsTask()) {
return true;
}
}
return false;
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 7,019 | [Bug] [MasterServer] switch task run error | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
branch: 2.0
```
[ERROR] 2021-11-26 18:46:39.370 org.apache.dolphinscheduler.common.utils.JSONUtils:[129] - parse object exception!
com.fasterxml.jackson.databind.JsonMappingException: java.lang.Long cannot be cast to java.util.ArrayList
at [Source: (String)"{"dependTaskList":[{"condition":"${today}==20211201","nextNode":3645680828192},{"condition":"${today}==20211101","nextNode":3645675551008}],"nextNode":3645675551008}"; line: 1, column: 65] (through reference chain: org.apache.dolphinscheduler.common.task.switchtask.SwitchParameters["dependTaskList"]->java.util.ArrayList[0]->org.apache.dolphinscheduler.common.task.switchtask.SwitchResultVo["nextNode"])
at com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:281)
at com.fasterxml.jackson.databind.deser.SettableBeanProperty._throwAsIOE(SettableBeanProperty.java:611)
at com.fasterxml.jackson.databind.deser.SettableBeanProperty._throwAsIOE(SettableBeanProperty.java:599)
at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:143)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:288)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151)
at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:286)
at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:245)
at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:27)
at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:129)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:288)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4218)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3214)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3182)
at org.apache.dolphinscheduler.common.utils.JSONUtils.parseObject(JSONUtils.java:127)
at org.apache.dolphinscheduler.dao.entity.TaskInstance.getSwitchDependency(TaskInstance.java:473)
at org.apache.dolphinscheduler.server.master.runner.task.SwitchTaskProcessor.setSwitchResult(SwitchTaskProcessor.java:138)
at org.apache.dolphinscheduler.server.master.runner.task.SwitchTaskProcessor.run(SwitchTaskProcessor.java:88)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitTaskExec(WorkflowExecuteThread.java:616)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitStandByTask(WorkflowExecuteThread.java:1244)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitPostNode(WorkflowExecuteThread.java:858)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.taskFinished(WorkflowExecuteThread.java:391)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.taskStateChangeHandler(WorkflowExecuteThread.java:348)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.stateEventHandler(WorkflowExecuteThread.java:297)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.handleEvents(WorkflowExecuteThread.java:245)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.run(WorkflowExecuteThread.java:225)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassCastException: java.lang.Long cannot be cast to java.util.ArrayList
at org.apache.dolphinscheduler.common.task.switchtask.SwitchResultVo.setNextNode(SwitchResultVo.java:46)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.fasterxml.jackson.databind.deser.impl.MethodProperty.deserializeAndSet(MethodProperty.java:141)
... 30 common frames omitted
[ERROR] 2021-11-26 18:46:39.370 org.apache.dolphinscheduler.server.master.runner.task.SwitchTaskProcessor:[92] - update work flow 78 switch task 380 state error:
java.lang.NullPointerException: null
at org.apache.dolphinscheduler.server.master.runner.task.SwitchTaskProcessor.setSwitchResult(SwitchTaskProcessor.java:139)
at org.apache.dolphinscheduler.server.master.runner.task.SwitchTaskProcessor.run(SwitchTaskProcessor.java:88)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitTaskExec(WorkflowExecuteThread.java:616)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitStandByTask(WorkflowExecuteThread.java:1244)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitPostNode(WorkflowExecuteThread.java:858)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.taskFinished(WorkflowExecuteThread.java:391)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.taskStateChangeHandler(WorkflowExecuteThread.java:348)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.stateEventHandler(WorkflowExecuteThread.java:297)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.handleEvents(WorkflowExecuteThread.java:245)
at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.run(WorkflowExecuteThread.java:225)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
```
It seems the task params are mismatched between the UI and the server, which causes the conversion failure.
### What you expected to happen
The switch task should run normally.
### How to reproduce
create a switch task and run it.
### Anything else
_No response_
### Version
2.0.0
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/7019 | https://github.com/apache/dolphinscheduler/pull/7046 | 8e93939121043b828f73ea2898c6102aa858524b | a0955a078a2ca83f3736caf5f05a325545168114 | "2021-11-26T13:44:05Z" | java | "2021-11-30T01:59:48Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/runner/task/SwitchTaskProcessor.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.master.runner.task;
import org.apache.dolphinscheduler.common.enums.DependResult;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.enums.TaskType;
import org.apache.dolphinscheduler.common.process.Property;
import org.apache.dolphinscheduler.common.task.switchtask.SwitchParameters;
import org.apache.dolphinscheduler.common.task.switchtask.SwitchResultVo;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.common.utils.NetUtils;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.dao.entity.TaskDefinition;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.server.master.config.MasterConfig;
import org.apache.dolphinscheduler.server.utils.LogUtils;
import org.apache.dolphinscheduler.server.utils.SwitchTaskUtils;
import org.apache.dolphinscheduler.service.bean.SpringApplicationContext;
import org.apache.commons.lang.StringUtils;
import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Collectors;
public class SwitchTaskProcessor extends BaseTaskProcessor {
protected final String rgex = "['\"]*\\$\\{(.*?)\\}['\"]*";
private TaskInstance taskInstance;
private ProcessInstance processInstance;
TaskDefinition taskDefinition;
MasterConfig masterConfig = SpringApplicationContext.getBean(MasterConfig.class);
/**
* switch result
*/
private DependResult conditionResult;
@Override
public boolean submit(TaskInstance taskInstance, ProcessInstance processInstance, int masterTaskCommitRetryTimes, int masterTaskCommitInterval) {
this.processInstance = processInstance;
this.taskInstance = processService.submitTaskWithRetry(processInstance, taskInstance, masterTaskCommitRetryTimes, masterTaskCommitInterval);
if (this.taskInstance == null) {
return false;
}
taskDefinition = processService.findTaskDefinition(
taskInstance.getTaskCode(), taskInstance.getTaskDefinitionVersion()
);
taskInstance.setLogPath(LogUtils.getTaskLogPath(taskInstance.getFirstSubmitTime(),processInstance.getProcessDefinitionCode(),
processInstance.getProcessDefinitionVersion(),
taskInstance.getProcessInstanceId(),
taskInstance.getId()));
taskInstance.setHost(NetUtils.getAddr(masterConfig.getListenPort()));
taskInstance.setState(ExecutionStatus.RUNNING_EXECUTION);
taskInstance.setStartTime(new Date());
processService.updateTaskInstance(taskInstance);
return true;
}
@Override
public void run() {
try {
if (!this.taskState().typeIsFinished() && setSwitchResult()) {
endTaskState();
}
} catch (Exception e) {
logger.error("update work flow {} switch task {} state error:",
this.processInstance.getId(),
this.taskInstance.getId(),
e);
}
}
@Override
protected boolean pauseTask() {
this.taskInstance.setState(ExecutionStatus.PAUSE);
this.taskInstance.setEndTime(new Date());
processService.saveTaskInstance(taskInstance);
return true;
}
@Override
protected boolean killTask() {
this.taskInstance.setState(ExecutionStatus.KILL);
this.taskInstance.setEndTime(new Date());
processService.saveTaskInstance(taskInstance);
return true;
}
@Override
protected boolean taskTimeout() {
return true;
}
@Override
public String getType() {
return TaskType.SWITCH.getDesc();
}
@Override
public ExecutionStatus taskState() {
return this.taskInstance.getState();
}
private boolean setSwitchResult() {
List<TaskInstance> taskInstances = processService.findValidTaskListByProcessId(
taskInstance.getProcessInstanceId()
);
Map<String, ExecutionStatus> completeTaskList = new HashMap<>();
for (TaskInstance task : taskInstances) {
completeTaskList.putIfAbsent(task.getName(), task.getState());
}
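// note: the NPE in this issue's stack trace points here; getSwitchDependency() likely
// returned null because the submitted task params failed to deserialize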
SwitchParameters switchParameters = taskInstance.getSwitchDependency();
List<SwitchResultVo> switchResultVos = switchParameters.getDependTaskList();
SwitchResultVo switchResultVo = new SwitchResultVo();
switchResultVo.setNextNode(switchParameters.getNextNode());
switchResultVos.add(switchResultVo);
int finalConditionLocation = switchResultVos.size() - 1;
int i = 0;
conditionResult = DependResult.SUCCESS;
for (SwitchResultVo info : switchResultVos) {
logger.info("the {} execution ", (i + 1));
logger.info("original condition sentence:{}", info.getCondition());
if (StringUtils.isEmpty(info.getCondition())) {
finalConditionLocation = i;
break;
}
String content = setTaskParams(info.getCondition().replaceAll("'", "\""), rgex);
logger.info("format condition sentence::{}", content);
Boolean result = null;
try {
result = SwitchTaskUtils.evaluate(content);
} catch (Exception e) {
logger.info("error sentence : {}", content);
conditionResult = DependResult.FAILED;
break;
}
logger.info("condition result : {}", result);
if (result) {
finalConditionLocation = i;
break;
}
i++;
}
switchParameters.setDependTaskList(switchResultVos);
switchParameters.setResultConditionLocation(finalConditionLocation);
taskInstance.setSwitchDependency(switchParameters);
logger.info("the switch task depend result : {}", conditionResult);
return true;
}
/**
* update task state
*/
private void endTaskState() {
ExecutionStatus status = (conditionResult == DependResult.SUCCESS) ? ExecutionStatus.SUCCESS : ExecutionStatus.FAILURE;
taskInstance.setEndTime(new Date());
taskInstance.setState(status);
processService.updateTaskInstance(taskInstance);
}
public String setTaskParams(String content, String rgex) {
Pattern pattern = Pattern.compile(rgex);
Matcher m = pattern.matcher(content);
Map<String, Property> globalParams = JSONUtils
.toList(processInstance.getGlobalParams(), Property.class)
.stream()
.collect(Collectors.toMap(Property::getProp, Property -> Property));
Map<String, Property> varParams = JSONUtils
.toList(taskInstance.getVarPool(), Property.class)
.stream()
.collect(Collectors.toMap(Property::getProp, Property -> Property));
if (varParams.size() > 0) {
varParams.putAll(globalParams);
globalParams = varParams;
}
while (m.find()) {
String paramName = m.group(1);
Property property = globalParams.get(paramName);
if (property == null) {
return "";
}
String value = property.getValue();
if (!org.apache.commons.lang.math.NumberUtils.isNumber(value)) {
value = "\"" + value + "\"";
}
logger.info("paramName:{},paramValue{}", paramName, value);
content = content.replace("${" + paramName + "}", value);
}
return content;
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,978 | [Bug] [UI] Token delete request 404 | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Token delete request 404.
```
xhr.js?b50d:178 DELETE http://localhost:8888/dolphinscheduler/access-token/3?id=3 404 (Not Found)
```
### What you expected to happen
The token delete request succeeds.
### How to reproduce
Delete token
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6978 | https://github.com/apache/dolphinscheduler/pull/6979 | a0955a078a2ca83f3736caf5f05a325545168114 | 631a31b6ac6068100f312d7194a6939bc7801bf8 | "2021-11-24T07:40:46Z" | java | "2021-11-30T02:00:22Z" | dolphinscheduler-ui/src/js/conf/home/store/user/actions.js | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import io from '@/module/io'
export default {
/**
* get userInfo
*/
getUserInfo ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get('users/get-user-info', payload, res => {
state.userInfo = res.data
resolve(res.data)
}).catch(e => {
reject(e)
})
})
},
/**
* sign out
*/
signOut () {
io.post('signOut', res => {
setTimeout(() => {
window.location.href = `${PUBLIC_PATH}/view/login/index.html`
}, 100)
}).catch(e => {
console.log(e)
})
},
/**
* get token list
* User loginUser,
* Integer pageNo,
* String searchVal,
* Integer pageSize
*/
getTokenListP ({ state }, payload) {
return new Promise((resolve, reject) => {
io.get('access-tokens', payload, res => {
resolve(res.data)
}).catch(e => {
reject(e)
})
})
},
/**
* create token
* User loginUser,
* int userId,
* String expireTime,
* String token
*/
createToken ({ state }, payload) {
return new Promise((resolve, reject) => {
io.post('access-tokens', payload, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
* update token
* User loginUser,
* int userId,
* String expireTime,
* String token
*/
updateToken ({ state }, payload) {
return new Promise((resolve, reject) => {
io.put(`access-tokens/${payload.id}`, payload, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
},
/**
* create token
* User loginUser,
* int userId,
* String expireTime
*/
generateToken ({ state }, payload) {
return new Promise((resolve, reject) => {
io.post('access-tokens/generate', payload, res => {
resolve(res.data)
}).catch(e => {
reject(e)
})
})
},
/**
* delete token
* User loginUser,
* int id
*/
deleteToken ({ state }, payload) {
return new Promise((resolve, reject) => {
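// note: the delete path below is singular ('access-token') while every other token
// endpoint in this file uses the plural 'access-tokens', which matches the 404 above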
io.delete(`access-token/${payload.id}`, payload, res => {
resolve(res)
}).catch(e => {
reject(e)
})
})
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,955 | [Bug] [UI] View task status module on project homepage cannot read property 'label' of undefined | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
The view task status module on the project homepage throws "Cannot read property 'label' of undefined":
```
TypeError: Cannot read property 'label' of undefined
at eval (taskStatusCount.vue?cc37:86)
at arrayMap (lodash.js?2ef0:653)
at Function.map (lodash.js?2ef0:9622)
at VueComponent._handleTaskStatus (taskStatusCount.vue?cc37:86)
at eval (taskStatusCount.vue?cc37:113)
```
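A plausible cause (an inference from the enum and UI table recorded in this dump, not a confirmed diagnosis): the backend `ExecutionStatus` enum defines `SERIAL_WAIT` (code 14), but the UI's `stateType` table has no entry for it, so the lookup for its label returns `undefined`. Seen from the server side:

```
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;

public class SerialWaitLabelCheck {
    public static void main(String[] args) {
        // the backend can report this status...
        ExecutionStatus status = ExecutionStatus.of(14); // SERIAL_WAIT
        // ...but the UI stateType table defines no matching label for it
        System.out.println(status + " -> " + status.getDescp());
    }
}
```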
### What you expected to happen
The page is displayed normally
### How to reproduce
Open the project homepage; the exception is thrown while the task status count renders.
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6955 | https://github.com/apache/dolphinscheduler/pull/6956 | 631a31b6ac6068100f312d7194a6939bc7801bf8 | f0630bc9a05cf4e4407e040ad0bec36d486aa173 | "2021-11-22T05:04:52Z" | java | "2021-11-30T02:00:54Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/enums/ExecutionStatus.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.common.enums;
import java.util.HashMap;
import com.baomidou.mybatisplus.annotation.EnumValue;
/**
* running status for workflow and task nodes
*/
public enum ExecutionStatus {
/**
* status:
* 0 submit success
* 1 running
* 2 ready pause
* 3 pause
* 4 ready stop
* 5 stop
* 6 failure
* 7 success
* 8 need fault tolerance
* 9 kill
* 10 waiting thread
* 11 waiting depend node complete
* 12 delay execution
* 13 forced success
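* 14 serial wait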
*/
SUBMITTED_SUCCESS(0, "submit success"),
RUNNING_EXECUTION(1, "running"),
READY_PAUSE(2, "ready pause"),
PAUSE(3, "pause"),
READY_STOP(4, "ready stop"),
STOP(5, "stop"),
FAILURE(6, "failure"),
SUCCESS(7, "success"),
NEED_FAULT_TOLERANCE(8, "need fault tolerance"),
KILL(9, "kill"),
WAITING_THREAD(10, "waiting thread"),
WAITING_DEPEND(11, "waiting depend node complete"),
DELAY_EXECUTION(12, "delay execution"),
FORCED_SUCCESS(13, "forced success"),
SERIAL_WAIT(14, "serial wait");
ExecutionStatus(int code, String descp) {
this.code = code;
this.descp = descp;
}
@EnumValue
private final int code;
private final String descp;
private static HashMap<Integer, ExecutionStatus> EXECUTION_STATUS_MAP = new HashMap<>();
static {
for (ExecutionStatus executionStatus : ExecutionStatus.values()) {
EXECUTION_STATUS_MAP.put(executionStatus.code, executionStatus);
}
}
/**
* status is success
*
* @return status
*/
public boolean typeIsSuccess() {
return this == SUCCESS || this == FORCED_SUCCESS;
}
/**
* status is failure
*
* @return status
*/
public boolean typeIsFailure() {
return this == FAILURE || this == NEED_FAULT_TOLERANCE;
}
/**
* status is finished
*
* @return status
*/
public boolean typeIsFinished() {
return typeIsSuccess() || typeIsFailure() || typeIsCancel() || typeIsPause()
|| typeIsStop();
}
/**
* status is waiting thread
*
* @return status
*/
public boolean typeIsWaitingThread() {
return this == WAITING_THREAD;
}
/**
* status is pause
*
* @return status
*/
public boolean typeIsPause() {
return this == PAUSE;
}
/**
* status is pause
*
* @return status
*/
public boolean typeIsStop() {
return this == STOP;
}
/**
* status is running
*
* @return status
*/
public boolean typeIsRunning() {
return this == RUNNING_EXECUTION || this == WAITING_DEPEND || this == DELAY_EXECUTION;
}
/**
* status is cancel
*
* @return status
*/
public boolean typeIsCancel() {
return this == KILL || this == STOP;
}
public int getCode() {
return code;
}
public String getDescp() {
return descp;
}
public static ExecutionStatus of(int status) {
if (EXECUTION_STATUS_MAP.containsKey(status)) {
return EXECUTION_STATUS_MAP.get(status);
}
throw new IllegalArgumentException("invalid status : " + status);
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 6,955 | [Bug] [UI] View task status module on project homepage cannot read property 'label' of undefined | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
The view task status module on the project homepage throws "Cannot read property 'label' of undefined":
```
TypeError: Cannot read property 'label' of undefined
at eval (taskStatusCount.vue?cc37:86)
at arrayMap (lodash.js?2ef0:653)
at Function.map (lodash.js?2ef0:9622)
at VueComponent._handleTaskStatus (taskStatusCount.vue?cc37:86)
at eval (taskStatusCount.vue?cc37:113)
```
### What you expected to happen
The page is displayed normally
### How to reproduce
Open the project homepage; the exception is thrown while the task status count renders.
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/6955 | https://github.com/apache/dolphinscheduler/pull/6956 | 631a31b6ac6068100f312d7194a6939bc7801bf8 | f0630bc9a05cf4e4407e040ad0bec36d486aa173 | "2021-11-22T05:04:52Z" | java | "2021-11-30T02:00:54Z" | dolphinscheduler-ui/src/js/conf/home/pages/projects/pages/_source/conditions/instance/common.js | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import i18n from '@/module/i18n'
/**
* State code table
*/
const stateType = [
{
code: '',
label: `${i18n.$t('AllStatus')}`
}, {
code: 'SUBMITTED_SUCCESS',
label: `${i18n.$t('Submitted successfully')}`
}, {
code: 'RUNNING_EXECUTION',
label: `${i18n.$t('Running')}`
}, {
code: 'READY_PAUSE',
label: `${i18n.$t('Ready to pause')}`
}, {
code: 'PAUSE',
label: `${i18n.$t('Pause')}`
}, {
code: 'READY_STOP',
label: `${i18n.$t('Ready to stop')}`
}, {
code: 'STOP',
label: `${i18n.$t('Stop')}`
}, {
code: 'FAILURE',
label: `${i18n.$t('Failed')}`
}, {
code: 'SUCCESS',
label: `${i18n.$t('Success')}`
}, {
code: 'NEED_FAULT_TOLERANCE',
label: `${i18n.$t('Need fault tolerance')}`
}, {
code: 'KILL',
label: `${i18n.$t('Kill')}`
}, {
code: 'WAITING_THREAD',
label: `${i18n.$t('Waiting for thread')}`
}, {
code: 'WAITING_DEPEND',
label: `${i18n.$t('Waiting for dependency to complete')}`
}, {
code: 'DELAY_EXECUTION',
label: `${i18n.$t('Delay execution')}`
}, {
code: 'FORCED_SUCCESS',
label: `${i18n.$t('Forced success')}`
}
]
export {
stateType
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 7,004 | [Bug] [MasterServer] master will still work when it lose zk connection | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Now when the master loses its ZooKeeper connection, it will still fetch commands and handle process instances, because `ServerNodeManager.MASTER_SIZE` and the `slot` are not changed. So it may happen that two masters hold the same slot.
I found that `MasterRegistryClient` adds a connection state listener when it registers:
```
registryClient.addConnectionStateListener(newState -> {
    if (newState == ConnectionState.RECONNECTED || newState == ConnectionState.SUSPENDED) {
        registryClient.persistEphemeral(localNodePath, "");
    }
});
```
### What you expected to happen
When the master loses its ZooKeeper connection, it should stop processing commands until it reconnects.
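A minimal sketch of that expectation (one possible approach I'm assuming, not the merged fix): flip a flag from the same listener hook and have the command-fetch loop check it before handling anything:

```
import java.util.concurrent.atomic.AtomicBoolean;

import org.apache.dolphinscheduler.registry.api.ConnectionState;
import org.apache.dolphinscheduler.service.registry.RegistryClient;

// hypothetical sketch: pause command processing while the registry connection is lost
public class RegistryConnectionGuard {

    private final AtomicBoolean registryConnected = new AtomicBoolean(true);

    // reuse the same listener hook shown in the snippet above
    public void listen(RegistryClient registryClient) {
        registryClient.addConnectionStateListener(newState -> {
            if (newState == ConnectionState.SUSPENDED) {
                registryConnected.set(false); // connection lost: stop taking commands
            } else if (newState == ConnectionState.RECONNECTED) {
                registryConnected.set(true);  // safe to resume
            }
        });
    }

    // the master's command-fetch loop would check this before handling commands
    public boolean canSchedule() {
        return registryConnected.get();
    }
}
```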
### How to reproduce
none
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/7004 | https://github.com/apache/dolphinscheduler/pull/7045 | 6e5539478392d6c1f334ae53d01574b5e54e769c | be3bd4c83d30773ce6204bca08d3cd1e6444dea9 | "2021-11-26T02:54:02Z" | java | "2021-11-30T03:00:17Z" | dolphinscheduler-server/src/main/java/org/apache/dolphinscheduler/server/master/registry/MasterRegistryClient.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.master.registry;
import static org.apache.dolphinscheduler.common.Constants.REGISTRY_DOLPHINSCHEDULER_MASTERS;
import static org.apache.dolphinscheduler.common.Constants.REGISTRY_DOLPHINSCHEDULER_NODE;
import static org.apache.dolphinscheduler.common.Constants.SLEEP_TIME_MILLIS;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.IStoppable;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.enums.NodeType;
import org.apache.dolphinscheduler.common.enums.StateEvent;
import org.apache.dolphinscheduler.common.enums.StateEventType;
import org.apache.dolphinscheduler.common.model.Server;
import org.apache.dolphinscheduler.common.thread.ThreadUtils;
import org.apache.dolphinscheduler.common.utils.NetUtils;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.registry.api.ConnectionState;
import org.apache.dolphinscheduler.remote.utils.NamedThreadFactory;
import org.apache.dolphinscheduler.server.builder.TaskExecutionContextBuilder;
import org.apache.dolphinscheduler.server.master.cache.ProcessInstanceExecCacheManager;
import org.apache.dolphinscheduler.server.master.config.MasterConfig;
import org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread;
import org.apache.dolphinscheduler.server.registry.HeartBeatTask;
import org.apache.dolphinscheduler.server.utils.ProcessUtils;
import org.apache.dolphinscheduler.service.process.ProcessService;
import org.apache.dolphinscheduler.service.queue.entity.TaskExecutionContext;
import org.apache.dolphinscheduler.service.registry.RegistryClient;
import org.apache.commons.lang.StringUtils;
import java.util.Collections;
import java.util.Date;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import com.google.common.collect.Sets;
/**
* zookeeper master client
* <p>
* single instance
*/
@Component
public class MasterRegistryClient {
/**
* logger
*/
private static final Logger logger = LoggerFactory.getLogger(MasterRegistryClient.class);
/**
* process service
*/
@Autowired
private ProcessService processService;
@Autowired
private RegistryClient registryClient;
/**
* master config
*/
@Autowired
private MasterConfig masterConfig;
/**
* heartbeat executor
*/
private ScheduledExecutorService heartBeatExecutor;
@Autowired
private ProcessInstanceExecCacheManager processInstanceExecCacheManager;
/**
* master startup time, ms
*/
private long startupTime;
private String localNodePath;
public void init() {
this.startupTime = System.currentTimeMillis();
this.heartBeatExecutor = Executors.newSingleThreadScheduledExecutor(new NamedThreadFactory("HeartBeatExecutor"));
}
public void start() {
String nodeLock = Constants.REGISTRY_DOLPHINSCHEDULER_LOCK_FAILOVER_STARTUP_MASTERS;
try {
// create distributed lock with the root node path of the lock space as /dolphinscheduler/lock/failover/startup-masters
registryClient.getLock(nodeLock);
// master registry
registry();
String registryPath = getMasterPath();
registryClient.handleDeadServer(Collections.singleton(registryPath), NodeType.MASTER, Constants.DELETE_OP);
// init system node
while (!registryClient.checkNodeExists(NetUtils.getHost(), NodeType.MASTER)) {
ThreadUtils.sleep(SLEEP_TIME_MILLIS);
}
// self tolerant
if (registryClient.getActiveMasterNum() == 1) {
removeNodePath(null, NodeType.MASTER, true);
removeNodePath(null, NodeType.WORKER, true);
}
registryClient.subscribe(REGISTRY_DOLPHINSCHEDULER_NODE, new MasterRegistryDataListener());
} catch (Exception e) {
logger.error("master start up exception", e);
} finally {
registryClient.releaseLock(nodeLock);
}
}
public void setRegistryStoppable(IStoppable stoppable) {
registryClient.setStoppable(stoppable);
}
public void closeRegistry() {
// TODO unsubscribe MasterRegistryDataListener
deregister();
}
/**
* remove zookeeper node path
*
* @param path zookeeper node path
* @param nodeType zookeeper node type
* @param failover is failover
*/
public void removeNodePath(String path, NodeType nodeType, boolean failover) {
logger.info("{} node deleted : {}", nodeType, path);
String failoverPath = getFailoverLockPath(nodeType);
try {
registryClient.getLock(failoverPath);
String serverHost = null;
if (!StringUtils.isEmpty(path)) {
serverHost = registryClient.getHostByEventDataPath(path);
if (StringUtils.isEmpty(serverHost)) {
logger.error("server down error: unknown path: {}", path);
return;
}
// handle dead server
registryClient.handleDeadServer(Collections.singleton(path), nodeType, Constants.ADD_OP);
}
//failover server
if (failover) {
failoverServerWhenDown(serverHost, nodeType);
}
} catch (Exception e) {
logger.error("{} server failover failed.", nodeType);
logger.error("failover exception ", e);
} finally {
registryClient.releaseLock(failoverPath);
}
}
/**
* failover server when server down
*
* @param serverHost server host
* @param nodeType zookeeper node type
*/
private void failoverServerWhenDown(String serverHost, NodeType nodeType) {
switch (nodeType) {
case MASTER:
failoverMaster(serverHost);
break;
case WORKER:
failoverWorker(serverHost, true, true);
break;
default:
break;
}
}
/**
* get failover lock path
*
* @param nodeType zookeeper node type
* @return fail over lock path
*/
private String getFailoverLockPath(NodeType nodeType) {
switch (nodeType) {
case MASTER:
return Constants.REGISTRY_DOLPHINSCHEDULER_LOCK_FAILOVER_MASTERS;
case WORKER:
return Constants.REGISTRY_DOLPHINSCHEDULER_LOCK_FAILOVER_WORKERS;
default:
return "";
}
}
/**
* task needs failover if task start before worker starts
*
* @param taskInstance task instance
* @return true if task instance need fail over
*/
private boolean checkTaskInstanceNeedFailover(TaskInstance taskInstance) {
boolean taskNeedFailover = true;
//now no host will execute this task instance,so no need to failover the task
if (taskInstance.getHost() == null) {
return false;
}
// if the worker node exists in zookeeper, we must check the task starts after the worker
if (registryClient.checkNodeExists(taskInstance.getHost(), NodeType.WORKER)) {
//if task start after worker starts, there is no need to failover the task.
if (checkTaskAfterWorkerStart(taskInstance)) {
taskNeedFailover = false;
}
}
return taskNeedFailover;
}
/**
* check task start after the worker server starts.
*
* @param taskInstance task instance
* @return true if task instance start time after worker server start date
*/
private boolean checkTaskAfterWorkerStart(TaskInstance taskInstance) {
if (StringUtils.isEmpty(taskInstance.getHost())) {
return false;
}
Date workerServerStartDate = null;
List<Server> workerServers = registryClient.getServerList(NodeType.WORKER);
for (Server workerServer : workerServers) {
if (taskInstance.getHost().equals(workerServer.getHost() + Constants.COLON + workerServer.getPort())) {
workerServerStartDate = workerServer.getCreateTime();
break;
}
}
if (workerServerStartDate != null) {
return taskInstance.getStartTime().after(workerServerStartDate);
}
return false;
}
/**
* failover worker tasks
* <p>
* 1. kill yarn job if there are yarn jobs in tasks.
* 2. change task state from running to need failover.
* 3. failover all tasks when workerHost is null
*
* @param workerHost worker host
* @param needCheckWorkerAlive need check worker alive
* @param checkOwner need check process instance owner
*/
private void failoverWorker(String workerHost, boolean needCheckWorkerAlive, boolean checkOwner) {
logger.info("start worker[{}] failover ...", workerHost);
List<TaskInstance> needFailoverTaskInstanceList = processService.queryNeedFailoverTaskInstances(workerHost);
for (TaskInstance taskInstance : needFailoverTaskInstanceList) {
if (needCheckWorkerAlive) {
if (!checkTaskInstanceNeedFailover(taskInstance)) {
continue;
}
}
ProcessInstance processInstance = processService.findProcessInstanceDetailById(taskInstance.getProcessInstanceId());
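// note: the condition below dereferences processInstance before the null check that
// only runs inside the if-block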
if (workerHost == null
|| !checkOwner
|| processInstance.getHost().equalsIgnoreCase(getLocalAddress())) {
// only failover the task owned myself if worker down.
if (processInstance == null) {
logger.error("failover error, the process {} of task {} do not exists.",
taskInstance.getProcessInstanceId(), taskInstance.getId());
continue;
}
taskInstance.setProcessInstance(processInstance);
TaskExecutionContext taskExecutionContext = TaskExecutionContextBuilder.get()
.buildTaskInstanceRelatedInfo(taskInstance)
.buildProcessInstanceRelatedInfo(processInstance)
.create();
// only kill yarn job if exists , the local thread has exited
ProcessUtils.killYarnJob(taskExecutionContext);
taskInstance.setState(ExecutionStatus.NEED_FAULT_TOLERANCE);
processService.saveTaskInstance(taskInstance);
if (!processInstanceExecCacheManager.contains(processInstance.getId())) {
continue;
}
WorkflowExecuteThread workflowExecuteThreadNotify = processInstanceExecCacheManager.getByProcessInstanceId(processInstance.getId());
StateEvent stateEvent = new StateEvent();
stateEvent.setTaskInstanceId(taskInstance.getId());
stateEvent.setType(StateEventType.TASK_STATE_CHANGE);
stateEvent.setProcessInstanceId(processInstance.getId());
stateEvent.setExecutionStatus(taskInstance.getState());
workflowExecuteThreadNotify.addStateEvent(stateEvent);
}
}
logger.info("end worker[{}] failover ...", workerHost);
}
/**
* failover master tasks
*
* @param masterHost master host
*/
private void failoverMaster(String masterHost) {
logger.info("start master failover ...");
List<ProcessInstance> needFailoverProcessInstanceList = processService.queryNeedFailoverProcessInstances(masterHost);
logger.info("failover process list size:{} ", needFailoverProcessInstanceList.size());
//updateProcessInstance host is null and insert into command
for (ProcessInstance processInstance : needFailoverProcessInstanceList) {
logger.info("failover process instance id: {} host:{}", processInstance.getId(), processInstance.getHost());
if (Constants.NULL.equals(processInstance.getHost())) {
continue;
}
processService.processNeedFailoverProcessInstances(processInstance);
}
failoverWorker(masterHost, true, false);
logger.info("master failover end");
}
/**
* registry
*/
public void registry() {
String address = NetUtils.getAddr(masterConfig.getListenPort());
localNodePath = getMasterPath();
int masterHeartbeatInterval = masterConfig.getHeartbeatInterval();
HeartBeatTask heartBeatTask = new HeartBeatTask(startupTime,
masterConfig.getMaxCpuLoadAvg(),
masterConfig.getReservedMemory(),
Sets.newHashSet(getMasterPath()),
Constants.MASTER_TYPE,
registryClient);
registryClient.persistEphemeral(localNodePath, heartBeatTask.getHeartBeatInfo());
registryClient.addConnectionStateListener(newState -> {
if (newState == ConnectionState.RECONNECTED || newState == ConnectionState.SUSPENDED) {
registryClient.persistEphemeral(localNodePath, "");
}
});
this.heartBeatExecutor.scheduleAtFixedRate(heartBeatTask, masterHeartbeatInterval, masterHeartbeatInterval, TimeUnit.SECONDS);
logger.info("master node : {} registry to ZK successfully with heartBeatInterval : {}s", address, masterHeartbeatInterval);
}
public void deregister() {
try {
String address = getLocalAddress();
String localNodePath = getMasterPath();
registryClient.remove(localNodePath);
logger.info("master node : {} unRegistry to register center.", address);
heartBeatExecutor.shutdown();
logger.info("heartbeat executor shutdown");
registryClient.close();
} catch (Exception e) {
logger.error("remove registry path exception ", e);
}
}
/**
* get master path
*/
public String getMasterPath() {
String address = getLocalAddress();
return REGISTRY_DOLPHINSCHEDULER_MASTERS + "/" + address;
}
/**
* get local address
*/
private String getLocalAddress() {
return NetUtils.getAddr(masterConfig.getListenPort());
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 7,004 | [Bug] [MasterServer] master will still work when it lose zk connection | ### Search before asking
- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.
### What happened
Now when the master loses its ZooKeeper connection, it will still fetch commands and handle process instances, because `ServerNodeManager.MASTER_SIZE` and the `slot` are not changed. So it may happen that two masters hold the same slot.
I found that `MasterRegistryClient` adds a connection state listener when it registers:
```
registryClient.addConnectionStateListener(newState -> {
    if (newState == ConnectionState.RECONNECTED || newState == ConnectionState.SUSPENDED) {
        registryClient.persistEphemeral(localNodePath, "");
    }
});
```
### What you expected to happen
When the master loses its ZooKeeper connection, it should stop processing commands until it reconnects.
### How to reproduce
none
### Anything else
_No response_
### Version
dev
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
| https://github.com/apache/dolphinscheduler/issues/7004 | https://github.com/apache/dolphinscheduler/pull/7045 | 6e5539478392d6c1f334ae53d01574b5e54e769c | be3bd4c83d30773ce6204bca08d3cd1e6444dea9 | "2021-11-26T02:54:02Z" | java | "2021-11-30T03:00:17Z" | dolphinscheduler-server/src/test/java/org/apache/dolphinscheduler/server/master/registry/MasterRegistryClientTest.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.server.master.registry;
import static org.mockito.BDDMockito.given;
import static org.mockito.Mockito.doNothing;
import org.apache.dolphinscheduler.common.enums.CommandType;
import org.apache.dolphinscheduler.common.enums.NodeType;
import org.apache.dolphinscheduler.common.model.Server;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.server.master.cache.impl.ProcessInstanceExecCacheManagerImpl;
import org.apache.dolphinscheduler.server.master.config.MasterConfig;
import org.apache.dolphinscheduler.service.process.ProcessService;
import org.apache.dolphinscheduler.service.registry.RegistryClient;
import java.util.Arrays;
import java.util.Date;
import java.util.concurrent.ScheduledExecutorService;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.powermock.core.classloader.annotations.PowerMockIgnore;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;
import org.springframework.test.util.ReflectionTestUtils;
/**
* MasterRegistryClientTest
*/
@RunWith(PowerMockRunner.class)
@PrepareForTest({ RegistryClient.class })
@PowerMockIgnore({"javax.management.*"})
public class MasterRegistryClientTest {
@InjectMocks
private MasterRegistryClient masterRegistryClient;
@Mock
private MasterConfig masterConfig;
@Mock
private RegistryClient registryClient;
@Mock
private ScheduledExecutorService heartBeatExecutor;
@Mock
private ProcessService processService;
@Mock
private ProcessInstanceExecCacheManagerImpl processInstanceExecCacheManager;
@Before
public void before() throws Exception {
given(registryClient.getLock(Mockito.anyString())).willReturn(true);
given(registryClient.releaseLock(Mockito.anyString())).willReturn(true);
given(registryClient.getHostByEventDataPath(Mockito.anyString())).willReturn("127.0.0.1:8080");
doNothing().when(registryClient).handleDeadServer(Mockito.anySet(), Mockito.any(NodeType.class), Mockito.anyString());
ReflectionTestUtils.setField(masterRegistryClient, "registryClient", registryClient);
ProcessInstance processInstance = new ProcessInstance();
processInstance.setId(1);
processInstance.setHost("127.0.0.1:8080");
processInstance.setHistoryCmd("xxx");
processInstance.setCommandType(CommandType.STOP);
given(processService.queryNeedFailoverProcessInstances(Mockito.anyString())).willReturn(Arrays.asList(processInstance));
doNothing().when(processService).processNeedFailoverProcessInstances(Mockito.any(ProcessInstance.class));
TaskInstance taskInstance = new TaskInstance();
taskInstance.setId(1);
taskInstance.setStartTime(new Date());
taskInstance.setHost("127.0.0.1:8080");
given(processService.queryNeedFailoverTaskInstances(Mockito.anyString())).willReturn(Arrays.asList(taskInstance));
given(processService.findProcessInstanceDetailById(Mockito.anyInt())).willReturn(processInstance);
given(registryClient.checkNodeExists(Mockito.anyString(), Mockito.any())).willReturn(true);
Server server = new Server();
server.setHost("127.0.0.1");
server.setPort(8080);
server.setCreateTime(new Date());
given(registryClient.getServerList(NodeType.WORKER)).willReturn(Arrays.asList(server));
}
@Test
public void registryTest() {
masterRegistryClient.registry();
}
@Test
public void removeNodePathTest() {
masterRegistryClient.removeNodePath("/path", NodeType.MASTER, false);
masterRegistryClient.removeNodePath("/path", NodeType.MASTER, true);
//Cannot mock static methods
masterRegistryClient.removeNodePath("/path", NodeType.WORKER, true);
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,229 | [Feature][server port] server port custom config | **Describe the feature**
All applications should support custom configuration of their server port.
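A minimal sketch of what this could look like for the alert server below (the property name is my assumption; the default mirrors the current hard-coded `ALERT_RPC_PORT`):

```
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class AlertServerConfig {
    // hypothetical property; defaults to the current constant value 50052
    @Value("${alert.listen.port:50052}")
    private int listenPort;

    public int getListenPort() {
        return listenPort;
    }
}
```

With this in place, `startServer()` would call `serverConfig.setListenPort(alertServerConfig.getListenPort())` instead of using the constant.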
| https://github.com/apache/dolphinscheduler/issues/5229 | https://github.com/apache/dolphinscheduler/pull/6947 | 100a9ba3fc21764158593bd33e2b261b0ef39982 | 0dcff1425a28da0ce0004fc3e594b97d080c5fd9 | "2021-04-07T09:22:45Z" | java | "2021-11-30T16:21:03Z" | dolphinscheduler-alert/dolphinscheduler-alert-server/src/main/java/org/apache/dolphinscheduler/alert/AlertServer.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.alert;
import static org.apache.dolphinscheduler.common.Constants.ALERT_RPC_PORT;
import org.apache.dolphinscheduler.common.thread.Stopper;
import org.apache.dolphinscheduler.dao.AlertDao;
import org.apache.dolphinscheduler.dao.PluginDao;
import org.apache.dolphinscheduler.dao.entity.Alert;
import org.apache.dolphinscheduler.remote.NettyRemotingServer;
import org.apache.dolphinscheduler.remote.command.CommandType;
import org.apache.dolphinscheduler.remote.config.NettyServerConfig;
import java.io.Closeable;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.ComponentScan;
@EnableAutoConfiguration
@ComponentScan(value = {
"org.apache.dolphinscheduler.alert",
"org.apache.dolphinscheduler.dao"
})
public class AlertServer implements Closeable {
private static final Logger log = LoggerFactory.getLogger(AlertServer.class);
private final PluginDao pluginDao;
private final AlertDao alertDao;
private final AlertPluginManager alertPluginManager;
private final AlertSender alertSender;
private final AlertRequestProcessor alertRequestProcessor;
private NettyRemotingServer server;
public AlertServer(PluginDao pluginDao, AlertDao alertDao, AlertPluginManager alertPluginManager, AlertSender alertSender, AlertRequestProcessor alertRequestProcessor) {
this.pluginDao = pluginDao;
this.alertDao = alertDao;
this.alertPluginManager = alertPluginManager;
this.alertSender = alertSender;
this.alertRequestProcessor = alertRequestProcessor;
}
public static void main(String[] args) {
SpringApplication.run(AlertServer.class, args);
}
@PostConstruct
public void start() {
log.info("Starting Alert server");
checkTable();
startServer();
if (alertPluginManager.size() == 0) {
log.warn("No alert plugin, alert sender will exit.");
return;
}
Executors.newScheduledThreadPool(1)
.scheduleAtFixedRate(new Sender(), 5, 5, TimeUnit.SECONDS);
}
@Override
@PreDestroy
public void close() {
server.close();
}
private void checkTable() {
if (!pluginDao.checkPluginDefineTableExist()) {
log.error("Plugin Define Table t_ds_plugin_define Not Exist . Please Create it First !");
System.exit(1);
}
}
private void startServer() {
NettyServerConfig serverConfig = new NettyServerConfig();
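// hard-coded listen port; this feature request asks to make it configurable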
serverConfig.setListenPort(ALERT_RPC_PORT);
server = new NettyRemotingServer(serverConfig);
server.registerProcessor(CommandType.ALERT_SEND_REQUEST, alertRequestProcessor);
server.start();
}
final class Sender implements Runnable {
@Override
public void run() {
if (!Stopper.isRunning()) {
return;
}
try {
final List<Alert> alerts = alertDao.listPendingAlerts();
alertSender.send(alerts);
} catch (Exception e) {
log.error("Failed to send alert", e);
}
}
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,229 | [Feature][server port] server port custom config | **Describe the feature**
All applications should support custom configuration of their server port.
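The same idea applies to the API server's `LoggerServiceImpl` in this record, which passes the hard-coded `Constants.RPC_PORT` to `logClient.rollViewLog(...)` and `logClient.getLogBytes(...)`; a minimal sketch (the property name is my assumption):

```
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class LogServerConfig {
    // hypothetical property; the default mirrors the current Constants.RPC_PORT (50051)
    @Value("${log.server.port:50051}")
    private int port;

    public int getPort() {
        return port;
    }
}
```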
| https://github.com/apache/dolphinscheduler/issues/5229 | https://github.com/apache/dolphinscheduler/pull/6947 | 100a9ba3fc21764158593bd33e2b261b0ef39982 | 0dcff1425a28da0ce0004fc3e594b97d080c5fd9 | "2021-04-07T09:22:45Z" | java | "2021-11-30T16:21:03Z" | dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/LoggerServiceImpl.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.api.service.impl;
import org.apache.dolphinscheduler.api.enums.Status;
import org.apache.dolphinscheduler.api.exceptions.ServiceException;
import org.apache.dolphinscheduler.api.service.LoggerService;
import org.apache.dolphinscheduler.api.utils.Result;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.remote.utils.Host;
import org.apache.dolphinscheduler.service.log.LogClientService;
import org.apache.dolphinscheduler.service.process.ProcessService;
import org.apache.commons.lang.StringUtils;
import java.nio.charset.StandardCharsets;
import java.util.Objects;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import com.google.common.primitives.Bytes;
/**
* logger service impl
*/
@Service
public class LoggerServiceImpl implements LoggerService {
private static final Logger logger = LoggerFactory.getLogger(LoggerServiceImpl.class);
private static final String LOG_HEAD_FORMAT = "[LOG-PATH]: %s, [HOST]: %s%s";
@Autowired
private ProcessService processService;
private LogClientService logClient;
@PostConstruct
public void init() {
if (Objects.isNull(this.logClient)) {
this.logClient = new LogClientService();
}
}
@PreDestroy
public void close() {
if (Objects.nonNull(this.logClient) && this.logClient.isRunning()) {
logClient.close();
}
}
/**
* view log
*
* @param taskInstId task instance id
* @param skipLineNum skip line number
* @param limit limit
* @return log string data
*/
@Override
@SuppressWarnings("unchecked")
public Result<String> queryLog(int taskInstId, int skipLineNum, int limit) {
TaskInstance taskInstance = processService.findTaskInstanceById(taskInstId);
if (taskInstance == null || StringUtils.isBlank(taskInstance.getHost())) {
return Result.error(Status.TASK_INSTANCE_NOT_FOUND);
}
String host = getHost(taskInstance.getHost());
Result<String> result = new Result<>(Status.SUCCESS.getCode(), Status.SUCCESS.getMsg());
logger.info("log host : {} , logPath : {} , logServer port : {}", host, taskInstance.getLogPath(),
Constants.RPC_PORT);
StringBuilder log = new StringBuilder();
if (skipLineNum == 0) {
String head = String.format(LOG_HEAD_FORMAT,
taskInstance.getLogPath(),
host,
Constants.SYSTEM_LINE_SEPARATOR);
log.append(head);
}
log.append(logClient
.rollViewLog(host, Constants.RPC_PORT, taskInstance.getLogPath(), skipLineNum, limit));
result.setData(log.toString());
return result;
}
/**
* get log size
*
* @param taskInstId task instance id
* @return log byte array
*/
@Override
public byte[] getLogBytes(int taskInstId) {
TaskInstance taskInstance = processService.findTaskInstanceById(taskInstId);
if (taskInstance == null || StringUtils.isBlank(taskInstance.getHost())) {
throw new ServiceException("task instance is null or host is null");
}
String host = getHost(taskInstance.getHost());
byte[] head = String.format(LOG_HEAD_FORMAT,
taskInstance.getLogPath(),
host,
Constants.SYSTEM_LINE_SEPARATOR).getBytes(StandardCharsets.UTF_8);
return Bytes.concat(head,
logClient.getLogBytes(host, Constants.RPC_PORT, taskInstance.getLogPath()));
}
/**
* get host
*
* @param address address
* @return old version return true ,otherwise return false
*/
private String getHost(String address) {
if (Boolean.TRUE.equals(Host.isOldVersion(address))) {
return address;
}
return Host.of(address).getIp();
}
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,229 | [Feature][server port] server port custom config | **Describe the feature**
All applications should support custom configuration of their server port.
| https://github.com/apache/dolphinscheduler/issues/5229 | https://github.com/apache/dolphinscheduler/pull/6947 | 100a9ba3fc21764158593bd33e2b261b0ef39982 | 0dcff1425a28da0ce0004fc3e594b97d080c5fd9 | "2021-04-07T09:22:45Z" | java | "2021-11-30T16:21:03Z" | dolphinscheduler-common/src/main/java/org/apache/dolphinscheduler/common/Constants.java | /*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.dolphinscheduler.common;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.commons.lang.StringUtils;
import org.apache.commons.lang.SystemUtils;
import java.util.regex.Pattern;
/**
* Constants
*/
public final class Constants {
private Constants() {
throw new UnsupportedOperationException("Construct Constants");
}
/**
* common properties path
*/
public static final String COMMON_PROPERTIES_PATH = "/common.properties";
/**
* alter properties
*/
public static final int ALERT_RPC_PORT = 50052;
/**
* registry properties
*/
public static final String REGISTRY_DOLPHINSCHEDULER_MASTERS = "/nodes/master";
public static final String REGISTRY_DOLPHINSCHEDULER_WORKERS = "/nodes/worker";
public static final String REGISTRY_DOLPHINSCHEDULER_DEAD_SERVERS = "/dead-servers";
public static final String REGISTRY_DOLPHINSCHEDULER_NODE = "/nodes";
public static final String REGISTRY_DOLPHINSCHEDULER_LOCK_MASTERS = "/lock/masters";
public static final String REGISTRY_DOLPHINSCHEDULER_LOCK_FAILOVER_MASTERS = "/lock/failover/masters";
public static final String REGISTRY_DOLPHINSCHEDULER_LOCK_FAILOVER_WORKERS = "/lock/failover/workers";
public static final String REGISTRY_DOLPHINSCHEDULER_LOCK_FAILOVER_STARTUP_MASTERS = "/lock/failover/startup-masters";
/**
* fs.defaultFS
*/
public static final String FS_DEFAULTFS = "fs.defaultFS";
/**
* fs s3a endpoint
*/
public static final String FS_S3A_ENDPOINT = "fs.s3a.endpoint";
/**
* fs s3a access key
*/
public static final String FS_S3A_ACCESS_KEY = "fs.s3a.access.key";
/**
* fs s3a secret key
*/
public static final String FS_S3A_SECRET_KEY = "fs.s3a.secret.key";
/**
* hadoop configuration
*/
public static final String HADOOP_RM_STATE_ACTIVE = "ACTIVE";
public static final String HADOOP_RESOURCE_MANAGER_HTTPADDRESS_PORT = "resource.manager.httpaddress.port";
/**
* yarn.resourcemanager.ha.rm.ids
*/
public static final String YARN_RESOURCEMANAGER_HA_RM_IDS = "yarn.resourcemanager.ha.rm.ids";
/**
* yarn.application.status.address
*/
public static final String YARN_APPLICATION_STATUS_ADDRESS = "yarn.application.status.address";
/**
* yarn.job.history.status.address
*/
public static final String YARN_JOB_HISTORY_STATUS_ADDRESS = "yarn.job.history.status.address";
/**
* hdfs configuration
* hdfs.root.user
*/
public static final String HDFS_ROOT_USER = "hdfs.root.user";
/**
* hdfs/s3 configuration
* resource.upload.path
*/
public static final String RESOURCE_UPLOAD_PATH = "resource.upload.path";
/**
* data basedir path
*/
public static final String DATA_BASEDIR_PATH = "data.basedir.path";
/**
* dolphinscheduler.env.path
*/
public static final String DOLPHINSCHEDULER_ENV_PATH = "dolphinscheduler.env.path";
/**
* environment properties default path
*/
public static final String ENV_PATH = "env/dolphinscheduler_env.sh";
/**
* resource.view.suffixs
*/
public static final String RESOURCE_VIEW_SUFFIXS = "resource.view.suffixs";
public static final String RESOURCE_VIEW_SUFFIXS_DEFAULT_VALUE = "txt,log,sh,bat,conf,cfg,py,java,sql,xml,hql,properties,json,yml,yaml,ini,js";
/**
* development.state
*/
public static final String DEVELOPMENT_STATE = "development.state";
/**
* sudo enable
*/
public static final String SUDO_ENABLE = "sudo.enable";
/**
* string true
*/
public static final String STRING_TRUE = "true";
/**
* resource storage type
*/
public static final String RESOURCE_STORAGE_TYPE = "resource.storage.type";
/**
* comma ,
*/
public static final String COMMA = ",";
/**
* COLON :
*/
public static final String COLON = ":";
/**
* SINGLE_SLASH /
*/
public static final String SINGLE_SLASH = "/";
/**
* DOUBLE_SLASH //
*/
public static final String DOUBLE_SLASH = "//";
/**
* EQUAL SIGN
*/
public static final String EQUAL_SIGN = "=";
/**
* date format of yyyy-MM-dd HH:mm:ss
*/
public static final String YYYY_MM_DD_HH_MM_SS = "yyyy-MM-dd HH:mm:ss";
/**
* date format of yyyyMMdd
*/
public static final String YYYYMMDD = "yyyyMMdd";
/**
* date format of yyyyMMddHHmmss
*/
public static final String YYYYMMDDHHMMSS = "yyyyMMddHHmmss";
/**
* date format of yyyyMMddHHmmssSSS
*/
public static final String YYYYMMDDHHMMSSSSS = "yyyyMMddHHmmssSSS";
/**
* http connect time out
*/
public static final int HTTP_CONNECT_TIMEOUT = 60 * 1000;
/**
* http connect request time out
*/
public static final int HTTP_CONNECTION_REQUEST_TIMEOUT = 60 * 1000;
/**
* httpclient soceket time out
*/
public static final int SOCKET_TIMEOUT = 60 * 1000;
/**
* http header
*/
public static final String HTTP_HEADER_UNKNOWN = "unKnown";
/**
* http X-Forwarded-For
*/
public static final String HTTP_X_FORWARDED_FOR = "X-Forwarded-For";
/**
* http X-Real-IP
*/
public static final String HTTP_X_REAL_IP = "X-Real-IP";
/**
* UTF-8
*/
public static final String UTF_8 = "UTF-8";
/**
* user name regex
*/
public static final Pattern REGEX_USER_NAME = Pattern.compile("^[a-zA-Z0-9._-]{3,39}$");
/**
* email regex
*/
public static final Pattern REGEX_MAIL_NAME = Pattern.compile("^([a-z0-9A-Z]+[_|\\-|\\.]?)+[a-z0-9A-Z]@([a-z0-9A-Z]+(-[a-z0-9A-Z]+)?\\.)+[a-zA-Z]{2,}$");
/**
* read permission
*/
public static final int READ_PERMISSION = 2;
/**
* write permission
*/
public static final int WRITE_PERMISSION = 2 * 2;
/**
* execute permission
*/
public static final int EXECUTE_PERMISSION = 1;
/**
* default admin permission
*/
public static final int DEFAULT_ADMIN_PERMISSION = 7;
/**
* all permissions
*/
public static final int ALL_PERMISSIONS = READ_PERMISSION | WRITE_PERMISSION | EXECUTE_PERMISSION;
/**
* max task timeout
*/
public static final int MAX_TASK_TIMEOUT = 24 * 3600;
/**
* worker host weight
*/
public static final int DEFAULT_WORKER_HOST_WEIGHT = 100;
/**
* time unit secong to minutes
*/
public static final int SEC_2_MINUTES_TIME_UNIT = 60;
/***
*
* rpc port
*/
public static final int RPC_PORT = 50051;
/**
* forbid running task
*/
public static final String FLOWNODE_RUN_FLAG_FORBIDDEN = "FORBIDDEN";
/**
* normal running task
*/
public static final String FLOWNODE_RUN_FLAG_NORMAL = "NORMAL";
public static final String COMMON_TASK_TYPE = "common";
public static final String DEFAULT = "default";
public static final String PASSWORD = "password";
public static final String XXXXXX = "******";
public static final String NULL = "NULL";
public static final String THREAD_NAME_MASTER_SERVER = "Master-Server";
public static final String THREAD_NAME_WORKER_SERVER = "Worker-Server";
/**
* command parameter keys
*/
public static final String CMD_PARAM_RECOVER_PROCESS_ID_STRING = "ProcessInstanceId";
public static final String CMD_PARAM_RECOVERY_START_NODE_STRING = "StartNodeIdList";
public static final String CMD_PARAM_RECOVERY_WAITING_THREAD = "WaitingThreadInstanceId";
public static final String CMD_PARAM_SUB_PROCESS = "processInstanceId";
public static final String CMD_PARAM_EMPTY_SUB_PROCESS = "0";
public static final String CMD_PARAM_SUB_PROCESS_PARENT_INSTANCE_ID = "parentProcessInstanceId";
public static final String CMD_PARAM_SUB_PROCESS_DEFINE_CODE = "processDefinitionCode";
public static final String CMD_PARAM_START_NODES = "StartNodeList";
public static final String CMD_PARAM_START_PARAMS = "StartParams";
public static final String CMD_PARAM_FATHER_PARAMS = "fatherParams";
/**
* complement data start date
*/
public static final String CMDPARAM_COMPLEMENT_DATA_START_DATE = "complementStartDate";
/**
* complement data end date
*/
public static final String CMDPARAM_COMPLEMENT_DATA_END_DATE = "complementEndDate";
/**
 * default cron string for complement data (Quartz syntax: fires at 00:00:00 every day)
 */
public static final String DEFAULT_CRON_STRING = "0 0 0 * * ? *";
public static final int SLEEP_TIME_MILLIS = 1000;
/**
* one second mils
*/
public static final int SECOND_TIME_MILLIS = 1000;
/**
* master task instance cache-database refresh interval
*/
public static final int CACHE_REFRESH_TIME_MILLIS = 20 * 1000;
/**
* heartbeat for zk info length
*/
public static final int HEARTBEAT_FOR_ZOOKEEPER_INFO_LENGTH = 13;
/**
* jar
*/
public static final String JAR = "jar";
/**
* hadoop
*/
public static final String HADOOP = "hadoop";
/**
* -D <property>=<value>
*/
public static final String D = "-D";
/**
* exit code success
*/
public static final int EXIT_CODE_SUCCESS = 0;
/**
* exit code failure
*/
public static final int EXIT_CODE_FAILURE = -1;
/**
* process or task definition failure
*/
public static final int DEFINITION_FAILURE = -1;
/**
* process or task definition first version
*/
public static final int VERSION_FIRST = 1;
/**
* date format of yyyyMMdd
*/
public static final String PARAMETER_FORMAT_DATE = "yyyyMMdd";
/**
* date format of yyyyMMddHHmmss
*/
public static final String PARAMETER_FORMAT_TIME = "yyyyMMddHHmmss";
/**
* system date(yyyyMMddHHmmss)
*/
public static final String PARAMETER_DATETIME = "system.datetime";
/**
* system date (yyyyMMdd), today
*/
public static final String PARAMETER_CURRENT_DATE = "system.biz.curdate";
/**
* system date (yyyyMMdd), yesterday
*/
public static final String PARAMETER_BUSINESS_DATE = "system.biz.date";
/**
* ACCEPTED
*/
public static final String ACCEPTED = "ACCEPTED";
/**
* SUCCEEDED
*/
public static final String SUCCEEDED = "SUCCEEDED";
/**
* NEW
*/
public static final String NEW = "NEW";
/**
* NEW_SAVING
*/
public static final String NEW_SAVING = "NEW_SAVING";
/**
* SUBMITTED
*/
public static final String SUBMITTED = "SUBMITTED";
/**
* FAILED
*/
public static final String FAILED = "FAILED";
/**
* KILLED
*/
public static final String KILLED = "KILLED";
/**
* RUNNING
*/
public static final String RUNNING = "RUNNING";
/**
* underline "_"
*/
public static final String UNDERLINE = "_";
/**
* quartz job prefix
*/
public static final String QUARTZ_JOB_PRIFIX = "job";
/**
* quartz job group prefix
*/
public static final String QUARTZ_JOB_GROUP_PRIFIX = "jobgroup";
/**
* projectId
*/
public static final String PROJECT_ID = "projectId";
/**
* scheduleId
*/
public static final String SCHEDULE_ID = "scheduleId";
/**
* schedule
*/
public static final String SCHEDULE = "schedule";
/**
* application regex
*/
public static final String APPLICATION_REGEX = "application_\\d+_\\d+";
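// e.g. "application_1634958933716_0001" matches APPLICATION_REGEX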
public static final String PID = SystemUtils.IS_OS_WINDOWS ? "handle" : "pid";
/**
* month_begin
*/
public static final String MONTH_BEGIN = "month_begin";
/**
* add_months
*/
public static final String ADD_MONTHS = "add_months";
/**
* month_end
*/
public static final String MONTH_END = "month_end";
/**
* week_begin
*/
public static final String WEEK_BEGIN = "week_begin";
/**
* week_end
*/
public static final String WEEK_END = "week_end";
/**
* timestamp
*/
public static final String TIMESTAMP = "timestamp";
public static final char SUBTRACT_CHAR = '-';
public static final char ADD_CHAR = '+';
public static final char MULTIPLY_CHAR = '*';
public static final char DIVISION_CHAR = '/';
public static final char LEFT_BRACE_CHAR = '(';
public static final char RIGHT_BRACE_CHAR = ')';
public static final String ADD_STRING = "+";
public static final String STAR = "*";
public static final String DIVISION_STRING = "/";
public static final String LEFT_BRACE_STRING = "(";
public static final char P = 'P';
public static final char N = 'N';
public static final String SUBTRACT_STRING = "-";
public static final String GLOBAL_PARAMS = "globalParams";
public static final String LOCAL_PARAMS = "localParams";
public static final String SUBPROCESS_INSTANCE_ID = "subProcessInstanceId";
public static final String PROCESS_INSTANCE_STATE = "processInstanceState";
public static final String PARENT_WORKFLOW_INSTANCE = "parentWorkflowInstance";
public static final String CONDITION_RESULT = "conditionResult";
public static final String SWITCH_RESULT = "switchResult";
public static final String WAIT_START_TIMEOUT = "waitStartTimeout";
public static final String DEPENDENCE = "dependence";
public static final String TASK_LIST = "taskList";
public static final String QUEUE = "queue";
public static final String QUEUE_NAME = "queueName";
public static final int LOG_QUERY_SKIP_LINE_NUMBER = 0;
public static final int LOG_QUERY_LIMIT = 4096;
/**
* master/worker server use for zk
*/
public static final String MASTER_TYPE = "master";
public static final String WORKER_TYPE = "worker";
public static final String DELETE_OP = "delete";
public static final String ADD_OP = "add";
public static final String ALIAS = "alias";
public static final String CONTENT = "content";
public static final String DEPENDENT_SPLIT = ":||";
public static final long DEPENDENT_ALL_TASK_CODE = 0;
/**
* preview schedule execute count
*/
public static final int PREVIEW_SCHEDULE_EXECUTE_COUNT = 5;
/**
* kerberos
*/
public static final String KERBEROS = "kerberos";
/**
* kerberos expire time
*/
public static final String KERBEROS_EXPIRE_TIME = "kerberos.expire.time";
/**
* java.security.krb5.conf
*/
public static final String JAVA_SECURITY_KRB5_CONF = "java.security.krb5.conf";
/**
* java.security.krb5.conf.path
*/
public static final String JAVA_SECURITY_KRB5_CONF_PATH = "java.security.krb5.conf.path";
/**
* hadoop.security.authentication
*/
public static final String HADOOP_SECURITY_AUTHENTICATION = "hadoop.security.authentication";
/**
* hadoop.security.authentication
*/
public static final String HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE = "hadoop.security.authentication.startup.state";
/**
* com.amazonaws.services.s3.enableV4
*/
public static final String AWS_S3_V4 = "com.amazonaws.services.s3.enableV4";
/**
* loginUserFromKeytab user
*/
public static final String LOGIN_USER_KEY_TAB_USERNAME = "login.user.keytab.username";
/**
* loginUserFromKeytab path
*/
public static final String LOGIN_USER_KEY_TAB_PATH = "login.user.keytab.path";
/**
* task log info format
*/
public static final String TASK_LOG_INFO_FORMAT = "TaskLogInfo-%s";
public static final int[] NOT_TERMINATED_STATES = new int[] {
ExecutionStatus.SUBMITTED_SUCCESS.ordinal(),
ExecutionStatus.RUNNING_EXECUTION.ordinal(),
ExecutionStatus.DELAY_EXECUTION.ordinal(),
ExecutionStatus.READY_PAUSE.ordinal(),
ExecutionStatus.READY_STOP.ordinal(),
ExecutionStatus.NEED_FAULT_TOLERANCE.ordinal(),
ExecutionStatus.WAITING_THREAD.ordinal(),
ExecutionStatus.WAITING_DEPEND.ordinal()
};
public static final int[] RUNNING_PROCESS_STATE = new int[] {
ExecutionStatus.RUNNING_EXECUTION.ordinal(),
ExecutionStatus.SUBMITTED_SUCCESS.ordinal(),
ExecutionStatus.SERIAL_WAIT.ordinal()
};
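// Note (assumption): both arrays hold ExecutionStatus ordinals, matching the int
// state column used in instance queries, e.g.
//   Arrays.stream(NOT_TERMINATED_STATES).anyMatch(s -> s == status.ordinal())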
/**
* status
*/
public static final String STATUS = "status";
/**
* message
*/
public static final String MSG = "msg";
/**
* data total
*/
public static final String COUNT = "count";
/**
* page size
*/
public static final String PAGE_SIZE = "pageSize";
/**
* current page no
*/
public static final String PAGE_NUMBER = "pageNo";
/**
 * data list
 */
public static final String DATA_LIST = "data";
public static final String TOTAL_LIST = "totalList";
public static final String CURRENT_PAGE = "currentPage";
public static final String TOTAL_PAGE = "totalPage";
public static final String TOTAL = "total";
/**
* workflow
*/
public static final String WORKFLOW_LIST = "workFlowList";
public static final String WORKFLOW_RELATION_LIST = "workFlowRelationList";
/**
* session user
*/
public static final String SESSION_USER = "session.user";
public static final String SESSION_ID = "sessionId";
/**
* locale
*/
public static final String LOCALE_LANGUAGE = "language";
/**
* database type
*/
public static final String MYSQL = "MYSQL";
public static final String HIVE = "HIVE";
public static final String ADDRESS = "address";
public static final String DATABASE = "database";
public static final String OTHER = "other";
/**
* session timeout
*/
public static final int SESSION_TIME_OUT = 7200;
public static final int MAX_FILE_SIZE = 1024 * 1024 * 1024;
public static final String UDF = "UDF";
public static final String CLASS = "class";
/**
* dataSource sensitive param
*/
public static final String DATASOURCE_PASSWORD_REGEX = "(?<=((?i)password((\\\\\":\\\\\")|(=')))).*?(?=((\\\\\")|(')))";
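// Hedged sketch: the lookbehind/lookahead pair captures only the password value
// inside serialized connection params (password\":\"...\" or password='...'),
// so masking can be done with ("params" is a hypothetical string):
//   String safe = params.replaceAll(DATASOURCE_PASSWORD_REGEX, XXXXXX);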
/**
* default worker group
*/
public static final String DEFAULT_WORKER_GROUP = "default";
/**
* authorize writable perm
*/
public static final int AUTHORIZE_WRITABLE_PERM = 7;
/**
* authorize readable perm
*/
public static final int AUTHORIZE_READABLE_PERM = 4;
public static final int NORMAL_NODE_STATUS = 0;
public static final int ABNORMAL_NODE_STATUS = 1;
public static final int BUSY_NODE_STATUE = 2;
public static final String START_TIME = "start time";
public static final String END_TIME = "end time";
public static final String START_END_DATE = "startDate,endDate";
/**
* system line separator
*/
public static final String SYSTEM_LINE_SEPARATOR = System.getProperty("line.separator");
/**
* datasource encryption salt
*/
public static final String DATASOURCE_ENCRYPTION_SALT_DEFAULT = "!@#$%^&*";
public static final String DATASOURCE_ENCRYPTION_ENABLE = "datasource.encryption.enable";
public static final String DATASOURCE_ENCRYPTION_SALT = "datasource.encryption.salt";
/**
* network interface preferred
*/
public static final String DOLPHIN_SCHEDULER_NETWORK_INTERFACE_PREFERRED = "dolphin.scheduler.network.interface.preferred";
/**
* network IP selection priority, default: inner outer
*/
public static final String DOLPHIN_SCHEDULER_NETWORK_PRIORITY_STRATEGY = "dolphin.scheduler.network.priority.strategy";
/**
* exec shell scripts
*/
public static final String SH = "sh";
/**
* pstree, get pid and sub pids
*/
public static final String PSTREE = "pstree";
public static final Boolean KUBERNETES_MODE = !StringUtils.isEmpty(System.getenv("KUBERNETES_SERVICE_HOST")) && !StringUtils.isEmpty(System.getenv("KUBERNETES_SERVICE_PORT"));
/**
* dry run flag
*/
public static final int DRY_RUN_FLAG_NO = 0;
public static final int DRY_RUN_FLAG_YES = 1;
}
|
closed | apache/dolphinscheduler | https://github.com/apache/dolphinscheduler | 5,229 | [Feature][server port] server port custom config | **Describe the feature**
all applications should support custom server port configuration
| https://github.com/apache/dolphinscheduler/issues/5229 | https://github.com/apache/dolphinscheduler/pull/6947 | 100a9ba3fc21764158593bd33e2b261b0ef39982 | 0dcff1425a28da0ce0004fc3e594b97d080c5fd9 | "2021-04-07T09:22:45Z" | java | "2021-11-30T16:21:03Z" | dolphinscheduler-common/src/main/resources/common.properties | #
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# user data local directory path, please make sure the directory exists and have read write permissions
data.basedir.path=/tmp/dolphinscheduler
# resource storage type: HDFS, S3, NONE
resource.storage.type=NONE
# resource store path on HDFS/S3; resource files will be stored under this hadoop hdfs path. Please make sure the directory exists on hdfs and has read/write permissions; "/dolphinscheduler" is recommended
resource.upload.path=/dolphinscheduler
# whether to startup kerberos
hadoop.security.authentication.startup.state=false
# java.security.krb5.conf path
java.security.krb5.conf.path=/opt/krb5.conf
# login user from keytab username
login.user.keytab.username=hdfs-mycluster@ESZ.COM
# login user from keytab path
login.user.keytab.path=/opt/hdfs.headless.keytab
# kerberos expire time, the unit is hour
kerberos.expire.time=2
# resource view suffixes
#resource.view.suffixs=txt,log,sh,bat,conf,cfg,py,java,sql,xml,hql,properties,json,yml,yaml,ini,js
# if resource.storage.type=HDFS, the user must have the permission to create directories under the HDFS root path
hdfs.root.user=hdfs
# if resource.storage.type=S3, the value like: s3a://dolphinscheduler; if resource.storage.type=HDFS and namenode HA is enabled, you need to copy core-site.xml and hdfs-site.xml to conf dir
fs.defaultFS=hdfs://mycluster:8020
# if resource.storage.type=S3, s3 endpoint
fs.s3a.endpoint=http://192.168.xx.xx:9010
# if resource.storage.type=S3, s3 access key
fs.s3a.access.key=A3DXS30FO22544RE
# if resource.storage.type=S3, s3 secret key
fs.s3a.secret.key=OloCLq3n+8+sdPHUhJ21XrSxTC+JK
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=8088
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx
# if resourcemanager HA is enabled or resourcemanager is not used, please keep the default value; if resourcemanager is single, you only need to replace ds1 with the actual resourcemanager hostname
yarn.application.status.address=http://ds1:%s/ws/v1/cluster/apps/%s
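# illustrative note (assumption): the first %s is filled with the resourcemanager http
# port (8088 above) and the second with the application id, producing e.g.
# http://ds1:8088/ws/v1/cluster/apps/application_1634958933716_0001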
# job history status url, used when the application number threshold is reached (default 10000; it may have been set to 1000)
yarn.job.history.status.address=http://ds1:19888/ws/v1/history/mapreduce/jobs/%s
# datasource encryption enable
datasource.encryption.enable=false
# datasource encryption salt
datasource.encryption.salt=!@#$%^&*
# use sudo or not, if set true, executing user is tenant user and deploy user needs sudo permissions; if set false, executing user is the deploy user and doesn't need sudo permissions
sudo.enable=true
# network interface preferred like eth0, default: empty
#dolphin.scheduler.network.interface.preferred=
# network IP selection priority, default: inner outer
#dolphin.scheduler.network.priority.strategy=default
# system env path
#dolphinscheduler.env.path=env/dolphinscheduler_env.sh
# development state
development.state=false
|