Dataset schema:

| Column | Feature type | Values / lengths |
| --- | --- | --- |
| status | stringclasses | 1 value |
| repo_name | stringclasses | 31 values |
| repo_url | stringclasses | 31 values |
| issue_id | int64 | 1 to 104k |
| title | stringlengths | 4 to 369 |
| body | stringlengths | 0 to 254k |
| issue_url | stringlengths | 37 to 56 |
| pull_url | stringlengths | 37 to 54 |
| before_fix_sha | stringlengths | 40 to 40 |
| after_fix_sha | stringlengths | 40 to 40 |
| report_datetime | unknown | |
| language | stringclasses | 5 values |
| commit_datetime | unknown | |
| updated_file | stringlengths | 4 to 188 |
| file_content | stringlengths | 0 to 5.12M |
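As a quick reference before the rows below, here is a minimal Java sketch of one row under this schema. The field names mirror the columns above; the Java types are assumptions mapped from the declared feature types (stringclasses/stringlengths to String, int64 to long, and the datetimes kept as ISO-8601 strings since their feature type is reported as unknown):

```java
// Illustrative sketch only: one dataset row modeled as a Java record.
// Field names come from the schema table above; the types are assumed
// mappings, not part of the dataset's own definition.
public record IssueFixRow(
        String status,
        String repoName,
        String repoUrl,
        long issueId,
        String title,
        String body,
        String issueUrl,
        String pullUrl,
        String beforeFixSha,
        String afterFixSha,
        String reportDatetime,
        String language,
        String commitDatetime,
        String updatedFile,
        String fileContent) {
}
```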
---

status: closed
repo_name: apache/dolphinscheduler
repo_url: https://github.com/apache/dolphinscheduler
issue_id: 7462
title: [Feature][UI Next] Write part of the api method.
body:

### Search before asking

- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.

### Description

Write part of the api method. Please refer to the main `issue` [#7332](https://github.com/apache/dolphinscheduler/issues/7332).

### Use case

_No response_

### Related issues

_No response_

### Are you willing to submit a PR?

- [X] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)

issue_url: https://github.com/apache/dolphinscheduler/issues/7462
pull_url: https://github.com/apache/dolphinscheduler/pull/7476
before_fix_sha: 01a2b9684a4c6478ca4aea74b2307558c57468a7
after_fix_sha: 2f7a406ea9d1cc79441759ae3691a88841f50fb7
report_datetime: "2021-12-17T06:48:25Z"
language: java
commit_datetime: "2021-12-18T09:07:24Z"
updated_file: dolphinscheduler-ui-next/src/service/modules/resources/index.ts
---

status: closed
repo_name: apache/dolphinscheduler
repo_url: https://github.com/apache/dolphinscheduler
issue_id: 7462
title: [Feature][UI Next] Write part of the api method.
body:

### Search before asking

- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.

### Description

Write part of the api method. Please refer to the main `issue` [#7332](https://github.com/apache/dolphinscheduler/issues/7332).

### Use case

_No response_

### Related issues

_No response_

### Are you willing to submit a PR?

- [X] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)

issue_url: https://github.com/apache/dolphinscheduler/issues/7462
pull_url: https://github.com/apache/dolphinscheduler/pull/7476
before_fix_sha: 01a2b9684a4c6478ca4aea74b2307558c57468a7
after_fix_sha: 2f7a406ea9d1cc79441759ae3691a88841f50fb7
report_datetime: "2021-12-17T06:48:25Z"
language: java
commit_datetime: "2021-12-18T09:07:24Z"
updated_file: dolphinscheduler-ui-next/src/service/modules/resources/types.ts
---

status: closed
repo_name: apache/dolphinscheduler
repo_url: https://github.com/apache/dolphinscheduler
issue_id: 7227
title: [Improvement] [dolphinscheduler-api] The alarm instance pages are sorted in reverse order according to the update time
body:

### Search before asking

- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.

### What happened

![image](https://user-images.githubusercontent.com/95271106/144959320-053fb083-290b-4e26-986a-9c9637bb57de.png)

### Version

dev

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)

issue_url: https://github.com/apache/dolphinscheduler/issues/7227
pull_url: https://github.com/apache/dolphinscheduler/pull/7506
before_fix_sha: 0d86a48d1c3caddf813bbf7f3453e1e7d624c760
after_fix_sha: bb140fbc3b462cc192b18dca4ef1cee409909b71
report_datetime: "2021-12-07T03:13:01Z"
language: java
commit_datetime: "2021-12-21T01:22:16Z"
updated_file: dolphinscheduler-dao/src/main/resources/org/apache/dolphinscheduler/dao/mapper/AlertPluginInstanceMapper.xml
file_content:

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<!--
  ~ Licensed to the Apache Software Foundation (ASF) under one or more
  ~ contributor license agreements. See the NOTICE file distributed with
  ~ this work for additional information regarding copyright ownership.
  ~ The ASF licenses this file to You under the Apache License, Version 2.0
  ~ (the "License"); you may not use this file except in compliance with
  ~ the License. You may obtain a copy of the License at
  ~
  ~     http://www.apache.org/licenses/LICENSE-2.0
  ~
  ~ Unless required by applicable law or agreed to in writing, software
  ~ distributed under the License is distributed on an "AS IS" BASIS,
  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  ~ See the License for the specific language governing permissions and
  ~ limitations under the License.
  -->
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="org.apache.dolphinscheduler.dao.mapper.AlertPluginInstanceMapper">
    <select id="queryAllAlertPluginInstanceList" resultType="org.apache.dolphinscheduler.dao.entity.AlertPluginInstance">
        select *
        from t_ds_alert_plugin_instance
        where 1 = 1
    </select>

    <select id="queryByIds" resultType="org.apache.dolphinscheduler.dao.entity.AlertPluginInstance">
        select *
        from t_ds_alert_plugin_instance
        where id in
        <foreach item="item" index="index" collection="ids" open="(" separator="," close=")">
            #{item}
        </foreach>
    </select>

    <select id="queryByInstanceNamePage" resultType="org.apache.dolphinscheduler.dao.entity.AlertPluginInstance">
        select *
        from t_ds_alert_plugin_instance
        where 1 = 1
        <if test="instanceName != null and instanceName != ''">
            and instance_name like concat('%', #{instanceName}, '%')
        </if>
    </select>

    <select id="existInstanceName" resultType="java.lang.Boolean">
        select 1
        from t_ds_alert_plugin_instance
        where instance_name = #{instanceName}
        limit 1
    </select>
</mapper>
```
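For context, the issue asks that the alarm instance page be sorted in reverse order by update time, and the before-fix XML above indeed has no ORDER BY clause on `queryByInstanceNamePage` (the linked PR presumably appends something like `order by update_time desc`). Below is a minimal sketch of the Java mapper interface these statements back. Only the method names and the entity type come from the XML; the parameter lists and the MyBatis-Plus pagination types are assumptions:

```java
// Sketch of the mapper interface implied by the statement ids in the XML
// above. Method names and the entity type come from the mapper file; the
// signatures (MyBatis-Plus pagination, @Param names) are assumptions.
import java.util.List;

import org.apache.ibatis.annotations.Param;

import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.baomidou.mybatisplus.core.metadata.IPage;
import com.baomidou.mybatisplus.extension.plugins.pagination.Page;

import org.apache.dolphinscheduler.dao.entity.AlertPluginInstance;

public interface AlertPluginInstanceMapper extends BaseMapper<AlertPluginInstance> {

    List<AlertPluginInstance> queryAllAlertPluginInstanceList();

    List<AlertPluginInstance> queryByIds(@Param("ids") List<Integer> ids);

    // The improvement request targets this query: the before-fix SQL has no
    // ORDER BY, so results are not returned newest-first by update_time.
    IPage<AlertPluginInstance> queryByInstanceNamePage(Page<AlertPluginInstance> page,
                                                       @Param("instanceName") String instanceName);

    Boolean existInstanceName(@Param("instanceName") String instanceName);
}
```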
---

status: closed
repo_name: apache/dolphinscheduler
repo_url: https://github.com/apache/dolphinscheduler
issue_id: 7622
title: [Bug] [Module Name] Unsupported mechanism type PLAIN
body:

### Search before asking

- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.

### What happened

Failed to create a datasource for Hive in a Kerberos environment. The API log is below:

```
[WARN] 2021-12-25 17:52:11.501 org.apache.hive.jdbc.HiveConnection:[321] - Failed to connect to onedts-dev-master-v01.zjyg.com:10000
[ERROR] 2021-12-25 17:52:12.503 com.zaxxer.hikari.pool.HikariPool:[594] - HikariPool-1 - Exception during pool initialization.
java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://onedts-dev-master-v01.zjyg.com:10000/ods_hdp_ambari: Peer indicated failure: Unsupported mechanism type PLAIN
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:344) at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107) at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138) at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:364) at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206) at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:476) at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:561) at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115) at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112) at org.springframework.jdbc.datasource.DataSourceUtils.fetchConnection(DataSourceUtils.java:159) at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:117) at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:80) at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:376) at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:431) at org.apache.dolphinscheduler.plugin.datasource.api.client.CommonDataSourceClient.checkClient(CommonDataSourceClient.java:104) at org.apache.dolphinscheduler.plugin.datasource.api.client.CommonDataSourceClient.<init>(CommonDataSourceClient.java:55) at org.apache.dolphinscheduler.plugin.datasource.hive.HiveDataSourceClient.<init>(HiveDataSourceClient.java:61) at org.apache.dolphinscheduler.plugin.datasource.hive.HiveDataSourceChannel.createDataSourceClient(HiveDataSourceChannel.java:29) at org.apache.dolphinscheduler.plugin.datasource.api.plugin.DataSourceClientProvider.lambda$getConnection$0(DataSourceClientProvider.java:64) at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660) at org.apache.dolphinscheduler.plugin.datasource.api.plugin.DataSourceClientProvider.getConnection(DataSourceClientProvider.java:58) at org.apache.dolphinscheduler.api.service.impl.DataSourceServiceImpl.checkConnection(DataSourceServiceImpl.java:320) at org.apache.dolphinscheduler.api.service.impl.DataSourceServiceImpl$$FastClassBySpringCGLIB$$a86d54aa.invoke(<generated>) at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218) at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:689) at org.apache.dolphinscheduler.api.service.impl.DataSourceServiceImpl$$EnhancerBySpringCGLIB$$b6edf9d8.checkConnection(<generated>) at org.apache.dolphinscheduler.api.controller.DataSourceController.connectDataSource(DataSourceController.java:215) at org.apache.dolphinscheduler.api.controller.DataSourceController$$FastClassBySpringCGLIB$$835fdd04.invoke(<generated>) at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218) at
org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:783) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:753) at org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:89) at org.apache.dolphinscheduler.api.aspect.AccessLogAspect.doAround(AccessLogAspect.java:87) at sun.reflect.GeneratedMethodAccessor281.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:634) at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:624) at org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:72) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:175) at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:753) at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:753) at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:698) at org.apache.dolphinscheduler.api.controller.DataSourceController$$EnhancerBySpringCGLIB$$9351867a.connectDataSource(<generated>) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205) at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:150) at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117) at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895) at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:808) at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1067) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:963) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006) at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:909) at javax.servlet.http.HttpServlet.service(HttpServlet.java:517) at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883) at javax.servlet.http.HttpServlet.service(HttpServlet.java:584) at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799) at org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1631) at org.springframework.web.filter.CorsFilter.doFilterInternal(CorsFilter.java:91) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at com.github.xiaoymin.swaggerbootstrapui.filter.SecurityBasicAuthFilter.doFilter(SecurityBasicAuthFilter.java:84) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at com.github.xiaoymin.swaggerbootstrapui.filter.ProductionSecurityFilter.doFilter(ProductionSecurityFilter.java:53) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:548) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:600) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1624) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1434) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1594) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1349) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:763) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at org.eclipse.jetty.server.Server.handle(Server.java:516) at 
org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:400) at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:645) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:392) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277) at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105) at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.thrift.transport.TTransportException: Peer indicated failure: Unsupported mechanism type PLAIN
    at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:199)
    at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:307)
    at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
    at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:431)
    at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:312)
    ... 119 common frames omitted
[INFO] 2021-12-25 17:52:12.505 org.apache.dolphinscheduler.plugin.datasource.api.client.CommonDataSourceClient:[108] - Time to execute check jdbc client with sql select 1 for 1141 ms
[ERROR] 2021-12-25 17:52:12.505 org.apache.dolphinscheduler.api.service.impl.DataSourceServiceImpl:[328] - datasource test connection error, dbType:HIVE, connectionParam:HiveConnectionParam{user='hive', password='123456', address='jdbc:hive2://onedts-dev-master-v01.zjyg.com:10000', database='ods_hdp_ambari', jdbcUrl='jdbc:hive2://onedts-dev-master-v01.zjyg.com:10000/ods_hdp_ambari', driverLocation='null', driverClassName='org.apache.hive.jdbc.HiveDriver', validationQuery='select 1', other='null', principal='null', javaSecurityKrb5Conf='null', loginUserKeytabUsername='null', loginUserKeytabPath='null'}, message:JDBC connect failed.
```

### What you expected to happen

Failed to create a datasource for Hive with Kerberos configured.

### How to reproduce

Big data related components:

- hdp 3.1.5 (3.1.5.0-152)
- hive 3.1.0
- hadoop 3.1.1
- dolphinscheduler 2.0.1

### Anything else

_No response_

### Version

2.0.1

### Are you willing to submit PR?

- [ ] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)

issue_url: https://github.com/apache/dolphinscheduler/issues/7622
pull_url: https://github.com/apache/dolphinscheduler/pull/7489
before_fix_sha: 90243f13eea046c60c55f091d1002a233ea156af
after_fix_sha: 85beb50f03457d9687e9f1761becd4f37ec41766
report_datetime: "2021-12-25T10:25:42Z"
language: java
commit_datetime: "2021-12-21T06:37:33Z"
updated_file: dolphinscheduler-datasource-plugin/dolphinscheduler-datasource-api/src/main/java/org/apache/dolphinscheduler/plugin/datasource/api/provider/JDBCDataSourceProvider.java
file_content:

```java
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.dolphinscheduler.plugin.datasource.api.provider;

import org.apache.dolphinscheduler.plugin.datasource.api.utils.DataSourceUtils;
import org.apache.dolphinscheduler.plugin.datasource.api.utils.PasswordUtils;
import org.apache.dolphinscheduler.spi.datasource.BaseConnectionParam;
import org.apache.dolphinscheduler.spi.enums.DbType;
import org.apache.dolphinscheduler.spi.utils.Constants;
import org.apache.dolphinscheduler.spi.utils.PropertyUtils;
import org.apache.dolphinscheduler.spi.utils.StringUtils;

import java.sql.Driver;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.zaxxer.hikari.HikariDataSource;

/**
 * Jdbc Data Source Provider
 */
public class JDBCDataSourceProvider {

    private static final Logger logger = LoggerFactory.getLogger(JDBCDataSourceProvider.class);

    public static HikariDataSource createJdbcDataSource(BaseConnectionParam properties, DbType dbType) {
        logger.info("Creating HikariDataSource pool for maxActive:{}",
                PropertyUtils.getInt(Constants.SPRING_DATASOURCE_MAX_ACTIVE, 50));
        HikariDataSource dataSource = new HikariDataSource();

        //TODO Support multiple versions of data sources
        ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
        loaderJdbcDriver(classLoader, properties, dbType);

        dataSource.setDriverClassName(properties.getDriverClassName());
        dataSource.setJdbcUrl(properties.getJdbcUrl());
        dataSource.setUsername(properties.getUser());
        dataSource.setPassword(PasswordUtils.decodePassword(properties.getPassword()));

        dataSource.setMinimumIdle(PropertyUtils.getInt(Constants.SPRING_DATASOURCE_MIN_IDLE, 5));
        dataSource.setMaximumPoolSize(PropertyUtils.getInt(Constants.SPRING_DATASOURCE_MAX_ACTIVE, 50));
        dataSource.setConnectionTestQuery(properties.getValidationQuery());

        if (properties.getProps() != null) {
            properties.getProps().forEach(dataSource::addDataSourceProperty);
        }

        logger.info("Creating HikariDataSource pool success.");
        return dataSource;
    }

    /**
     * @return One Session Jdbc DataSource
     */
    public static HikariDataSource createOneSessionJdbcDataSource(BaseConnectionParam properties) {
        logger.info("Creating OneSession HikariDataSource pool for maxActive:{}",
                PropertyUtils.getInt(Constants.SPRING_DATASOURCE_MAX_ACTIVE, 50));
        HikariDataSource dataSource = new HikariDataSource();

        dataSource.setDriverClassName(properties.getDriverClassName());
        dataSource.setJdbcUrl(properties.getJdbcUrl());
        dataSource.setUsername(properties.getUser());
        dataSource.setPassword(PasswordUtils.decodePassword(properties.getPassword()));

        dataSource.setMinimumIdle(1);
        dataSource.setMaximumPoolSize(1);
        dataSource.setConnectionTestQuery(properties.getValidationQuery());

        if (properties.getProps() != null) {
            properties.getProps().forEach(dataSource::addDataSourceProperty);
        }

        logger.info("Creating OneSession HikariDataSource pool success.");
        return dataSource;
    }

    protected static void loaderJdbcDriver(ClassLoader classLoader, BaseConnectionParam properties, DbType dbType) {
        String drv = StringUtils.isBlank(properties.getDriverClassName())
                ? DataSourceUtils.getDatasourceProcessor(dbType).getDatasourceDriver()
                : properties.getDriverClassName();
        try {
            final Class<?> clazz = Class.forName(drv, true, classLoader);
            final Driver driver = (Driver) clazz.newInstance();
            if (!driver.acceptsURL(properties.getJdbcUrl())) {
                logger.warn("Jdbc driver loading error. Driver {} cannot accept url.", drv);
                throw new RuntimeException("Jdbc driver loading error.");
            }
            if (dbType.equals(DbType.MYSQL)) {
                if (driver.getMajorVersion() >= 8) {
                    properties.setDriverClassName(drv);
                } else {
                    properties.setDriverClassName(Constants.COM_MYSQL_JDBC_DRIVER);
                }
            }
        } catch (final Exception e) {
            logger.warn("The specified driver not suitable.");
        }
    }
}
```
---

status: closed
repo_name: apache/dolphinscheduler
repo_url: https://github.com/apache/dolphinscheduler
issue_id: 7622
title: [Bug] [Module Name] Unsupported mechanism type PLAIN
body:

### Search before asking

- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.

### What happened

Failed to create a datasource for Hive in a Kerberos environment. The API log is below:

```
[WARN] 2021-12-25 17:52:11.501 org.apache.hive.jdbc.HiveConnection:[321] - Failed to connect to onedts-dev-master-v01.zjyg.com:10000
[ERROR] 2021-12-25 17:52:12.503 com.zaxxer.hikari.pool.HikariPool:[594] - HikariPool-1 - Exception during pool initialization.
java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://onedts-dev-master-v01.zjyg.com:10000/ods_hdp_ambari: Peer indicated failure: Unsupported mechanism type PLAIN
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:344) at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107) at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138) at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:364) at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206) at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:476) at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:561) at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115) at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112) at org.springframework.jdbc.datasource.DataSourceUtils.fetchConnection(DataSourceUtils.java:159) at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:117) at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:80) at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:376) at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:431) at org.apache.dolphinscheduler.plugin.datasource.api.client.CommonDataSourceClient.checkClient(CommonDataSourceClient.java:104) at org.apache.dolphinscheduler.plugin.datasource.api.client.CommonDataSourceClient.<init>(CommonDataSourceClient.java:55) at org.apache.dolphinscheduler.plugin.datasource.hive.HiveDataSourceClient.<init>(HiveDataSourceClient.java:61) at org.apache.dolphinscheduler.plugin.datasource.hive.HiveDataSourceChannel.createDataSourceClient(HiveDataSourceChannel.java:29) at org.apache.dolphinscheduler.plugin.datasource.api.plugin.DataSourceClientProvider.lambda$getConnection$0(DataSourceClientProvider.java:64) at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660) at org.apache.dolphinscheduler.plugin.datasource.api.plugin.DataSourceClientProvider.getConnection(DataSourceClientProvider.java:58) at org.apache.dolphinscheduler.api.service.impl.DataSourceServiceImpl.checkConnection(DataSourceServiceImpl.java:320) at org.apache.dolphinscheduler.api.service.impl.DataSourceServiceImpl$$FastClassBySpringCGLIB$$a86d54aa.invoke(<generated>) at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218) at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:689) at org.apache.dolphinscheduler.api.service.impl.DataSourceServiceImpl$$EnhancerBySpringCGLIB$$b6edf9d8.checkConnection(<generated>) at org.apache.dolphinscheduler.api.controller.DataSourceController.connectDataSource(DataSourceController.java:215) at org.apache.dolphinscheduler.api.controller.DataSourceController$$FastClassBySpringCGLIB$$835fdd04.invoke(<generated>) at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218) at
org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:783) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:753) at org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:89) at org.apache.dolphinscheduler.api.aspect.AccessLogAspect.doAround(AccessLogAspect.java:87) at sun.reflect.GeneratedMethodAccessor281.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:634) at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:624) at org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:72) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:175) at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:753) at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:753) at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:698) at org.apache.dolphinscheduler.api.controller.DataSourceController$$EnhancerBySpringCGLIB$$9351867a.connectDataSource(<generated>) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205) at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:150) at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117) at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895) at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:808) at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1067) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:963) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006) at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:909) at javax.servlet.http.HttpServlet.service(HttpServlet.java:517) at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883) at javax.servlet.http.HttpServlet.service(HttpServlet.java:584) at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799) at org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1631) at org.springframework.web.filter.CorsFilter.doFilterInternal(CorsFilter.java:91) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at com.github.xiaoymin.swaggerbootstrapui.filter.SecurityBasicAuthFilter.doFilter(SecurityBasicAuthFilter.java:84) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at com.github.xiaoymin.swaggerbootstrapui.filter.ProductionSecurityFilter.doFilter(ProductionSecurityFilter.java:53) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:548) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:600) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1624) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1434) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1594) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1349) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:763) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at org.eclipse.jetty.server.Server.handle(Server.java:516) at 
org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:400) at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:645) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:392) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277) at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105) at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.thrift.transport.TTransportException: Peer indicated failure: Unsupported mechanism type PLAIN
    at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:199)
    at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:307)
    at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
    at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:431)
    at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:312)
    ... 119 common frames omitted
[INFO] 2021-12-25 17:52:12.505 org.apache.dolphinscheduler.plugin.datasource.api.client.CommonDataSourceClient:[108] - Time to execute check jdbc client with sql select 1 for 1141 ms
[ERROR] 2021-12-25 17:52:12.505 org.apache.dolphinscheduler.api.service.impl.DataSourceServiceImpl:[328] - datasource test connection error, dbType:HIVE, connectionParam:HiveConnectionParam{user='hive', password='123456', address='jdbc:hive2://onedts-dev-master-v01.zjyg.com:10000', database='ods_hdp_ambari', jdbcUrl='jdbc:hive2://onedts-dev-master-v01.zjyg.com:10000/ods_hdp_ambari', driverLocation='null', driverClassName='org.apache.hive.jdbc.HiveDriver', validationQuery='select 1', other='null', principal='null', javaSecurityKrb5Conf='null', loginUserKeytabUsername='null', loginUserKeytabPath='null'}, message:JDBC connect failed.
```

### What you expected to happen

Failed to create a datasource for Hive with Kerberos configured.

### How to reproduce

Big data related components:

- hdp 3.1.5 (3.1.5.0-152)
- hive 3.1.0
- hadoop 3.1.1
- dolphinscheduler 2.0.1

### Anything else

_No response_

### Version

2.0.1

### Are you willing to submit PR?

- [ ] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)

issue_url: https://github.com/apache/dolphinscheduler/issues/7622
pull_url: https://github.com/apache/dolphinscheduler/pull/7489
before_fix_sha: 90243f13eea046c60c55f091d1002a233ea156af
after_fix_sha: 85beb50f03457d9687e9f1761becd4f37ec41766
report_datetime: "2021-12-25T10:25:42Z"
language: java
commit_datetime: "2021-12-21T06:37:33Z"
updated_file: dolphinscheduler-datasource-plugin/dolphinscheduler-datasource-api/src/test/java/org/apache/dolphinscheduler/plugin/datasource/api/provider/JDBCDataSourceProviderTest.java
file_content:

```java
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.dolphinscheduler.plugin.datasource.api.provider;

import org.apache.dolphinscheduler.plugin.datasource.api.datasource.mysql.MySQLConnectionParam;
import org.apache.dolphinscheduler.spi.enums.DbType;

import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mockito;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

import com.zaxxer.hikari.HikariDataSource;

@RunWith(PowerMockRunner.class)
@PrepareForTest(value = {HikariDataSource.class, JDBCDataSourceProvider.class})
public class JDBCDataSourceProviderTest {

    @Test
    public void testCreateJdbcDataSource() {
        PowerMockito.mockStatic(JDBCDataSourceProvider.class);
        HikariDataSource dataSource = PowerMockito.mock(HikariDataSource.class);
        PowerMockito.when(JDBCDataSourceProvider.createJdbcDataSource(Mockito.any(), Mockito.any())).thenReturn(dataSource);
        Assert.assertNotNull(JDBCDataSourceProvider.createJdbcDataSource(new MySQLConnectionParam(), DbType.MYSQL));
    }

    @Test
    public void testCreateOneSessionJdbcDataSource() {
        PowerMockito.mockStatic(JDBCDataSourceProvider.class);
        HikariDataSource dataSource = PowerMockito.mock(HikariDataSource.class);
        PowerMockito.when(JDBCDataSourceProvider.createOneSessionJdbcDataSource(Mockito.any())).thenReturn(dataSource);
        Assert.assertNotNull(JDBCDataSourceProvider.createOneSessionJdbcDataSource(new MySQLConnectionParam()));
    }
}
```
---

status: closed
repo_name: apache/dolphinscheduler
repo_url: https://github.com/apache/dolphinscheduler
issue_id: 7622
title: [Bug] [Module Name] Unsupported mechanism type PLAIN
body:

### Search before asking

- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.

### What happened

Failed to create a datasource for Hive in a Kerberos environment. The API log is below:

```
[WARN] 2021-12-25 17:52:11.501 org.apache.hive.jdbc.HiveConnection:[321] - Failed to connect to onedts-dev-master-v01.zjyg.com:10000
[ERROR] 2021-12-25 17:52:12.503 com.zaxxer.hikari.pool.HikariPool:[594] - HikariPool-1 - Exception during pool initialization.
java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://onedts-dev-master-v01.zjyg.com:10000/ods_hdp_ambari: Peer indicated failure: Unsupported mechanism type PLAIN
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:344) at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107) at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138) at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:364) at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206) at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:476) at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:561) at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115) at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112) at org.springframework.jdbc.datasource.DataSourceUtils.fetchConnection(DataSourceUtils.java:159) at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:117) at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:80) at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:376) at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:431) at org.apache.dolphinscheduler.plugin.datasource.api.client.CommonDataSourceClient.checkClient(CommonDataSourceClient.java:104) at org.apache.dolphinscheduler.plugin.datasource.api.client.CommonDataSourceClient.<init>(CommonDataSourceClient.java:55) at org.apache.dolphinscheduler.plugin.datasource.hive.HiveDataSourceClient.<init>(HiveDataSourceClient.java:61) at org.apache.dolphinscheduler.plugin.datasource.hive.HiveDataSourceChannel.createDataSourceClient(HiveDataSourceChannel.java:29) at org.apache.dolphinscheduler.plugin.datasource.api.plugin.DataSourceClientProvider.lambda$getConnection$0(DataSourceClientProvider.java:64) at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660) at org.apache.dolphinscheduler.plugin.datasource.api.plugin.DataSourceClientProvider.getConnection(DataSourceClientProvider.java:58) at org.apache.dolphinscheduler.api.service.impl.DataSourceServiceImpl.checkConnection(DataSourceServiceImpl.java:320) at org.apache.dolphinscheduler.api.service.impl.DataSourceServiceImpl$$FastClassBySpringCGLIB$$a86d54aa.invoke(<generated>) at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218) at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:689) at org.apache.dolphinscheduler.api.service.impl.DataSourceServiceImpl$$EnhancerBySpringCGLIB$$b6edf9d8.checkConnection(<generated>) at org.apache.dolphinscheduler.api.controller.DataSourceController.connectDataSource(DataSourceController.java:215) at org.apache.dolphinscheduler.api.controller.DataSourceController$$FastClassBySpringCGLIB$$835fdd04.invoke(<generated>) at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218) at
org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:783) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:753) at org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:89) at org.apache.dolphinscheduler.api.aspect.AccessLogAspect.doAround(AccessLogAspect.java:87) at sun.reflect.GeneratedMethodAccessor281.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:634) at org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:624) at org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:72) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:175) at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:753) at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186) at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:753) at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:698) at org.apache.dolphinscheduler.api.controller.DataSourceController$$EnhancerBySpringCGLIB$$9351867a.connectDataSource(<generated>) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205) at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:150) at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117) at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895) at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:808) at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1067) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:963) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006) at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:909) at javax.servlet.http.HttpServlet.service(HttpServlet.java:517) at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883) at javax.servlet.http.HttpServlet.service(HttpServlet.java:584) at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799) at org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1631) at org.springframework.web.filter.CorsFilter.doFilterInternal(CorsFilter.java:91) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at com.github.xiaoymin.swaggerbootstrapui.filter.SecurityBasicAuthFilter.doFilter(SecurityBasicAuthFilter.java:84) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at com.github.xiaoymin.swaggerbootstrapui.filter.ProductionSecurityFilter.doFilter(ProductionSecurityFilter.java:53) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:119) at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193) at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:548) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:600) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1624) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1434) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1594) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1349) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:763) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) at org.eclipse.jetty.server.Server.handle(Server.java:516) at 
org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:400) at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:645) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:392) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277) at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105) at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:409) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883) at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.thrift.transport.TTransportException: Peer indicated failure: Unsupported mechanism type PLAIN at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:199) at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:307) at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37) at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:431) at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:312) ... 119 common frames omitted [INFO] 2021-12-25 17:52:12.505 org.apache.dolphinscheduler.plugin.datasource.api.client.CommonDataSourceClient:[108] - Time to execute check jdbc client with sql select 1 for 1141 ms [ERROR] 2021-12-25 17:52:12.505 org.apache.dolphinscheduler.api.service.impl.DataSourceServiceImpl:[328] - datasource test connection error, dbType:HIVE, connectionParam:HiveConnectionParam{user='hive', password='123456', address='jdbc:hive2://onedts-dev-master-v01.zjyg.com:10000', database='ods_hdp_ambari', jdbcUrl='jdbc:hive2://onedts-dev-master-v01.zjyg.com:10000/ods_hdp_ambari', driverLocation='null', driverClassName='org.apache.hive.jdbc.HiveDriver', validationQuery='select 1', other='null', principal='null', javaSecurityKrb5Conf='null', loginUserKeytabUsername='null', loginUserKeytabPath='null'}, message:JDBC connect failed. ### What you expected to happen failed to create datasource for hive with kerberos configure. ### How to reproduce 大数据相关组件: hdp 3.1.5 ( 3.1.5.0-152 ) hive 3.1.0 hadoop 3.1.1 dolphinscheduler 2.0.1 ### Anything else _No response_ ### Version 2.0.1 ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7622
https://github.com/apache/dolphinscheduler/pull/7489
90243f13eea046c60c55f091d1002a233ea156af
85beb50f03457d9687e9f1761becd4f37ec41766
"2021-12-25T10:25:42Z"
java
"2021-12-21T06:37:33Z"
dolphinscheduler-datasource-plugin/dolphinscheduler-datasource-hive/src/main/java/org/apache/dolphinscheduler/plugin/datasource/hive/HiveDataSourceClient.java
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.dolphinscheduler.plugin.datasource.hive;

import static org.apache.dolphinscheduler.spi.task.TaskConstants.JAVA_SECURITY_KRB5_CONF;
import static org.apache.dolphinscheduler.spi.task.TaskConstants.JAVA_SECURITY_KRB5_CONF_PATH;
import static org.apache.dolphinscheduler.spi.task.TaskConstants.HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE;

import org.apache.dolphinscheduler.plugin.datasource.api.client.CommonDataSourceClient;
import org.apache.dolphinscheduler.plugin.datasource.api.provider.JDBCDataSourceProvider;
import org.apache.dolphinscheduler.plugin.datasource.utils.CommonUtil;
import org.apache.dolphinscheduler.spi.datasource.BaseConnectionParam;
import org.apache.dolphinscheduler.spi.enums.DbType;
import org.apache.dolphinscheduler.spi.utils.Constants;
import org.apache.dolphinscheduler.spi.utils.PropertyUtils;
import org.apache.dolphinscheduler.spi.utils.StringUtils;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

import java.io.IOException;
import java.lang.reflect.Field;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.zaxxer.hikari.HikariDataSource;

import sun.security.krb5.Config;

public class HiveDataSourceClient extends CommonDataSourceClient {

    private static final Logger logger = LoggerFactory.getLogger(HiveDataSourceClient.class);

    private ScheduledExecutorService kerberosRenewalService;

    private Configuration hadoopConf;
    protected HikariDataSource oneSessionDataSource;
    private UserGroupInformation ugi;

    public HiveDataSourceClient(BaseConnectionParam baseConnectionParam, DbType dbType) {
        super(baseConnectionParam, dbType);
    }

    @Override
    protected void preInit() {
        logger.info("PreInit in {}", getClass().getName());
        this.kerberosRenewalService = Executors.newSingleThreadScheduledExecutor();
    }

    @Override
    protected void initClient(BaseConnectionParam baseConnectionParam, DbType dbType) {
        logger.info("Create Configuration for hive configuration.");
        this.hadoopConf = createHadoopConf();
        logger.info("Create Configuration success.");

        logger.info("Create UserGroupInformation.");
        this.ugi = createUserGroupInformation(baseConnectionParam.getUser());
        logger.info("Create ugi success.");

        super.initClient(baseConnectionParam, dbType);
        this.oneSessionDataSource = JDBCDataSourceProvider.createOneSessionJdbcDataSource(baseConnectionParam);
        logger.info("Init {} success.", getClass().getName());
    }

    @Override
    protected void checkEnv(BaseConnectionParam baseConnectionParam) {
        super.checkEnv(baseConnectionParam);
        checkKerberosEnv();
    }

    private void checkKerberosEnv() {
        String krb5File = PropertyUtils.getString(JAVA_SECURITY_KRB5_CONF_PATH);
        Boolean kerberosStartupState = PropertyUtils.getBoolean(HADOOP_SECURITY_AUTHENTICATION_STARTUP_STATE, false);
        if (kerberosStartupState && StringUtils.isNotBlank(krb5File)) {
            System.setProperty(JAVA_SECURITY_KRB5_CONF, krb5File);
            try {
                Config.refresh();
                Class<?> kerberosName = Class.forName("org.apache.hadoop.security.authentication.util.KerberosName");
                Field field = kerberosName.getDeclaredField("defaultRealm");
                field.setAccessible(true);
                field.set(null, Config.getInstance().getDefaultRealm());
            } catch (Exception e) {
                throw new RuntimeException("Update Kerberos environment failed.", e);
            }
        }
    }

    private UserGroupInformation createUserGroupInformation(String username) {
        String krb5File = PropertyUtils.getString(Constants.JAVA_SECURITY_KRB5_CONF_PATH);
        String keytab = PropertyUtils.getString(Constants.LOGIN_USER_KEY_TAB_PATH);
        String principal = PropertyUtils.getString(Constants.LOGIN_USER_KEY_TAB_USERNAME);

        try {
            UserGroupInformation ugi = CommonUtil.createUGI(getHadoopConf(), principal, keytab, krb5File, username);
            try {
                Field isKeytabField = ugi.getClass().getDeclaredField("isKeytab");
                isKeytabField.setAccessible(true);
                isKeytabField.set(ugi, true);
            } catch (NoSuchFieldException | IllegalAccessException e) {
                logger.warn(e.getMessage());
            }

            kerberosRenewalService.scheduleWithFixedDelay(() -> {
                try {
                    ugi.checkTGTAndReloginFromKeytab();
                } catch (IOException e) {
                    logger.error("Check TGT and Renewal from Keytab error", e);
                }
            }, 5, 5, TimeUnit.MINUTES);
            return ugi;
        } catch (IOException e) {
            throw new RuntimeException("createUserGroupInformation fail. ", e);
        }
    }

    protected Configuration createHadoopConf() {
        Configuration hadoopConf = new Configuration();
        hadoopConf.setBoolean("ipc.client.fallback-to-simple-auth-allowed", true);
        return hadoopConf;
    }

    protected Configuration getHadoopConf() {
        return this.hadoopConf;
    }

    @Override
    public Connection getConnection() {
        try {
            return oneSessionDataSource.getConnection();
        } catch (SQLException e) {
            logger.error("get oneSessionDataSource Connection fail SQLException: {}", e.getMessage(), e);
            return null;
        }
    }

    @Override
    public void close() {
        super.close();
        logger.info("close HiveDataSourceClient.");
        kerberosRenewalService.shutdown();
        this.ugi = null;

        this.oneSessionDataSource.close();
        this.oneSessionDataSource = null;
    }
}
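The client above keeps its Kerberos ticket alive by scheduling `checkTGTAndReloginFromKeytab()` on a single-thread executor with a five-minute fixed delay, logging failures rather than letting the schedule die. As a minimal sketch of the same keep-alive pattern, written in Python for brevity (`relogin` is a hypothetical stand-in for the UGI call, not part of any real API):

```python
import threading


def schedule_renewal(relogin, interval_seconds=300):
    """Call `relogin` every `interval_seconds`, logging failures instead of
    dying, mirroring the fixed-delay executor in the Java client above."""

    def tick():
        try:
            relogin()
        except Exception as e:
            # Mirror the Java catch: log and keep the schedule alive.
            print(f"Check TGT and renewal from keytab error: {e}")
        finally:
            timer = threading.Timer(interval_seconds, tick)
            timer.daemon = True
            timer.start()

    first = threading.Timer(interval_seconds, tick)
    first.daemon = True
    first.start()
    return first
```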
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,518
[Feature][UI-Next] Refactor Login to better match the composition API implementation
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Refactor Login to better match the composition API implementation. Please refer to the main `issue` [#7332](https://github.com/apache/dolphinscheduler/issues/7332). ### Use case _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7518
https://github.com/apache/dolphinscheduler/pull/7519
85beb50f03457d9687e9f1761becd4f37ec41766
ed6a3b6c87f61a7e72c18848d59d2826bdb3e017
"2021-12-21T05:17:53Z"
java
"2021-12-21T07:24:04Z"
dolphinscheduler-ui-next/src/router/routes.ts
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import type { RouteRecordRaw } from 'vue-router'
import type { Component } from 'vue'
import utils from '@/utils'

// All TSX files under the views folder automatically generate mapping relationship
const modules = import.meta.glob('/src/views/**/**.tsx')
const components: { [key: string]: Component } = utils.classification(modules)

/**
 * Basic page
 */
const basePage: RouteRecordRaw[] = [
  {
    path: '/',
    redirect: { name: 'home' },
    component: () => import('@/layouts/content/Content'),
    children: [
      {
        path: '/home',
        name: 'home',
        component: components['home'],
      },
    ],
  },
]

/**
 * Login page
 */
const loginPage: RouteRecordRaw[] = [
  {
    path: '/login',
    name: 'login',
    component: components['login'],
  },
]

const routes: RouteRecordRaw[] = [...basePage, ...loginPage]

// Reorganize the routes above and export them
export default routes
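The implementation of `utils.classification` is not shown in this record (its `mapping.ts` payload is empty below), so the following is a hypothetical sketch, in Python for brevity, of the kind of path-to-key mapping it presumably performs over the `import.meta.glob` result; the exact rule may differ:

```python
import re


def classify(modules):
    """Hypothetical rule: map '/src/views/<name>/index.tsx' to {'<name>': module}."""
    components = {}
    for path, module in modules.items():
        match = re.match(r"^/src/views/([^/]+)/", path)
        if match:
            components[match.group(1)] = module
    return components


print(classify({
    "/src/views/home/index.tsx": "<home module>",
    "/src/views/login/index.tsx": "<login module>",
}))
# {'home': '<home module>', 'login': '<login module>'}
```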
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,518
[Feature][UI-Next] Refactor Login to better match the composition API implementation
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Refactor Login to better match the composition API implementation. Please refer to the main `issue` [#7332](https://github.com/apache/dolphinscheduler/issues/7332). ### Use case _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7518
https://github.com/apache/dolphinscheduler/pull/7519
85beb50f03457d9687e9f1761becd4f37ec41766
ed6a3b6c87f61a7e72c18848d59d2826bdb3e017
"2021-12-21T05:17:53Z"
java
"2021-12-21T07:24:04Z"
dolphinscheduler-ui-next/src/utils/index.ts
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import classification from './classification'

const utils = {
  classification,
}

export default utils
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,518
[Feature][UI-Next] Refactor Login to better match the composition API implementation
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Refactor Login to better match the composition API implementation. Please refer to the main `issue` [#7332](https://github.com/apache/dolphinscheduler/issues/7332). ### Use case _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7518
https://github.com/apache/dolphinscheduler/pull/7519
85beb50f03457d9687e9f1761becd4f37ec41766
ed6a3b6c87f61a7e72c18848d59d2826bdb3e017
"2021-12-21T05:17:53Z"
java
"2021-12-21T07:24:04Z"
dolphinscheduler-ui-next/src/utils/mapping.ts
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,518
[Feature][UI-Next] Refactor Login to better match the composition API implementation
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Refactor Login to better match the composition API implementation. Please refer to the main `issue` [#7332](https://github.com/apache/dolphinscheduler/issues/7332). ### Use case _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7518
https://github.com/apache/dolphinscheduler/pull/7519
85beb50f03457d9687e9f1761becd4f37ec41766
ed6a3b6c87f61a7e72c18848d59d2826bdb3e017
"2021-12-21T05:17:53Z"
java
"2021-12-21T07:24:04Z"
dolphinscheduler-ui-next/src/views/login/index.tsx
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import { defineComponent, reactive, ref, toRefs, withKeys } from 'vue'
import styles from './index.module.scss'
import { useI18n } from 'vue-i18n'
import { NInput, NButton, NSwitch, NForm, NFormItem, FormRules } from 'naive-ui'
import { useRouter } from 'vue-router'
import type { Router } from 'vue-router'
import { queryLog } from '@/service/modules/login'

const login = defineComponent({
  name: 'login',
  setup() {
    const { t, locale } = useI18n()

    const state = reactive({
      loginFormRef: ref(),
      loginForm: {
        userName: '',
        userPassword: '',
      },
      rules: {
        userName: {
          trigger: ['input', 'blur'],
          validator() {
            if (state.loginForm.userName === '') {
              return new Error(`${t('login.userName_tips')}`)
            }
          },
        },
        userPassword: {
          trigger: ['input', 'blur'],
          validator() {
            if (state.loginForm.userPassword === '') {
              return new Error(`${t('login.userPassword_tips')}`)
            }
          },
        },
      } as FormRules,
    })

    const handleChange = (value: string) => {
      locale.value = value
    }

    const router: Router = useRouter()

    const handleLogin = () => {
      state.loginFormRef.validate((valid: any) => {
        if (!valid) {
          queryLog({ ...state.loginForm }).then((res: Response) => {
            console.log('res', res)
            router.push({ path: 'home' })
          })
        } else {
          console.log('Invalid')
        }
      })
    }

    return { t, locale, handleChange, handleLogin, ...toRefs(state) }
  },
  render() {
    return (
      <div class={styles.container}>
        <div class={styles['language-switch']}>
          <NSwitch
            onUpdateValue={this.handleChange}
            checked-value='en_US'
            unchecked-value='zh_CN'
          >
            {{
              checked: () => 'en_US',
              unchecked: () => 'zh_CN',
            }}
          </NSwitch>
        </div>
        <div class={styles['login-model']}>
          <div class={styles.logo}>
            <div class={styles['logo-img']}></div>
          </div>
          <div class={styles['form-model']}>
            <NForm rules={this.rules} ref='loginFormRef'>
              <NFormItem
                label={this.t('login.userName')}
                label-style={{ color: 'black' }}
                path='userName'
              >
                <NInput
                  type='text'
                  size='large'
                  v-model={[this.loginForm.userName, 'value']}
                  placeholder={this.t('login.userName_tips')}
                  autofocus
                  onKeydown={withKeys(this.handleLogin, ['enter'])}
                />
              </NFormItem>
              <NFormItem
                label={this.t('login.userPassword')}
                label-style={{ color: 'black' }}
                path='userPassword'
              >
                <NInput
                  type='password'
                  size='large'
                  v-model={[this.loginForm.userPassword, 'value']}
                  placeholder={this.t('login.userPassword_tips')}
                  onKeydown={withKeys(this.handleLogin, ['enter'])}
                />
              </NFormItem>
            </NForm>
            <NButton round type='primary' onClick={this.handleLogin}>
              {this.t('login.signin')}
            </NButton>
          </div>
        </div>
      </div>
    )
  },
})

export default login
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,518
[Feature][UI-Next] Refactor Login to better match the composition API implementation
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Refactor Login to better match the composition API implementation. Please refer to the main `issue` [#7332](https://github.com/apache/dolphinscheduler/issues/7332). ### Use case _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7518
https://github.com/apache/dolphinscheduler/pull/7519
85beb50f03457d9687e9f1761becd4f37ec41766
ed6a3b6c87f61a7e72c18848d59d2826bdb3e017
"2021-12-21T05:17:53Z"
java
"2021-12-21T07:24:04Z"
dolphinscheduler-ui-next/src/views/login/use-login.ts
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,518
[Feature][UI-Next] Refactor Login to better match the composition API implementation
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Refactor Login to better match the composition API implementation. Please refer to the main `issue` [#7332](https://github.com/apache/dolphinscheduler/issues/7332). ### Use case _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7518
https://github.com/apache/dolphinscheduler/pull/7519
85beb50f03457d9687e9f1761becd4f37ec41766
ed6a3b6c87f61a7e72c18848d59d2826bdb3e017
"2021-12-21T05:17:53Z"
java
"2021-12-21T07:24:04Z"
dolphinscheduler-ui-next/src/views/login/use-translate.ts
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,518
[Feature][UI-Next] Refactor Login to better match the composition API implementation
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Refactor Login to better match the composition API implementation. Please refer to the main `issue` [#7332](https://github.com/apache/dolphinscheduler/issues/7332). ### Use case _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7518
https://github.com/apache/dolphinscheduler/pull/7519
85beb50f03457d9687e9f1761becd4f37ec41766
ed6a3b6c87f61a7e72c18848d59d2826bdb3e017
"2021-12-21T05:17:53Z"
java
"2021-12-21T07:24:04Z"
dolphinscheduler-ui-next/src/views/login/use-validate.ts
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
6,927
[Feature][Python] Add workflow as code task type conditions
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Add Python API task type conditions. Sub-task of #6407. We should cover all parameters from the UI side and make them suitable for Python. ### Use case _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/6927
https://github.com/apache/dolphinscheduler/pull/7505
4c49a8b91fbeb26d6a18314b0d67905a9f296859
e23a4848c038c9b04327fd431d43fdf54eb9b689
"2021-11-19T07:06:09Z"
java
"2021-12-22T03:06:45Z"
dolphinscheduler-python/pydolphinscheduler/examples/task_conditions_example.py
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
6,927
[Feature][Python] Add workflow as code task type conditions
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Add Python API task type conditions. Sub-task of #6407. We should cover all parameters from the UI side and make them suitable for Python. ### Use case _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/6927
https://github.com/apache/dolphinscheduler/pull/7505
4c49a8b91fbeb26d6a18314b0d67905a9f296859
e23a4848c038c9b04327fd431d43fdf54eb9b689
"2021-11-19T07:06:09Z"
java
"2021-12-22T03:06:45Z"
dolphinscheduler-python/pydolphinscheduler/src/pydolphinscheduler/constants.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

"""Constants for pydolphinscheduler."""


class ProcessDefinitionReleaseState:
    """Constants for :class:`pydolphinscheduler.core.process_definition.ProcessDefinition` release state."""

    ONLINE: str = "ONLINE"
    OFFLINE: str = "OFFLINE"


class ProcessDefinitionDefault:
    """Constants default value for :class:`pydolphinscheduler.core.process_definition.ProcessDefinition`."""

    PROJECT: str = "project-pydolphin"
    TENANT: str = "tenant_pydolphin"
    USER: str = "userPythonGateway"
    # TODO simply set the password the same as the username for now
    USER_PWD: str = "userPythonGateway"
    USER_EMAIL: str = "userPythonGateway@dolphinscheduler.com"
    USER_PHONE: str = "11111111111"
    USER_STATE: int = 1

    QUEUE: str = "queuePythonGateway"
    WORKER_GROUP: str = "default"
    TIME_ZONE: str = "Asia/Shanghai"


class TaskPriority(str):
    """Constants for task priority."""

    HIGHEST = "HIGHEST"
    HIGH = "HIGH"
    MEDIUM = "MEDIUM"
    LOW = "LOW"
    LOWEST = "LOWEST"


class TaskFlag(str):
    """Constants for task flag."""

    YES = "YES"
    NO = "NO"


class TaskTimeoutFlag(str):
    """Constants for task timeout flag."""

    CLOSE = "CLOSE"


class TaskType(str):
    """Constants for task type, it will also show you which kind we support up to now."""

    SHELL = "SHELL"
    HTTP = "HTTP"
    PYTHON = "PYTHON"
    SQL = "SQL"
    SUB_PROCESS = "SUB_PROCESS"
    PROCEDURE = "PROCEDURE"
    DATAX = "DATAX"
    DEPENDENT = "DEPENDENT"


class DefaultTaskCodeNum(str):
    """Constants and default value for default task code number."""

    DEFAULT = 1


class JavaGatewayDefault(str):
    """Constants and default value for java gateway."""

    RESULT_MESSAGE_KEYWORD = "msg"
    RESULT_MESSAGE_SUCCESS = "success"

    RESULT_STATUS_KEYWORD = "status"
    RESULT_STATUS_SUCCESS = "SUCCESS"

    RESULT_DATA = "data"


class Delimiter(str):
    """Constants for delimiter."""

    BAR = "-"
    DASH = "/"
    COLON = ":"
    UNDERSCORE = "_"


class Time(str):
    """Constants for date and time formats."""

    FMT_STD_DATE = "%Y-%m-%d"
    LEN_STD_DATE = 10

    FMT_DASH_DATE = "%Y/%m/%d"

    FMT_SHORT_DATE = "%Y%m%d"
    LEN_SHORT_DATE = 8

    FMT_STD_TIME = "%H:%M:%S"
    FMT_NO_COLON_TIME = "%H%M%S"
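As a quick usage sketch, these constants are plain class attributes, so callers read them without instantiating anything; this assumes only that the `pydolphinscheduler` package shown above is importable:

```python
from pydolphinscheduler.constants import Delimiter, TaskPriority, TaskType

# Constants are plain class attributes; no instantiation needed.
print(TaskType.DEPENDENT)   # "DEPENDENT"
print(TaskPriority.MEDIUM)  # "MEDIUM"

# Delimiter values help build identifiers consistently:
print(Delimiter.UNDERSCORE.join(["tenant", "pydolphin"]))  # "tenant_pydolphin"
```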
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
6,927
[Feature][Python] Add workflow as code task type conditions
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Add Python API task type conditions. Sub-task of #6407. We should cover all parameters from the UI side and make them suitable for Python. ### Use case _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/6927
https://github.com/apache/dolphinscheduler/pull/7505
4c49a8b91fbeb26d6a18314b0d67905a9f296859
e23a4848c038c9b04327fd431d43fdf54eb9b689
"2021-11-19T07:06:09Z"
java
"2021-12-22T03:06:45Z"
dolphinscheduler-python/pydolphinscheduler/src/pydolphinscheduler/tasks/condition.py
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
6,927
[Feature][Python] Add workflow as code task type conditions
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Add Python API task type conditions. Sub-task of #6407. We should cover all parameters from the UI side and make them suitable for Python. ### Use case _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/6927
https://github.com/apache/dolphinscheduler/pull/7505
4c49a8b91fbeb26d6a18314b0d67905a9f296859
e23a4848c038c9b04327fd431d43fdf54eb9b689
"2021-11-19T07:06:09Z"
java
"2021-12-22T03:06:45Z"
dolphinscheduler-python/pydolphinscheduler/src/pydolphinscheduler/tasks/dependent.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

"""Task dependent."""

from typing import Dict, Optional, Tuple

from pydolphinscheduler.constants import TaskType
from pydolphinscheduler.core.base import Base
from pydolphinscheduler.core.task import Task
from pydolphinscheduler.exceptions import PyDSJavaGatewayException, PyDSParamException
from pydolphinscheduler.java_gateway import launch_gateway

DEPENDENT_ALL_TASK_IN_WORKFLOW = "0"


class DependentDate(str):
    """Constant of Dependent date value.

    These values set according to Java server side, if you want to add and change
    it, please change Java server side first.
    """

    # TODO Maybe we should add a parent level to DependentDate for ease of use,
    #  such as DependentDate.MONTH.THIS_MONTH

    # Hour
    CURRENT_HOUR = "currentHour"
    LAST_ONE_HOUR = "last1Hour"
    LAST_TWO_HOURS = "last2Hours"
    LAST_THREE_HOURS = "last3Hours"
    LAST_TWENTY_FOUR_HOURS = "last24Hours"

    # Day
    TODAY = "today"
    LAST_ONE_DAYS = "last1Days"
    LAST_TWO_DAYS = "last2Days"
    LAST_THREE_DAYS = "last3Days"
    LAST_SEVEN_DAYS = "last7Days"

    # Week
    THIS_WEEK = "thisWeek"
    LAST_WEEK = "lastWeek"
    LAST_MONDAY = "lastMonday"
    LAST_TUESDAY = "lastTuesday"
    LAST_WEDNESDAY = "lastWednesday"
    LAST_THURSDAY = "lastThursday"
    LAST_FRIDAY = "lastFriday"
    LAST_SATURDAY = "lastSaturday"
    LAST_SUNDAY = "lastSunday"

    # Month
    THIS_MONTH = "thisMonth"
    LAST_MONTH = "lastMonth"
    LAST_MONTH_BEGIN = "lastMonthBegin"
    LAST_MONTH_END = "lastMonthEnd"


class DependentItem(Base):
    """Dependent item object, the minimal unit of a task dependent.

    It declares which project, process_definition and task this task depends on.
    """

    _DEFINE_ATTR = {
        "project_code",
        "definition_code",
        "dep_task_code",
        "cycle",
        "date_value",
    }

    # TODO maybe we should consider overriding the operators `and` and `or` for
    #  DependentItem to support an easier way to set relations
    def __init__(
        self,
        project_name: str,
        process_definition_name: str,
        dependent_task_name: Optional[str] = DEPENDENT_ALL_TASK_IN_WORKFLOW,
        dependent_date: Optional[DependentDate] = DependentDate.TODAY,
    ):
        obj_name = f"{project_name}.{process_definition_name}.{dependent_task_name}.{dependent_date}"
        super().__init__(obj_name)
        self.project_name = project_name
        self.process_definition_name = process_definition_name
        self.dependent_task_name = dependent_task_name
        if dependent_date is None:
            raise PyDSParamException(
                "Parameter dependent_date must be provided, but got None."
            )
        else:
            self.dependent_date = dependent_date
        self._code = {}

    def __repr__(self) -> str:
        return "depend_item_list"

    @property
    def project_code(self) -> str:
        """Get dependent project code."""
        return self.get_code_from_gateway().get("projectCode")

    @property
    def definition_code(self) -> str:
        """Get dependent definition code."""
        return self.get_code_from_gateway().get("processDefinitionCode")

    @property
    def dep_task_code(self) -> str:
        """Get dependent task code."""
        if self.is_all_task:
            return DEPENDENT_ALL_TASK_IN_WORKFLOW
        else:
            return self.get_code_from_gateway().get("taskDefinitionCode")

    # TODO Maybe we should get cycle from dependent date class.
    @property
    def cycle(self) -> str:
        """Get dependent cycle."""
        if "Hour" in self.dependent_date:
            return "hour"
        elif self.dependent_date == "today" or "Days" in self.dependent_date:
            return "day"
        elif "Month" in self.dependent_date:
            return "month"
        else:
            return "week"

    @property
    def date_value(self) -> str:
        """Get dependent date."""
        return self.dependent_date

    @property
    def is_all_task(self) -> bool:
        """Check whether this item depends on all tasks in the workflow or not."""
        return self.dependent_task_name == DEPENDENT_ALL_TASK_IN_WORKFLOW

    @property
    def code_parameter(self) -> Tuple:
        """Get name info parameter to query code."""
        param = (
            self.project_name,
            self.process_definition_name,
            self.dependent_task_name if not self.is_all_task else None,
        )
        return param

    def get_code_from_gateway(self) -> Dict:
        """Get project, definition, task code from given parameter."""
        if self._code:
            return self._code
        else:
            gateway = launch_gateway()
            try:
                self._code = gateway.entry_point.getDependentInfo(*self.code_parameter)
                return self._code
            except Exception:
                raise PyDSJavaGatewayException("Function get_code_from_gateway error.")


class DependentOperator(Base):
    """Set DependentItem or dependItemList with specific operator."""

    _DEFINE_ATTR = {
        "relation",
    }

    DEPENDENT_ITEM = "DependentItem"
    DEPENDENT_OPERATOR = "DependentOperator"

    def __init__(self, *args):
        super().__init__(self.__class__.__name__)
        self.args = args

    def __repr__(self) -> str:
        return "depend_task_list"

    @classmethod
    def operator_name(cls) -> str:
        """Get operator name in different class."""
        return cls.__name__.upper()

    @property
    def relation(self) -> str:
        """Get operator name in different class, for function :func:`get_define`."""
        return self.operator_name()

    def set_define_attr(self) -> str:
        """Set attribute to function :func:`get_define`.

        It is a wrapper for both `And` and `Or` operator.
        """
        result = []
        attr = None
        for dependent in self.args:
            if isinstance(dependent, (DependentItem, DependentOperator)):
                if attr is None:
                    attr = repr(dependent)
                elif repr(dependent) != attr:
                    raise PyDSParamException(
                        "Dependent %s operator parameter only supports the same type.",
                        self.relation,
                    )
            else:
                raise PyDSParamException(
                    "Dependent %s operator parameter supports DependentItem and "
                    "DependentOperator but got %s.",
                    (self.relation, type(dependent)),
                )
            result.append(dependent.get_define())
        setattr(self, attr, result)
        return attr

    def get_define(self, camel_attr=True) -> Dict:
        """Overwrite Base.get_define to get task dependent specific get define."""
        attr = self.set_define_attr()
        dependent_define_attr = self._DEFINE_ATTR.union({attr})
        return super().get_define_custom(
            camel_attr=True, custom_attr=dependent_define_attr
        )


class And(DependentOperator):
    """Operator And for task dependent.

    It could accept both :class:`DependentItem` and children of
    :class:`DependentOperator`, and set AND condition to those args.
    """

    def __init__(self, *args):
        super().__init__(*args)


class Or(DependentOperator):
    """Operator Or for task dependent.

    It could accept both :class:`DependentItem` and children of
    :class:`DependentOperator`, and set OR condition to those args.
    """

    def __init__(self, *args):
        super().__init__(*args)


class Dependent(Task):
    """Task dependent object, declares the behavior of a dependent task to dolphinscheduler."""

    def __init__(self, name: str, dependence: DependentOperator, *args, **kwargs):
        super().__init__(name, TaskType.DEPENDENT, *args, **kwargs)
        self.dependence = dependence

    @property
    def task_params(self, camel_attr: bool = True, custom_attr: set = None) -> Dict:
        """Override Task.task_params for dependent task.

        Dependent tasks have a special attribute `dependence`; in most other tasks
        this attribute is None and the empty dict `{}` is used as the default value.
        We do not use the class attribute `_task_custom_attr` so as to avoid
        attribute overwriting.
        """
        params = super().task_params
        params["dependence"] = self.dependence.get_define()
        return params
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
6,927
[Feature][Python] Add workflow as code task type conditions
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Add python api task type conditions. sub task in #6407. we should cover all parameter from UI side and make it suitable for python. ### Use case _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/6927
https://github.com/apache/dolphinscheduler/pull/7505
4c49a8b91fbeb26d6a18314b0d67905a9f296859
e23a4848c038c9b04327fd431d43fdf54eb9b689
"2021-11-19T07:06:09Z"
java
"2021-12-22T03:06:45Z"
dolphinscheduler-python/pydolphinscheduler/tests/tasks/test_condition.py
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
6,928
[Feature][Python] Add workflow as code task type switch
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Add Python API task type switch. Sub-task of #6407. We should cover all parameters from the UI side and make them suitable for Python. ### Use case _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/6928
https://github.com/apache/dolphinscheduler/pull/7531
e23a4848c038c9b04327fd431d43fdf54eb9b689
946a0c7c5768506e7ca92a21e7aed6ad5aa60871
"2021-11-19T07:07:07Z"
java
"2021-12-22T03:46:34Z"
dolphinscheduler-python/pydolphinscheduler/examples/task_switch_example.py
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
6,928
[Feature][Python] Add workflow as code task type switch
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Add Python API task type switch. Sub-task of #6407. We should cover all parameters from the UI side and make them suitable for Python. ### Use case _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/6928
https://github.com/apache/dolphinscheduler/pull/7531
e23a4848c038c9b04327fd431d43fdf54eb9b689
946a0c7c5768506e7ca92a21e7aed6ad5aa60871
"2021-11-19T07:07:07Z"
java
"2021-12-22T03:46:34Z"
dolphinscheduler-python/pydolphinscheduler/src/pydolphinscheduler/constants.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

"""Constants for pydolphinscheduler."""


class ProcessDefinitionReleaseState:
    """Constants for :class:`pydolphinscheduler.core.process_definition.ProcessDefinition` release state."""

    ONLINE: str = "ONLINE"
    OFFLINE: str = "OFFLINE"


class ProcessDefinitionDefault:
    """Constants default value for :class:`pydolphinscheduler.core.process_definition.ProcessDefinition`."""

    PROJECT: str = "project-pydolphin"
    TENANT: str = "tenant_pydolphin"
    USER: str = "userPythonGateway"
    # TODO simply set the password the same as the username for now
    USER_PWD: str = "userPythonGateway"
    USER_EMAIL: str = "userPythonGateway@dolphinscheduler.com"
    USER_PHONE: str = "11111111111"
    USER_STATE: int = 1

    QUEUE: str = "queuePythonGateway"
    WORKER_GROUP: str = "default"
    TIME_ZONE: str = "Asia/Shanghai"


class TaskPriority(str):
    """Constants for task priority."""

    HIGHEST = "HIGHEST"
    HIGH = "HIGH"
    MEDIUM = "MEDIUM"
    LOW = "LOW"
    LOWEST = "LOWEST"


class TaskFlag(str):
    """Constants for task flag."""

    YES = "YES"
    NO = "NO"


class TaskTimeoutFlag(str):
    """Constants for task timeout flag."""

    CLOSE = "CLOSE"


class TaskType(str):
    """Constants for task type, it will also show you which kind we support up to now."""

    SHELL = "SHELL"
    HTTP = "HTTP"
    PYTHON = "PYTHON"
    SQL = "SQL"
    SUB_PROCESS = "SUB_PROCESS"
    PROCEDURE = "PROCEDURE"
    DATAX = "DATAX"
    DEPENDENT = "DEPENDENT"
    CONDITIONS = "CONDITIONS"


class DefaultTaskCodeNum(str):
    """Constants and default value for default task code number."""

    DEFAULT = 1


class JavaGatewayDefault(str):
    """Constants and default value for java gateway."""

    RESULT_MESSAGE_KEYWORD = "msg"
    RESULT_MESSAGE_SUCCESS = "success"

    RESULT_STATUS_KEYWORD = "status"
    RESULT_STATUS_SUCCESS = "SUCCESS"

    RESULT_DATA = "data"


class Delimiter(str):
    """Constants for delimiter."""

    BAR = "-"
    DASH = "/"
    COLON = ":"
    UNDERSCORE = "_"


class Time(str):
    """Constants for date and time formats."""

    FMT_STD_DATE = "%Y-%m-%d"
    LEN_STD_DATE = 10

    FMT_DASH_DATE = "%Y/%m/%d"

    FMT_SHORT_DATE = "%Y%m%d"
    LEN_SHORT_DATE = 8

    FMT_STD_TIME = "%H:%M:%S"
    FMT_NO_COLON_TIME = "%H%M%S"
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
6,928
[Feature][Python] Add workflow as code task type switch
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Add Python API task type switch. Sub-task of #6407. We should cover all parameters from the UI side and make them suitable for Python. ### Use case _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/6928
https://github.com/apache/dolphinscheduler/pull/7531
e23a4848c038c9b04327fd431d43fdf54eb9b689
946a0c7c5768506e7ca92a21e7aed6ad5aa60871
"2021-11-19T07:07:07Z"
java
"2021-12-22T03:46:34Z"
dolphinscheduler-python/pydolphinscheduler/src/pydolphinscheduler/tasks/switch.py
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
6,928
[Feature][Python] Add workflow as code task type switch
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Add Python API task type switch. Sub-task of #6407. We should cover all parameters from the UI side and make them suitable for Python. ### Use case _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/6928
https://github.com/apache/dolphinscheduler/pull/7531
e23a4848c038c9b04327fd431d43fdf54eb9b689
946a0c7c5768506e7ca92a21e7aed6ad5aa60871
"2021-11-19T07:07:07Z"
java
"2021-12-22T03:46:34Z"
dolphinscheduler-python/pydolphinscheduler/tests/tasks/test_switch.py
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,542
[Feature][UI Next] Add charts setting.
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Add charts setting. ### Use case _No response_ ### Related issues [#7332](https://github.com/apache/dolphinscheduler/issues/7332) ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7542
https://github.com/apache/dolphinscheduler/pull/7547
46fa9ed9c83b65ae1b09a3fd5e714e1323c9346e
b2a378693f2030c6742feb9ab3b4874426dac5d0
"2021-12-22T07:12:19Z"
java
"2021-12-22T09:39:06Z"
dolphinscheduler-ui-next/src/components/chart/modules/Bar.tsx
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,542
[Feature][UI Next] Add charts setting.
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Add charts setting. ### Use case _No response_ ### Related issues [#7332](https://github.com/apache/dolphinscheduler/issues/7332) ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7542
https://github.com/apache/dolphinscheduler/pull/7547
46fa9ed9c83b65ae1b09a3fd5e714e1323c9346e
b2a378693f2030c6742feb9ab3b4874426dac5d0
"2021-12-22T07:12:19Z"
java
"2021-12-22T09:39:06Z"
dolphinscheduler-ui-next/src/views/home/index.tsx
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import { defineComponent } from 'vue'
import styles from './index.module.scss'
import PieChart from '@/components/chart/modules/Pie'
import GaugeChart from '@/components/chart/modules/Gauge'

export default defineComponent({
  name: 'home',
  setup() {},
  render() {
    return (
      <div class={styles.container}>
        Home Test
        <PieChart />
        <GaugeChart />
      </div>
    )
  },
})
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,537
[Bug] [Master] dependent node retry delay did not work
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened ![](https://ftp.bmp.ovh/imgs/2021/12/2f85bc283fac5f14.png) ![](https://ftp.bmp.ovh/imgs/2021/12/6e2e6bcc4a4f3a1f.png) ### What you expected to happen Dependent node retry delay works. ### How to reproduce above ### Anything else _No response_ ### Version 2.0.1 ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7537
https://github.com/apache/dolphinscheduler/pull/7551
b2a378693f2030c6742feb9ab3b4874426dac5d0
9f56123a26096e9e423a447e659180b9153de07d
"2021-12-22T02:31:36Z"
java
"2021-12-22T12:21:52Z"
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/StateWheelExecuteThread.java
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.dolphinscheduler.server.master.runner;

import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.enums.StateEvent;
import org.apache.dolphinscheduler.common.enums.StateEventType;
import org.apache.dolphinscheduler.common.enums.TimeoutFlag;
import org.apache.dolphinscheduler.common.thread.Stopper;
import org.apache.dolphinscheduler.common.utils.DateUtils;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.dao.entity.TaskDefinition;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.server.master.cache.ProcessInstanceExecCacheManager;
import org.apache.dolphinscheduler.server.master.config.MasterConfig;

import org.apache.hadoop.util.ThreadUtil;

import java.util.Map.Entry;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

/**
 * 1. timeout check wheel
 * 2. dependent task check wheel
 */
@Component
public class StateWheelExecuteThread extends Thread {

    private static final Logger logger = LoggerFactory.getLogger(StateWheelExecuteThread.class);

    /**
     * process timeout check list
     */
    private ConcurrentLinkedQueue<Integer> processInstanceTimeoutCheckList = new ConcurrentLinkedQueue<>();

    /**
     * task time out check list, key is taskInstanceId, value is processInstanceId
     */
    private ConcurrentHashMap<Integer, Integer> taskInstanceTimeoutCheckList = new ConcurrentHashMap<>();

    /**
     * task retry check list, key is taskInstanceId, value is processInstanceId
     */
    private ConcurrentHashMap<Integer, Integer> taskInstanceRetryCheckList = new ConcurrentHashMap<>();

    @Autowired
    private MasterConfig masterConfig;

    @Autowired
    private WorkflowExecuteThreadPool workflowExecuteThreadPool;

    @Autowired
    private ProcessInstanceExecCacheManager processInstanceExecCacheManager;

    @Override
    public void run() {
        while (Stopper.isRunning()) {
            try {
                checkTask4Timeout();
                checkTask4Retry();
                checkProcess4Timeout();
            } catch (Exception e) {
                logger.error("state wheel thread check error:", e);
            }
            ThreadUtil.sleepAtLeastIgnoreInterrupts((long) masterConfig.getStateWheelInterval() * Constants.SLEEP_TIME_MILLIS);
        }
    }

    public void addProcess4TimeoutCheck(ProcessInstance processInstance) {
        processInstanceTimeoutCheckList.add(processInstance.getId());
    }

    public void removeProcess4TimeoutCheck(ProcessInstance processInstance) {
        processInstanceTimeoutCheckList.remove(processInstance.getId());
    }

    public void addTask4TimeoutCheck(TaskInstance taskInstance) {
        if (taskInstanceTimeoutCheckList.containsKey(taskInstance.getId())) {
            return;
        }
        TaskDefinition taskDefinition = taskInstance.getTaskDefine();
        if (taskDefinition == null) {
            logger.error("taskDefinition is null, taskId:{}", taskInstance.getId());
            return;
        }
        if (TimeoutFlag.OPEN == taskDefinition.getTimeoutFlag()) {
            taskInstanceTimeoutCheckList.put(taskInstance.getId(), taskInstance.getProcessInstanceId());
        }
        if (taskInstance.isDependTask() || taskInstance.isSubProcess()) {
            taskInstanceTimeoutCheckList.put(taskInstance.getId(), taskInstance.getProcessInstanceId());
        }
    }

    public void removeTask4TimeoutCheck(TaskInstance taskInstance) {
        taskInstanceTimeoutCheckList.remove(taskInstance.getId());
    }

    public void addTask4RetryCheck(TaskInstance taskInstance) {
        if (taskInstanceRetryCheckList.containsKey(taskInstance.getId())) {
            return;
        }
        TaskDefinition taskDefinition = taskInstance.getTaskDefine();
        if (taskDefinition == null) {
            logger.error("taskDefinition is null, taskId:{}", taskInstance.getId());
            return;
        }
        if (taskInstance.taskCanRetry()) {
            taskInstanceRetryCheckList.put(taskInstance.getId(), taskInstance.getProcessInstanceId());
        }
        if (taskInstance.isDependTask() || taskInstance.isSubProcess()) {
            taskInstanceRetryCheckList.put(taskInstance.getId(), taskInstance.getProcessInstanceId());
        }
    }

    public void removeTask4RetryCheck(TaskInstance taskInstance) {
        taskInstanceRetryCheckList.remove(taskInstance.getId());
    }

    private void checkTask4Timeout() {
        if (taskInstanceTimeoutCheckList.isEmpty()) {
            return;
        }
        for (Entry<Integer, Integer> entry : taskInstanceTimeoutCheckList.entrySet()) {
            int processInstanceId = entry.getValue();
            int taskInstanceId = entry.getKey();
            WorkflowExecuteThread workflowExecuteThread = processInstanceExecCacheManager.getByProcessInstanceId(processInstanceId);
            if (workflowExecuteThread == null) {
                logger.warn("can not find workflowExecuteThread, this check event will remove, processInstanceId:{}, taskInstanceId:{}",
                        processInstanceId, taskInstanceId);
                taskInstanceTimeoutCheckList.remove(taskInstanceId);
                continue;
            }
            TaskInstance taskInstance = workflowExecuteThread.getTaskInstance(taskInstanceId);
            if (taskInstance == null) {
                continue;
            }
            if (TimeoutFlag.OPEN == taskInstance.getTaskDefine().getTimeoutFlag()) {
                long timeRemain = DateUtils.getRemainTime(taskInstance.getStartTime(), (long) taskInstance.getTaskDefine().getTimeout() * Constants.SEC_2_MINUTES_TIME_UNIT);
                if (timeRemain < 0) {
                    addTaskTimeoutEvent(taskInstance);
                    taskInstanceTimeoutCheckList.remove(taskInstance.getId());
                }
            }
        }
    }

    private void checkTask4Retry() {
        if (taskInstanceRetryCheckList.isEmpty()) {
            return;
        }
        for (Entry<Integer, Integer> entry : taskInstanceRetryCheckList.entrySet()) {
            int processInstanceId = entry.getValue();
            int taskInstanceId = entry.getKey();
            WorkflowExecuteThread workflowExecuteThread = processInstanceExecCacheManager.getByProcessInstanceId(processInstanceId);
            if (workflowExecuteThread == null) {
                logger.warn("can not find workflowExecuteThread, this check event will remove, processInstanceId:{}, taskInstanceId:{}",
                        processInstanceId, taskInstanceId);
                taskInstanceRetryCheckList.remove(taskInstanceId);
                continue;
            }
            TaskInstance taskInstance = workflowExecuteThread.getTaskInstance(taskInstanceId);
            if (taskInstance == null) {
                continue;
            }
            if (taskInstance.taskCanRetry() && taskInstance.retryTaskIntervalOverTime()) {
                addTaskStateChangeEvent(taskInstance);
                taskInstanceRetryCheckList.remove(taskInstance.getId());
            }
            if (taskInstance.isSubProcess() || taskInstance.isDependTask()) {
                addTaskStateChangeEvent(taskInstance);
            }
        }
    }

    private void checkProcess4Timeout() {
        if (processInstanceTimeoutCheckList.isEmpty()) {
            return;
        }
        for (Integer processInstanceId : processInstanceTimeoutCheckList) {
            if (processInstanceId == null) {
                continue;
            }
            WorkflowExecuteThread workflowExecuteThread = processInstanceExecCacheManager.getByProcessInstanceId(processInstanceId);
            if (workflowExecuteThread == null) {
                logger.warn("can not find workflowExecuteThread, this check event will remove, processInstanceId:{}", processInstanceId);
                processInstanceTimeoutCheckList.remove(processInstanceId);
                continue;
            }
            ProcessInstance processInstance = workflowExecuteThread.getProcessInstance();
            if (processInstance == null) {
                continue;
            }
            long timeRemain = DateUtils.getRemainTime(processInstance.getStartTime(), (long) processInstance.getTimeout() * Constants.SEC_2_MINUTES_TIME_UNIT);
            if (timeRemain < 0) {
                addProcessTimeoutEvent(processInstance);
                processInstanceTimeoutCheckList.remove(processInstance.getId());
            }
        }
    }

    private void addTaskStateChangeEvent(TaskInstance taskInstance) {
        StateEvent stateEvent = new StateEvent();
        stateEvent.setType(StateEventType.TASK_STATE_CHANGE);
        stateEvent.setProcessInstanceId(taskInstance.getProcessInstanceId());
        stateEvent.setTaskInstanceId(taskInstance.getId());
        stateEvent.setExecutionStatus(ExecutionStatus.RUNNING_EXECUTION);
        workflowExecuteThreadPool.submitStateEvent(stateEvent);
    }

    private void addTaskTimeoutEvent(TaskInstance taskInstance) {
        StateEvent stateEvent = new StateEvent();
        stateEvent.setType(StateEventType.TASK_TIMEOUT);
        stateEvent.setProcessInstanceId(taskInstance.getProcessInstanceId());
        stateEvent.setTaskInstanceId(taskInstance.getId());
        workflowExecuteThreadPool.submitStateEvent(stateEvent);
    }

    private void addProcessTimeoutEvent(ProcessInstance processInstance) {
        StateEvent stateEvent = new StateEvent();
        stateEvent.setType(StateEventType.PROCESS_TIMEOUT);
        stateEvent.setProcessInstanceId(processInstance.getId());
        workflowExecuteThreadPool.submitStateEvent(stateEvent);
    }
}
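The bug report above concerns retry delay for dependent nodes: in `checkTask4Retry`, ordinary tasks are resubmitted only once `taskCanRetry()` and `retryTaskIntervalOverTime()` both hold, while sub-process and dependent tasks fire a state-change event on every wheel tick. As a minimal sketch of the interval gate, written in Python for brevity (the field names are illustrative, not the actual `TaskInstance` API):

```python
from datetime import datetime, timedelta
from typing import Optional


def retry_interval_over_time(last_end_time: datetime,
                             retry_interval_minutes: int,
                             now: Optional[datetime] = None) -> bool:
    """True once at least `retry_interval_minutes` have passed since the task last finished."""
    now = now or datetime.now()
    return now - last_end_time >= timedelta(minutes=retry_interval_minutes)


# A task that failed 2 minutes ago with a 5-minute retry interval must keep waiting:
failed_at = datetime.now() - timedelta(minutes=2)
print(retry_interval_over_time(failed_at, retry_interval_minutes=5))  # False
```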
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,537
[Bug] [Master] dependent node retry delay did not work
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened ![](https://ftp.bmp.ovh/imgs/2021/12/2f85bc283fac5f14.png) ![](https://ftp.bmp.ovh/imgs/2021/12/6e2e6bcc4a4f3a1f.png) ### What you expected to happen Dependent node retry delay works. ### How to reproduce above ### Anything else _No response_ ### Version 2.0.1 ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7537
https://github.com/apache/dolphinscheduler/pull/7551
b2a378693f2030c6742feb9ab3b4874426dac5d0
9f56123a26096e9e423a447e659180b9153de07d
"2021-12-22T02:31:36Z"
java
"2021-12-22T12:21:52Z"
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteThread.java
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.dolphinscheduler.server.master.runner;

import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_END_DATE;
import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_START_DATE;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_RECOVERY_START_NODE_STRING;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_START_NODES;
import static org.apache.dolphinscheduler.common.Constants.DEFAULT_WORKER_GROUP;

import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.CommandType;
import org.apache.dolphinscheduler.common.enums.DependResult;
import org.apache.dolphinscheduler.common.enums.Direct;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.enums.FailureStrategy;
import org.apache.dolphinscheduler.common.enums.Flag;
import org.apache.dolphinscheduler.common.enums.Priority;
import org.apache.dolphinscheduler.common.enums.StateEvent;
import org.apache.dolphinscheduler.common.enums.StateEventType;
import org.apache.dolphinscheduler.common.enums.TaskDependType;
import org.apache.dolphinscheduler.common.enums.TaskGroupQueueStatus;
import org.apache.dolphinscheduler.common.enums.TaskTimeoutStrategy;
import org.apache.dolphinscheduler.common.enums.TimeoutFlag;
import org.apache.dolphinscheduler.common.graph.DAG;
import org.apache.dolphinscheduler.common.model.TaskNode;
import org.apache.dolphinscheduler.common.model.TaskNodeRelation;
import org.apache.dolphinscheduler.common.process.ProcessDag;
import org.apache.dolphinscheduler.common.process.Property;
import org.apache.dolphinscheduler.common.utils.DateUtils;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.common.utils.NetUtils;
import org.apache.dolphinscheduler.common.utils.ParameterUtils;
import org.apache.dolphinscheduler.dao.entity.Command;
import org.apache.dolphinscheduler.dao.entity.Environment;
import org.apache.dolphinscheduler.dao.entity.ProcessDefinition;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.dao.entity.ProcessTaskRelation;
import org.apache.dolphinscheduler.dao.entity.ProjectUser;
import org.apache.dolphinscheduler.dao.entity.Schedule;
import org.apache.dolphinscheduler.dao.entity.TaskDefinition;
import org.apache.dolphinscheduler.dao.entity.TaskDefinitionLog;
import org.apache.dolphinscheduler.dao.entity.TaskGroupQueue;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.dao.utils.DagHelper;
import org.apache.dolphinscheduler.remote.command.HostUpdateCommand;
import org.apache.dolphinscheduler.remote.utils.Host;
import org.apache.dolphinscheduler.server.master.config.MasterConfig;
import org.apache.dolphinscheduler.server.master.dispatch.executor.NettyExecutorManager;
import org.apache.dolphinscheduler.server.master.runner.task.ITaskProcessor;
import org.apache.dolphinscheduler.server.master.runner.task.TaskAction;
import org.apache.dolphinscheduler.server.master.runner.task.TaskProcessorFactory;
import org.apache.dolphinscheduler.service.alert.ProcessAlertManager;
import org.apache.dolphinscheduler.service.process.ProcessService;
import org.apache.dolphinscheduler.service.quartz.cron.CronUtils;
import org.apache.dolphinscheduler.service.queue.PeerTaskInstancePriorityQueue;

import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.lang.StringUtils;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Date;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.google.common.collect.Lists;

/**
 * master exec thread,split dag
 */
public class WorkflowExecuteThread {

    /**
     * logger of WorkflowExecuteThread
     */
    private static final Logger logger = LoggerFactory.getLogger(WorkflowExecuteThread.class);

    /**
     * master config
     */
    private MasterConfig masterConfig;

    /**
     * process service
     */
    private ProcessService processService;

    /**
     * alert manager
     */
    private ProcessAlertManager processAlertManager;

    /**
     * netty executor manager
     */
    private NettyExecutorManager nettyExecutorManager;

    /**
     * process instance
     */
    private ProcessInstance processInstance;

    /**
     * process definition
     */
    private ProcessDefinition processDefinition;

    /**
     * the object of DAG
     */
    private DAG<String, TaskNode, TaskNodeRelation> dag;

    /**
     * key of workflow
     */
    private String key;

    /**
     * start flag, true: start nodes submit completely
     */
    private boolean isStart = false;

    /**
     * submit failure nodes
     */
    private boolean taskFailedSubmit = false;

    /**
     * task instance hash map, taskId as key
     */
    private Map<Integer, TaskInstance> taskInstanceMap = new ConcurrentHashMap<>();

    /**
     * running TaskNode, taskId as key
     */
    private final Map<Integer, ITaskProcessor> activeTaskProcessorMaps = new ConcurrentHashMap<>();

    /**
     * valid task map, taskCode as key, taskId as value
     */
    private Map<String, Integer> validTaskMap = new ConcurrentHashMap<>();

    /**
     * error task map, taskCode as key, taskId as value
     */
    private Map<String, Integer> errorTaskMap = new ConcurrentHashMap<>();

    /**
     * complete task map, taskCode as key, taskId as value
     */
    private Map<String, Integer> completeTaskMap = new ConcurrentHashMap<>();

    /**
     * depend failed task map, taskCode as key, taskId as value
     */
    private Map<String, Integer> dependFailedTaskMap = new ConcurrentHashMap<>();

    /**
     * forbidden task map, code as key
     */
    private Map<String, TaskNode> forbiddenTaskMap = new ConcurrentHashMap<>();

    /**
     * skip task map, code as key
     */
    private Map<String, TaskNode> skipTaskNodeMap = new ConcurrentHashMap<>();

    /**
     * complement date list
     */
    private List<Date> complementListDate = Lists.newLinkedList();

    /**
     * state event queue
     */
    private ConcurrentLinkedQueue<StateEvent> stateEvents = new ConcurrentLinkedQueue<>();

    /**
     * ready to submit task queue
     */
    private PeerTaskInstancePriorityQueue readyToSubmitTaskQueue = new PeerTaskInstancePriorityQueue();

    /**
     * state wheel execute thread
     */
    private StateWheelExecuteThread stateWheelExecuteThread;

    /**
     * constructor of WorkflowExecuteThread
     *
     * @param processInstance processInstance
     * @param processService processService
     * @param nettyExecutorManager nettyExecutorManager
     * @param processAlertManager processAlertManager
     * @param masterConfig masterConfig
     * @param stateWheelExecuteThread stateWheelExecuteThread
     */
    public WorkflowExecuteThread(ProcessInstance processInstance
            , ProcessService processService
            , NettyExecutorManager nettyExecutorManager
            , ProcessAlertManager processAlertManager
            , MasterConfig masterConfig
            , StateWheelExecuteThread stateWheelExecuteThread) {
        this.processService = processService;
        this.processInstance = processInstance;
        this.masterConfig = masterConfig;
        this.nettyExecutorManager = nettyExecutorManager;
        this.processAlertManager = processAlertManager;
        this.stateWheelExecuteThread = stateWheelExecuteThread;
    }

    /**
     * the process start nodes are submitted completely.
     */
    public boolean isStart() {
        return this.isStart;
    }

    /**
     * handle event
     */
    public void handleEvents() {
        if (!isStart) {
            return;
        }
        while (!this.stateEvents.isEmpty()) {
            try {
                StateEvent stateEvent = this.stateEvents.peek();
                if (stateEventHandler(stateEvent)) {
                    this.stateEvents.remove(stateEvent);
                }
            } catch (Exception e) {
                logger.error("state handle error:", e);
            }
        }
    }

    public String getKey() {
        if (StringUtils.isNotEmpty(key) || this.processDefinition == null) {
            return key;
        }

        key = String.format("%d_%d_%d",
                this.processDefinition.getCode(),
                this.processDefinition.getVersion(),
                this.processInstance.getId());
        return key;
    }

    public boolean addStateEvent(StateEvent stateEvent) {
        if (processInstance.getId() != stateEvent.getProcessInstanceId()) {
            logger.info("state event would be abounded :{}", stateEvent.toString());
            return false;
        }
        this.stateEvents.add(stateEvent);
        return true;
    }

    public int eventSize() {
        return this.stateEvents.size();
    }

    public ProcessInstance getProcessInstance() {
        return this.processInstance;
    }

    private boolean stateEventHandler(StateEvent stateEvent) {
        logger.info("process event: {}", stateEvent.toString());

        if (!checkProcessInstance(stateEvent)) {
            return false;
        }

        boolean result = false;
        switch (stateEvent.getType()) {
            case PROCESS_STATE_CHANGE:
                result = processStateChangeHandler(stateEvent);
                break;
            case TASK_STATE_CHANGE:
                result = taskStateChangeHandler(stateEvent);
                break;
            case PROCESS_TIMEOUT:
                result = processTimeout();
                break;
            case TASK_TIMEOUT:
                result = taskTimeout(stateEvent);
                break;
            case WAIT_TASK_GROUP:
                result = checkForceStartAndWakeUp(stateEvent);
                break;
            default:
                break;
        }

        if (result) {
            this.stateEvents.remove(stateEvent);
        }
        return result;
    }

    private boolean checkForceStartAndWakeUp(StateEvent stateEvent) {
        TaskGroupQueue taskGroupQueue = this.processService.loadTaskGroupQueue(stateEvent.getTaskInstanceId());
        if (taskGroupQueue.getForceStart() == Flag.YES.getCode()) {
            ITaskProcessor taskProcessor = activeTaskProcessorMaps.get(stateEvent.getTaskInstanceId());
            TaskInstance taskInstance = this.processService.findTaskInstanceById(stateEvent.getTaskInstanceId());
            ProcessInstance processInstance = this.processService.findProcessInstanceById(taskInstance.getProcessInstanceId());
            taskProcessor.dispatch(taskInstance, processInstance);
            this.processService.updateTaskGroupQueueStatus(taskGroupQueue.getId(), TaskGroupQueueStatus.ACQUIRE_SUCCESS.getCode());
            return true;
        }
        if (taskGroupQueue.getInQueue() == Flag.YES.getCode()) {
            boolean acquireTaskGroup = processService.acquireTaskGroupAgain(taskGroupQueue);
            if (acquireTaskGroup) {
                ITaskProcessor taskProcessor = activeTaskProcessorMaps.get(stateEvent.getTaskInstanceId());
                TaskInstance taskInstance = this.processService.findTaskInstanceById(stateEvent.getTaskInstanceId());
                ProcessInstance processInstance = this.processService.findProcessInstanceById(taskInstance.getProcessInstanceId());
                taskProcessor.dispatch(taskInstance, processInstance);
                return true;
            }
        }
        return false;
    }

    private boolean taskTimeout(StateEvent stateEvent) {
        if (!checkTaskInstanceByStateEvent(stateEvent)) {
            return true;
        }

        TaskInstance taskInstance = taskInstanceMap.get(stateEvent.getTaskInstanceId());
        if (TimeoutFlag.CLOSE == taskInstance.getTaskDefine().getTimeoutFlag()) {
            return true;
        }
        TaskTimeoutStrategy taskTimeoutStrategy = taskInstance.getTaskDefine().getTimeoutNotifyStrategy();
        if (TaskTimeoutStrategy.FAILED == taskTimeoutStrategy) {
            ITaskProcessor taskProcessor = activeTaskProcessorMaps.get(stateEvent.getTaskInstanceId());
            taskProcessor.action(TaskAction.TIMEOUT);
        } else {
            processAlertManager.sendTaskTimeoutAlert(processInstance, taskInstance, taskInstance.getTaskDefine());
        }
        return true;
    }

    private boolean processTimeout() {
        this.processAlertManager.sendProcessTimeoutAlert(this.processInstance, this.processDefinition);
        return true;
    }

    private boolean taskStateChangeHandler(StateEvent stateEvent) {
        if (!checkTaskInstanceByStateEvent(stateEvent)) {
            return true;
        }

        TaskInstance task = getTaskInstance(stateEvent.getTaskInstanceId());
        if (task.getState() == null) {
            logger.error("task state is null, state handler error: {}", stateEvent);
            return true;
        }

        if (task.getState().typeIsFinished() && !completeTaskMap.containsKey(Long.toString(task.getTaskCode()))) {
            taskFinished(task);
            if (task.getTaskGroupId() > 0) {
                //release task group
                TaskInstance nextTaskInstance = this.processService.releaseTaskGroup(task);
                if (nextTaskInstance != null) {
                    if (nextTaskInstance.getProcessInstanceId() == task.getProcessInstanceId()) {
                        StateEvent nextEvent = new StateEvent();
                        nextEvent.setProcessInstanceId(this.processInstance.getId());
                        nextEvent.setTaskInstanceId(nextTaskInstance.getId());
                        nextEvent.setType(StateEventType.WAIT_TASK_GROUP);
                        this.stateEvents.add(nextEvent);
                    } else {
                        ProcessInstance processInstance = this.processService.findProcessInstanceById(nextTaskInstance.getProcessInstanceId());
                        this.processService.sendStartTask2Master(processInstance, nextTaskInstance.getId(),
                                org.apache.dolphinscheduler.remote.command.CommandType.TASK_WAKEUP_EVENT_REQUEST);
                    }
                }
            }
        } else if (activeTaskProcessorMaps.containsKey(stateEvent.getTaskInstanceId())) {
            ITaskProcessor iTaskProcessor = activeTaskProcessorMaps.get(stateEvent.getTaskInstanceId());
            iTaskProcessor.run();

            if (iTaskProcessor.taskState().typeIsFinished()) {
                task = processService.findTaskInstanceById(stateEvent.getTaskInstanceId());
                taskFinished(task);
            }
        } else {
            logger.error("state handler error: {}", stateEvent);
        }
        return true;
    }

    private void taskFinished(TaskInstance task) {
        logger.info("work flow {} task {} state:{} ",
                processInstance.getId(),
                task.getId(),
                task.getState());
        if (task.taskCanRetry()) {
            addTaskToStandByList(task);
            if (!task.retryTaskIntervalOverTime()) {
                logger.info("failure task will be submitted: process id: {}, task instance id: {} state:{} retry times:{} / {}, interval:{}",
                        processInstance.getId(),
                        task.getId(),
                        task.getState(),
                        task.getRetryTimes(),
                        task.getMaxRetryTimes(),
task.getRetryInterval()); stateWheelExecuteThread.addTask4TimeoutCheck(task); stateWheelExecuteThread.addTask4RetryCheck(task); } else { submitStandByTask(); } return; } completeTaskMap.put(Long.toString(task.getTaskCode()), task.getId()); activeTaskProcessorMaps.remove(task.getId()); stateWheelExecuteThread.removeTask4TimeoutCheck(task); stateWheelExecuteThread.removeTask4RetryCheck(task); if (task.getState().typeIsSuccess()) { processInstance.setVarPool(task.getVarPool()); processService.saveProcessInstance(processInstance); submitPostNode(Long.toString(task.getTaskCode())); } else if (task.getState().typeIsFailure()) { if (task.isConditionsTask() || DagHelper.haveConditionsAfterNode(Long.toString(task.getTaskCode()), dag)) { submitPostNode(Long.toString(task.getTaskCode())); } else { errorTaskMap.put(Long.toString(task.getTaskCode()), task.getId()); if (processInstance.getFailureStrategy() == FailureStrategy.END) { killAllTasks(); } } } this.updateProcessInstanceState(); } /** * update process instance */ public void refreshProcessInstance(int processInstanceId) { logger.info("process instance update: {}", processInstanceId); processInstance = processService.findProcessInstanceById(processInstanceId); processDefinition = processService.findProcessDefinition(processInstance.getProcessDefinitionCode(), processInstance.getProcessDefinitionVersion()); processInstance.setProcessDefinition(processDefinition); } /** * update task instance */ public void refreshTaskInstance(int taskInstanceId) { logger.info("task instance update: {} ", taskInstanceId); TaskInstance taskInstance = processService.findTaskInstanceById(taskInstanceId); if (taskInstance == null) { logger.error("can not find task instance, id:{}", taskInstanceId); return; } processService.packageTaskInstance(taskInstance, processInstance); taskInstanceMap.put(taskInstance.getId(), taskInstance); validTaskMap.remove(Long.toString(taskInstance.getTaskCode())); if (Flag.YES == taskInstance.getFlag()) { validTaskMap.put(Long.toString(taskInstance.getTaskCode()), taskInstance.getId()); } } /** * check process instance by state event */ public boolean checkProcessInstance(StateEvent stateEvent) { if (this.processInstance.getId() != stateEvent.getProcessInstanceId()) { logger.error("mismatch process instance id: {}, state event:{}", this.processInstance.getId(), stateEvent); return false; } return true; } /** * check if task instance exist by state event */ public boolean checkTaskInstanceByStateEvent(StateEvent stateEvent) { if (stateEvent.getTaskInstanceId() == 0) { logger.error("task instance id null, state event:{}", stateEvent); return false; } if (!taskInstanceMap.containsKey(stateEvent.getTaskInstanceId())) { logger.error("mismatch task instance id, event:{}", stateEvent); return false; } return true; } /** * check if task instance exist by task code */ public boolean checkTaskInstanceByCode(long taskCode) { if (taskInstanceMap == null || taskInstanceMap.size() == 0) { return false; } for (TaskInstance taskInstance : taskInstanceMap.values()) { if (taskInstance.getTaskCode() == taskCode) { return true; } } return false; } /** * check if task instance exist by id */ public boolean checkTaskInstanceById(int taskInstanceId) { if (taskInstanceMap == null || taskInstanceMap.size() == 0) { return false; } return taskInstanceMap.containsKey(taskInstanceId); } /** * get task instance from memory */ public TaskInstance getTaskInstance(int taskInstanceId) { if (taskInstanceMap.containsKey(taskInstanceId)) { return 
taskInstanceMap.get(taskInstanceId); } return null; } private boolean processStateChangeHandler(StateEvent stateEvent) { try { logger.info("process:{} state {} change to {}", processInstance.getId(), processInstance.getState(), stateEvent.getExecutionStatus()); if (processComplementData()) { return true; } if (stateEvent.getExecutionStatus().typeIsFinished()) { endProcess(); } if (processInstance.getState() == ExecutionStatus.READY_STOP) { killAllTasks(); } return true; } catch (Exception e) { logger.error("process state change error:", e); } return true; } private boolean processComplementData() throws Exception { if (!needComplementProcess()) { return false; } if (processInstance.getState() == ExecutionStatus.READY_STOP) { return false; } Date scheduleDate = processInstance.getScheduleTime(); if (scheduleDate == null) { scheduleDate = complementListDate.get(0); } else if (processInstance.getState().typeIsFinished()) { endProcess(); if (complementListDate.size() <= 0) { logger.info("process complement end. process id:{}", processInstance.getId()); return true; } int index = complementListDate.indexOf(scheduleDate); if (index >= complementListDate.size() - 1 || !processInstance.getState().typeIsSuccess()) { logger.info("process complement end. process id:{}", processInstance.getId()); // complement data ends || no success return true; } logger.info("process complement continue. process id:{}, schedule time:{} complementListDate:{}", processInstance.getId(), processInstance.getScheduleTime(), complementListDate.toString()); scheduleDate = complementListDate.get(index + 1); //the next process complement processInstance.setId(0); } processInstance.setScheduleTime(scheduleDate); Map<String, String> cmdParam = JSONUtils.toMap(processInstance.getCommandParam()); if (cmdParam.containsKey(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING)) { cmdParam.remove(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING); processInstance.setCommandParam(JSONUtils.toJsonString(cmdParam)); } processInstance.setState(ExecutionStatus.RUNNING_EXECUTION); processInstance.setGlobalParams(ParameterUtils.curingGlobalParams( processDefinition.getGlobalParamMap(), processDefinition.getGlobalParamList(), CommandType.COMPLEMENT_DATA, processInstance.getScheduleTime())); processInstance.setStartTime(new Date()); processInstance.setEndTime(null); processService.saveProcessInstance(processInstance); this.taskInstanceMap.clear(); startProcess(); return true; } private boolean needComplementProcess() { if (processInstance.isComplementData() && Flag.NO == processInstance.getIsSubProcess()) { return true; } return false; } /** * process start handle */ public void startProcess() { if (this.taskInstanceMap.size() > 0) { return; } try { isStart = false; buildFlowDag(); initTaskQueue(); submitPostNode(null); isStart = true; } catch (Exception e) { logger.error("start process error, process instance id:{}", processInstance.getId(), e); } } /** * process end handle */ private void endProcess() { this.stateEvents.clear(); if (processDefinition.getExecutionType().typeIsSerialWait()) { checkSerialProcess(processDefinition); } if (processInstance.getState().typeIsWaitingThread()) { processService.createRecoveryWaitingThreadCommand(null, processInstance); } if (processAlertManager.isNeedToSendWarning(processInstance)) { ProjectUser projectUser = processService.queryProjectWithUserByProcessInstanceId(processInstance.getId()); processAlertManager.sendAlertProcessInstance(processInstance, getValidTaskList(), projectUser); } if (checkTaskQueue()) { 
//release task group processService.releaseAllTaskGroup(processInstance.getId()); } } public void checkSerialProcess(ProcessDefinition processDefinition) { int nextInstanceId = processInstance.getNextProcessInstanceId(); if (nextInstanceId == 0) { ProcessInstance nextProcessInstance = this.processService.loadNextProcess4Serial(processInstance.getProcessDefinition().getCode(), ExecutionStatus.SERIAL_WAIT.getCode()); if (nextProcessInstance == null) { return; } nextInstanceId = nextProcessInstance.getId(); } ProcessInstance nextProcessInstance = this.processService.findProcessInstanceById(nextInstanceId); if (nextProcessInstance.getState().typeIsFinished() || nextProcessInstance.getState().typeIsRunning()) { return; } Map<String, Object> cmdParam = new HashMap<>(); cmdParam.put(CMD_PARAM_RECOVER_PROCESS_ID_STRING, nextInstanceId); Command command = new Command(); command.setCommandType(CommandType.RECOVER_SERIAL_WAIT); command.setProcessDefinitionCode(processDefinition.getCode()); command.setCommandParam(JSONUtils.toJsonString(cmdParam)); processService.createCommand(command); } /** * generate process dag * * @throws Exception exception */ private void buildFlowDag() throws Exception { if (this.dag != null) { return; } processDefinition = processService.findProcessDefinition(processInstance.getProcessDefinitionCode(), processInstance.getProcessDefinitionVersion()); processInstance.setProcessDefinition(processDefinition); List<TaskInstance> recoverNodeList = getStartTaskInstanceList(processInstance.getCommandParam()); List<ProcessTaskRelation> processTaskRelations = processService.findRelationByCode(processDefinition.getProjectCode(), processDefinition.getCode()); List<TaskDefinitionLog> taskDefinitionLogs = processService.getTaskDefineLogListByRelation(processTaskRelations); List<TaskNode> taskNodeList = processService.transformTask(processTaskRelations, taskDefinitionLogs); forbiddenTaskMap.clear(); taskNodeList.forEach(taskNode -> { if (taskNode.isForbidden()) { forbiddenTaskMap.put(Long.toString(taskNode.getCode()), taskNode); } }); // generate process to get DAG info List<String> recoveryNodeCodeList = getRecoveryNodeCodeList(recoverNodeList); List<String> startNodeNameList = parseStartNodeName(processInstance.getCommandParam()); ProcessDag processDag = generateFlowDag(taskNodeList, startNodeNameList, recoveryNodeCodeList, processInstance.getTaskDependType()); if (processDag == null) { logger.error("processDag is null"); return; } // generate process dag dag = DagHelper.buildDagGraph(processDag); } /** * init task queue */ private void initTaskQueue() { taskFailedSubmit = false; activeTaskProcessorMaps.clear(); dependFailedTaskMap.clear(); completeTaskMap.clear(); errorTaskMap.clear(); if (!isNewProcessInstance()) { List<TaskInstance> validTaskInstanceList = processService.findValidTaskListByProcessId(processInstance.getId()); for (TaskInstance task : validTaskInstanceList) { validTaskMap.put(Long.toString(task.getTaskCode()), task.getId()); taskInstanceMap.put(task.getId(), task); if (task.isTaskComplete()) { completeTaskMap.put(Long.toString(task.getTaskCode()), task.getId()); } if (task.isConditionsTask() || DagHelper.haveConditionsAfterNode(Long.toString(task.getTaskCode()), dag)) { continue; } if (task.getState().typeIsFailure() && !task.taskCanRetry()) { errorTaskMap.put(Long.toString(task.getTaskCode()), task.getId()); } } } if (processInstance.isComplementData() && complementListDate.size() == 0) { Map<String, String> cmdParam = JSONUtils.toMap(processInstance.getCommandParam()); 
if (cmdParam != null && cmdParam.containsKey(CMDPARAM_COMPLEMENT_DATA_START_DATE)) { Date start = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_START_DATE)); Date end = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_END_DATE)); List<Schedule> schedules = processService.queryReleaseSchedulerListByProcessDefinitionCode(processInstance.getProcessDefinitionCode()); if (complementListDate.size() == 0 && needComplementProcess()) { complementListDate = CronUtils.getSelfFireDateList(start, end, schedules); logger.info(" process definition code:{} complement data: {}", processInstance.getProcessDefinitionCode(), complementListDate.toString()); if (complementListDate.size() > 0 && Flag.NO == processInstance.getIsSubProcess()) { processInstance.setScheduleTime(complementListDate.get(0)); processInstance.setGlobalParams(ParameterUtils.curingGlobalParams( processDefinition.getGlobalParamMap(), processDefinition.getGlobalParamList(), CommandType.COMPLEMENT_DATA, processInstance.getScheduleTime())); processService.updateProcessInstance(processInstance); } } } } } /** * submit task to execute * * @param taskInstance task instance * @return TaskInstance */ private TaskInstance submitTaskExec(TaskInstance taskInstance) { try { ITaskProcessor taskProcessor = TaskProcessorFactory.getTaskProcessor(taskInstance.getTaskType()); if (taskInstance.getState() == ExecutionStatus.RUNNING_EXECUTION && taskProcessor.getType().equalsIgnoreCase(Constants.COMMON_TASK_TYPE)) { notifyProcessHostUpdate(taskInstance); } TaskDefinition taskDefinition = processService.findTaskDefinition( taskInstance.getTaskCode(), taskInstance.getTaskDefinitionVersion()); taskInstance.setTaskGroupId(taskDefinition.getTaskGroupId()); // package task instance before submit processService.packageTaskInstance(taskInstance, processInstance); boolean submit = taskProcessor.submit(taskInstance, processInstance, masterConfig.getTaskCommitRetryTimes(), masterConfig.getTaskCommitInterval(), masterConfig.isTaskLogger()); if (!submit) { logger.error("process id:{} name:{} submit standby task id:{} name:{} failed!", processInstance.getId(), processInstance.getName(), taskInstance.getId(), taskInstance.getName()); return null; } validTaskMap.put(Long.toString(taskInstance.getTaskCode()), taskInstance.getId()); taskInstanceMap.put(taskInstance.getId(), taskInstance); activeTaskProcessorMaps.put(taskInstance.getId(), taskProcessor); taskProcessor.run(); stateWheelExecuteThread.addTask4TimeoutCheck(taskInstance); stateWheelExecuteThread.addTask4RetryCheck(taskInstance); if (taskProcessor.taskState().typeIsFinished()) { StateEvent stateEvent = new StateEvent(); stateEvent.setProcessInstanceId(this.processInstance.getId()); stateEvent.setTaskInstanceId(taskInstance.getId()); stateEvent.setExecutionStatus(taskProcessor.taskState()); stateEvent.setType(StateEventType.TASK_STATE_CHANGE); this.stateEvents.add(stateEvent); } return taskInstance; } catch (Exception e) { logger.error("submit standby task error", e); return null; } } private void notifyProcessHostUpdate(TaskInstance taskInstance) { if (StringUtils.isEmpty(taskInstance.getHost())) { return; } try { HostUpdateCommand hostUpdateCommand = new HostUpdateCommand(); hostUpdateCommand.setProcessHost(NetUtils.getAddr(masterConfig.getListenPort())); hostUpdateCommand.setTaskInstanceId(taskInstance.getId()); Host host = new Host(taskInstance.getHost()); nettyExecutorManager.doExecute(host, hostUpdateCommand.convert2Command()); } catch (Exception e) { logger.error("notify process host 
update", e); } } /** * find task instance in db. * in case submit more than one same name task in the same time. * * @param taskCode task code * @param taskVersion task version * @return TaskInstance */ private TaskInstance findTaskIfExists(Long taskCode, int taskVersion) { List<TaskInstance> validTaskInstanceList = getValidTaskList(); for (TaskInstance taskInstance : validTaskInstanceList) { if (taskInstance.getTaskCode() == taskCode && taskInstance.getTaskDefinitionVersion() == taskVersion) { return taskInstance; } } return null; } /** * encapsulation task * * @param processInstance process instance * @param taskNode taskNode * @return TaskInstance */ private TaskInstance createTaskInstance(ProcessInstance processInstance, TaskNode taskNode) { TaskInstance taskInstance = findTaskIfExists(taskNode.getCode(), taskNode.getVersion()); if (taskInstance == null) { taskInstance = new TaskInstance(); taskInstance.setTaskCode(taskNode.getCode()); taskInstance.setTaskDefinitionVersion(taskNode.getVersion()); // task name taskInstance.setName(taskNode.getName()); // task instance state taskInstance.setState(ExecutionStatus.SUBMITTED_SUCCESS); // process instance id taskInstance.setProcessInstanceId(processInstance.getId()); // task instance type taskInstance.setTaskType(taskNode.getType().toUpperCase()); // task instance whether alert taskInstance.setAlertFlag(Flag.NO); // task instance start time taskInstance.setStartTime(null); // task instance flag taskInstance.setFlag(Flag.YES); // task dry run flag taskInstance.setDryRun(processInstance.getDryRun()); // task instance retry times taskInstance.setRetryTimes(0); // max task instance retry times taskInstance.setMaxRetryTimes(taskNode.getMaxRetryTimes()); // retry task instance interval taskInstance.setRetryInterval(taskNode.getRetryInterval()); //set task param taskInstance.setTaskParams(taskNode.getTaskParams()); // task instance priority if (taskNode.getTaskInstancePriority() == null) { taskInstance.setTaskInstancePriority(Priority.MEDIUM); } else { taskInstance.setTaskInstancePriority(taskNode.getTaskInstancePriority()); } String processWorkerGroup = processInstance.getWorkerGroup(); processWorkerGroup = StringUtils.isBlank(processWorkerGroup) ? DEFAULT_WORKER_GROUP : processWorkerGroup; String taskWorkerGroup = StringUtils.isBlank(taskNode.getWorkerGroup()) ? processWorkerGroup : taskNode.getWorkerGroup(); Long processEnvironmentCode = Objects.isNull(processInstance.getEnvironmentCode()) ? -1 : processInstance.getEnvironmentCode(); Long taskEnvironmentCode = Objects.isNull(taskNode.getEnvironmentCode()) ? 
processEnvironmentCode : taskNode.getEnvironmentCode(); if (!processWorkerGroup.equals(DEFAULT_WORKER_GROUP) && taskWorkerGroup.equals(DEFAULT_WORKER_GROUP)) { taskInstance.setWorkerGroup(processWorkerGroup); taskInstance.setEnvironmentCode(processEnvironmentCode); } else { taskInstance.setWorkerGroup(taskWorkerGroup); taskInstance.setEnvironmentCode(taskEnvironmentCode); } if (!taskInstance.getEnvironmentCode().equals(-1L)) { Environment environment = processService.findEnvironmentByCode(taskInstance.getEnvironmentCode()); if (Objects.nonNull(environment) && StringUtils.isNotEmpty(environment.getConfig())) { taskInstance.setEnvironmentConfig(environment.getConfig()); } } // delay execution time taskInstance.setDelayTime(taskNode.getDelayTime()); } return taskInstance; } public void getPreVarPool(TaskInstance taskInstance, Set<String> preTask) { Map<String, Property> allProperty = new HashMap<>(); Map<String, TaskInstance> allTaskInstance = new HashMap<>(); if (CollectionUtils.isNotEmpty(preTask)) { for (String preTaskCode : preTask) { Integer taskId = completeTaskMap.get(preTaskCode); if (taskId == null) { continue; } TaskInstance preTaskInstance = taskInstanceMap.get(taskId); if (preTaskInstance == null) { continue; } String preVarPool = preTaskInstance.getVarPool(); if (StringUtils.isNotEmpty(preVarPool)) { List<Property> properties = JSONUtils.toList(preVarPool, Property.class); for (Property info : properties) { setVarPoolValue(allProperty, allTaskInstance, preTaskInstance, info); } } } if (allProperty.size() > 0) { taskInstance.setVarPool(JSONUtils.toJsonString(allProperty.values())); } } } private void setVarPoolValue(Map<String, Property> allProperty, Map<String, TaskInstance> allTaskInstance, TaskInstance preTaskInstance, Property thisProperty) { //for this taskInstance all the param in this part is IN. 
thisProperty.setDirect(Direct.IN); //get the pre taskInstance Property's name String proName = thisProperty.getProp(); //if the Previous nodes have the Property of same name if (allProperty.containsKey(proName)) { //comparison the value of two Property Property otherPro = allProperty.get(proName); //if this property'value of loop is empty,use the other,whether the other's value is empty or not if (StringUtils.isEmpty(thisProperty.getValue())) { allProperty.put(proName, otherPro); //if property'value of loop is not empty,and the other's value is not empty too, use the earlier value } else if (StringUtils.isNotEmpty(otherPro.getValue())) { TaskInstance otherTask = allTaskInstance.get(proName); if (otherTask.getEndTime().getTime() > preTaskInstance.getEndTime().getTime()) { allProperty.put(proName, thisProperty); allTaskInstance.put(proName, preTaskInstance); } else { allProperty.put(proName, otherPro); } } else { allProperty.put(proName, thisProperty); allTaskInstance.put(proName, preTaskInstance); } } else { allProperty.put(proName, thisProperty); allTaskInstance.put(proName, preTaskInstance); } } /** * get complete task instance map, taskCode as key */ private Map<String, TaskInstance> getCompleteTaskInstanceMap() { Map<String, TaskInstance> completeTaskInstanceMap = new HashMap<>(); for (Integer taskInstanceId : completeTaskMap.values()) { TaskInstance taskInstance = taskInstanceMap.get(taskInstanceId); completeTaskInstanceMap.put(Long.toString(taskInstance.getTaskCode()), taskInstance); } return completeTaskInstanceMap; } /** * get valid task list */ private List<TaskInstance> getValidTaskList() { List<TaskInstance> validTaskInstanceList = new ArrayList<>(); for (Integer taskInstanceId : validTaskMap.values()) { validTaskInstanceList.add(taskInstanceMap.get(taskInstanceId)); } return validTaskInstanceList; } private void submitPostNode(String parentNodeCode) { Set<String> submitTaskNodeList = DagHelper.parsePostNodes(parentNodeCode, skipTaskNodeMap, dag, getCompleteTaskInstanceMap()); List<TaskInstance> taskInstances = new ArrayList<>(); for (String taskNode : submitTaskNodeList) { TaskNode taskNodeObject = dag.getNode(taskNode); if (checkTaskInstanceByCode(taskNodeObject.getCode())) { continue; } TaskInstance task = createTaskInstance(processInstance, taskNodeObject); taskInstances.add(task); } // if previous node success , post node submit for (TaskInstance task : taskInstances) { if (readyToSubmitTaskQueue.contains(task)) { continue; } if (completeTaskMap.containsKey(Long.toString(task.getTaskCode()))) { logger.info("task {} has already run success", task.getName()); continue; } if (task.getState().typeIsPause() || task.getState().typeIsCancel()) { logger.info("task {} stopped, the state is {}", task.getName(), task.getState()); continue; } addTaskToStandByList(task); } submitStandByTask(); updateProcessInstanceState(); } /** * determine whether the dependencies of the task node are complete * * @return DependResult */ private DependResult isTaskDepsComplete(String taskCode) { Collection<String> startNodes = dag.getBeginNode(); // if vertex,returns true directly if (startNodes.contains(taskCode)) { return DependResult.SUCCESS; } TaskNode taskNode = dag.getNode(taskCode); List<String> depCodeList = taskNode.getDepList(); for (String depsNode : depCodeList) { if (!dag.containsNode(depsNode) || forbiddenTaskMap.containsKey(depsNode) || skipTaskNodeMap.containsKey(depsNode)) { continue; } // dependencies must be fully completed if (!completeTaskMap.containsKey(depsNode)) { return 
DependResult.WAITING; } Integer depsTaskId = completeTaskMap.get(depsNode); ExecutionStatus depTaskState = taskInstanceMap.get(depsTaskId).getState(); if (depTaskState.typeIsPause() || depTaskState.typeIsCancel()) { return DependResult.NON_EXEC; } // ignore task state if current task is condition if (taskNode.isConditionsTask()) { continue; } if (!dependTaskSuccess(depsNode, taskCode)) { return DependResult.FAILED; } } logger.info("taskCode: {} completeDependTaskList: {}", taskCode, Arrays.toString(completeTaskMap.keySet().toArray())); return DependResult.SUCCESS; } /** * depend node is completed, but here need check the condition task branch is the next node */ private boolean dependTaskSuccess(String dependNodeName, String nextNodeName) { if (dag.getNode(dependNodeName).isConditionsTask()) { //condition task need check the branch to run List<String> nextTaskList = DagHelper.parseConditionTask(dependNodeName, skipTaskNodeMap, dag, getCompleteTaskInstanceMap()); if (!nextTaskList.contains(nextNodeName)) { return false; } } else { Integer taskInstanceId = completeTaskMap.get(dependNodeName); ExecutionStatus depTaskState = taskInstanceMap.get(taskInstanceId).getState(); if (depTaskState.typeIsFailure()) { return false; } } return true; } /** * query task instance by complete state * * @param state state * @return task instance list */ private List<TaskInstance> getCompleteTaskByState(ExecutionStatus state) { List<TaskInstance> resultList = new ArrayList<>(); for (Integer taskInstanceId : completeTaskMap.values()) { TaskInstance taskInstance = taskInstanceMap.get(taskInstanceId); if (taskInstance != null && taskInstance.getState() == state) { resultList.add(taskInstance); } } return resultList; } /** * where there are ongoing tasks * * @param state state * @return ExecutionStatus */ private ExecutionStatus runningState(ExecutionStatus state) { if (state == ExecutionStatus.READY_STOP || state == ExecutionStatus.READY_PAUSE || state == ExecutionStatus.WAITING_THREAD || state == ExecutionStatus.DELAY_EXECUTION) { // if the running task is not completed, the state remains unchanged return state; } else { return ExecutionStatus.RUNNING_EXECUTION; } } /** * exists failure task,contains submit failure、dependency failure,execute failure(retry after) * * @return Boolean whether has failed task */ private boolean hasFailedTask() { if (this.taskFailedSubmit) { return true; } if (this.errorTaskMap.size() > 0) { return true; } return this.dependFailedTaskMap.size() > 0; } /** * process instance failure * * @return Boolean whether process instance failed */ private boolean processFailed() { if (hasFailedTask()) { if (processInstance.getFailureStrategy() == FailureStrategy.END) { return true; } if (processInstance.getFailureStrategy() == FailureStrategy.CONTINUE) { return readyToSubmitTaskQueue.size() == 0 && activeTaskProcessorMaps.size() == 0; } } return false; } /** * whether task for waiting thread * * @return Boolean whether has waiting thread task */ private boolean hasWaitingThreadTask() { List<TaskInstance> waitingList = getCompleteTaskByState(ExecutionStatus.WAITING_THREAD); return CollectionUtils.isNotEmpty(waitingList); } /** * prepare for pause * 1,failed retry task in the preparation queue , returns to failure directly * 2,exists pause task,complement not completed, pending submission of tasks, return to suspension * 3,success * * @return ExecutionStatus */ private ExecutionStatus processReadyPause() { if (hasRetryTaskInStandBy()) { return ExecutionStatus.FAILURE; } List<TaskInstance> pauseList 
= getCompleteTaskByState(ExecutionStatus.PAUSE); if (CollectionUtils.isNotEmpty(pauseList) || !isComplementEnd() || readyToSubmitTaskQueue.size() > 0) { return ExecutionStatus.PAUSE; } else { return ExecutionStatus.SUCCESS; } } /** * generate the latest process instance status by the tasks state * * @return process instance execution status */ private ExecutionStatus getProcessInstanceState(ProcessInstance instance) { ExecutionStatus state = instance.getState(); if (activeTaskProcessorMaps.size() > 0 || hasRetryTaskInStandBy()) { // active task and retry task exists return runningState(state); } // process failure if (processFailed()) { return ExecutionStatus.FAILURE; } // waiting thread if (hasWaitingThreadTask()) { return ExecutionStatus.WAITING_THREAD; } // pause if (state == ExecutionStatus.READY_PAUSE) { return processReadyPause(); } // stop if (state == ExecutionStatus.READY_STOP) { List<TaskInstance> stopList = getCompleteTaskByState(ExecutionStatus.STOP); List<TaskInstance> killList = getCompleteTaskByState(ExecutionStatus.KILL); if (CollectionUtils.isNotEmpty(stopList) || CollectionUtils.isNotEmpty(killList) || !isComplementEnd()) { return ExecutionStatus.STOP; } else { return ExecutionStatus.SUCCESS; } } // success if (state == ExecutionStatus.RUNNING_EXECUTION) { List<TaskInstance> killTasks = getCompleteTaskByState(ExecutionStatus.KILL); if (readyToSubmitTaskQueue.size() > 0) { //tasks currently pending submission, no retries, indicating that depend is waiting to complete return ExecutionStatus.RUNNING_EXECUTION; } else if (CollectionUtils.isNotEmpty(killTasks)) { // tasks maybe killed manually return ExecutionStatus.FAILURE; } else { // if the waiting queue is empty and the status is in progress, then success return ExecutionStatus.SUCCESS; } } return state; } /** * whether complement end * * @return Boolean whether is complement end */ private boolean isComplementEnd() { if (!processInstance.isComplementData()) { return true; } try { Map<String, String> cmdParam = JSONUtils.toMap(processInstance.getCommandParam()); Date endTime = DateUtils.getScheduleDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_END_DATE)); return processInstance.getScheduleTime().equals(endTime); } catch (Exception e) { logger.error("complement end failed ", e); return false; } } /** * updateProcessInstance process instance state * after each batch of tasks is executed, the status of the process instance is updated */ private void updateProcessInstanceState() { ExecutionStatus state = getProcessInstanceState(processInstance); if (processInstance.getState() != state) { logger.info( "work flow process instance [id: {}, name:{}], state change from {} to {}, cmd type: {}", processInstance.getId(), processInstance.getName(), processInstance.getState(), state, processInstance.getCommandType()); processInstance.setState(state); if (state.typeIsFinished()) { processInstance.setEndTime(new Date()); } processService.updateProcessInstance(processInstance); StateEvent stateEvent = new StateEvent(); stateEvent.setExecutionStatus(processInstance.getState()); stateEvent.setProcessInstanceId(this.processInstance.getId()); stateEvent.setType(StateEventType.PROCESS_STATE_CHANGE); this.processStateChangeHandler(stateEvent); } } /** * get task dependency result * * @param taskInstance task instance * @return DependResult */ private DependResult getDependResultForTask(TaskInstance taskInstance) { return isTaskDepsComplete(Long.toString(taskInstance.getTaskCode())); } /** * add task to standby list * * @param taskInstance task 
instance */ private void addTaskToStandByList(TaskInstance taskInstance) { logger.info("add task to stand by list: {}", taskInstance.getName()); try { if (!readyToSubmitTaskQueue.contains(taskInstance)) { readyToSubmitTaskQueue.put(taskInstance); } } catch (Exception e) { logger.error("add task instance to readyToSubmitTaskQueue error, taskName: {}", taskInstance.getName(), e); } } /** * remove task from stand by list * * @param taskInstance task instance */ private void removeTaskFromStandbyList(TaskInstance taskInstance) { logger.info("remove task from stand by list, id: {} name:{}", taskInstance.getId(), taskInstance.getName()); try { readyToSubmitTaskQueue.remove(taskInstance); } catch (Exception e) { logger.error("remove task instance from readyToSubmitTaskQueue error, task id:{}, Name: {}", taskInstance.getId(), taskInstance.getName(), e); } } /** * has retry task in standby * * @return Boolean whether has retry task in standby */ private boolean hasRetryTaskInStandBy() { for (Iterator<TaskInstance> iter = readyToSubmitTaskQueue.iterator(); iter.hasNext(); ) { if (iter.next().getState().typeIsFailure()) { return true; } } return false; } /** * close the on going tasks */ private void killAllTasks() { logger.info("kill called on process instance id: {}, num: {}", processInstance.getId(), activeTaskProcessorMaps.size()); for (int taskId : activeTaskProcessorMaps.keySet()) { TaskInstance taskInstance = processService.findTaskInstanceById(taskId); if (taskInstance == null || taskInstance.getState().typeIsFinished()) { continue; } ITaskProcessor taskProcessor = activeTaskProcessorMaps.get(taskId); taskProcessor.action(TaskAction.STOP); if (taskProcessor.taskState().typeIsFinished()) { StateEvent stateEvent = new StateEvent(); stateEvent.setType(StateEventType.TASK_STATE_CHANGE); stateEvent.setProcessInstanceId(this.processInstance.getId()); stateEvent.setTaskInstanceId(taskInstance.getId()); stateEvent.setExecutionStatus(taskProcessor.taskState()); this.addStateEvent(stateEvent); } } } public boolean workFlowFinish() { return this.processInstance.getState().typeIsFinished(); } /** * handling the list of tasks to be submitted */ private void submitStandByTask() { try { int length = readyToSubmitTaskQueue.size(); for (int i = 0; i < length; i++) { TaskInstance task = readyToSubmitTaskQueue.peek(); if (task == null) { continue; } // stop tasks which is retrying if forced success happens if (task.taskCanRetry()) { TaskInstance retryTask = processService.findTaskInstanceById(task.getId()); if (retryTask != null && retryTask.getState().equals(ExecutionStatus.FORCED_SUCCESS)) { task.setState(retryTask.getState()); logger.info("task: {} has been forced success, put it into complete task list and stop retrying", task.getName()); removeTaskFromStandbyList(task); completeTaskMap.put(Long.toString(task.getTaskCode()), task.getId()); taskInstanceMap.put(task.getId(), task); submitPostNode(Long.toString(task.getTaskCode())); continue; } } //init varPool only this task is the first time running if (task.isFirstRun()) { //get pre task ,get all the task varPool to this task Set<String> preTask = dag.getPreviousNodes(Long.toString(task.getTaskCode())); getPreVarPool(task, preTask); } DependResult dependResult = getDependResultForTask(task); if (DependResult.SUCCESS == dependResult) { if (task.retryTaskIntervalOverTime()) { int originalId = task.getId(); TaskInstance taskInstance = submitTaskExec(task); if (taskInstance == null) { this.taskFailedSubmit = true; } else { removeTaskFromStandbyList(task); if 
(taskInstance.getId() != originalId) { activeTaskProcessorMaps.remove(originalId); } } } } else if (DependResult.FAILED == dependResult) { // if the dependency fails, the current node is not submitted and the state changes to failure. dependFailedTaskMap.put(Long.toString(task.getTaskCode()), task.getId()); removeTaskFromStandbyList(task); logger.info("task {},id:{} depend result : {}", task.getName(), task.getId(), dependResult); } else if (DependResult.NON_EXEC == dependResult) { // for some reasons(depend task pause/stop) this task would not be submit removeTaskFromStandbyList(task); logger.info("remove task {},id:{} , because depend result : {}", task.getName(), task.getId(), dependResult); } } } catch (Exception e) { logger.error("submit standby task error", e); } } /** * get recovery task instance * * @param taskId task id * @return recovery task instance */ private TaskInstance getRecoveryTaskInstance(String taskId) { if (!StringUtils.isNotEmpty(taskId)) { return null; } try { Integer intId = Integer.valueOf(taskId); TaskInstance task = processService.findTaskInstanceById(intId); if (task == null) { logger.error("start node id cannot be found: {}", taskId); } else { return task; } } catch (Exception e) { logger.error("get recovery task instance failed ", e); } return null; } /** * get start task instance list * * @param cmdParam command param * @return task instance list */ private List<TaskInstance> getStartTaskInstanceList(String cmdParam) { List<TaskInstance> instanceList = new ArrayList<>(); Map<String, String> paramMap = JSONUtils.toMap(cmdParam); if (paramMap != null && paramMap.containsKey(CMD_PARAM_RECOVERY_START_NODE_STRING)) { String[] idList = paramMap.get(CMD_PARAM_RECOVERY_START_NODE_STRING).split(Constants.COMMA); for (String nodeId : idList) { TaskInstance task = getRecoveryTaskInstance(nodeId); if (task != null) { instanceList.add(task); } } } return instanceList; } /** * parse "StartNodeNameList" from cmd param * * @param cmdParam command param * @return start node name list */ private List<String> parseStartNodeName(String cmdParam) { List<String> startNodeNameList = new ArrayList<>(); Map<String, String> paramMap = JSONUtils.toMap(cmdParam); if (paramMap == null) { return startNodeNameList; } if (paramMap.containsKey(CMD_PARAM_START_NODES)) { startNodeNameList = Arrays.asList(paramMap.get(CMD_PARAM_START_NODES).split(Constants.COMMA)); } return startNodeNameList; } /** * generate start node code list from parsing command param; * if "StartNodeIdList" exists in command param, return StartNodeIdList * * @return recovery node code list */ private List<String> getRecoveryNodeCodeList(List<TaskInstance> recoverNodeList) { List<String> recoveryNodeCodeList = new ArrayList<>(); if (CollectionUtils.isNotEmpty(recoverNodeList)) { for (TaskInstance task : recoverNodeList) { recoveryNodeCodeList.add(Long.toString(task.getTaskCode())); } } return recoveryNodeCodeList; } /** * generate flow dag * * @param totalTaskNodeList total task node list * @param startNodeNameList start node name list * @param recoveryNodeCodeList recovery node code list * @param depNodeType depend node type * @return ProcessDag process dag * @throws Exception exception */ public ProcessDag generateFlowDag(List<TaskNode> totalTaskNodeList, List<String> startNodeNameList, List<String> recoveryNodeCodeList, TaskDependType depNodeType) throws Exception { return DagHelper.generateFlowDag(totalTaskNodeList, startNodeNameList, recoveryNodeCodeList, depNodeType); } /** * check task queue */ private boolean 
checkTaskQueue() { AtomicBoolean result = new AtomicBoolean(false); taskInstanceMap.forEach((id, taskInstance) -> { if (taskInstance != null && taskInstance.getTaskGroupId() > 0) { result.set(true); } }); return result.get(); } /** * is new process instance */ private boolean isNewProcessInstance() { if (ExecutionStatus.RUNNING_EXECUTION == processInstance.getState() && processInstance.getRunTimes() == 1) { return true; } else { return false; } } }
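The `handleEvents()` loop near the top of the file above drains a `ConcurrentLinkedQueue` with a peek-handle-remove pattern: an event is removed only after its handler reports success, so a failing handler leaves it queued for the next pass. Below is a minimal, self-contained sketch of that pattern; `DemoEvent` and `handle()` are hypothetical stand-ins for `StateEvent` and `stateEventHandler()`, not DolphinScheduler APIs.

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Minimal sketch of the peek -> handle -> remove pattern used by
// WorkflowExecuteThread.handleEvents(). DemoEvent is a hypothetical
// stand-in for StateEvent.
public class EventDrainSketch {
    static class DemoEvent {
        final String payload;
        DemoEvent(String payload) { this.payload = payload; }
    }

    private final ConcurrentLinkedQueue<DemoEvent> events = new ConcurrentLinkedQueue<>();

    public void add(DemoEvent e) { events.add(e); }

    // Drain the queue; an event is removed only after it was handled
    // successfully, so a failed handler leaves it in place for a retry.
    public void handleEvents() {
        while (!events.isEmpty()) {
            DemoEvent e = events.peek();
            try {
                if (handle(e)) {
                    events.remove(e);
                }
            } catch (Exception ex) {
                // Matches the original: log the error and keep looping.
                System.err.println("state handle error: " + ex);
            }
        }
    }

    private boolean handle(DemoEvent e) {
        System.out.println("handled: " + e.payload);
        return true; // pretend every event succeeds
    }

    public static void main(String[] args) {
        EventDrainSketch s = new EventDrainSketch();
        s.add(new DemoEvent("TASK_STATE_CHANGE"));
        s.add(new DemoEvent("PROCESS_STATE_CHANGE"));
        s.handleEvents();
    }
}
```

As in the original, an event whose handler never succeeds keeps the loop spinning; the real thread relies on `stateEventHandler` eventually returning true (or removing the event itself) to make progress.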
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,333
[Bug] [MasterServer] Duplicate key TaskDefinition
### Search before asking

- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.

### What happened

```
[ERROR] 2021-12-11 10:59:03.107 org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread:[264] - handler error: java.lang.IllegalStateException: Duplicate key TaskDefinition{id=3, code=3736405945376, name='echo', version=3, description='', projectCode=3733897458592, userId=1, taskType=SHELL, taskParams='{"resourceList":[],"localParams":[],"rawScript":"echo \"test\"","dependence":{},"conditionResult":{"successNode":[],"failedNode":[]},"waitStartTimeout":{},"switchResult":{}}', taskParamList=null, taskParamMap=null, flag=YES, taskPriority=MEDIUM, userName='null', projectName='null', workerGroup='default', failRetryTimes=0, environmentCode='-1', failRetryInterval=1, timeoutFlag=CLOSE, timeoutNotifyStrategy=WARN, timeout=0, delayTime=0, resourceIds='', createTime=Sat Dec 04 06:31:26 CST 2021, updateTime=Sun Dec 05 12:00:32 CST 2021}
	at java.util.stream.Collectors.lambda$throwingMerger$0(Collectors.java:133)
	at java.util.HashMap.merge(HashMap.java:1254)
	at java.util.stream.Collectors.lambda$toMap$58(Collectors.java:1320)
	at java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
	at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1380)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
	at org.apache.dolphinscheduler.service.process.ProcessService.transformTask(ProcessService.java:2474)
	at org.apache.dolphinscheduler.service.process.ProcessService$$FastClassBySpringCGLIB$$ed138739.invoke(<generated>)
	at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
	at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:689)
	at org.apache.dolphinscheduler.service.process.ProcessService$$EnhancerBySpringCGLIB$$607206c9.transformTask(<generated>)
	at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.buildFlowDag(WorkflowExecuteThread.java:733)
	at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.startProcess(WorkflowExecuteThread.java:666)
	at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.run(WorkflowExecuteThread.java:259)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
```

### What you expected to happen

The process instance should run successfully.

### How to reproduce

Create a process definition with multiple tasks.

### Anything else

_No response_

### Version

dev

### Are you willing to submit PR?

- [X] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
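The root cause in the trace above is the `Collectors.toMap` call in `ProcessService.transformTask`: the two-argument overload uses a throwing merger, so it raises `IllegalStateException` as soon as two `TaskDefinitionLog` entries map to the same key. The sketch below reproduces the JDK behaviour and shows the usual remedy of supplying a merge function. `TaskDef` is a hypothetical stand-in for `TaskDefinitionLog`, and the keep-the-newer-version merge policy is an illustrative assumption, not necessarily what the fix in the PR linked below does.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Hypothetical stand-in for TaskDefinitionLog; only code/version matter here.
public class DuplicateKeyDemo {
    static class TaskDef {
        final long code;
        final int version;
        TaskDef(long code, int version) { this.code = code; this.version = version; }
        @Override public String toString() { return "TaskDef{code=" + code + ", version=" + version + "}"; }
    }

    public static void main(String[] args) {
        List<TaskDef> defs = Arrays.asList(
                new TaskDef(3736405945376L, 2),
                new TaskDef(3736405945376L, 3)); // duplicate key: same task code

        try {
            // Two-arg toMap uses a throwing merger -> IllegalStateException,
            // matching "Duplicate key TaskDefinition{...}" in the trace above.
            Map<Long, TaskDef> bad = defs.stream()
                    .collect(Collectors.toMap(d -> d.code, Function.identity()));
        } catch (IllegalStateException e) {
            System.err.println("reproduced: " + e.getMessage());
        }

        // Usual remedy: pass a merge function; here we keep the newer version
        // (an illustrative policy, not asserted to be the project's fix).
        Map<Long, TaskDef> deduped = defs.stream()
                .collect(Collectors.toMap(
                        d -> d.code,
                        Function.identity(),
                        (a, b) -> a.version >= b.version ? a : b));
        System.out.println(deduped.get(3736405945376L)); // TaskDef{..., version=3}
    }
}
```

Whether the actual fix deduplicates the input list upstream or merges inside the collector is not asserted here; the sketch only isolates the JDK behaviour behind the exception.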
https://github.com/apache/dolphinscheduler/issues/7333
https://github.com/apache/dolphinscheduler/pull/7335
9f56123a26096e9e423a447e659180b9153de07d
d3533851a02bb3293527ee8ca50e01f32f907197
"2021-12-11T04:11:58Z"
java
"2021-12-22T14:44:08Z"
dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.dolphinscheduler.service.process; import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_END_DATE; import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_START_DATE; import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_EMPTY_SUB_PROCESS; import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_FATHER_PARAMS; import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING; import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_SUB_PROCESS; import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_SUB_PROCESS_DEFINE_CODE; import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_SUB_PROCESS_PARENT_INSTANCE_ID; import static org.apache.dolphinscheduler.common.Constants.LOCAL_PARAMS; import static java.util.stream.Collectors.toSet; import org.apache.dolphinscheduler.common.Constants; import org.apache.dolphinscheduler.common.enums.AuthorizationType; import org.apache.dolphinscheduler.common.enums.CommandType; import org.apache.dolphinscheduler.common.enums.Direct; import org.apache.dolphinscheduler.common.enums.ExecutionStatus; import org.apache.dolphinscheduler.common.enums.FailureStrategy; import org.apache.dolphinscheduler.common.enums.Flag; import org.apache.dolphinscheduler.common.enums.ReleaseState; import org.apache.dolphinscheduler.common.enums.TaskDependType; import org.apache.dolphinscheduler.common.enums.TaskGroupQueueStatus; import org.apache.dolphinscheduler.common.enums.TimeoutFlag; import org.apache.dolphinscheduler.common.enums.WarningType; import org.apache.dolphinscheduler.common.graph.DAG; import org.apache.dolphinscheduler.common.model.DateInterval; import org.apache.dolphinscheduler.common.model.TaskNode; import org.apache.dolphinscheduler.common.model.TaskNodeRelation; import org.apache.dolphinscheduler.common.process.ProcessDag; import org.apache.dolphinscheduler.common.process.Property; import org.apache.dolphinscheduler.common.process.ResourceInfo; import org.apache.dolphinscheduler.common.task.AbstractParameters; import org.apache.dolphinscheduler.common.task.TaskTimeoutParameter; import org.apache.dolphinscheduler.common.task.subprocess.SubProcessParameters; import org.apache.dolphinscheduler.common.utils.CodeGenerateUtils; import org.apache.dolphinscheduler.common.utils.CodeGenerateUtils.CodeGenerateException; import org.apache.dolphinscheduler.common.utils.DateUtils; import org.apache.dolphinscheduler.common.utils.JSONUtils; import org.apache.dolphinscheduler.common.utils.ParameterUtils; import org.apache.dolphinscheduler.common.utils.PropertyUtils; import org.apache.dolphinscheduler.common.utils.TaskParametersUtils; import 
org.apache.dolphinscheduler.dao.entity.Command; import org.apache.dolphinscheduler.dao.entity.DagData; import org.apache.dolphinscheduler.dao.entity.DataSource; import org.apache.dolphinscheduler.dao.entity.Environment; import org.apache.dolphinscheduler.dao.entity.ErrorCommand; import org.apache.dolphinscheduler.dao.entity.ProcessDefinition; import org.apache.dolphinscheduler.dao.entity.ProcessDefinitionLog; import org.apache.dolphinscheduler.dao.entity.ProcessInstance; import org.apache.dolphinscheduler.dao.entity.ProcessInstanceMap; import org.apache.dolphinscheduler.dao.entity.ProcessTaskRelation; import org.apache.dolphinscheduler.dao.entity.ProcessTaskRelationLog; import org.apache.dolphinscheduler.dao.entity.Project; import org.apache.dolphinscheduler.dao.entity.ProjectUser; import org.apache.dolphinscheduler.dao.entity.Resource; import org.apache.dolphinscheduler.dao.entity.Schedule; import org.apache.dolphinscheduler.dao.entity.TaskDefinition; import org.apache.dolphinscheduler.dao.entity.TaskDefinitionLog; import org.apache.dolphinscheduler.dao.entity.TaskGroup; import org.apache.dolphinscheduler.dao.entity.TaskGroupQueue; import org.apache.dolphinscheduler.dao.entity.TaskInstance; import org.apache.dolphinscheduler.dao.entity.Tenant; import org.apache.dolphinscheduler.dao.entity.UdfFunc; import org.apache.dolphinscheduler.dao.entity.User; import org.apache.dolphinscheduler.dao.mapper.CommandMapper; import org.apache.dolphinscheduler.dao.mapper.DataSourceMapper; import org.apache.dolphinscheduler.dao.mapper.EnvironmentMapper; import org.apache.dolphinscheduler.dao.mapper.ErrorCommandMapper; import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionLogMapper; import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionMapper; import org.apache.dolphinscheduler.dao.mapper.ProcessInstanceMapMapper; import org.apache.dolphinscheduler.dao.mapper.ProcessInstanceMapper; import org.apache.dolphinscheduler.dao.mapper.ProcessTaskRelationLogMapper; import org.apache.dolphinscheduler.dao.mapper.ProcessTaskRelationMapper; import org.apache.dolphinscheduler.dao.mapper.ProjectMapper; import org.apache.dolphinscheduler.dao.mapper.ResourceMapper; import org.apache.dolphinscheduler.dao.mapper.ResourceUserMapper; import org.apache.dolphinscheduler.dao.mapper.ScheduleMapper; import org.apache.dolphinscheduler.dao.mapper.TaskDefinitionLogMapper; import org.apache.dolphinscheduler.dao.mapper.TaskDefinitionMapper; import org.apache.dolphinscheduler.dao.mapper.TaskGroupMapper; import org.apache.dolphinscheduler.dao.mapper.TaskGroupQueueMapper; import org.apache.dolphinscheduler.dao.mapper.TaskInstanceMapper; import org.apache.dolphinscheduler.dao.mapper.TenantMapper; import org.apache.dolphinscheduler.dao.mapper.UdfFuncMapper; import org.apache.dolphinscheduler.dao.mapper.UserMapper; import org.apache.dolphinscheduler.dao.utils.DagHelper; import org.apache.dolphinscheduler.remote.command.StateEventChangeCommand; import org.apache.dolphinscheduler.remote.command.TaskEventChangeCommand; import org.apache.dolphinscheduler.remote.processor.StateEventCallbackService; import org.apache.dolphinscheduler.remote.utils.Host; import org.apache.dolphinscheduler.service.bean.SpringApplicationContext; import org.apache.dolphinscheduler.service.exceptions.ServiceException; import org.apache.dolphinscheduler.service.log.LogClientService; import org.apache.dolphinscheduler.service.quartz.cron.CronUtils; import org.apache.dolphinscheduler.spi.enums.ResourceType; import 
org.apache.commons.collections.CollectionUtils; import org.apache.commons.lang.StringUtils; import java.util.ArrayList; import java.util.Arrays; import java.util.Date; import java.util.EnumMap; import java.util.HashMap; import java.util.HashSet; import java.util.List; import java.util.Map; import java.util.Map.Entry; import java.util.Objects; import java.util.Set; import java.util.stream.Collectors; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Component; import org.springframework.transaction.annotation.Transactional; import com.fasterxml.jackson.core.type.TypeReference; import com.fasterxml.jackson.databind.node.ObjectNode; import com.google.common.collect.Lists; /** * process relative dao that some mappers in this. */ @Component public class ProcessService { private final Logger logger = LoggerFactory.getLogger(getClass()); private final int[] stateArray = new int[]{ExecutionStatus.SUBMITTED_SUCCESS.ordinal(), ExecutionStatus.RUNNING_EXECUTION.ordinal(), ExecutionStatus.DELAY_EXECUTION.ordinal(), ExecutionStatus.READY_PAUSE.ordinal(), ExecutionStatus.READY_STOP.ordinal()}; @Autowired private UserMapper userMapper; @Autowired private ProcessDefinitionMapper processDefineMapper; @Autowired private ProcessDefinitionLogMapper processDefineLogMapper; @Autowired private ProcessInstanceMapper processInstanceMapper; @Autowired private DataSourceMapper dataSourceMapper; @Autowired private ProcessInstanceMapMapper processInstanceMapMapper; @Autowired private TaskInstanceMapper taskInstanceMapper; @Autowired private CommandMapper commandMapper; @Autowired private ScheduleMapper scheduleMapper; @Autowired private UdfFuncMapper udfFuncMapper; @Autowired private ResourceMapper resourceMapper; @Autowired private ResourceUserMapper resourceUserMapper; @Autowired private ErrorCommandMapper errorCommandMapper; @Autowired private TenantMapper tenantMapper; @Autowired private ProjectMapper projectMapper; @Autowired private TaskDefinitionMapper taskDefinitionMapper; @Autowired private TaskDefinitionLogMapper taskDefinitionLogMapper; @Autowired private ProcessTaskRelationMapper processTaskRelationMapper; @Autowired private ProcessTaskRelationLogMapper processTaskRelationLogMapper; @Autowired StateEventCallbackService stateEventCallbackService; @Autowired private EnvironmentMapper environmentMapper; @Autowired private TaskGroupQueueMapper taskGroupQueueMapper; @Autowired private TaskGroupMapper taskGroupMapper; /** * handle Command (construct ProcessInstance from Command) , wrapped in transaction * * @param logger logger * @param host host * @param command found command * @return process instance */ @Transactional public ProcessInstance handleCommand(Logger logger, String host, Command command) { ProcessInstance processInstance = constructProcessInstance(command, host); // cannot construct process instance, return null if (processInstance == null) { logger.error("scan command, command parameter is error: {}", command); moveToErrorCommand(command, "process instance is null"); return null; } processInstance.setCommandType(command.getCommandType()); processInstance.addHistoryCmd(command.getCommandType()); //if the processDefination is serial ProcessDefinition processDefinition = this.findProcessDefinition(processInstance.getProcessDefinitionCode(), processInstance.getProcessDefinitionVersion()); if (processDefinition.getExecutionType().typeIsSerial()) { saveSerialProcess(processInstance, 
processDefinition); if (processInstance.getState() != ExecutionStatus.SUBMITTED_SUCCESS) { setSubProcessParam(processInstance); deleteCommandWithCheck(command.getId()); return null; } } else { saveProcessInstance(processInstance); } setSubProcessParam(processInstance); deleteCommandWithCheck(command.getId()); return processInstance; } private void saveSerialProcess(ProcessInstance processInstance, ProcessDefinition processDefinition) { processInstance.setState(ExecutionStatus.SERIAL_WAIT); saveProcessInstance(processInstance); //serial wait //when we get the running instance(or waiting instance) only get the priority instance(by id) if (processDefinition.getExecutionType().typeIsSerialWait()) { while (true) { List<ProcessInstance> runningProcessInstances = this.processInstanceMapper.queryByProcessDefineCodeAndStatusAndNextId(processInstance.getProcessDefinitionCode(), Constants.RUNNING_PROCESS_STATE, processInstance.getId()); if (CollectionUtils.isEmpty(runningProcessInstances)) { processInstance.setState(ExecutionStatus.SUBMITTED_SUCCESS); saveProcessInstance(processInstance); return; } ProcessInstance runningProcess = runningProcessInstances.get(0); if (this.processInstanceMapper.updateNextProcessIdById(processInstance.getId(), runningProcess.getId())) { return; } } } else if (processDefinition.getExecutionType().typeIsSerialDiscard()) { List<ProcessInstance> runningProcessInstances = this.processInstanceMapper.queryByProcessDefineCodeAndStatusAndNextId(processInstance.getProcessDefinitionCode(), Constants.RUNNING_PROCESS_STATE, processInstance.getId()); if (CollectionUtils.isEmpty(runningProcessInstances)) { processInstance.setState(ExecutionStatus.STOP); saveProcessInstance(processInstance); } } else if (processDefinition.getExecutionType().typeIsSerialPriority()) { List<ProcessInstance> runningProcessInstances = this.processInstanceMapper.queryByProcessDefineCodeAndStatusAndNextId(processInstance.getProcessDefinitionCode(), Constants.RUNNING_PROCESS_STATE, processInstance.getId()); if (CollectionUtils.isNotEmpty(runningProcessInstances)) { for (ProcessInstance info : runningProcessInstances) { info.setCommandType(CommandType.STOP); info.addHistoryCmd(CommandType.STOP); info.setState(ExecutionStatus.READY_STOP); int update = updateProcessInstance(info); // determine whether the process is normal if (update > 0) { String host = info.getHost(); String address = host.split(":")[0]; int port = Integer.parseInt(host.split(":")[1]); StateEventChangeCommand stateEventChangeCommand = new StateEventChangeCommand( info.getId(), 0, info.getState(), info.getId(), 0 ); try { stateEventCallbackService.sendResult(address, port, stateEventChangeCommand.convert2Command()); } catch (Exception e) { logger.error("sendResultError"); } } } } } } /** * save error command, and delete original command * * @param command command * @param message message */ public void moveToErrorCommand(Command command, String message) { ErrorCommand errorCommand = new ErrorCommand(command, message); this.errorCommandMapper.insert(errorCommand); this.commandMapper.deleteById(command.getId()); } /** * set process waiting thread * * @param command command * @param processInstance processInstance * @return process instance */ private ProcessInstance setWaitingThreadProcess(Command command, ProcessInstance processInstance) { processInstance.setState(ExecutionStatus.WAITING_THREAD); if (command.getCommandType() != CommandType.RECOVER_WAITING_THREAD) { processInstance.addHistoryCmd(command.getCommandType()); } 
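// note (added for clarity): the waiting-thread instance is persisted next, and the
// original command is then swapped for a RECOVER_WAITING_THREAD recovery command
// (see createRecoveryWaitingThreadCommand below)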
saveProcessInstance(processInstance); this.setSubProcessParam(processInstance); createRecoveryWaitingThreadCommand(command, processInstance); return null; } /** * insert one command * * @param command command * @return create result */ public int createCommand(Command command) { int result = 0; if (command != null) { result = commandMapper.insert(command); } return result; } /** * get command page */ public List<Command> findCommandPage(int pageSize, int pageNumber) { return commandMapper.queryCommandPage(pageSize, pageNumber * pageSize); } /** * check the input command exists in queue list * * @param command command * @return create command result */ public boolean verifyIsNeedCreateCommand(Command command) { boolean isNeedCreate = true; EnumMap<CommandType, Integer> cmdTypeMap = new EnumMap<>(CommandType.class); cmdTypeMap.put(CommandType.REPEAT_RUNNING, 1); cmdTypeMap.put(CommandType.RECOVER_SUSPENDED_PROCESS, 1); cmdTypeMap.put(CommandType.START_FAILURE_TASK_PROCESS, 1); CommandType commandType = command.getCommandType(); if (cmdTypeMap.containsKey(commandType)) { ObjectNode cmdParamObj = JSONUtils.parseObject(command.getCommandParam()); int processInstanceId = cmdParamObj.path(CMD_PARAM_RECOVER_PROCESS_ID_STRING).asInt(); List<Command> commands = commandMapper.selectList(null); // for all commands for (Command tmpCommand : commands) { if (cmdTypeMap.containsKey(tmpCommand.getCommandType())) { ObjectNode tempObj = JSONUtils.parseObject(tmpCommand.getCommandParam()); if (tempObj != null && processInstanceId == tempObj.path(CMD_PARAM_RECOVER_PROCESS_ID_STRING).asInt()) { isNeedCreate = false; break; } } } } return isNeedCreate; } /** * find process instance detail by id * * @param processId processId * @return process instance */ public ProcessInstance findProcessInstanceDetailById(int processId) { return processInstanceMapper.queryDetailById(processId); } /** * get task node list by definitionId */ public List<TaskDefinition> getTaskNodeListByDefinition(long defineCode) { ProcessDefinition processDefinition = processDefineMapper.queryByCode(defineCode); if (processDefinition == null) { logger.error("process define not exists"); return Lists.newArrayList(); } List<ProcessTaskRelationLog> processTaskRelations = processTaskRelationLogMapper.queryByProcessCodeAndVersion(processDefinition.getCode(), processDefinition.getVersion()); Set<TaskDefinition> taskDefinitionSet = new HashSet<>(); for (ProcessTaskRelationLog processTaskRelation : processTaskRelations) { if (processTaskRelation.getPostTaskCode() > 0) { taskDefinitionSet.add(new TaskDefinition(processTaskRelation.getPostTaskCode(), processTaskRelation.getPostTaskVersion())); } } if (taskDefinitionSet.isEmpty()) { return Lists.newArrayList(); } List<TaskDefinitionLog> taskDefinitionLogs = taskDefinitionLogMapper.queryByTaskDefinitions(taskDefinitionSet); return Lists.newArrayList(taskDefinitionLogs); } /** * find process instance by id * * @param processId processId * @return process instance */ public ProcessInstance findProcessInstanceById(int processId) { return processInstanceMapper.selectById(processId); } /** * find process define by id. * * @param processDefinitionId processDefinitionId * @return process definition */ public ProcessDefinition findProcessDefineById(int processDefinitionId) { return processDefineMapper.selectById(processDefinitionId); } /** * find process define by code and version. 
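* note: falls back to the process definition log when the requested version is not the current one; log copies get id 0 (see the body below)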
* * @param processDefinitionCode processDefinitionCode * @return process definition */ public ProcessDefinition findProcessDefinition(Long processDefinitionCode, int version) { ProcessDefinition processDefinition = processDefineMapper.queryByCode(processDefinitionCode); if (processDefinition == null || processDefinition.getVersion() != version) { processDefinition = processDefineLogMapper.queryByDefinitionCodeAndVersion(processDefinitionCode, version); if (processDefinition != null) { processDefinition.setId(0); } } return processDefinition; } /** * find process define by code. * * @param processDefinitionCode processDefinitionCode * @return process definition */ public ProcessDefinition findProcessDefinitionByCode(Long processDefinitionCode) { return processDefineMapper.queryByCode(processDefinitionCode); } /** * delete work process instance by id * * @param processInstanceId processInstanceId * @return delete process instance result */ public int deleteWorkProcessInstanceById(int processInstanceId) { return processInstanceMapper.deleteById(processInstanceId); } /** * delete all sub process by parent instance id * * @param processInstanceId processInstanceId * @return delete all sub process instance result */ public int deleteAllSubWorkProcessByParentId(int processInstanceId) { List<Integer> subProcessIdList = processInstanceMapMapper.querySubIdListByParentId(processInstanceId); for (Integer subId : subProcessIdList) { deleteAllSubWorkProcessByParentId(subId); deleteWorkProcessMapByParentId(subId); removeTaskLogFile(subId); deleteWorkProcessInstanceById(subId); } return 1; } /** * remove task log file * * @param processInstanceId processInstanceId */ public void removeTaskLogFile(Integer processInstanceId) { List<TaskInstance> taskInstanceList = findValidTaskListByProcessId(processInstanceId); if (CollectionUtils.isEmpty(taskInstanceList)) { return; } try (LogClientService logClient = new LogClientService()) { for (TaskInstance taskInstance : taskInstanceList) { String taskLogPath = taskInstance.getLogPath(); if (StringUtils.isEmpty(taskInstance.getHost())) { continue; } int port = PropertyUtils.getInt(Constants.RPC_PORT, 50051); String ip = ""; try { ip = Host.of(taskInstance.getHost()).getIp(); } catch (Exception e) { // compatible old version ip = taskInstance.getHost(); } // remove task log from loggerserver logClient.removeTaskLog(ip, port, taskLogPath); } } } /** * recursive query sub process definition id by parent id. * * @param parentCode parentCode * @param ids ids */ public void recurseFindSubProcess(long parentCode, List<Long> ids) { List<TaskDefinition> taskNodeList = this.getTaskNodeListByDefinition(parentCode); if (taskNodeList != null && !taskNodeList.isEmpty()) { for (TaskDefinition taskNode : taskNodeList) { String parameter = taskNode.getTaskParams(); ObjectNode parameterJson = JSONUtils.parseObject(parameter); if (parameterJson.get(CMD_PARAM_SUB_PROCESS_DEFINE_CODE) != null) { SubProcessParameters subProcessParam = JSONUtils.parseObject(parameter, SubProcessParameters.class); ids.add(subProcessParam.getProcessDefinitionCode()); recurseFindSubProcess(subProcessParam.getProcessDefinitionCode(), ids); } } } } /** * create recovery waiting thread command when thread pool is not enough for the process instance. * sub work process instance need not to create recovery command. * create recovery waiting thread command and delete origin command at the same time. 
 * if the recovery command already exists, only update the update_time field
 *
 * @param originCommand originCommand
 * @param processInstance processInstance
 */
public void createRecoveryWaitingThreadCommand(Command originCommand, ProcessInstance processInstance) {
    // a sub process does not need to create a wait command
    if (processInstance.getIsSubProcess() == Flag.YES) {
        if (originCommand != null) {
            commandMapper.deleteById(originCommand.getId());
        }
        return;
    }
    Map<String, String> cmdParam = new HashMap<>();
    cmdParam.put(Constants.CMD_PARAM_RECOVERY_WAITING_THREAD, String.valueOf(processInstance.getId()));
    // process instance quit by "waiting thread" state
    if (originCommand == null) {
        Command command = new Command(
                CommandType.RECOVER_WAITING_THREAD,
                processInstance.getTaskDependType(),
                processInstance.getFailureStrategy(),
                processInstance.getExecutorId(),
                processInstance.getProcessDefinition().getCode(),
                JSONUtils.toJsonString(cmdParam),
                processInstance.getWarningType(),
                processInstance.getWarningGroupId(),
                processInstance.getScheduleTime(),
                processInstance.getWorkerGroup(),
                processInstance.getEnvironmentCode(),
                processInstance.getProcessInstancePriority(),
                processInstance.getDryRun(),
                processInstance.getId(),
                processInstance.getProcessDefinitionVersion()
        );
        saveCommand(command);
        return;
    }
    // update the command time if the current command is recovered from waiting
    if (originCommand.getCommandType() == CommandType.RECOVER_WAITING_THREAD) {
        originCommand.setUpdateTime(new Date());
        saveCommand(originCommand);
    } else {
        // delete the old command and create a new waiting thread command
        commandMapper.deleteById(originCommand.getId());
        originCommand.setId(0);
        originCommand.setCommandType(CommandType.RECOVER_WAITING_THREAD);
        originCommand.setUpdateTime(new Date());
        originCommand.setCommandParam(JSONUtils.toJsonString(cmdParam));
        originCommand.setProcessInstancePriority(processInstance.getProcessInstancePriority());
        saveCommand(originCommand);
    }
}

/**
 * get schedule time from command
 *
 * @param command command
 * @param cmdParam cmdParam map
 * @return date
 */
private Date getScheduleTime(Command command, Map<String, String> cmdParam) {
    Date scheduleTime = command.getScheduleTime();
    if (scheduleTime == null && cmdParam != null && cmdParam.containsKey(CMDPARAM_COMPLEMENT_DATA_START_DATE)) {
        Date start = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_START_DATE));
        Date end = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_END_DATE));
        List<Schedule> schedules = queryReleaseSchedulerListByProcessDefinitionCode(command.getProcessDefinitionCode());
        List<Date> complementDateList = CronUtils.getSelfFireDateList(start, end, schedules);
        if (complementDateList.size() > 0) {
            scheduleTime = complementDateList.get(0);
        } else {
            logger.error("set scheduler time error: complement date list is empty, command: {}", command.toString());
        }
    }
    return scheduleTime;
}

/**
 * generate a new work process instance from command.
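* (summary of the body below: the new instance starts as RUNNING_EXECUTION with runTimes = 1, and its global params are cured from the definition and schedule time)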
* * @param processDefinition processDefinition * @param command command * @param cmdParam cmdParam map * @return process instance */ private ProcessInstance generateNewProcessInstance(ProcessDefinition processDefinition, Command command, Map<String, String> cmdParam) { ProcessInstance processInstance = new ProcessInstance(processDefinition); processInstance.setProcessDefinitionCode(processDefinition.getCode()); processInstance.setProcessDefinitionVersion(processDefinition.getVersion()); processInstance.setState(ExecutionStatus.RUNNING_EXECUTION); processInstance.setRecovery(Flag.NO); processInstance.setStartTime(new Date()); processInstance.setRunTimes(1); processInstance.setMaxTryTimes(0); processInstance.setCommandParam(command.getCommandParam()); processInstance.setCommandType(command.getCommandType()); processInstance.setIsSubProcess(Flag.NO); processInstance.setTaskDependType(command.getTaskDependType()); processInstance.setFailureStrategy(command.getFailureStrategy()); processInstance.setExecutorId(command.getExecutorId()); WarningType warningType = command.getWarningType() == null ? WarningType.NONE : command.getWarningType(); processInstance.setWarningType(warningType); Integer warningGroupId = command.getWarningGroupId() == null ? 0 : command.getWarningGroupId(); processInstance.setWarningGroupId(warningGroupId); processInstance.setDryRun(command.getDryRun()); if (command.getScheduleTime() != null) { processInstance.setScheduleTime(command.getScheduleTime()); } processInstance.setCommandStartTime(command.getStartTime()); processInstance.setLocations(processDefinition.getLocations()); // reset global params while there are start parameters setGlobalParamIfCommanded(processDefinition, cmdParam); // curing global params processInstance.setGlobalParams(ParameterUtils.curingGlobalParams( processDefinition.getGlobalParamMap(), processDefinition.getGlobalParamList(), getCommandTypeIfComplement(processInstance, command), processInstance.getScheduleTime())); // set process instance priority processInstance.setProcessInstancePriority(command.getProcessInstancePriority()); String workerGroup = StringUtils.isBlank(command.getWorkerGroup()) ? Constants.DEFAULT_WORKER_GROUP : command.getWorkerGroup(); processInstance.setWorkerGroup(workerGroup); processInstance.setEnvironmentCode(Objects.isNull(command.getEnvironmentCode()) ? 
-1 : command.getEnvironmentCode()); processInstance.setTimeout(processDefinition.getTimeout()); processInstance.setTenantId(processDefinition.getTenantId()); return processInstance; } private void setGlobalParamIfCommanded(ProcessDefinition processDefinition, Map<String, String> cmdParam) { // get start params from command param Map<String, String> startParamMap = new HashMap<>(); if (cmdParam != null && cmdParam.containsKey(Constants.CMD_PARAM_START_PARAMS)) { String startParamJson = cmdParam.get(Constants.CMD_PARAM_START_PARAMS); startParamMap = JSONUtils.toMap(startParamJson); } Map<String, String> fatherParamMap = new HashMap<>(); if (cmdParam != null && cmdParam.containsKey(Constants.CMD_PARAM_FATHER_PARAMS)) { String fatherParamJson = cmdParam.get(Constants.CMD_PARAM_FATHER_PARAMS); fatherParamMap = JSONUtils.toMap(fatherParamJson); } startParamMap.putAll(fatherParamMap); // set start param into global params if (startParamMap.size() > 0 && processDefinition.getGlobalParamMap() != null) { for (Map.Entry<String, String> param : processDefinition.getGlobalParamMap().entrySet()) { String val = startParamMap.get(param.getKey()); if (val != null) { param.setValue(val); } } } } /** * get process tenant * there is tenant id in definition, use the tenant of the definition. * if there is not tenant id in the definiton or the tenant not exist * use definition creator's tenant. * * @param tenantId tenantId * @param userId userId * @return tenant */ public Tenant getTenantForProcess(int tenantId, int userId) { Tenant tenant = null; if (tenantId >= 0) { tenant = tenantMapper.queryById(tenantId); } if (userId == 0) { return null; } if (tenant == null) { User user = userMapper.selectById(userId); tenant = tenantMapper.queryById(user.getTenantId()); } return tenant; } /** * get an environment * use the code of the environment to find a environment. * * @param environmentCode environmentCode * @return Environment */ public Environment findEnvironmentByCode(Long environmentCode) { Environment environment = null; if (environmentCode >= 0) { environment = environmentMapper.queryByEnvironmentCode(environmentCode); } return environment; } /** * check command parameters is valid * * @param command command * @param cmdParam cmdParam map * @return whether command param is valid */ private Boolean checkCmdParam(Command command, Map<String, String> cmdParam) { if (command.getTaskDependType() == TaskDependType.TASK_ONLY || command.getTaskDependType() == TaskDependType.TASK_PRE) { if (cmdParam == null || !cmdParam.containsKey(Constants.CMD_PARAM_START_NODES) || cmdParam.get(Constants.CMD_PARAM_START_NODES).isEmpty()) { logger.error("command node depend type is {}, but start nodes is null ", command.getTaskDependType()); return false; } } return true; } /** * construct process instance according to one command. * * @param command command * @param host host * @return process instance */ private ProcessInstance constructProcessInstance(Command command, String host) { ProcessInstance processInstance; ProcessDefinition processDefinition; CommandType commandType = command.getCommandType(); processDefinition = this.findProcessDefinition(command.getProcessDefinitionCode(), command.getProcessDefinitionVersion()); if (processDefinition == null) { logger.error("cannot find the work process define! 
define code : {}", command.getProcessDefinitionCode()); return null; } Map<String, String> cmdParam = JSONUtils.toMap(command.getCommandParam()); int processInstanceId = command.getProcessInstanceId(); if (processInstanceId == 0) { processInstance = generateNewProcessInstance(processDefinition, command, cmdParam); } else { processInstance = this.findProcessInstanceDetailById(processInstanceId); if (processInstance == null) { return processInstance; } } if (cmdParam != null) { CommandType commandTypeIfComplement = getCommandTypeIfComplement(processInstance, command); // reset global params while repeat running is needed by cmdParam if (commandTypeIfComplement == CommandType.REPEAT_RUNNING) { setGlobalParamIfCommanded(processDefinition, cmdParam); } // Recalculate global parameters after rerun. processInstance.setGlobalParams(ParameterUtils.curingGlobalParams( processDefinition.getGlobalParamMap(), processDefinition.getGlobalParamList(), commandTypeIfComplement, processInstance.getScheduleTime())); processInstance.setProcessDefinition(processDefinition); } //reset command parameter if (processInstance.getCommandParam() != null) { Map<String, String> processCmdParam = JSONUtils.toMap(processInstance.getCommandParam()); for (Map.Entry<String, String> entry : processCmdParam.entrySet()) { if (!cmdParam.containsKey(entry.getKey())) { cmdParam.put(entry.getKey(), entry.getValue()); } } } // reset command parameter if sub process if (cmdParam != null && cmdParam.containsKey(Constants.CMD_PARAM_SUB_PROCESS)) { processInstance.setCommandParam(command.getCommandParam()); } if (Boolean.FALSE.equals(checkCmdParam(command, cmdParam))) { logger.error("command parameter check failed!"); return null; } if (command.getScheduleTime() != null) { processInstance.setScheduleTime(command.getScheduleTime()); } processInstance.setHost(host); ExecutionStatus runStatus = ExecutionStatus.RUNNING_EXECUTION; int runTime = processInstance.getRunTimes(); switch (commandType) { case START_PROCESS: break; case START_FAILURE_TASK_PROCESS: // find failed tasks and init these tasks List<Integer> failedList = this.findTaskIdByInstanceState(processInstance.getId(), ExecutionStatus.FAILURE); List<Integer> toleranceList = this.findTaskIdByInstanceState(processInstance.getId(), ExecutionStatus.NEED_FAULT_TOLERANCE); List<Integer> killedList = this.findTaskIdByInstanceState(processInstance.getId(), ExecutionStatus.KILL); cmdParam.remove(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING); failedList.addAll(killedList); failedList.addAll(toleranceList); for (Integer taskId : failedList) { initTaskInstance(this.findTaskInstanceById(taskId)); } cmdParam.put(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING, String.join(Constants.COMMA, convertIntListToString(failedList))); processInstance.setCommandParam(JSONUtils.toJsonString(cmdParam)); processInstance.setRunTimes(runTime + 1); break; case START_CURRENT_TASK_PROCESS: break; case RECOVER_WAITING_THREAD: break; case RECOVER_SUSPENDED_PROCESS: // find pause tasks and init task's state cmdParam.remove(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING); List<Integer> suspendedNodeList = this.findTaskIdByInstanceState(processInstance.getId(), ExecutionStatus.PAUSE); List<Integer> stopNodeList = findTaskIdByInstanceState(processInstance.getId(), ExecutionStatus.KILL); suspendedNodeList.addAll(stopNodeList); for (Integer taskId : suspendedNodeList) { // initialize the pause state initTaskInstance(this.findTaskInstanceById(taskId)); } cmdParam.put(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING, 
String.join(",", convertIntListToString(suspendedNodeList))); processInstance.setCommandParam(JSONUtils.toJsonString(cmdParam)); processInstance.setRunTimes(runTime + 1); break; case RECOVER_TOLERANCE_FAULT_PROCESS: // recover tolerance fault process processInstance.setRecovery(Flag.YES); runStatus = processInstance.getState(); break; case COMPLEMENT_DATA: // delete all the valid tasks when complement data if id is not null if (processInstance.getId() != 0) { List<TaskInstance> taskInstanceList = this.findValidTaskListByProcessId(processInstance.getId()); for (TaskInstance taskInstance : taskInstanceList) { taskInstance.setFlag(Flag.NO); this.updateTaskInstance(taskInstance); } } break; case REPEAT_RUNNING: // delete the recover task names from command parameter if (cmdParam.containsKey(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING)) { cmdParam.remove(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING); processInstance.setCommandParam(JSONUtils.toJsonString(cmdParam)); } // delete all the valid tasks when repeat running List<TaskInstance> validTaskList = findValidTaskListByProcessId(processInstance.getId()); for (TaskInstance taskInstance : validTaskList) { taskInstance.setFlag(Flag.NO); updateTaskInstance(taskInstance); } processInstance.setStartTime(new Date()); processInstance.setEndTime(null); processInstance.setRunTimes(runTime + 1); initComplementDataParam(processDefinition, processInstance, cmdParam); break; case SCHEDULER: break; default: break; } processInstance.setState(runStatus); return processInstance; } /** * get process definition by command * If it is a fault-tolerant command, get the specified version of ProcessDefinition through ProcessInstance * Otherwise, get the latest version of ProcessDefinition * * @return ProcessDefinition */ private ProcessDefinition getProcessDefinitionByCommand(long processDefinitionCode, Map<String, String> cmdParam) { if (cmdParam != null) { int processInstanceId = 0; if (cmdParam.containsKey(Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING)) { processInstanceId = Integer.parseInt(cmdParam.get(Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING)); } else if (cmdParam.containsKey(Constants.CMD_PARAM_SUB_PROCESS)) { processInstanceId = Integer.parseInt(cmdParam.get(Constants.CMD_PARAM_SUB_PROCESS)); } else if (cmdParam.containsKey(Constants.CMD_PARAM_RECOVERY_WAITING_THREAD)) { processInstanceId = Integer.parseInt(cmdParam.get(Constants.CMD_PARAM_RECOVERY_WAITING_THREAD)); } if (processInstanceId != 0) { ProcessInstance processInstance = this.findProcessInstanceDetailById(processInstanceId); if (processInstance == null) { return null; } return processDefineLogMapper.queryByDefinitionCodeAndVersion( processInstance.getProcessDefinitionCode(), processInstance.getProcessDefinitionVersion()); } } return processDefineMapper.queryByCode(processDefinitionCode); } /** * return complement data if the process start with complement data * * @param processInstance processInstance * @param command command * @return command type */ private CommandType getCommandTypeIfComplement(ProcessInstance processInstance, Command command) { if (CommandType.COMPLEMENT_DATA == processInstance.getCmdTypeIfComplement()) { return CommandType.COMPLEMENT_DATA; } else { return command.getCommandType(); } } /** * initialize complement data parameters * * @param processDefinition processDefinition * @param processInstance processInstance * @param cmdParam cmdParam */ private void initComplementDataParam(ProcessDefinition processDefinition, ProcessInstance processInstance, Map<String, String> 
cmdParam) { if (!processInstance.isComplementData()) { return; } Date start = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_START_DATE)); Date end = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_END_DATE)); List<Schedule> listSchedules = queryReleaseSchedulerListByProcessDefinitionCode(processInstance.getProcessDefinitionCode()); List<Date> complementDate = CronUtils.getSelfFireDateList(start, end, listSchedules); if (complementDate.size() > 0 && Flag.NO == processInstance.getIsSubProcess()) { processInstance.setScheduleTime(complementDate.get(0)); } processInstance.setGlobalParams(ParameterUtils.curingGlobalParams( processDefinition.getGlobalParamMap(), processDefinition.getGlobalParamList(), CommandType.COMPLEMENT_DATA, processInstance.getScheduleTime())); } /** * set sub work process parameters. * handle sub work process instance, update relation table and command parameters * set sub work process flag, extends parent work process command parameters * * @param subProcessInstance subProcessInstance */ public void setSubProcessParam(ProcessInstance subProcessInstance) { String cmdParam = subProcessInstance.getCommandParam(); if (StringUtils.isEmpty(cmdParam)) { return; } Map<String, String> paramMap = JSONUtils.toMap(cmdParam); // write sub process id into cmd param. if (paramMap.containsKey(CMD_PARAM_SUB_PROCESS) && CMD_PARAM_EMPTY_SUB_PROCESS.equals(paramMap.get(CMD_PARAM_SUB_PROCESS))) { paramMap.remove(CMD_PARAM_SUB_PROCESS); paramMap.put(CMD_PARAM_SUB_PROCESS, String.valueOf(subProcessInstance.getId())); subProcessInstance.setCommandParam(JSONUtils.toJsonString(paramMap)); subProcessInstance.setIsSubProcess(Flag.YES); this.saveProcessInstance(subProcessInstance); } // copy parent instance user def params to sub process.. String parentInstanceId = paramMap.get(CMD_PARAM_SUB_PROCESS_PARENT_INSTANCE_ID); if (StringUtils.isNotEmpty(parentInstanceId)) { ProcessInstance parentInstance = findProcessInstanceDetailById(Integer.parseInt(parentInstanceId)); if (parentInstance != null) { subProcessInstance.setGlobalParams( joinGlobalParams(parentInstance.getGlobalParams(), subProcessInstance.getGlobalParams())); this.saveProcessInstance(subProcessInstance); } else { logger.error("sub process command params error, cannot find parent instance: {} ", cmdParam); } } ProcessInstanceMap processInstanceMap = JSONUtils.parseObject(cmdParam, ProcessInstanceMap.class); if (processInstanceMap == null || processInstanceMap.getParentProcessInstanceId() == 0) { return; } // update sub process id to process map table processInstanceMap.setProcessInstanceId(subProcessInstance.getId()); this.updateWorkProcessInstanceMap(processInstanceMap); } /** * join parent global params into sub process. * only the keys doesn't in sub process global would be joined. 
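* e.g. (illustrative values) parent = [a:1, b:2], sub = [b:9] => joined = [b:9, a:1]; the sub value wins on conflicting keys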
* * @param parentGlobalParams parentGlobalParams * @param subGlobalParams subGlobalParams * @return global params join */ private String joinGlobalParams(String parentGlobalParams, String subGlobalParams) { List<Property> parentPropertyList = JSONUtils.toList(parentGlobalParams, Property.class); List<Property> subPropertyList = JSONUtils.toList(subGlobalParams, Property.class); Map<String, String> subMap = subPropertyList.stream().collect(Collectors.toMap(Property::getProp, Property::getValue)); for (Property parent : parentPropertyList) { if (!subMap.containsKey(parent.getProp())) { subPropertyList.add(parent); } } return JSONUtils.toJsonString(subPropertyList); } /** * initialize task instance * * @param taskInstance taskInstance */ private void initTaskInstance(TaskInstance taskInstance) { if (!taskInstance.isSubProcess() && (taskInstance.getState().typeIsCancel() || taskInstance.getState().typeIsFailure())) { taskInstance.setFlag(Flag.NO); updateTaskInstance(taskInstance); return; } taskInstance.setState(ExecutionStatus.SUBMITTED_SUCCESS); updateTaskInstance(taskInstance); } /** * retry submit task to db */ public TaskInstance submitTaskWithRetry(ProcessInstance processInstance, TaskInstance taskInstance, int commitRetryTimes, int commitInterval) { int retryTimes = 1; TaskInstance task = null; while (retryTimes <= commitRetryTimes) { try { // submit task to db task = SpringApplicationContext.getBean(ProcessService.class).submitTask(processInstance, taskInstance); if (task != null && task.getId() != 0) { break; } logger.error("task commit to db failed , taskId {} has already retry {} times, please check the database", taskInstance.getId(), retryTimes); Thread.sleep(commitInterval); } catch (Exception e) { logger.error("task commit to mysql failed", e); } retryTimes += 1; } return task; } /** * submit task to db * submit sub process to command * * @param processInstance processInstance * @param taskInstance taskInstance * @return task instance */ @Transactional(rollbackFor = Exception.class) public TaskInstance submitTask(ProcessInstance processInstance, TaskInstance taskInstance) { logger.info("start submit task : {}, instance id:{}, state: {}", taskInstance.getName(), taskInstance.getProcessInstanceId(), processInstance.getState()); //submit to db TaskInstance task = submitTaskInstanceToDB(taskInstance, processInstance); if (task == null) { logger.error("end submit task to db error, task name:{}, process id:{} state: {} ", taskInstance.getName(), taskInstance.getProcessInstance(), processInstance.getState()); return null; } if (!task.getState().typeIsFinished()) { createSubWorkProcess(processInstance, task); } logger.info("end submit task to db successfully:{} {} state:{} complete, instance id:{} state: {} ", taskInstance.getId(), taskInstance.getName(), task.getState(), processInstance.getId(), processInstance.getState()); return task; } /** * set work process instance map * consider o * repeat running does not generate new sub process instance * set map {parent instance id, task instance id, 0(child instance id)} * * @param parentInstance parentInstance * @param parentTask parentTask * @return process instance map */ private ProcessInstanceMap setProcessInstanceMap(ProcessInstance parentInstance, TaskInstance parentTask) { ProcessInstanceMap processMap = findWorkProcessMapByParent(parentInstance.getId(), parentTask.getId()); if (processMap != null) { return processMap; } if (parentInstance.getCommandType() == CommandType.REPEAT_RUNNING) { // update current task id to map processMap = 
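// repeat running: try to reuse the instance map row left by the previous run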
findPreviousTaskProcessMap(parentInstance, parentTask); if (processMap != null) { processMap.setParentTaskInstanceId(parentTask.getId()); updateWorkProcessInstanceMap(processMap); return processMap; } } // new task processMap = new ProcessInstanceMap(); processMap.setParentProcessInstanceId(parentInstance.getId()); processMap.setParentTaskInstanceId(parentTask.getId()); createWorkProcessInstanceMap(processMap); return processMap; } /** * find previous task work process map. * * @param parentProcessInstance parentProcessInstance * @param parentTask parentTask * @return process instance map */ private ProcessInstanceMap findPreviousTaskProcessMap(ProcessInstance parentProcessInstance, TaskInstance parentTask) { Integer preTaskId = 0; List<TaskInstance> preTaskList = this.findPreviousTaskListByWorkProcessId(parentProcessInstance.getId()); for (TaskInstance task : preTaskList) { if (task.getName().equals(parentTask.getName())) { preTaskId = task.getId(); ProcessInstanceMap map = findWorkProcessMapByParent(parentProcessInstance.getId(), preTaskId); if (map != null) { return map; } } } logger.info("sub process instance is not found,parent task:{},parent instance:{}", parentTask.getId(), parentProcessInstance.getId()); return null; } /** * create sub work process command * * @param parentProcessInstance parentProcessInstance * @param task task */ public void createSubWorkProcess(ProcessInstance parentProcessInstance, TaskInstance task) { if (!task.isSubProcess()) { return; } //check create sub work flow firstly ProcessInstanceMap instanceMap = findWorkProcessMapByParent(parentProcessInstance.getId(), task.getId()); if (null != instanceMap && CommandType.RECOVER_TOLERANCE_FAULT_PROCESS == parentProcessInstance.getCommandType()) { // recover failover tolerance would not create a new command when the sub command already have been created return; } instanceMap = setProcessInstanceMap(parentProcessInstance, task); ProcessInstance childInstance = null; if (instanceMap.getProcessInstanceId() != 0) { childInstance = findProcessInstanceById(instanceMap.getProcessInstanceId()); } Command subProcessCommand = createSubProcessCommand(parentProcessInstance, childInstance, instanceMap, task); updateSubProcessDefinitionByParent(parentProcessInstance, subProcessCommand.getProcessDefinitionCode()); initSubInstanceState(childInstance); createCommand(subProcessCommand); logger.info("sub process command created: {} ", subProcessCommand); } /** * complement data needs transform parent parameter to child. 
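* (sketch: the parent's complement start/end dates and a snapshot of father params are folded into the sub command param JSON)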
*/ private String getSubWorkFlowParam(ProcessInstanceMap instanceMap, ProcessInstance parentProcessInstance, Map<String, String> fatherParams) { // set sub work process command String processMapStr = JSONUtils.toJsonString(instanceMap); Map<String, String> cmdParam = JSONUtils.toMap(processMapStr); if (parentProcessInstance.isComplementData()) { Map<String, String> parentParam = JSONUtils.toMap(parentProcessInstance.getCommandParam()); String endTime = parentParam.get(CMDPARAM_COMPLEMENT_DATA_END_DATE); String startTime = parentParam.get(CMDPARAM_COMPLEMENT_DATA_START_DATE); cmdParam.put(CMDPARAM_COMPLEMENT_DATA_END_DATE, endTime); cmdParam.put(CMDPARAM_COMPLEMENT_DATA_START_DATE, startTime); processMapStr = JSONUtils.toJsonString(cmdParam); } if (fatherParams.size() != 0) { cmdParam.put(CMD_PARAM_FATHER_PARAMS, JSONUtils.toJsonString(fatherParams)); processMapStr = JSONUtils.toJsonString(cmdParam); } return processMapStr; } public Map<String, String> getGlobalParamMap(String globalParams) { List<Property> propList; Map<String, String> globalParamMap = new HashMap<>(); if (StringUtils.isNotEmpty(globalParams)) { propList = JSONUtils.toList(globalParams, Property.class); globalParamMap = propList.stream().collect(Collectors.toMap(Property::getProp, Property::getValue)); } return globalParamMap; } /** * create sub work process command */ public Command createSubProcessCommand(ProcessInstance parentProcessInstance, ProcessInstance childInstance, ProcessInstanceMap instanceMap, TaskInstance task) { CommandType commandType = getSubCommandType(parentProcessInstance, childInstance); Map<String, String> subProcessParam = JSONUtils.toMap(task.getTaskParams()); long childDefineCode = 0L; if (subProcessParam.containsKey(Constants.CMD_PARAM_SUB_PROCESS_DEFINE_CODE)) { childDefineCode = Long.parseLong(subProcessParam.get(Constants.CMD_PARAM_SUB_PROCESS_DEFINE_CODE)); } ProcessDefinition subProcessDefinition = processDefineMapper.queryByCode(childDefineCode); Object localParams = subProcessParam.get(Constants.LOCAL_PARAMS); List<Property> allParam = JSONUtils.toList(JSONUtils.toJsonString(localParams), Property.class); Map<String, String> globalMap = this.getGlobalParamMap(parentProcessInstance.getGlobalParams()); Map<String, String> fatherParams = new HashMap<>(); if (CollectionUtils.isNotEmpty(allParam)) { for (Property info : allParam) { fatherParams.put(info.getProp(), globalMap.get(info.getProp())); } } String processParam = getSubWorkFlowParam(instanceMap, parentProcessInstance, fatherParams); int subProcessInstanceId = childInstance == null ? 
0 : childInstance.getId(); return new Command( commandType, TaskDependType.TASK_POST, parentProcessInstance.getFailureStrategy(), parentProcessInstance.getExecutorId(), subProcessDefinition.getCode(), processParam, parentProcessInstance.getWarningType(), parentProcessInstance.getWarningGroupId(), parentProcessInstance.getScheduleTime(), task.getWorkerGroup(), task.getEnvironmentCode(), parentProcessInstance.getProcessInstancePriority(), parentProcessInstance.getDryRun(), subProcessInstanceId, subProcessDefinition.getVersion() ); } /** * initialize sub work flow state * child instance state would be initialized when 'recovery from pause/stop/failure' */ private void initSubInstanceState(ProcessInstance childInstance) { if (childInstance != null) { childInstance.setState(ExecutionStatus.RUNNING_EXECUTION); updateProcessInstance(childInstance); } } /** * get sub work flow command type * child instance exist: child command = fatherCommand * child instance not exists: child command = fatherCommand[0] */ private CommandType getSubCommandType(ProcessInstance parentProcessInstance, ProcessInstance childInstance) { CommandType commandType = parentProcessInstance.getCommandType(); if (childInstance == null) { String fatherHistoryCommand = parentProcessInstance.getHistoryCmd(); commandType = CommandType.valueOf(fatherHistoryCommand.split(Constants.COMMA)[0]); } return commandType; } /** * update sub process definition * * @param parentProcessInstance parentProcessInstance * @param childDefinitionCode childDefinitionId */ private void updateSubProcessDefinitionByParent(ProcessInstance parentProcessInstance, long childDefinitionCode) { ProcessDefinition fatherDefinition = this.findProcessDefinition(parentProcessInstance.getProcessDefinitionCode(), parentProcessInstance.getProcessDefinitionVersion()); ProcessDefinition childDefinition = this.findProcessDefinitionByCode(childDefinitionCode); if (childDefinition != null && fatherDefinition != null) { childDefinition.setWarningGroupId(fatherDefinition.getWarningGroupId()); processDefineMapper.updateById(childDefinition); } } /** * submit task to mysql * * @param taskInstance taskInstance * @param processInstance processInstance * @return task instance */ public TaskInstance submitTaskInstanceToDB(TaskInstance taskInstance, ProcessInstance processInstance) { ExecutionStatus processInstanceState = processInstance.getState(); if (taskInstance.getState().typeIsFailure()) { if (taskInstance.isSubProcess()) { taskInstance.setRetryTimes(taskInstance.getRetryTimes() + 1); } else { if (processInstanceState != ExecutionStatus.READY_STOP && processInstanceState != ExecutionStatus.READY_PAUSE) { // failure task set invalid taskInstance.setFlag(Flag.NO); updateTaskInstance(taskInstance); // crate new task instance if (taskInstance.getState() != ExecutionStatus.NEED_FAULT_TOLERANCE) { taskInstance.setRetryTimes(taskInstance.getRetryTimes() + 1); } taskInstance.setSubmitTime(null); taskInstance.setLogPath(null); taskInstance.setExecutePath(null); taskInstance.setStartTime(null); taskInstance.setEndTime(null); taskInstance.setFlag(Flag.YES); taskInstance.setHost(null); taskInstance.setId(0); } } } taskInstance.setExecutorId(processInstance.getExecutorId()); taskInstance.setProcessInstancePriority(processInstance.getProcessInstancePriority()); taskInstance.setState(getSubmitTaskState(taskInstance, processInstance)); if (taskInstance.getSubmitTime() == null) { taskInstance.setSubmitTime(new Date()); } if (taskInstance.getFirstSubmitTime() == null) { 
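// descriptive note: first submit time is set only once and survives retries,
// while submit time is refreshed on each attempt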
    taskInstance.setFirstSubmitTime(taskInstance.getSubmitTime());
}
boolean saveResult = saveTaskInstance(taskInstance);
if (!saveResult) {
    return null;
}
return taskInstance;
}

/**
 * get submit task instance state by the work process state
 * cannot modify the task state when running/kill/submit success, or this
 * task instance already exists in the task queue.
 * return pause if the work process state is ready pause
 * return stop if the work process state is ready stop
 * if none of the above are satisfied, return submit success
 *
 * @param taskInstance taskInstance
 * @param processInstance processInstance
 * @return process instance state
 */
public ExecutionStatus getSubmitTaskState(TaskInstance taskInstance, ProcessInstance processInstance) {
    ExecutionStatus state = taskInstance.getState();
    // running, delayed or killed:
    // the task already exists in the task queue, so return the state as-is
    if (state == ExecutionStatus.RUNNING_EXECUTION
            || state == ExecutionStatus.DELAY_EXECUTION
            || state == ExecutionStatus.KILL) {
        return state;
    }
    // return pause/stop if the process instance state is ready pause/stop,
    // otherwise return submit success
    if (processInstance.getState() == ExecutionStatus.READY_PAUSE) {
        state = ExecutionStatus.PAUSE;
    } else if (processInstance.getState() == ExecutionStatus.READY_STOP
            || !checkProcessStrategy(taskInstance, processInstance)) {
        state = ExecutionStatus.KILL;
    } else {
        state = ExecutionStatus.SUBMITTED_SUCCESS;
    }
    return state;
}

/**
 * check process instance strategy
 *
 * @param taskInstance taskInstance
 * @return check strategy result
 */
private boolean checkProcessStrategy(TaskInstance taskInstance, ProcessInstance processInstance) {
    FailureStrategy failureStrategy = processInstance.getFailureStrategy();
    if (failureStrategy == FailureStrategy.CONTINUE) {
        return true;
    }
    List<TaskInstance> taskInstances = this.findValidTaskListByProcessId(taskInstance.getProcessInstanceId());
    for (TaskInstance task : taskInstances) {
        if (task.getState() == ExecutionStatus.FAILURE
                && task.getRetryTimes() >= task.getMaxRetryTimes()) {
            return false;
        }
    }
    return true;
}

/**
 * insert or update work process instance to database
 *
 * @param processInstance processInstance
 */
public void saveProcessInstance(ProcessInstance processInstance) {
    if (processInstance == null) {
        logger.error("save error, process instance is null!");
        return;
    }
    if (processInstance.getId() != 0) {
        processInstanceMapper.updateById(processInstance);
    } else {
        processInstanceMapper.insert(processInstance);
    }
}

/**
 * insert or update command
 *
 * @param command command
 * @return save command result
 */
public int saveCommand(Command command) {
    if (command.getId() != 0) {
        return commandMapper.updateById(command);
    } else {
        return commandMapper.insert(command);
    }
}

/**
 * insert or update task instance
 *
 * @param taskInstance taskInstance
 * @return save task instance result
 */
public boolean saveTaskInstance(TaskInstance taskInstance) {
    if (taskInstance.getId() != 0) {
        return updateTaskInstance(taskInstance);
    } else {
        return createTaskInstance(taskInstance);
    }
}

/**
 * insert task instance
 *
 * @param taskInstance taskInstance
 * @return create task instance result
 */
public boolean createTaskInstance(TaskInstance taskInstance) {
    int count = taskInstanceMapper.insert(taskInstance);
    return count > 0;
}

/**
 * update task instance
 *
 * @param taskInstance taskInstance
 * @return update task instance result
 */
public boolean updateTaskInstance(TaskInstance taskInstance) {
    int count = taskInstanceMapper.updateById(taskInstance);
    return count > 0;
}

/**
 * find task instance by
id * * @param taskId task id * @return task intance */ public TaskInstance findTaskInstanceById(Integer taskId) { return taskInstanceMapper.selectById(taskId); } /** * package task instance */ public void packageTaskInstance(TaskInstance taskInstance, ProcessInstance processInstance) { taskInstance.setProcessInstance(processInstance); taskInstance.setProcessDefine(processInstance.getProcessDefinition()); TaskDefinition taskDefinition = this.findTaskDefinition( taskInstance.getTaskCode(), taskInstance.getTaskDefinitionVersion()); this.updateTaskDefinitionResources(taskDefinition); taskInstance.setTaskDefine(taskDefinition); } /** * Update {@link ResourceInfo} information in {@link TaskDefinition} * * @param taskDefinition the given {@link TaskDefinition} */ public void updateTaskDefinitionResources(TaskDefinition taskDefinition) { Map<String, Object> taskParameters = JSONUtils.parseObject( taskDefinition.getTaskParams(), new TypeReference<Map<String, Object>>() { }); if (taskParameters != null) { // if contains mainJar field, query resource from database // Flink, Spark, MR if (taskParameters.containsKey("mainJar")) { Object mainJarObj = taskParameters.get("mainJar"); ResourceInfo mainJar = JSONUtils.parseObject( JSONUtils.toJsonString(mainJarObj), ResourceInfo.class); ResourceInfo resourceInfo = updateResourceInfo(mainJar); if (resourceInfo != null) { taskParameters.put("mainJar", resourceInfo); } } // update resourceList information if (taskParameters.containsKey("resourceList")) { String resourceListStr = JSONUtils.toJsonString(taskParameters.get("resourceList")); List<ResourceInfo> resourceInfos = JSONUtils.toList(resourceListStr, ResourceInfo.class); List<ResourceInfo> updatedResourceInfos = resourceInfos .stream() .map(this::updateResourceInfo) .filter(Objects::nonNull) .collect(Collectors.toList()); taskParameters.put("resourceList", updatedResourceInfos); } // set task parameters taskDefinition.setTaskParams(JSONUtils.toJsonString(taskParameters)); } } /** * update {@link ResourceInfo} by given original ResourceInfo * * @param res origin resource info * @return {@link ResourceInfo} */ private ResourceInfo updateResourceInfo(ResourceInfo res) { ResourceInfo resourceInfo = null; // only if mainJar is not null and does not contains "resourceName" field if (res != null) { int resourceId = res.getId(); if (resourceId <= 0) { logger.error("invalid resourceId, {}", resourceId); return null; } resourceInfo = new ResourceInfo(); // get resource from database, only one resource should be returned Resource resource = getResourceById(resourceId); resourceInfo.setId(resourceId); resourceInfo.setRes(resource.getFileName()); resourceInfo.setResourceName(resource.getFullName()); if (logger.isInfoEnabled()) { logger.info("updated resource info {}", JSONUtils.toJsonString(resourceInfo)); } } return resourceInfo; } /** * get id list by task state * * @param instanceId instanceId * @param state state * @return task instance states */ public List<Integer> findTaskIdByInstanceState(int instanceId, ExecutionStatus state) { return taskInstanceMapper.queryTaskByProcessIdAndState(instanceId, state.ordinal()); } /** * find valid task list by process definition id * * @param processInstanceId processInstanceId * @return task instance list */ public List<TaskInstance> findValidTaskListByProcessId(Integer processInstanceId) { return taskInstanceMapper.findValidTaskListByProcessId(processInstanceId, Flag.YES); } /** * find previous task list by work process id * * @param processInstanceId processInstanceId * 
@return task instance list */ public List<TaskInstance> findPreviousTaskListByWorkProcessId(Integer processInstanceId) { return taskInstanceMapper.findValidTaskListByProcessId(processInstanceId, Flag.NO); } /** * update work process instance map * * @param processInstanceMap processInstanceMap * @return update process instance result */ public int updateWorkProcessInstanceMap(ProcessInstanceMap processInstanceMap) { return processInstanceMapMapper.updateById(processInstanceMap); } /** * create work process instance map * * @param processInstanceMap processInstanceMap * @return create process instance result */ public int createWorkProcessInstanceMap(ProcessInstanceMap processInstanceMap) { int count = 0; if (processInstanceMap != null) { return processInstanceMapMapper.insert(processInstanceMap); } return count; } /** * find work process map by parent process id and parent task id. * * @param parentWorkProcessId parentWorkProcessId * @param parentTaskId parentTaskId * @return process instance map */ public ProcessInstanceMap findWorkProcessMapByParent(Integer parentWorkProcessId, Integer parentTaskId) { return processInstanceMapMapper.queryByParentId(parentWorkProcessId, parentTaskId); } /** * delete work process map by parent process id * * @param parentWorkProcessId parentWorkProcessId * @return delete process map result */ public int deleteWorkProcessMapByParentId(int parentWorkProcessId) { return processInstanceMapMapper.deleteByParentProcessId(parentWorkProcessId); } /** * find sub process instance * * @param parentProcessId parentProcessId * @param parentTaskId parentTaskId * @return process instance */ public ProcessInstance findSubProcessInstance(Integer parentProcessId, Integer parentTaskId) { ProcessInstance processInstance = null; ProcessInstanceMap processInstanceMap = processInstanceMapMapper.queryByParentId(parentProcessId, parentTaskId); if (processInstanceMap == null || processInstanceMap.getProcessInstanceId() == 0) { return processInstance; } processInstance = findProcessInstanceById(processInstanceMap.getProcessInstanceId()); return processInstance; } /** * find parent process instance * * @param subProcessId subProcessId * @return process instance */ public ProcessInstance findParentProcessInstance(Integer subProcessId) { ProcessInstance processInstance = null; ProcessInstanceMap processInstanceMap = processInstanceMapMapper.queryBySubProcessId(subProcessId); if (processInstanceMap == null || processInstanceMap.getProcessInstanceId() == 0) { return processInstance; } processInstance = findProcessInstanceById(processInstanceMap.getParentProcessInstanceId()); return processInstance; } /** * change task state * * @param state state * @param startTime startTime * @param host host * @param executePath executePath * @param logPath logPath */ public void changeTaskState(TaskInstance taskInstance, ExecutionStatus state, Date startTime, String host, String executePath, String logPath) { taskInstance.setState(state); taskInstance.setStartTime(startTime); taskInstance.setHost(host); taskInstance.setExecutePath(executePath); taskInstance.setLogPath(logPath); saveTaskInstance(taskInstance); } /** * update process instance * * @param processInstance processInstance * @return update process instance result */ public int updateProcessInstance(ProcessInstance processInstance) { return processInstanceMapper.updateById(processInstance); } /** * change task state * * @param state state * @param endTime endTime * @param varPool varPool */ public void changeTaskState(TaskInstance 
taskInstance, ExecutionStatus state, Date endTime, int processId, String appIds, String varPool) { taskInstance.setPid(processId); taskInstance.setAppLink(appIds); taskInstance.setState(state); taskInstance.setEndTime(endTime); taskInstance.setVarPool(varPool); changeOutParam(taskInstance); saveTaskInstance(taskInstance); } /** * for show in page of taskInstance */ public void changeOutParam(TaskInstance taskInstance) { if (StringUtils.isEmpty(taskInstance.getVarPool())) { return; } List<Property> properties = JSONUtils.toList(taskInstance.getVarPool(), Property.class); if (CollectionUtils.isEmpty(properties)) { return; } //if the result more than one line,just get the first . Map<String, Object> taskParams = JSONUtils.parseObject(taskInstance.getTaskParams(), new TypeReference<Map<String, Object>>() { }); Object localParams = taskParams.get(LOCAL_PARAMS); if (localParams == null) { return; } List<Property> allParam = JSONUtils.toList(JSONUtils.toJsonString(localParams), Property.class); Map<String, String> outProperty = new HashMap<>(); for (Property info : properties) { if (info.getDirect() == Direct.OUT) { outProperty.put(info.getProp(), info.getValue()); } } for (Property info : allParam) { if (info.getDirect() == Direct.OUT) { String paramName = info.getProp(); info.setValue(outProperty.get(paramName)); } } taskParams.put(LOCAL_PARAMS, allParam); taskInstance.setTaskParams(JSONUtils.toJsonString(taskParams)); } /** * convert integer list to string list * * @param intList intList * @return string list */ public List<String> convertIntListToString(List<Integer> intList) { if (intList == null) { return new ArrayList<>(); } List<String> result = new ArrayList<>(intList.size()); for (Integer intVar : intList) { result.add(String.valueOf(intVar)); } return result; } /** * query schedule by id * * @param id id * @return schedule */ public Schedule querySchedule(int id) { return scheduleMapper.selectById(id); } /** * query Schedule by processDefinitionCode * * @param processDefinitionCode processDefinitionCode * @see Schedule */ public List<Schedule> queryReleaseSchedulerListByProcessDefinitionCode(long processDefinitionCode) { return scheduleMapper.queryReleaseSchedulerListByProcessDefinitionCode(processDefinitionCode); } /** * query need failover process instance * * @param host host * @return process instance list */ public List<ProcessInstance> queryNeedFailoverProcessInstances(String host) { return processInstanceMapper.queryByHostAndStatus(host, stateArray); } /** * process need failover process instance * * @param processInstance processInstance */ @Transactional(rollbackFor = RuntimeException.class) public void processNeedFailoverProcessInstances(ProcessInstance processInstance) { //1 update processInstance host is null processInstance.setHost(Constants.NULL); processInstanceMapper.updateById(processInstance); ProcessDefinition processDefinition = findProcessDefinition(processInstance.getProcessDefinitionCode(), processInstance.getProcessDefinitionVersion()); //2 insert into recover command Command cmd = new Command(); cmd.setProcessDefinitionCode(processDefinition.getCode()); cmd.setProcessDefinitionVersion(processDefinition.getVersion()); cmd.setProcessInstanceId(processInstance.getId()); cmd.setCommandParam(String.format("{\"%s\":%d}", Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING, processInstance.getId())); cmd.setExecutorId(processInstance.getExecutorId()); cmd.setCommandType(CommandType.RECOVER_TOLERANCE_FAULT_PROCESS); createCommand(cmd); } /** * query all need failover task 
instances by host * * @param host host * @return task instance list */ public List<TaskInstance> queryNeedFailoverTaskInstances(String host) { return taskInstanceMapper.queryByHostAndStatus(host, stateArray); } /** * find data source by id * * @param id id * @return datasource */ public DataSource findDataSourceById(int id) { return dataSourceMapper.selectById(id); } /** * update process instance state by id * * @param processInstanceId processInstanceId * @param executionStatus executionStatus * @return update process result */ public int updateProcessInstanceState(Integer processInstanceId, ExecutionStatus executionStatus) { ProcessInstance instance = processInstanceMapper.selectById(processInstanceId); instance.setState(executionStatus); return processInstanceMapper.updateById(instance); } /** * find process instance by the task id * * @param taskId taskId * @return process instance */ public ProcessInstance findProcessInstanceByTaskId(int taskId) { TaskInstance taskInstance = taskInstanceMapper.selectById(taskId); if (taskInstance != null) { return processInstanceMapper.selectById(taskInstance.getProcessInstanceId()); } return null; } /** * find udf function list by id list string * * @param ids ids * @return udf function list */ public List<UdfFunc> queryUdfFunListByIds(int[] ids) { return udfFuncMapper.queryUdfByIdStr(ids, null); } /** * find tenant code by resource name * * @param resName resource name * @param resourceType resource type * @return tenant code */ public String queryTenantCodeByResName(String resName, ResourceType resourceType) { // in order to query tenant code successful although the version is older String fullName = resName.startsWith("/") ? resName : String.format("/%s", resName); List<Resource> resourceList = resourceMapper.queryResource(fullName, resourceType.ordinal()); if (CollectionUtils.isEmpty(resourceList)) { return StringUtils.EMPTY; } int userId = resourceList.get(0).getUserId(); User user = userMapper.selectById(userId); if (Objects.isNull(user)) { return StringUtils.EMPTY; } Tenant tenant = tenantMapper.queryById(user.getTenantId()); if (Objects.isNull(tenant)) { return StringUtils.EMPTY; } return tenant.getTenantCode(); } /** * find schedule list by process define codes. 
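* note: batch lookup for all the given definition codes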
* * @param codes codes * @return schedule list */ public List<Schedule> selectAllByProcessDefineCode(long[] codes) { return scheduleMapper.selectAllByProcessDefineArray(codes); } /** * find last scheduler process instance in the date interval * * @param definitionCode definitionCode * @param dateInterval dateInterval * @return process instance */ public ProcessInstance findLastSchedulerProcessInterval(Long definitionCode, DateInterval dateInterval) { return processInstanceMapper.queryLastSchedulerProcess(definitionCode, dateInterval.getStartTime(), dateInterval.getEndTime()); } /** * find last manual process instance interval * * @param definitionCode process definition code * @param dateInterval dateInterval * @return process instance */ public ProcessInstance findLastManualProcessInterval(Long definitionCode, DateInterval dateInterval) { return processInstanceMapper.queryLastManualProcess(definitionCode, dateInterval.getStartTime(), dateInterval.getEndTime()); } /** * find last running process instance * * @param definitionCode process definition code * @param startTime start time * @param endTime end time * @return process instance */ public ProcessInstance findLastRunningProcess(Long definitionCode, Date startTime, Date endTime) { return processInstanceMapper.queryLastRunningProcess(definitionCode, startTime, endTime, stateArray); } /** * query user queue by process instance * * @param processInstance processInstance * @return queue */ public String queryUserQueueByProcessInstance(ProcessInstance processInstance) { String queue = ""; if (processInstance == null) { return queue; } User executor = userMapper.selectById(processInstance.getExecutorId()); if (executor != null) { queue = executor.getQueue(); } return queue; } /** * query project name and user name by processInstanceId. 
* * @param processInstanceId processInstanceId * @return projectName and userName */ public ProjectUser queryProjectWithUserByProcessInstanceId(int processInstanceId) { return projectMapper.queryProjectWithUserByProcessInstanceId(processInstanceId); } /** * get task worker group * * @param taskInstance taskInstance * @return workerGroupId */ public String getTaskWorkerGroup(TaskInstance taskInstance) { String workerGroup = taskInstance.getWorkerGroup(); if (StringUtils.isNotBlank(workerGroup)) { return workerGroup; } int processInstanceId = taskInstance.getProcessInstanceId(); ProcessInstance processInstance = findProcessInstanceById(processInstanceId); if (processInstance != null) { return processInstance.getWorkerGroup(); } logger.info("task : {} will use default worker group", taskInstance.getId()); return Constants.DEFAULT_WORKER_GROUP; } /** * get have perm project list * * @param userId userId * @return project list */ public List<Project> getProjectListHavePerm(int userId) { List<Project> createProjects = projectMapper.queryProjectCreatedByUser(userId); List<Project> authedProjects = projectMapper.queryAuthedProjectListByUserId(userId); if (createProjects == null) { createProjects = new ArrayList<>(); } if (authedProjects != null) { createProjects.addAll(authedProjects); } return createProjects; } /** * list unauthorized udf function * * @param userId user id * @param needChecks data source id array * @return unauthorized udf function list */ public <T> List<T> listUnauthorized(int userId, T[] needChecks, AuthorizationType authorizationType) { List<T> resultList = new ArrayList<>(); if (Objects.nonNull(needChecks) && needChecks.length > 0) { Set<T> originResSet = new HashSet<>(Arrays.asList(needChecks)); switch (authorizationType) { case RESOURCE_FILE_ID: case UDF_FILE: List<Resource> ownUdfResources = resourceMapper.listAuthorizedResourceById(userId, needChecks); addAuthorizedResources(ownUdfResources, userId); Set<Integer> authorizedResourceFiles = ownUdfResources.stream().map(Resource::getId).collect(toSet()); originResSet.removeAll(authorizedResourceFiles); break; case RESOURCE_FILE_NAME: List<Resource> ownResources = resourceMapper.listAuthorizedResource(userId, needChecks); addAuthorizedResources(ownResources, userId); Set<String> authorizedResources = ownResources.stream().map(Resource::getFullName).collect(toSet()); originResSet.removeAll(authorizedResources); break; case DATASOURCE: Set<Integer> authorizedDatasources = dataSourceMapper.listAuthorizedDataSource(userId, needChecks).stream().map(DataSource::getId).collect(toSet()); originResSet.removeAll(authorizedDatasources); break; case UDF: Set<Integer> authorizedUdfs = udfFuncMapper.listAuthorizedUdfFunc(userId, needChecks).stream().map(UdfFunc::getId).collect(toSet()); originResSet.removeAll(authorizedUdfs); break; default: break; } resultList.addAll(originResSet); } return resultList; } /** * get user by user id * * @param userId user id * @return User */ public User getUserById(int userId) { return userMapper.selectById(userId); } /** * get resource by resource id * * @param resourceId resource id * @return Resource */ public Resource getResourceById(int resourceId) { return resourceMapper.selectById(resourceId); } /** * list resources by ids * * @param resIds resIds * @return resource list */ public List<Resource> listResourceByIds(Integer[] resIds) { return resourceMapper.listResourceByIds(resIds); } /** * format task app id in task instance */ public String formatTaskAppId(TaskInstance taskInstance) { 
ProcessInstance processInstance = findProcessInstanceById(taskInstance.getProcessInstanceId()); if (processInstance == null) { return ""; } ProcessDefinition definition = findProcessDefinition(processInstance.getProcessDefinitionCode(), processInstance.getProcessDefinitionVersion()); if (definition == null) { return ""; } return String.format("%s_%s_%s", definition.getId(), processInstance.getId(), taskInstance.getId()); } /** * switch process definition version to process definition log version */ public int switchVersion(ProcessDefinition processDefinition, ProcessDefinitionLog processDefinitionLog) { if (null == processDefinition || null == processDefinitionLog) { return Constants.DEFINITION_FAILURE; } processDefinitionLog.setId(processDefinition.getId()); processDefinitionLog.setReleaseState(ReleaseState.OFFLINE); processDefinitionLog.setFlag(Flag.YES); int result = processDefineMapper.updateById(processDefinitionLog); if (result > 0) { result = switchProcessTaskRelationVersion(processDefinitionLog); if (result <= 0) { return Constants.DEFINITION_FAILURE; } } return result; } public int switchProcessTaskRelationVersion(ProcessDefinition processDefinition) { List<ProcessTaskRelation> processTaskRelationList = processTaskRelationMapper.queryByProcessCode(processDefinition.getProjectCode(), processDefinition.getCode()); if (!processTaskRelationList.isEmpty()) { processTaskRelationMapper.deleteByCode(processDefinition.getProjectCode(), processDefinition.getCode()); } List<ProcessTaskRelationLog> processTaskRelationLogList = processTaskRelationLogMapper.queryByProcessCodeAndVersion(processDefinition.getCode(), processDefinition.getVersion()); return processTaskRelationMapper.batchInsert(processTaskRelationLogList); } /** * get resource ids * * @param taskDefinition taskDefinition * @return resource ids */ public String getResourceIds(TaskDefinition taskDefinition) { Set<Integer> resourceIds = null; AbstractParameters params = TaskParametersUtils.getParameters(taskDefinition.getTaskType(), taskDefinition.getTaskParams()); if (params != null && CollectionUtils.isNotEmpty(params.getResourceFilesList())) { resourceIds = params.getResourceFilesList(). 
stream() .filter(t -> t.getId() != 0) .map(ResourceInfo::getId) .collect(Collectors.toSet()); } if (CollectionUtils.isEmpty(resourceIds)) { return StringUtils.EMPTY; } return StringUtils.join(resourceIds, ","); } public int saveTaskDefine(User operator, long projectCode, List<TaskDefinitionLog> taskDefinitionLogs) { Date now = new Date(); List<TaskDefinitionLog> newTaskDefinitionLogs = new ArrayList<>(); List<TaskDefinitionLog> updateTaskDefinitionLogs = new ArrayList<>(); for (TaskDefinitionLog taskDefinitionLog : taskDefinitionLogs) { taskDefinitionLog.setProjectCode(projectCode); taskDefinitionLog.setUpdateTime(now); taskDefinitionLog.setOperateTime(now); taskDefinitionLog.setOperator(operator.getId()); taskDefinitionLog.setResourceIds(getResourceIds(taskDefinitionLog)); if (taskDefinitionLog.getCode() > 0 && taskDefinitionLog.getVersion() > 0) { TaskDefinitionLog definitionCodeAndVersion = taskDefinitionLogMapper .queryByDefinitionCodeAndVersion(taskDefinitionLog.getCode(), taskDefinitionLog.getVersion()); if (definitionCodeAndVersion != null) { if (!taskDefinitionLog.equals(definitionCodeAndVersion)) { taskDefinitionLog.setUserId(definitionCodeAndVersion.getUserId()); Integer version = taskDefinitionLogMapper.queryMaxVersionForDefinition(taskDefinitionLog.getCode()); taskDefinitionLog.setVersion(version + 1); taskDefinitionLog.setCreateTime(definitionCodeAndVersion.getCreateTime()); updateTaskDefinitionLogs.add(taskDefinitionLog); } continue; } } taskDefinitionLog.setUserId(operator.getId()); taskDefinitionLog.setVersion(Constants.VERSION_FIRST); taskDefinitionLog.setCreateTime(now); if (taskDefinitionLog.getCode() == 0) { try { taskDefinitionLog.setCode(CodeGenerateUtils.getInstance().genCode()); } catch (CodeGenerateException e) { logger.error("Task code get error, ", e); return Constants.DEFINITION_FAILURE; } } newTaskDefinitionLogs.add(taskDefinitionLog); } int insertResult = 0; int updateResult = 0; for (TaskDefinitionLog taskDefinitionToUpdate : updateTaskDefinitionLogs) { TaskDefinition task = taskDefinitionMapper.queryByCode(taskDefinitionToUpdate.getCode()); if (task == null) { newTaskDefinitionLogs.add(taskDefinitionToUpdate); } else { insertResult += taskDefinitionLogMapper.insert(taskDefinitionToUpdate); taskDefinitionToUpdate.setId(task.getId()); updateResult += taskDefinitionMapper.updateById(taskDefinitionToUpdate); } } if (!newTaskDefinitionLogs.isEmpty()) { updateResult += taskDefinitionMapper.batchInsert(newTaskDefinitionLogs); insertResult += taskDefinitionLogMapper.batchInsert(newTaskDefinitionLogs); } return (insertResult & updateResult) > 0 ? 1 : Constants.EXIT_CODE_SUCCESS; } /** * save processDefinition (including create or update processDefinition) */ public int saveProcessDefine(User operator, ProcessDefinition processDefinition, Boolean isFromProcessDefine) { ProcessDefinitionLog processDefinitionLog = new ProcessDefinitionLog(processDefinition); Integer version = processDefineLogMapper.queryMaxVersionForDefinition(processDefinition.getCode()); int insertVersion = version == null || version == 0 ? Constants.VERSION_FIRST : version + 1; processDefinitionLog.setVersion(insertVersion); processDefinitionLog.setReleaseState(isFromProcessDefine ? 
ReleaseState.OFFLINE : ReleaseState.ONLINE); processDefinitionLog.setOperator(operator.getId()); processDefinitionLog.setOperateTime(processDefinition.getUpdateTime()); int insertLog = processDefineLogMapper.insert(processDefinitionLog); int result; if (0 == processDefinition.getId()) { result = processDefineMapper.insert(processDefinitionLog); } else { processDefinitionLog.setId(processDefinition.getId()); result = processDefineMapper.updateById(processDefinitionLog); } return (insertLog & result) > 0 ? insertVersion : 0; } /** * save task relations */ public int saveTaskRelation(User operator, long projectCode, long processDefinitionCode, int processDefinitionVersion, List<ProcessTaskRelationLog> taskRelationList, List<TaskDefinitionLog> taskDefinitionLogs) { if (taskRelationList.isEmpty()) { return Constants.EXIT_CODE_SUCCESS; } Map<Long, TaskDefinitionLog> taskDefinitionLogMap = null; if (CollectionUtils.isNotEmpty(taskDefinitionLogs)) { taskDefinitionLogMap = taskDefinitionLogs.stream() .collect(Collectors.toMap(TaskDefinition::getCode, taskDefinitionLog -> taskDefinitionLog)); } Date now = new Date(); for (ProcessTaskRelationLog processTaskRelationLog : taskRelationList) { processTaskRelationLog.setProjectCode(projectCode); processTaskRelationLog.setProcessDefinitionCode(processDefinitionCode); processTaskRelationLog.setProcessDefinitionVersion(processDefinitionVersion); if (taskDefinitionLogMap != null) { TaskDefinitionLog taskDefinitionLog = taskDefinitionLogMap.get(processTaskRelationLog.getPreTaskCode()); if (taskDefinitionLog != null) { processTaskRelationLog.setPreTaskVersion(taskDefinitionLog.getVersion()); } processTaskRelationLog.setPostTaskVersion(taskDefinitionLogMap.get(processTaskRelationLog.getPostTaskCode()).getVersion()); } processTaskRelationLog.setCreateTime(now); processTaskRelationLog.setUpdateTime(now); processTaskRelationLog.setOperator(operator.getId()); processTaskRelationLog.setOperateTime(now); } List<ProcessTaskRelation> processTaskRelationList = processTaskRelationMapper.queryByProcessCode(projectCode, processDefinitionCode); if (!processTaskRelationList.isEmpty()) { Set<Integer> processTaskRelationSet = processTaskRelationList.stream().map(ProcessTaskRelation::hashCode).collect(toSet()); Set<Integer> taskRelationSet = taskRelationList.stream().map(ProcessTaskRelationLog::hashCode).collect(toSet()); boolean result = CollectionUtils.isEqualCollection(processTaskRelationSet, taskRelationSet); if (result) { return Constants.EXIT_CODE_SUCCESS; } processTaskRelationMapper.deleteByCode(projectCode, processDefinitionCode); } int result = processTaskRelationMapper.batchInsert(taskRelationList); int resultLog = processTaskRelationLogMapper.batchInsert(taskRelationList); return (result & resultLog) > 0 ? 
Constants.EXIT_CODE_SUCCESS : Constants.EXIT_CODE_FAILURE; } public boolean isTaskOnline(long taskCode) { List<ProcessTaskRelation> processTaskRelationList = processTaskRelationMapper.queryByTaskCode(taskCode); if (!processTaskRelationList.isEmpty()) { Set<Long> processDefinitionCodes = processTaskRelationList .stream() .map(ProcessTaskRelation::getProcessDefinitionCode) .collect(Collectors.toSet()); List<ProcessDefinition> processDefinitionList = processDefineMapper.queryByCodes(processDefinitionCodes); // check process definition is already online for (ProcessDefinition processDefinition : processDefinitionList) { if (processDefinition.getReleaseState() == ReleaseState.ONLINE) { return true; } } } return false; } /** * Generate the DAG Graph based on the process definition id * * @param processDefinition process definition * @return dag graph */ public DAG<String, TaskNode, TaskNodeRelation> genDagGraph(ProcessDefinition processDefinition) { List<ProcessTaskRelation> processTaskRelations = processTaskRelationMapper.queryByProcessCode(processDefinition.getProjectCode(), processDefinition.getCode()); List<TaskNode> taskNodeList = transformTask(processTaskRelations, Lists.newArrayList()); ProcessDag processDag = DagHelper.getProcessDag(taskNodeList, new ArrayList<>(processTaskRelations)); // Generate concrete Dag to be executed return DagHelper.buildDagGraph(processDag); } /** * generate DagData */ public DagData genDagData(ProcessDefinition processDefinition) { List<ProcessTaskRelation> processTaskRelations = processTaskRelationMapper.queryByProcessCode(processDefinition.getProjectCode(), processDefinition.getCode()); List<TaskDefinitionLog> taskDefinitionLogList = genTaskDefineList(processTaskRelations); List<TaskDefinition> taskDefinitions = taskDefinitionLogList.stream() .map(taskDefinitionLog -> JSONUtils.parseObject(JSONUtils.toJsonString(taskDefinitionLog), TaskDefinition.class)) .collect(Collectors.toList()); return new DagData(processDefinition, processTaskRelations, taskDefinitions); } public List<TaskDefinitionLog> genTaskDefineList(List<ProcessTaskRelation> processTaskRelations) { Set<TaskDefinition> taskDefinitionSet = new HashSet<>(); for (ProcessTaskRelation processTaskRelation : processTaskRelations) { if (processTaskRelation.getPreTaskCode() > 0) { taskDefinitionSet.add(new TaskDefinition(processTaskRelation.getPreTaskCode(), processTaskRelation.getPreTaskVersion())); } if (processTaskRelation.getPostTaskCode() > 0) { taskDefinitionSet.add(new TaskDefinition(processTaskRelation.getPostTaskCode(), processTaskRelation.getPostTaskVersion())); } } if (taskDefinitionSet.isEmpty()) { return Lists.newArrayList(); } return taskDefinitionLogMapper.queryByTaskDefinitions(taskDefinitionSet); } public List<TaskDefinitionLog> getTaskDefineLogListByRelation(List<ProcessTaskRelation> processTaskRelations) { List<TaskDefinitionLog> taskDefinitionLogs = com.google.common.collect.Lists.newArrayList(); for (ProcessTaskRelation processTaskRelation : processTaskRelations) { if (processTaskRelation.getPreTaskCode() > 0) { taskDefinitionLogs.add((TaskDefinitionLog) this.findTaskDefinition(processTaskRelation.getPreTaskCode(), processTaskRelation.getPreTaskVersion())); } if (processTaskRelation.getPostTaskCode() > 0) { taskDefinitionLogs.add((TaskDefinitionLog) this.findTaskDefinition(processTaskRelation.getPostTaskCode(), processTaskRelation.getPostTaskVersion())); } } return taskDefinitionLogs; } /** * find task definition by code and version */ public TaskDefinition findTaskDefinition(long 
taskCode, int taskDefinitionVersion) { return taskDefinitionLogMapper.queryByDefinitionCodeAndVersion(taskCode, taskDefinitionVersion); } /** * find process task relation list by projectCode and processDefinitionCode */ public List<ProcessTaskRelation> findRelationByCode(long projectCode, long processDefinitionCode) { return processTaskRelationMapper.queryByProcessCode(projectCode, processDefinitionCode); } /** * add authorized resources * * @param ownResources own resources * @param userId userId */ private void addAuthorizedResources(List<Resource> ownResources, int userId) { List<Integer> relationResourceIds = resourceUserMapper.queryResourcesIdListByUserIdAndPerm(userId, 7); List<Resource> relationResources = CollectionUtils.isNotEmpty(relationResourceIds) ? resourceMapper.queryResourceListById(relationResourceIds) : new ArrayList<>(); ownResources.addAll(relationResources); } /** * Use temporarily before refactoring taskNode */ public List<TaskNode> transformTask(List<ProcessTaskRelation> taskRelationList, List<TaskDefinitionLog> taskDefinitionLogs) { Map<Long, List<Long>> taskCodeMap = new HashMap<>(); for (ProcessTaskRelation processTaskRelation : taskRelationList) { taskCodeMap.compute(processTaskRelation.getPostTaskCode(), (k, v) -> { if (v == null) { v = new ArrayList<>(); } if (processTaskRelation.getPreTaskCode() != 0L) { v.add(processTaskRelation.getPreTaskCode()); } return v; }); } if (CollectionUtils.isEmpty(taskDefinitionLogs)) { taskDefinitionLogs = genTaskDefineList(taskRelationList); } Map<Long, TaskDefinitionLog> taskDefinitionLogMap = taskDefinitionLogs.stream() .collect(Collectors.toMap(TaskDefinitionLog::getCode, taskDefinitionLog -> taskDefinitionLog)); List<TaskNode> taskNodeList = new ArrayList<>(); for (Entry<Long, List<Long>> code : taskCodeMap.entrySet()) { TaskDefinitionLog taskDefinitionLog = taskDefinitionLogMap.get(code.getKey()); if (taskDefinitionLog != null) { TaskNode taskNode = new TaskNode(); taskNode.setCode(taskDefinitionLog.getCode()); taskNode.setVersion(taskDefinitionLog.getVersion()); taskNode.setName(taskDefinitionLog.getName()); taskNode.setDesc(taskDefinitionLog.getDescription()); taskNode.setType(taskDefinitionLog.getTaskType().toUpperCase()); taskNode.setRunFlag(taskDefinitionLog.getFlag() == Flag.YES ? 
Constants.FLOWNODE_RUN_FLAG_NORMAL : Constants.FLOWNODE_RUN_FLAG_FORBIDDEN); taskNode.setMaxRetryTimes(taskDefinitionLog.getFailRetryTimes()); taskNode.setRetryInterval(taskDefinitionLog.getFailRetryInterval()); Map<String, Object> taskParamsMap = taskNode.taskParamsToJsonObj(taskDefinitionLog.getTaskParams()); taskNode.setConditionResult(JSONUtils.toJsonString(taskParamsMap.get(Constants.CONDITION_RESULT))); taskNode.setSwitchResult(JSONUtils.toJsonString(taskParamsMap.get(Constants.SWITCH_RESULT))); taskNode.setDependence(JSONUtils.toJsonString(taskParamsMap.get(Constants.DEPENDENCE))); taskParamsMap.remove(Constants.CONDITION_RESULT); taskParamsMap.remove(Constants.DEPENDENCE); taskNode.setParams(JSONUtils.toJsonString(taskParamsMap)); taskNode.setTaskInstancePriority(taskDefinitionLog.getTaskPriority()); taskNode.setWorkerGroup(taskDefinitionLog.getWorkerGroup()); taskNode.setEnvironmentCode(taskDefinitionLog.getEnvironmentCode()); taskNode.setTimeout(JSONUtils.toJsonString(new TaskTimeoutParameter(taskDefinitionLog.getTimeoutFlag() == TimeoutFlag.OPEN, taskDefinitionLog.getTimeoutNotifyStrategy(), taskDefinitionLog.getTimeout()))); taskNode.setDelayTime(taskDefinitionLog.getDelayTime()); taskNode.setPreTasks(JSONUtils.toJsonString(code.getValue().stream().map(taskDefinitionLogMap::get).map(TaskDefinition::getCode).collect(Collectors.toList()))); taskNodeList.add(taskNode); } } return taskNodeList; } public Map<ProcessInstance, TaskInstance> notifyProcessList(int processId) { HashMap<ProcessInstance, TaskInstance> processTaskMap = new HashMap<>(); //find sub tasks ProcessInstanceMap processInstanceMap = processInstanceMapMapper.queryBySubProcessId(processId); if (processInstanceMap == null) { return processTaskMap; } ProcessInstance fatherProcess = this.findProcessInstanceById(processInstanceMap.getParentProcessInstanceId()); TaskInstance fatherTask = this.findTaskInstanceById(processInstanceMap.getParentTaskInstanceId()); if (fatherProcess != null) { processTaskMap.put(fatherProcess, fatherTask); } return processTaskMap; } /** * the first time (when submit the task ) get the resource of the task group * @param taskId task id * @param taskName * @param groupId * @param processId * @param priority * @return */ public boolean acquireTaskGroup(int taskId, String taskName, int groupId, int processId, int priority) { TaskGroup taskGroup = taskGroupMapper.selectById(groupId); if (taskGroup == null) { return true; } // if task group is not applicable if (taskGroup.getStatus() == Flag.NO.getCode()) { return true; } TaskGroupQueue taskGroupQueue = this.taskGroupQueueMapper.queryByTaskId(taskId); if (taskGroupQueue == null) { taskGroupQueue = insertIntoTaskGroupQueue(taskId, taskName, groupId, processId, priority, TaskGroupQueueStatus.WAIT_QUEUE); } else { if (taskGroupQueue.getStatus() == TaskGroupQueueStatus.ACQUIRE_SUCCESS) { return true; } taskGroupQueue.setInQueue(Flag.NO.getCode()); taskGroupQueue.setStatus(TaskGroupQueueStatus.WAIT_QUEUE); this.taskGroupQueueMapper.updateById(taskGroupQueue); } //check priority List<TaskGroupQueue> highPriorityTasks = taskGroupQueueMapper.queryHighPriorityTasks(groupId, priority, TaskGroupQueueStatus.WAIT_QUEUE.getCode()); if (CollectionUtils.isNotEmpty(highPriorityTasks)) { this.taskGroupQueueMapper.updateInQueue(Flag.NO.getCode(), taskGroupQueue.getId()); return false; } //try to get taskGroup int count = taskGroupMapper.selectAvailableCountById(groupId); if (count == 1 && robTaskGroupResouce(taskGroupQueue)) { return true; } 
this.taskGroupQueueMapper.updateInQueue(Flag.NO.getCode(), taskGroupQueue.getId()); return false; } /** * try to get the task group resource(when other task release the resource) * @param taskGroupQueue * @return */ public boolean robTaskGroupResouce(TaskGroupQueue taskGroupQueue) { TaskGroup taskGroup = taskGroupMapper.selectById(taskGroupQueue.getGroupId()); int affectedCount = taskGroupMapper.updateTaskGroupResource(taskGroup.getId(),taskGroupQueue.getId(), TaskGroupQueueStatus.WAIT_QUEUE.getCode()); if (affectedCount > 0) { taskGroupQueue.setStatus(TaskGroupQueueStatus.ACQUIRE_SUCCESS); this.taskGroupQueueMapper.updateById(taskGroupQueue); this.taskGroupQueueMapper.updateInQueue(Flag.NO.getCode(), taskGroupQueue.getId()); return true; } return false; } public boolean acquireTaskGroupAgain(TaskGroupQueue taskGroupQueue) { return robTaskGroupResouce(taskGroupQueue); } public void releaseAllTaskGroup(int processInstanceId) { List<TaskInstance> taskInstances = this.taskInstanceMapper.loadAllInfosNoRelease(processInstanceId, TaskGroupQueueStatus.ACQUIRE_SUCCESS.getCode()); for (TaskInstance info : taskInstances) { releaseTaskGroup(info); } } /** * release the TGQ resource when the corresponding task is finished. * * @return the result code and msg */ public TaskInstance releaseTaskGroup(TaskInstance taskInstance) { TaskGroup taskGroup = taskGroupMapper.selectById(taskInstance.getTaskGroupId()); if (taskGroup == null) { return null; } TaskGroupQueue thisTaskGroupQueue = this.taskGroupQueueMapper.queryByTaskId(taskInstance.getId()); if (thisTaskGroupQueue.getStatus() == TaskGroupQueueStatus.RELEASE) { return null; } try { while (taskGroupMapper.releaseTaskGroupResource(taskGroup.getId(), taskGroup.getUseSize() , thisTaskGroupQueue.getId(), TaskGroupQueueStatus.ACQUIRE_SUCCESS.getCode()) != 1) { thisTaskGroupQueue = this.taskGroupQueueMapper.queryByTaskId(taskInstance.getId()); if (thisTaskGroupQueue.getStatus() == TaskGroupQueueStatus.RELEASE) { return null; } taskGroup = taskGroupMapper.selectById(taskInstance.getTaskGroupId()); } } catch (Exception e) { logger.error("release the task group error",e); } logger.info("updateTask:{}",taskInstance.getName()); changeTaskGroupQueueStatus(taskInstance.getId(), TaskGroupQueueStatus.RELEASE); TaskGroupQueue taskGroupQueue = this.taskGroupQueueMapper.queryTheHighestPriorityTasks(taskGroup.getId(), TaskGroupQueueStatus.WAIT_QUEUE.getCode(), Flag.NO.getCode(), Flag.NO.getCode()); if (taskGroupQueue == null) { return null; } while (this.taskGroupQueueMapper.updateInQueueCAS(Flag.NO.getCode(), Flag.YES.getCode(), taskGroupQueue.getId()) != 1) { taskGroupQueue = this.taskGroupQueueMapper.queryTheHighestPriorityTasks(taskGroup.getId(), TaskGroupQueueStatus.WAIT_QUEUE.getCode(), Flag.NO.getCode(), Flag.NO.getCode()); if (taskGroupQueue == null) { return null; } } return this.taskInstanceMapper.selectById(taskGroupQueue.getTaskId()); } /** * release the TGQ resource when the corresponding task is finished. 
* * @param taskId task id * @return the result code and msg */ public void changeTaskGroupQueueStatus(int taskId, TaskGroupQueueStatus status) { TaskGroupQueue taskGroupQueue = taskGroupQueueMapper.queryByTaskId(taskId); taskGroupQueue.setStatus(status); taskGroupQueue.setUpdateTime(new Date(System.currentTimeMillis())); taskGroupQueueMapper.updateById(taskGroupQueue); } /** * insert into task group queue * * @param taskId task id * @param taskName task name * @param groupId group id * @param processId process id * @param priority priority * @return result and msg code */ public TaskGroupQueue insertIntoTaskGroupQueue(Integer taskId, String taskName, Integer groupId, Integer processId, Integer priority, TaskGroupQueueStatus status) { TaskGroupQueue taskGroupQueue = new TaskGroupQueue(taskId, taskName, groupId, processId, priority, status); taskGroupQueueMapper.insert(taskGroupQueue); return taskGroupQueue; } public int updateTaskGroupQueueStatus(Integer taskId, int status) { return taskGroupQueueMapper.updateStatusByTaskId(taskId, status); } public int updateTaskGroupQueue(TaskGroupQueue taskGroupQueue) { return taskGroupQueueMapper.updateById(taskGroupQueue); } public TaskGroupQueue loadTaskGroupQueue(int taskId) { return this.taskGroupQueueMapper.queryByTaskId(taskId); } public void sendStartTask2Master(ProcessInstance processInstance,int taskId, org.apache.dolphinscheduler.remote.command.CommandType taskType) { String host = processInstance.getHost(); String address = host.split(":")[0]; int port = Integer.parseInt(host.split(":")[1]); TaskEventChangeCommand taskEventChangeCommand = new TaskEventChangeCommand( processInstance.getId(), taskId ); stateEventCallbackService.sendResult(address, port, taskEventChangeCommand.convert2Command(taskType)); } public ProcessInstance loadNextProcess4Serial(long code, int state) { return this.processInstanceMapper.loadNextProcess4Serial(code, state); } private void deleteCommandWithCheck(int commandId) { int delete = this.commandMapper.deleteById(commandId); if (delete != 1) { throw new ServiceException("delete command fail, id:" + commandId); } } }
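The task-group methods above (acquireTaskGroup, robTaskGroupResouce, releaseTaskGroup) implement a slot-limited dispatch queue backed by t_ds_task_group_queue: a task takes a slot before it is dispatched, parks in the queue with WAIT_QUEUE status when the group is full, and on completion hands its slot to the highest-priority waiter. Below is a minimal caller-side sketch of that lifecycle — illustrative only, not actual master-server code; the ProcessService package path is assumed from this module, and dispatch(...) is a placeholder:

// Hedged sketch of the task-group lifecycle implied by the methods above.
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.service.process.ProcessService;

public class TaskGroupLifecycleSketch {
    private final ProcessService processService;

    public TaskGroupLifecycleSketch(ProcessService processService) {
        this.processService = processService;
    }

    public void runWithGroupSlot(TaskInstance task, int priority) {
        // Try to take a slot in the task's group before dispatching.
        boolean acquired = processService.acquireTaskGroup(task.getId(), task.getName(),
                task.getTaskGroupId(), task.getProcessInstanceId(), priority);
        if (!acquired) {
            // Task is parked in t_ds_task_group_queue (WAIT_QUEUE); a later
            // releaseTaskGroup(...) call wakes the highest-priority waiter.
            return;
        }
        try {
            dispatch(task);
        } finally {
            // Frees the slot and returns the next waiting task, or null;
            // a real caller would re-submit that task to the dispatcher.
            TaskInstance next = processService.releaseTaskGroup(task);
        }
    }

    private void dispatch(TaskInstance task) {
        // Placeholder for actual task dispatch.
    }
}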
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,480
[Bug] [Standalone] Cannot query task log information through WEBUI
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened Cannot query task log information through WEBUI ### What you expected to happen Normally query task log information ### How to reproduce Query the task log on the task instance page ### Anything else _No response_ ### Version dev ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7480
https://github.com/apache/dolphinscheduler/pull/7481
cdd4e7bf03d0e4c0f486ed8d327151475167584e
146471eb487293684be81a0697b2bb56c86bb187
"2021-12-18T06:39:20Z"
java
"2021-12-22T16:54:44Z"
dolphinscheduler-standalone-server/pom.xml
<?xml version="1.0" encoding="UTF-8"?> <!-- ~ Licensed to the Apache Software Foundation (ASF) under one or more ~ contributor license agreements. See the NOTICE file distributed with ~ this work for additional information regarding copyright ownership. ~ The ASF licenses this file to You under the Apache License, Version 2.0 ~ (the "License"); you may not use this file except in compliance with ~ the License. You may obtain a copy of the License at ~ ~ http://www.apache.org/licenses/LICENSE-2.0 ~ ~ Unless required by applicable law or agreed to in writing, software ~ distributed under the License is distributed on an "AS IS" BASIS, ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. ~ See the License for the specific language governing permissions and ~ limitations under the License. --> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <parent> <artifactId>dolphinscheduler</artifactId> <groupId>org.apache.dolphinscheduler</groupId> <version>2.0.0-SNAPSHOT</version> </parent> <modelVersion>4.0.0</modelVersion> <artifactId>dolphinscheduler-standalone-server</artifactId> <dependencies> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-master</artifactId> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-worker</artifactId> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-api</artifactId> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-alert-server</artifactId> </dependency> <dependency> <groupId>org.apache.curator</groupId> <artifactId>curator-test</artifactId> <version>${curator.test}</version> <exclusions> <exclusion> <groupId>org.javassist</groupId> <artifactId>javassist</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-python</artifactId> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <configuration> <excludes> <exclude>*.yaml</exclude> <exclude>*.xml</exclude> </excludes> </configuration> </plugin> <plugin> <artifactId>maven-assembly-plugin</artifactId> <executions> <execution> <id>dolphinscheduler-standalone-server</id> <phase>package</phase> <goals> <goal>single</goal> </goals> <configuration> <finalName>standalone-server</finalName> <descriptors> <descriptor>src/main/assembly/dolphinscheduler-standalone-server.xml</descriptor> </descriptors> <appendAssemblyId>false</appendAssemblyId> </configuration> </execution> </executions> </plugin> </plugins> </build> <profiles> <profile> <id>docker</id> <build> <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> </plugin> </plugins> </build> </profile> </profiles> </project>
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,480
[Bug] [Standalone] Cannot query task log information through WEBUI
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened Cannot query task log information through WEBUI ### What you expected to happen Normally query task log information ### How to reproduce Query the task log on the task instance page ### Anything else _No response_ ### Version dev ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7480
https://github.com/apache/dolphinscheduler/pull/7481
cdd4e7bf03d0e4c0f486ed8d327151475167584e
146471eb487293684be81a0697b2bb56c86bb187
"2021-12-18T06:39:20Z"
java
"2021-12-22T16:54:44Z"
dolphinscheduler-standalone-server/src/main/java/org/apache/dolphinscheduler/StandaloneServer.java
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.dolphinscheduler;

import org.apache.curator.test.TestingServer;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class StandaloneServer {
    public static void main(String[] args) throws Exception {
        final TestingServer server = new TestingServer(true);
        System.setProperty("registry.zookeeper.connect-string", server.getConnectString());
        SpringApplication.run(StandaloneServer.class, args);
    }
}
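StandaloneServer boots an in-process ZooKeeper (curator-test's TestingServer) and publishes its address as a JVM system property before starting Spring. This works because system properties outrank application.yaml in Spring Boot's Environment, so the registry module resolves the embedded server's connect string instead of any configured default. A minimal, self-contained sketch of that precedence — a demo class, not DolphinScheduler code:

// Demo: a system property set before SpringApplication.run wins over
// the same key defined in application.yaml.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ConfigurableApplicationContext;

@SpringBootApplication
public class PropertyPrecedenceDemo {
    public static void main(String[] args) {
        System.setProperty("registry.zookeeper.connect-string", "localhost:2181");
        ConfigurableApplicationContext ctx = SpringApplication.run(PropertyPrecedenceDemo.class, args);
        // Resolves from system properties even if application.yaml defines the same key.
        System.out.println(ctx.getEnvironment().getProperty("registry.zookeeper.connect-string"));
        ctx.close();
    }
}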
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,480
[Bug] [Standalone] Cannot query task log information through WEBUI
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened Cannot query task log information through WEBUI ### What you expected to happen Normally query task log information ### How to reproduce Query the task log on the task instance page ### Anything else _No response_ ### Version dev ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7480
https://github.com/apache/dolphinscheduler/pull/7481
cdd4e7bf03d0e4c0f486ed8d327151475167584e
146471eb487293684be81a0697b2bb56c86bb187
"2021-12-18T06:39:20Z"
java
"2021-12-22T16:54:44Z"
dolphinscheduler-standalone-server/src/main/resources/logback-spring.xml
<?xml version="1.0" encoding="UTF-8"?> <!-- ~ Licensed to the Apache Software Foundation (ASF) under one or more ~ contributor license agreements. See the NOTICE file distributed with ~ this work for additional information regarding copyright ownership. ~ The ASF licenses this file to You under the Apache License, Version 2.0 ~ (the "License"); you may not use this file except in compliance with ~ the License. You may obtain a copy of the License at ~ ~ http://www.apache.org/licenses/LICENSE-2.0 ~ ~ Unless required by applicable law or agreed to in writing, software ~ distributed under the License is distributed on an "AS IS" BASIS, ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. ~ See the License for the specific language governing permissions and ~ limitations under the License. --> <configuration scan="true" scanPeriod="120 seconds"> <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender"> <encoder> <pattern> [%level] %date{yyyy-MM-dd HH:mm:ss.SSS} %logger{96}:[%line] - %msg%n </pattern> <charset>UTF-8</charset> </encoder> </appender> <logger name="org.apache.zookeeper" level="WARN"/> <logger name="org.apache.hbase" level="WARN"/> <logger name="org.apache.hadoop" level="WARN"/> <root level="INFO"> <appender-ref ref="STDOUT"/> </root> </configuration>
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,480
[Bug] [Standalone] Cannot query task log information through WEBUI
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened Cannot query task log information through WEBUI ### What you expected to happen Normally query task log information ### How to reproduce Query the task log on the task instance page ### Anything else _No response_ ### Version dev ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7480
https://github.com/apache/dolphinscheduler/pull/7481
cdd4e7bf03d0e4c0f486ed8d327151475167584e
146471eb487293684be81a0697b2bb56c86bb187
"2021-12-18T06:39:20Z"
java
"2021-12-22T16:54:44Z"
pom.xml
<?xml version="1.0" encoding="UTF-8"?> <!-- ~ Licensed to the Apache Software Foundation (ASF) under one or more ~ contributor license agreements. See the NOTICE file distributed with ~ this work for additional information regarding copyright ownership. ~ The ASF licenses this file to You under the Apache License, Version 2.0 ~ (the "License"); you may not use this file except in compliance with ~ the License. You may obtain a copy of the License at ~ ~ http://www.apache.org/licenses/LICENSE-2.0 ~ ~ Unless required by applicable law or agreed to in writing, software ~ distributed under the License is distributed on an "AS IS" BASIS, ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. ~ See the License for the specific language governing permissions and ~ limitations under the License. --> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler</artifactId> <version>2.0.0-SNAPSHOT</version> <packaging>pom</packaging> <name>${project.artifactId}</name> <url>https://dolphinscheduler.apache.org</url> <description>Dolphin Scheduler is a distributed and easy-to-expand visual DAG workflow scheduling system, dedicated to solving the complex dependencies in data processing, making the scheduling system out of the box for data processing. </description> <scm> <connection>scm:git:https://github.com/apache/dolphinscheduler.git</connection> <developerConnection>scm:git:https://github.com/apache/dolphinscheduler.git</developerConnection> <url>https://github.com/apache/dolphinscheduler</url> <tag>HEAD</tag> </scm> <mailingLists> <mailingList> <name>DolphinScheduler Developer List</name> <post>dev@dolphinscheduler.apache.org</post> <subscribe>dev-subscribe@dolphinscheduler.apache.org</subscribe> <unsubscribe>dev-unsubscribe@dolphinscheduler.apache.org</unsubscribe> </mailingList> </mailingLists> <parent> <groupId>org.apache</groupId> <artifactId>apache</artifactId> <version>21</version> </parent> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding> <curator.version>4.3.0</curator.version> <zookeeper.version>3.4.14</zookeeper.version> <spring.version>5.3.12</spring.version> <spring.boot.version>2.5.6</spring.boot.version> <java.version>1.8</java.version> <logback.version>1.2.3</logback.version> <hadoop.version>2.7.3</hadoop.version> <quartz.version>2.3.2</quartz.version> <jackson.version>2.10.5</jackson.version> <mybatis-plus.version>3.2.0</mybatis-plus.version> <mybatis.spring.version>2.0.1</mybatis.spring.version> <cron.utils.version>9.1.3</cron.utils.version> <druid.version>1.2.4</druid.version> <h2.version>1.4.200</h2.version> <commons.codec.version>1.11</commons.codec.version> <commons.logging.version>1.1.1</commons.logging.version> <httpclient.version>4.4.1</httpclient.version> <httpcore.version>4.4.1</httpcore.version> <junit.version>4.12</junit.version> <mysql.connector.version>8.0.16</mysql.connector.version> <slf4j.api.version>1.7.5</slf4j.api.version> <slf4j.log4j12.version>1.7.5</slf4j.log4j12.version> <commons.collections.version>3.2.2</commons.collections.version> <commons.httpclient>3.0.1</commons.httpclient> <commons.beanutils.version>1.9.4</commons.beanutils.version> 
<commons.configuration.version>1.10</commons.configuration.version> <commons.lang.version>2.6</commons.lang.version> <commons.email.version>1.5</commons.email.version> <poi.version>4.1.2</poi.version> <javax.servlet.api.version>3.1.0</javax.servlet.api.version> <commons.collections4.version>4.1</commons.collections4.version> <guava.version>24.1-jre</guava.version> <postgresql.version>42.2.5</postgresql.version> <hive.jdbc.version>2.1.0</hive.jdbc.version> <commons.io.version>2.4</commons.io.version> <oshi.core.version>3.9.1</oshi.core.version> <clickhouse.jdbc.version>0.1.52</clickhouse.jdbc.version> <mssql.jdbc.version>6.1.0.jre8</mssql.jdbc.version> <presto.jdbc.version>0.238.1</presto.jdbc.version> <spotbugs.version>3.1.12</spotbugs.version> <checkstyle.version>3.1.2</checkstyle.version> <curator.test>2.12.0</curator.test> <frontend-maven-plugin.version>1.6</frontend-maven-plugin.version> <maven-compiler-plugin.version>3.3</maven-compiler-plugin.version> <maven-assembly-plugin.version>3.3.0</maven-assembly-plugin.version> <maven-release-plugin.version>2.5.3</maven-release-plugin.version> <maven-javadoc-plugin.version>2.10.3</maven-javadoc-plugin.version> <maven-source-plugin.version>2.4</maven-source-plugin.version> <maven-surefire-plugin.version>2.22.1</maven-surefire-plugin.version> <maven-dependency-plugin.version>3.1.1</maven-dependency-plugin.version> <rpm-maven-plugion.version>2.2.0</rpm-maven-plugion.version> <jacoco.version>0.8.7</jacoco.version> <jcip.version>1.0</jcip.version> <maven.deploy.skip>false</maven.deploy.skip> <cobertura-maven-plugin.version>2.7</cobertura-maven-plugin.version> <servlet-api.version>2.5</servlet-api.version> <swagger.version>1.9.3</swagger.version> <springfox.version>2.9.2</springfox.version> <swagger-models.version>1.5.24</swagger-models.version> <guava-retry.version>2.0.0</guava-retry.version> <protostuff.version>1.7.2</protostuff.version> <reflections.version>0.9.12</reflections.version> <byte-buddy.version>1.9.16</byte-buddy.version> <java-websocket.version>1.5.1</java-websocket.version> <py4j.version>0.10.9</py4j.version> <auto-service.version>1.0.1</auto-service.version> <jacoco.skip>false</jacoco.skip> <netty.version>4.1.53.Final</netty.version> <maven-jar-plugin.version>3.2.0</maven-jar-plugin.version> <powermock.version>2.0.9</powermock.version> <jsr305.version>3.0.0</jsr305.version> <commons-compress.version>1.19</commons-compress.version> <commons-math3.version>3.1.1</commons-math3.version> <error_prone_annotations.version>2.5.1</error_prone_annotations.version> <exec-maven-plugin.version>3.0.0</exec-maven-plugin.version> <janino.version>3.1.6</janino.version> <docker.hub>apache</docker.hub> <docker.repo>${project.name}</docker.repo> <docker.tag>${project.version}</docker.tag> </properties> <dependencyManagement> <dependencies> <dependency> <groupId>io.netty</groupId> <artifactId>netty-bom</artifactId> <version>${netty.version}</version> <scope>import</scope> <type>pom</type> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-parent</artifactId> <version>${spring.boot.version}</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>io.netty</groupId> <artifactId>netty-all</artifactId> <version>${netty.version}</version> </dependency> <dependency> <groupId>org.java-websocket</groupId> <artifactId>Java-WebSocket</artifactId> <version>${java-websocket.version}</version> </dependency> <dependency> <groupId>com.baomidou</groupId> 
<artifactId>mybatis-plus-boot-starter</artifactId> <version>${mybatis-plus.version}</version> </dependency> <dependency> <groupId>com.baomidou</groupId> <artifactId>mybatis-plus</artifactId> <version>${mybatis-plus.version}</version> </dependency> <!-- quartz--> <dependency> <groupId>org.quartz-scheduler</groupId> <artifactId>quartz</artifactId> <version>${quartz.version}</version> </dependency> <dependency> <groupId>org.quartz-scheduler</groupId> <artifactId>quartz-jobs</artifactId> <version>${quartz.version}</version> </dependency> <dependency> <groupId>com.cronutils</groupId> <artifactId>cron-utils</artifactId> <version>${cron.utils.version}</version> </dependency> <dependency> <groupId>com.alibaba</groupId> <artifactId>druid</artifactId> <version>${druid.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-core</artifactId> <version>${spring.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>${spring.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-beans</artifactId> <version>${spring.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-tx</artifactId> <version>${spring.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-jdbc</artifactId> <version>${spring.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-test</artifactId> <version>${spring.version}</version> <scope>test</scope> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-server</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-master</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-worker</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-standalone-server</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-common</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-alert-plugin</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-registry-plugin</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-dao</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-api</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-remote</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-service</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> 
<artifactId>dolphinscheduler-meter</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-spi</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-python</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-alert-api</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-alert-server</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-alert-dingtalk</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-alert-email</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-alert-feishu</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-alert-http</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-alert-script</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-alert-slack</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-alert-wechat</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-registry-api</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-registry-zookeeper</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-datasource-plugin</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-datasource-all</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-datasource-api</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-datasource-clickhouse</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-datasource-db2</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-datasource-hive</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-datasource-mysql</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> 
<artifactId>dolphinscheduler-datasource-oracle</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-datasource-postgresql</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-datasource-sqlserver</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-task-api</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-task-datax</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-task-flink</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-task-http</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-task-mr</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-task-pigeon</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-task-procedure</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-task-python</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-task-shell</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-task-spark</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-task-sql</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-task-sqoop</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.dolphinscheduler</groupId> <artifactId>dolphinscheduler-ui</artifactId> <version>${project.version}</version> </dependency> <dependency> <groupId>org.apache.curator</groupId> <artifactId>curator-framework</artifactId> <version>${curator.version}</version> <exclusions> <exclusion> <groupId>org.slf4j</groupId> <artifactId>slf4j-log4j12</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.apache.zookeeper</groupId> <artifactId>zookeeper</artifactId> <version>${zookeeper.version}</version> <exclusions> <exclusion> <groupId>org.slf4j</groupId> <artifactId>slf4j-log4j12</artifactId> </exclusion> <exclusion> <artifactId>netty</artifactId> <groupId>io.netty</groupId> </exclusion> <exclusion> <groupId>com.github.spotbugs</groupId> <artifactId>spotbugs-annotations</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.apache.curator</groupId> <artifactId>curator-client</artifactId> <version>${curator.version}</version> <exclusions> <exclusion> <groupId>log4j-1.2-api</groupId> 
<artifactId>org.apache.logging.log4j</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.apache.curator</groupId> <artifactId>curator-recipes</artifactId> <version>${curator.version}</version> <exclusions> <exclusion> <groupId>org.apache.zookeeper</groupId> <artifactId>zookeeper</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.apache.curator</groupId> <artifactId>curator-test</artifactId> <version>${curator.test}</version> </dependency> <dependency> <groupId>commons-codec</groupId> <artifactId>commons-codec</artifactId> <version>${commons.codec.version}</version> </dependency> <dependency> <groupId>commons-logging</groupId> <artifactId>commons-logging</artifactId> <version>${commons.logging.version}</version> </dependency> <dependency> <groupId>org.apache.httpcomponents</groupId> <artifactId>httpclient</artifactId> <version>${httpclient.version}</version> </dependency> <dependency> <groupId>org.apache.httpcomponents</groupId> <artifactId>httpcore</artifactId> <version>${httpcore.version}</version> </dependency> <dependency> <groupId>com.fasterxml.jackson.core</groupId> <artifactId>jackson-annotations</artifactId> <version>${jackson.version}</version> </dependency> <dependency> <groupId>com.fasterxml.jackson.core</groupId> <artifactId>jackson-databind</artifactId> <version>${jackson.version}</version> </dependency> <dependency> <groupId>com.fasterxml.jackson.core</groupId> <artifactId>jackson-core</artifactId> <version>${jackson.version}</version> </dependency> <!--protostuff--> <dependency> <groupId>io.protostuff</groupId> <artifactId>protostuff-core</artifactId> <version>${protostuff.version}</version> </dependency> <dependency> <groupId>io.protostuff</groupId> <artifactId>protostuff-runtime</artifactId> <version>${protostuff.version}</version> </dependency> <dependency> <groupId>net.bytebuddy</groupId> <artifactId>byte-buddy</artifactId> <version>${byte-buddy.version}</version> </dependency> <dependency> <groupId>org.reflections</groupId> <artifactId>reflections</artifactId> <version>${reflections.version}</version> </dependency> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>${junit.version}</version> </dependency> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>${mysql.connector.version}</version> <scope>test</scope> </dependency> <dependency> <groupId>com.h2database</groupId> <artifactId>h2</artifactId> <version>${h2.version}</version> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-api</artifactId> <version>${slf4j.api.version}</version> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-log4j12</artifactId> <version>${slf4j.log4j12.version}</version> </dependency> <dependency> <groupId>commons-collections</groupId> <artifactId>commons-collections</artifactId> <version>${commons.collections.version}</version> </dependency> <dependency> <groupId>commons-httpclient</groupId> <artifactId>commons-httpclient</artifactId> <version>${commons.httpclient}</version> </dependency> <dependency> <groupId>commons-beanutils</groupId> <artifactId>commons-beanutils</artifactId> <version>${commons.beanutils.version}</version> </dependency> <dependency> <groupId>commons-configuration</groupId> <artifactId>commons-configuration</artifactId> <version>${commons.configuration.version}</version> </dependency> <dependency> <groupId>commons-lang</groupId> <artifactId>commons-lang</artifactId> 
<version>${commons.lang.version}</version> </dependency> <dependency> <groupId>ch.qos.logback</groupId> <artifactId>logback-classic</artifactId> <version>${logback.version}</version> </dependency> <dependency> <groupId>ch.qos.logback</groupId> <artifactId>logback-core</artifactId> <version>${logback.version}</version> </dependency> <dependency> <groupId>org.apache.commons</groupId> <artifactId>commons-email</artifactId> <version>${commons.email.version}</version> </dependency> <!--excel poi--> <dependency> <groupId>org.apache.poi</groupId> <artifactId>poi</artifactId> <version>${poi.version}</version> </dependency> <dependency> <groupId>org.apache.poi</groupId> <artifactId>poi-ooxml</artifactId> <version>${poi.version}</version> </dependency> <!-- hadoop --> <dependency> <groupId>org.apache.hadoop</groupId> <artifactId>hadoop-common</artifactId> <version>${hadoop.version}</version> <exclusions> <exclusion> <artifactId>slf4j-log4j12</artifactId> <groupId>org.slf4j</groupId> </exclusion> <exclusion> <artifactId>com.sun.jersey</artifactId> <groupId>jersey-json</groupId> </exclusion> <exclusion> <groupId>junit</groupId> <artifactId>junit</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.apache.hadoop</groupId> <artifactId>hadoop-client</artifactId> <version>${hadoop.version}</version> </dependency> <dependency> <groupId>org.apache.hadoop</groupId> <artifactId>hadoop-hdfs</artifactId> <version>${hadoop.version}</version> </dependency> <dependency> <groupId>org.apache.hadoop</groupId> <artifactId>hadoop-yarn-common</artifactId> <version>${hadoop.version}</version> </dependency> <dependency> <groupId>org.apache.hadoop</groupId> <artifactId>hadoop-aws</artifactId> <version>${hadoop.version}</version> </dependency> <dependency> <groupId>org.apache.commons</groupId> <artifactId>commons-collections4</artifactId> <version>${commons.collections4.version}</version> </dependency> <dependency> <groupId>com.google.guava</groupId> <artifactId>guava</artifactId> <version>${guava.version}</version> </dependency> <dependency> <groupId>org.postgresql</groupId> <artifactId>postgresql</artifactId> <version>${postgresql.version}</version> </dependency> <dependency> <groupId>org.apache.hive</groupId> <artifactId>hive-jdbc</artifactId> <version>${hive.jdbc.version}</version> </dependency> <dependency> <groupId>commons-io</groupId> <artifactId>commons-io</artifactId> <version>${commons.io.version}</version> </dependency> <dependency> <groupId>com.github.oshi</groupId> <artifactId>oshi-core</artifactId> <version>${oshi.core.version}</version> </dependency> <dependency> <groupId>ru.yandex.clickhouse</groupId> <artifactId>clickhouse-jdbc</artifactId> <version>${clickhouse.jdbc.version}</version> </dependency> <dependency> <groupId>com.microsoft.sqlserver</groupId> <artifactId>mssql-jdbc</artifactId> <version>${mssql.jdbc.version}</version> </dependency> <dependency> <groupId>com.facebook.presto</groupId> <artifactId>presto-jdbc</artifactId> <version>${presto.jdbc.version}</version> </dependency> <dependency> <groupId>javax.servlet</groupId> <artifactId>servlet-api</artifactId> <version>${servlet-api.version}</version> </dependency> <dependency> <groupId>javax.servlet</groupId> <artifactId>javax.servlet-api</artifactId> <version>${javax.servlet.api.version}</version> </dependency> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-swagger2</artifactId> <version>${springfox.version}</version> </dependency> <dependency> <groupId>io.springfox</groupId> 
<artifactId>springfox-swagger-ui</artifactId> <version>${springfox.version}</version> </dependency> <dependency> <groupId>io.swagger</groupId> <artifactId>swagger-models</artifactId> <version>${swagger-models.version}</version> </dependency> <dependency> <groupId>com.github.xiaoymin</groupId> <artifactId>swagger-bootstrap-ui</artifactId> <version>${swagger.version}</version> </dependency> <dependency> <groupId>com.github.rholder</groupId> <artifactId>guava-retrying</artifactId> <version>${guava-retry.version}</version> </dependency> <dependency> <groupId>org.ow2.asm</groupId> <artifactId>asm</artifactId> <version>6.2.1</version> </dependency> <dependency> <groupId>javax.activation</groupId> <artifactId>activation</artifactId> <version>1.1</version> </dependency> <dependency> <groupId>com.sun.mail</groupId> <artifactId>javax.mail</artifactId> <version>1.6.2</version> </dependency> <dependency> <groupId>net.sf.py4j</groupId> <artifactId>py4j</artifactId> <version>${py4j.version}</version> </dependency> <dependency> <groupId>org.codehaus.janino</groupId> <artifactId>janino</artifactId> <version>${janino.version}</version> </dependency> <dependency> <groupId>com.google.code.findbugs</groupId> <artifactId>jsr305</artifactId> <version>${jsr305.version}</version> </dependency> <dependency> <groupId>org.apache.commons</groupId> <artifactId>commons-compress</artifactId> <version>${commons-compress.version}</version> </dependency> <dependency> <groupId>org.apache.commons</groupId> <artifactId>commons-math3</artifactId> <version>${commons-math3.version}</version> </dependency> <dependency> <groupId>com.google.errorprone</groupId> <artifactId>error_prone_annotations</artifactId> <version>${error_prone_annotations.version}</version> </dependency> </dependencies> </dependencyManagement> <build> <pluginManagement> <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>rpm-maven-plugin</artifactId> <version>${rpm-maven-plugion.version}</version> <inherited>false</inherited> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>${java.version}</source> <target>${java.version}</target> <testSource>${java.version}</testSource> <testTarget>${java.version}</testTarget> </configuration> <version>${maven-compiler-plugin.version}</version> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-release-plugin</artifactId> <version>${maven-release-plugin.version}</version> <configuration> <tagNameFormat>@{project.version}</tagNameFormat> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-assembly-plugin</artifactId> <version>${maven-assembly-plugin.version}</version> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-javadoc-plugin</artifactId> <version>${maven-javadoc-plugin.version}</version> <configuration> <source>8</source> <failOnError>false</failOnError> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-dependency-plugin</artifactId> <version>${maven-dependency-plugin.version}</version> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <version>${maven-jar-plugin.version}</version> </plugin> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <version>${exec-maven-plugin.version}</version> <executions> <execution> <id>docker-build</id> <phase>package</phase> <goals> 
<goal>exec</goal> </goals> <configuration> <environmentVariables> <DOCKER_BUILDKIT>1</DOCKER_BUILDKIT> </environmentVariables> <executable>docker</executable> <workingDirectory>${project.basedir}</workingDirectory> <arguments> <argument>build</argument> <argument>-t</argument> <argument>${docker.hub}/${docker.repo}:${docker.tag}</argument> <argument>-t</argument> <argument>${docker.hub}/${docker.repo}:latest</argument> <argument>.</argument> <argument>--file=src/main/docker/Dockerfile</argument> </arguments> </configuration> </execution> <execution> <id>docker-push</id> <phase>deploy</phase> <goals> <goal>exec</goal> </goals> <configuration> <environmentVariables> <DOCKER_BUILDKIT>1</DOCKER_BUILDKIT> </environmentVariables> <executable>docker</executable> <workingDirectory>${project.basedir}</workingDirectory> <arguments> <argument>buildx</argument> <argument>build</argument> <argument>--push</argument> <argument>-t</argument> <argument>${docker.hub}/${docker.repo}:${docker.tag}</argument> <argument>-t</argument> <argument>${docker.hub}/${docker.repo}:latest</argument> <argument>.</argument> <argument>--file=src/main/docker/Dockerfile</argument> </arguments> </configuration> </execution> </executions> </plugin> </plugins> </pluginManagement> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-javadoc-plugin</artifactId> <version>${maven-javadoc-plugin.version}</version> <executions> <execution> <id>attach-javadocs</id> <goals> <goal>jar</goal> </goals> </execution> </executions> <configuration> <aggregate>true</aggregate> <charset>${project.build.sourceEncoding}</charset> <encoding>${project.build.sourceEncoding}</encoding> <docencoding>${project.build.sourceEncoding}</docencoding> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-release-plugin</artifactId> <version>${maven-release-plugin.version}</version> <configuration> <autoVersionSubmodules>true</autoVersionSubmodules> <tagNameFormat>@{project.version}</tagNameFormat> <tagBase>${project.version}</tagBase> </configuration> <dependencies> <dependency> <groupId>org.apache.maven.scm</groupId> <artifactId>maven-scm-provider-jgit</artifactId> <version>1.9.5</version> </dependency> </dependencies> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>${maven-compiler-plugin.version}</version> <configuration> <source>${java.version}</source> <target>${java.version}</target> <encoding>${project.build.sourceEncoding}</encoding> <skip>false</skip><!--not skip compile test classes--> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <version>${maven-surefire-plugin.version}</version> <dependencies> <dependency> <groupId>org.apache.maven.surefire</groupId> <artifactId>surefire-junit4</artifactId> <version>${maven-surefire-plugin.version}</version> </dependency> </dependencies> <configuration> <systemPropertyVariables> <jacoco-agent.destfile>${project.build.directory}/jacoco.exec</jacoco-agent.destfile> </systemPropertyVariables> </configuration> </plugin> <!-- jenkins plugin jacoco report--> <plugin> <groupId>org.jacoco</groupId> <artifactId>jacoco-maven-plugin</artifactId> <version>${jacoco.version}</version> <configuration> <skip>${jacoco.skip}</skip> <dataFile>${project.build.directory}/jacoco.exec</dataFile> </configuration> <executions> <execution> <id>default-instrument</id> <goals> <goal>instrument</goal> </goals> 
</execution> <execution> <id>default-restore-instrumented-classes</id> <goals> <goal>restore-instrumented-classes</goal> </goals> <configuration> <excludes>com/github/dreamhead/moco/*</excludes> </configuration> </execution> <execution> <id>default-report</id> <goals> <goal>report</goal> </goals> </execution> </executions> </plugin> <plugin> <groupId>com.github.spotbugs</groupId> <artifactId>spotbugs-maven-plugin</artifactId> <version>${spotbugs.version}</version> <configuration> <xmlOutput>true</xmlOutput> <threshold>medium</threshold> <effort>default</effort> <excludeFilterFile>dev-config/spotbugs-exclude.xml</excludeFilterFile> <failOnError>true</failOnError> </configuration> <dependencies> <dependency> <groupId>com.github.spotbugs</groupId> <artifactId>spotbugs</artifactId> <version>4.0.0-beta4</version> </dependency> </dependencies> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-checkstyle-plugin</artifactId> <version>${checkstyle.version}</version> <dependencies> <dependency> <groupId>com.puppycrawl.tools</groupId> <artifactId>checkstyle</artifactId> <version>8.45</version> </dependency> </dependencies> <configuration> <consoleOutput>true</consoleOutput> <encoding>UTF-8</encoding> <configLocation>style/checkstyle.xml</configLocation> <failOnViolation>true</failOnViolation> <violationSeverity>warning</violationSeverity> <includeTestSourceDirectory>true</includeTestSourceDirectory> <sourceDirectories> <sourceDirectory>${project.build.sourceDirectory}</sourceDirectory> </sourceDirectories> <excludes>**\/generated-sources\/</excludes> </configuration> <executions> <execution> <phase>compile</phase> <goals> <goal>check</goal> </goals> </execution> </executions> </plugin> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>cobertura-maven-plugin</artifactId> <version>${cobertura-maven-plugin.version}</version> <configuration> <check> </check> <aggregate>true</aggregate> <outputDirectory>./target/cobertura</outputDirectory> <encoding>${project.build.sourceEncoding}</encoding> <quiet>true</quiet> <format>xml</format> <instrumentation> <ignoreTrivial>true</ignoreTrivial> </instrumentation> </configuration> </plugin> <plugin> <artifactId>maven-source-plugin</artifactId> <version>${maven-source-plugin.version}</version> <executions> <execution> <id>attach-sources</id> <goals> <goal>jar</goal> </goals> </execution> </executions> </plugin> </plugins> </build> <dependencies> <!-- NOTE: only development / test phase dependencies (scope = test / provided) that won't be packaged into final jar can be declared here. For example: annotation processors, test dependencies that are used by most of the submodules. 
--> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.jacoco</groupId> <artifactId>org.jacoco.agent</artifactId> <version>${jacoco.version}</version> <classifier>runtime</classifier> <scope>test</scope> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-configuration-processor</artifactId> <optional>true</optional> </dependency> <dependency> <groupId>com.google.auto.service</groupId> <artifactId>auto-service</artifactId> <version>${auto-service.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>org.powermock</groupId> <artifactId>powermock-api-mockito2</artifactId> <version>${powermock.version}</version> <scope>test</scope> </dependency> <dependency> <groupId>org.powermock</groupId> <artifactId>powermock-module-junit4</artifactId> <version>${powermock.version}</version> <scope>test</scope> </dependency> <dependency> <groupId>org.powermock</groupId> <artifactId>powermock-core</artifactId> <version>${powermock.version}</version> <scope>test</scope> </dependency> </dependencies> <modules> <module>dolphinscheduler-alert</module> <module>dolphinscheduler-spi</module> <module>dolphinscheduler-registry</module> <module>dolphinscheduler-task-plugin</module> <module>dolphinscheduler-ui</module> <module>dolphinscheduler-server</module> <module>dolphinscheduler-common</module> <module>dolphinscheduler-api</module> <module>dolphinscheduler-dao</module> <module>dolphinscheduler-dist</module> <module>dolphinscheduler-remote</module> <module>dolphinscheduler-service</module> <module>dolphinscheduler-microbench</module> <module>dolphinscheduler-standalone-server</module> <module>dolphinscheduler-datasource-plugin</module> <module>dolphinscheduler-python</module> <module>dolphinscheduler-meter</module> <module>dolphinscheduler-master</module> <module>dolphinscheduler-worker</module> <module>dolphinscheduler-log-server</module> <module>dolphinscheduler-tools</module> <module>dolphinscheduler-ui-next</module> </modules> </project>
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,554
[Bug] [Standalone] H2 in standalone server will auto restart after several minutes
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened After the standalone server had been running for several minutes, the data in the database was lost. It seems H2 has been restarted. I suspect this is related to `minimum-idle`; I will take a deeper look at it. ### What you expected to happen The data should not be lost. ### How to reproduce Start the standalone server and wait; this may happen within several minutes. To reproduce it faster, set minimum-idle=1/2/3, and it will then reproduce in less than 3 minutes. ### Anything else _No response_ ### Version dev ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7554
https://github.com/apache/dolphinscheduler/pull/7556
146471eb487293684be81a0697b2bb56c86bb187
82075a4476c16e1ab3806d914e571e0cf48bebc0
"2021-12-22T12:43:19Z"
java
"2021-12-23T01:26:42Z"
dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/aspect/AccessLogAnnotation.java
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.dolphinscheduler.api.aspect; import java.lang.annotation.Documented; import java.lang.annotation.ElementType; import java.lang.annotation.Retention; import java.lang.annotation.RetentionPolicy; import java.lang.annotation.Target; @Target(ElementType.METHOD) @Retention(RetentionPolicy.RUNTIME) @Documented public @interface AccessLogAnnotation { // ignore request args String[] ignoreRequestArgs() default {}; boolean ignoreRequest() default false; boolean ignoreResponse() default true; }
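For orientation, here is a minimal, hypothetical usage sketch of the annotation defined above. The controller class, method, and the `loginUser` parameter name are illustrative assumptions and do not come from this record; only the annotation type and its `ignoreRequestArgs` attribute are taken from the file itself. The aspect that consumes the annotation is not part of this record, so its behavior is described only as expected.

```
import org.apache.dolphinscheduler.api.aspect.AccessLogAnnotation;

// Hypothetical controller method showing how the annotation could be applied.
// "loginUser" is an assumed parameter name that is excluded from the access
// log so that session/credential objects are not written to the log file.
public class ExampleController {

    @AccessLogAnnotation(ignoreRequestArgs = {"loginUser"})
    public String queryExample(Object loginUser, String searchVal) {
        // Business logic would go here. The consuming aspect is expected to
        // log the remaining request args and to skip the response, since
        // ignoreResponse defaults to true in the annotation above.
        return "result";
    }
}
```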
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,554
[Bug] [Standalone] H2 in standalone server will auto restart after several minutes
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened After the standalone server had been running for several minutes, the data in the database was lost. It seems H2 has been restarted. I suspect this is related to `minimum-idle`; I will take a deeper look at it. ### What you expected to happen The data should not be lost. ### How to reproduce Start the standalone server and wait; this may happen within several minutes. To reproduce it faster, set minimum-idle=1/2/3, and it will then reproduce in less than 3 minutes. ### Anything else _No response_ ### Version dev ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7554
https://github.com/apache/dolphinscheduler/pull/7556
146471eb487293684be81a0697b2bb56c86bb187
82075a4476c16e1ab3806d914e571e0cf48bebc0
"2021-12-22T12:43:19Z"
java
"2021-12-23T01:26:42Z"
dolphinscheduler-standalone-server/src/main/resources/application.yaml
# # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # spring: application: name: standalone-server main: banner-mode: off cache: # default enable cache, you can disable by `type: none` type: none cache-names: - tenant - user - processDefinition - processTaskRelation - taskDefinition caffeine: spec: maximumSize=100,expireAfterWrite=300s,recordStats datasource: driver-class-name: org.h2.Driver url: jdbc:h2:mem:dolphinscheduler;MODE=MySQL;DB_CLOSE_DELAY=-1;DATABASE_TO_LOWER=true;INIT=runscript from 'classpath:sql/dolphinscheduler_h2.sql' username: sa password: "" hikari: connection-test-query: select 1 minimum-idle: 5 auto-commit: true validation-timeout: 3000 pool-name: DolphinScheduler maximum-pool-size: 50 connection-timeout: 30000 idle-timeout: 600000 leak-detection-threshold: 0 initialization-fail-timeout: 1 quartz: job-store-type: jdbc jdbc: initialize-schema: never properties: org.quartz.threadPool:threadPriority: 5 org.quartz.jobStore.isClustered: true org.quartz.jobStore.class: org.quartz.impl.jdbcjobstore.JobStoreTX org.quartz.scheduler.instanceId: AUTO org.quartz.jobStore.tablePrefix: QRTZ_ org.quartz.jobStore.acquireTriggersWithinLock: true org.quartz.scheduler.instanceName: DolphinScheduler org.quartz.threadPool.class: org.quartz.simpl.SimpleThreadPool org.quartz.jobStore.useProperties: false org.quartz.threadPool.makeThreadsDaemons: true org.quartz.threadPool.threadCount: 25 org.quartz.jobStore.misfireThreshold: 60000 org.quartz.scheduler.makeSchedulerThreadDaemon: true org.quartz.jobStore.driverDelegateClass: org.quartz.impl.jdbcjobstore.StdJDBCDelegate org.quartz.jobStore.clusterCheckinInterval: 5000 jackson: time-zone: GMT+8 servlet: multipart: max-file-size: 1024MB max-request-size: 1024MB messages: basename: i18n/messages registry: type: zookeeper zookeeper: namespace: dolphinscheduler connect-string: localhost:2181 retry-policy: base-sleep-time: 60ms max-sleep: 300ms max-retries: 5 session-timeout: 30s connection-timeout: 9s block-until-connected: 600ms digest: ~ master: listen-port: 5678 # master fetch command num fetch-command-num: 10 # master prepare execute thread number to limit handle commands in parallel pre-exec-threads: 10 # master execute thread number to limit process instances in parallel exec-threads: 100 # master dispatch task number per batch dispatch-task-number: 3 # master host selector to select a suitable worker, default value: LowerWeight. 
Optional values include random, round_robin, lower_weight host-selector: lower_weight # master heartbeat interval, the unit is second heartbeat-interval: 10 # master commit task retry times task-commit-retry-times: 5 # master commit task interval, the unit is millisecond task-commit-interval: 1000 state-wheel-interval: 5 # master max cpuload avg, only higher than the system cpu load average, master server can schedule. default value -1: the number of cpu cores * 2 max-cpu-load-avg: -1 # master reserved memory, only lower than system available memory, master server can schedule. default value 0.3, the unit is G reserved-memory: 0.3 # use task logger, default true; if true, it will create log for every task; if false, the task log will append to master log file task-logger: true worker: # worker listener port listen-port: 1234 # worker execute thread number to limit task instances in parallel exec-threads: 100 # worker heartbeat interval, the unit is second heartbeat-interval: 10 # worker host weight to dispatch tasks, default value 100 host-weight: 100 # worker tenant auto create tenant-auto-create: true # worker max cpuload avg, only higher than the system cpu load average, worker server can be dispatched tasks. default value -1: the number of cpu cores * 2 max-cpu-load-avg: -1 # worker reserved memory, only lower than system available memory, worker server can be dispatched tasks. default value 0.3, the unit is G reserved-memory: 0.3 # default worker groups separated by comma, like 'worker.groups=default,test' groups: - default # alert server listen host alert-listen-host: localhost alert-listen-port: 50052 alert: port: 50052 server: port: 12345 servlet: session: timeout: 120m context-path: /dolphinscheduler/ compression: enabled: true mime-types: text/html,text/xml,text/plain,text/css,text/javascript,application/javascript,application/json,application/xml jetty: max-http-form-post-size: 5000000 management: endpoints: web: exposure: include: '*' metrics: tags: application: ${spring.application.name} # Override by profile --- spring: config: activate: on-profile: postgresql quartz: properties: org.quartz.jobStore.driverDelegateClass: org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
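The bug report above is about the in-memory H2 database in the standalone server apparently resetting. A minimal sketch for checking H2's lifetime behavior in isolation, assuming only the `com.h2database:h2` driver on the classpath (the table name and values are made up for illustration): with `DB_CLOSE_DELAY=-1`, as in the JDBC URL of the configuration above, the in-memory database should survive closing the last connection, so observed data loss would point at something recreating the datasource (for example, pool behavior around `minimum-idle`) rather than at H2 itself.

```
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class H2LifetimeCheck {
    // Same DB_CLOSE_DELAY=-1 flag as in the standalone configuration above:
    // the in-memory database is kept alive until the JVM exits, even after
    // every connection to it has been closed.
    private static final String URL = "jdbc:h2:mem:check;DB_CLOSE_DELAY=-1";

    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection(URL, "sa", "");
             Statement s = c.createStatement()) {
            s.execute("CREATE TABLE t (id INT)");
            s.execute("INSERT INTO t VALUES (1)");
        }
        // Re-open after the first connection is fully closed; with
        // DB_CLOSE_DELAY=-1 the row inserted above must still be present.
        try (Connection c = DriverManager.getConnection(URL, "sa", "");
             Statement s = c.createStatement();
             ResultSet rs = s.executeQuery("SELECT COUNT(*) FROM t")) {
            rs.next();
            System.out.println("rows after reconnect: " + rs.getInt(1));
        }
    }
}
```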
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,534
[Bug] [Master] zookeeper failover error
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened ``` [INFO] 2021-12-22 07:02:38.909 org.apache.zookeeper.ClientCnxn:[1158] - Unable to read additional data from server sessionid 0x200000aec62004e, likely server has closed socket, closing socket connection and attempting reconnect [INFO] 2021-12-22 07:02:39.010 org.apache.curator.framework.state.ConnectionStateManager:[251] - State change: SUSPENDED [WARN] 2021-12-22 07:02:39.011 org.apache.dolphinscheduler.plugin.registry.zookeeper.ZookeeperConnectionStateListener:[50] - Registry suspended [WARN] 2021-12-22 07:02:39.011 org.apache.dolphinscheduler.server.master.registry.MasterRegistryClient:[417] - registry connection state is SUSPENDED, ready to stop myself [INFO] 2021-12-22 07:02:39.011 org.apache.dolphinscheduler.server.master.MasterServer:[196] - master server is stopping ..., cause : registry connection state is SUSPENDED, stop myself [INFO] 2021-12-22 07:02:39.681 org.apache.zookeeper.ClientCnxn:[1025] - Opening socket connection to server z-3.bi-kafka.d797mt.c8.kafka.us-west-2.amazonaws.com/192.168.22.252:2181. Will not attempt to authenticate using SASL (unknown error) [INFO] 2021-12-22 07:02:39.684 org.apache.zookeeper.ClientCnxn:[879] - Socket connection established to z-3.bi-kafka.d797mt.c8.kafka.us-west-2.amazonaws.com/192.168.22.252:2181, initiating session [INFO] 2021-12-22 07:02:39.686 org.apache.zookeeper.ClientCnxn:[1158] - Unable to read additional data from server sessionid 0x200000aec62004e, likely server has closed socket, closing socket connection and attempting reconnect [INFO] 2021-12-22 07:02:39.811 org.apache.zookeeper.ClientCnxn:[1025] - Opening socket connection to server z-1.bi-kafka.d797mt.c8.kafka.us-west-2.amazonaws.com/192.168.25.7:2181. Will not attempt to authenticate using SASL (unknown error) [INFO] 2021-12-22 07:02:39.811 org.apache.zookeeper.ClientCnxn:[879] - Socket connection established to z-1.bi-kafka.d797mt.c8.kafka.us-west-2.amazonaws.com/192.168.25.7:2181, initiating session [INFO] 2021-12-22 07:02:39.812 org.apache.zookeeper.ClientCnxn:[1158] - Unable to read additional data from server sessionid 0x200000aec62004e, likely server has closed socket, closing socket connection and attempting reconnect [INFO] 2021-12-22 07:02:41.740 org.apache.zookeeper.ClientCnxn:[1025] - Opening socket connection to server z-2.bi-kafka.d797mt.c8.kafka.us-west-2.amazonaws.com/192.168.20.121:2181. Will not attempt to authenticate using SASL (unknown error) [INFO] 2021-12-22 07:02:41.741 org.apache.zookeeper.ClientCnxn:[1162] - Socket error occurred: z-2.bi-kafka.d797mt.c8.kafka.us-west-2.amazonaws.com/192.168.20.121:2181: Connection refused [INFO] 2021-12-22 07:02:42.098 org.apache.dolphinscheduler.remote.NettyRemotingClient:[390] - netty client closed [INFO] 2021-12-22 07:02:42.098 org.apache.dolphinscheduler.server.master.runner.MasterSchedulerService:[160] - master schedule service stopped... [INFO] 2021-12-22 07:02:42.104 org.apache.dolphinscheduler.remote.NettyRemotingServer:[243] - netty server closed [INFO] 2021-12-22 07:02:42.492 org.apache.zookeeper.ClientCnxn:[1025] - Opening socket connection to server z-3.bi-kafka.d797mt.c8.kafka.us-west-2.amazonaws.com/192.168.22.252:2181. 
Will not attempt to authenticate using SASL (unknown error) [INFO] 2021-12-22 07:02:42.493 org.apache.zookeeper.ClientCnxn:[879] - Socket connection established to z-3.bi-kafka.d797mt.c8.kafka.us-west-2.amazonaws.com/192.168.22.252:2181, initiating session [INFO] 2021-12-22 07:02:42.494 org.apache.zookeeper.ClientCnxn:[1299] - Session establishment complete on server z-3.bi-kafka.d797mt.c8.kafka.us-west-2.amazonaws.com/192.168.22.252:2181, sessionid = 0x200000aec62004e, negotiated timeout = 30000 [INFO] 2021-12-22 07:02:42.494 org.apache.curator.framework.state.ConnectionStateManager:[251] - State change: RECONNECTED [INFO] 2021-12-22 07:02:42.502 org.apache.dolphinscheduler.server.master.registry.MasterRegistryClient:[437] - master node : 192.168.25.200:15678 unRegistry to register center. [INFO] 2021-12-22 07:02:42.502 org.apache.dolphinscheduler.server.master.registry.ServerNodeManager:[286] - master node : /nodes/master/192.168.25.200:15678 down. [INFO] 2021-12-22 07:02:42.502 org.apache.dolphinscheduler.server.master.registry.MasterRegistryClient:[160] - MASTER node deleted : /nodes/master/192.168.25.200:15678 [INFO] 2021-12-22 07:02:42.502 org.apache.dolphinscheduler.server.master.registry.MasterRegistryClient:[439] - heartbeat executor shutdown [INFO] 2021-12-22 07:02:42.505 org.apache.curator.framework.imps.CuratorFrameworkImpl:[955] - backgroundOperationsLoop exiting [INFO] 2021-12-22 07:02:42.505 org.apache.zookeeper.ClientCnxn:[522] - EventThread shut down for session: 0x200000aec62004e [INFO] 2021-12-22 07:02:42.505 org.apache.zookeeper.ZooKeeper:[693] - Session: 0x200000aec62004e closed [INFO] 2021-12-22 07:02:42.506 org.quartz.core.QuartzScheduler:[666] - Scheduler DolphinScheduler_$_ip-192-168-25-2001639723938667 shutting down. [INFO] 2021-12-22 07:02:42.506 org.quartz.core.QuartzScheduler:[585] - Scheduler DolphinScheduler_$_ip-192-168-25-2001639723938667 paused. [INFO] 2021-12-22 07:02:42.509 com.zaxxer.hikari.HikariDataSource:[350] - DolphinScheduler - Shutdown initiated... [INFO] 2021-12-22 07:02:42.512 com.zaxxer.hikari.HikariDataSource:[352] - DolphinScheduler - Shutdown completed. [INFO] 2021-12-22 07:02:42.513 org.quartz.core.QuartzScheduler:[740] - Scheduler DolphinScheduler_$_ip-192-168-25-2001639723938667 shutdown complete. [INFO] 2021-12-22 07:02:42.513 org.apache.dolphinscheduler.service.quartz.QuartzExecutors:[210] - Quartz service stopped, and halt all tasks [INFO] 2021-12-22 07:02:42.513 org.apache.dolphinscheduler.server.master.MasterServer:[214] - Quartz service stopped [INFO] 2021-12-22 07:02:42.515 org.quartz.core.QuartzScheduler:[585] - Scheduler quartzScheduler_$_NON_CLUSTERED paused. [INFO] 2021-12-22 07:02:42.517 org.apache.dolphinscheduler.remote.NettyRemotingClient:[390] - netty client closed [INFO] 2021-12-22 07:02:42.517 org.apache.dolphinscheduler.service.log.LogClientService:[74] - logger client closed [INFO] 2021-12-22 07:02:42.517 org.springframework.scheduling.quartz.SchedulerFactoryBean:[845] - Shutting down Quartz Scheduler [INFO] 2021-12-22 07:02:42.517 org.quartz.core.QuartzScheduler:[666] - Scheduler quartzScheduler_$_NON_CLUSTERED shutting down. [INFO] 2021-12-22 07:02:42.517 org.quartz.core.QuartzScheduler:[585] - Scheduler quartzScheduler_$_NON_CLUSTERED paused. [INFO] 2021-12-22 07:02:42.518 org.quartz.core.QuartzScheduler:[740] - Scheduler quartzScheduler_$_NON_CLUSTERED shutdown complete. 
[INFO] 2021-12-22 07:02:42.520 org.apache.dolphinscheduler.server.master.processor.queue.TaskResponseService:[139] - StateEventResponseWorker stopped [WARN] 2021-12-22 07:02:42.522 org.apache.dolphinscheduler.server.master.processor.queue.StateEventResponseService:[115] - persist task error java.lang.InterruptedException: null at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at org.apache.dolphinscheduler.server.master.processor.queue.StateEventResponseService$StateEventResponseWorker.run(StateEventResponseService.java:112) [INFO] 2021-12-22 07:02:42.522 org.apache.dolphinscheduler.server.master.processor.queue.StateEventResponseService:[120] - StateEventResponseWorker stopped [ERROR] 2021-12-22 07:02:50.169 org.apache.curator.framework.imps.CuratorFrameworkImpl:[703] - Background exception was not retry-able or retry gave up java.lang.IllegalStateException: Client is not started at org.apache.curator.shaded.com.google.common.base.Preconditions.checkState(Preconditions.java:507) at org.apache.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:162) at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:969) at org.apache.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:638) at org.apache.curator.framework.imps.WatcherRemovalFacade.processBackgroundOperation(WatcherRemovalFacade.java:152) at org.apache.curator.framework.imps.FindAndDeleteProtectedNodeInBackground.execute(FindAndDeleteProtectedNodeInBackground.java:60) at org.apache.curator.framework.imps.CreateBuilderImpl.protectedPathInForeground(CreateBuilderImpl.java:619) at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:597) at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:575) at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:51) at org.apache.curator.framework.recipes.locks.StandardLockInternalsDriver.createsTheLock(StandardLockInternalsDriver.java:54) at org.apache.curator.framework.recipes.locks.LockInternals.attemptLock(LockInternals.java:225) at org.apache.curator.framework.recipes.locks.InterProcessMutex.internalLock(InterProcessMutex.java:237) at org.apache.curator.framework.recipes.locks.InterProcessMutex.acquire(InterProcessMutex.java:89) at org.apache.dolphinscheduler.plugin.registry.zookeeper.ZookeeperRegistry.acquireLock(ZookeeperRegistry.java:203) at org.apache.dolphinscheduler.service.registry.RegistryClient.getLock(RegistryClient.java:237) at org.apache.dolphinscheduler.server.master.registry.MasterRegistryClient.removeNodePath(MasterRegistryClient.java:163) at org.apache.dolphinscheduler.server.master.registry.MasterRegistryDataListener.handleMasterEvent(MasterRegistryDataListener.java:66) at org.apache.dolphinscheduler.server.master.registry.MasterRegistryDataListener.notify(MasterRegistryDataListener.java:52) at org.apache.dolphinscheduler.plugin.registry.zookeeper.ZookeeperRegistry.lambda$subscribe$1(ZookeeperRegistry.java:127) at org.apache.curator.framework.recipes.cache.TreeCache$2.apply(TreeCache.java:760) at org.apache.curator.framework.recipes.cache.TreeCache$2.apply(TreeCache.java:754) at 
org.apache.curator.framework.listen.ListenerContainer$1.run(ListenerContainer.java:100) at org.apache.curator.shaded.com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30) at org.apache.curator.framework.listen.ListenerContainer.forEach(ListenerContainer.java:92) at org.apache.curator.framework.recipes.cache.TreeCache.callListeners(TreeCache.java:753) at org.apache.curator.framework.recipes.cache.TreeCache.access$1900(TreeCache.java:75) at org.apache.curator.framework.recipes.cache.TreeCache$4.run(TreeCache.java:865) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) ``` ### What you expected to happen Failover should succeed. ### How to reproduce Trigger a ZooKeeper failover. ### Anything else _No response_ ### Version 2.0.1 ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7534
https://github.com/apache/dolphinscheduler/pull/7562
97aecb40a6a265a9dcbd0e6082d5fe6afb40a347
1efa85ca27209772ebbb5efbaead036fb33815c0
"2021-12-22T02:10:08Z"
java
"2021-12-24T03:40:21Z"
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/registry/MasterRegistryClient.java
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.dolphinscheduler.server.master.registry; import static org.apache.dolphinscheduler.common.Constants.REGISTRY_DOLPHINSCHEDULER_MASTERS; import static org.apache.dolphinscheduler.common.Constants.REGISTRY_DOLPHINSCHEDULER_NODE; import static org.apache.dolphinscheduler.common.Constants.SLEEP_TIME_MILLIS; import org.apache.dolphinscheduler.common.Constants; import org.apache.dolphinscheduler.common.IStoppable; import org.apache.dolphinscheduler.common.enums.ExecutionStatus; import org.apache.dolphinscheduler.common.enums.NodeType; import org.apache.dolphinscheduler.common.enums.StateEvent; import org.apache.dolphinscheduler.common.enums.StateEventType; import org.apache.dolphinscheduler.common.model.Server; import org.apache.dolphinscheduler.common.thread.ThreadUtils; import org.apache.dolphinscheduler.common.utils.NetUtils; import org.apache.dolphinscheduler.dao.entity.ProcessInstance; import org.apache.dolphinscheduler.dao.entity.TaskInstance; import org.apache.dolphinscheduler.registry.api.ConnectionState; import org.apache.dolphinscheduler.remote.utils.NamedThreadFactory; import org.apache.dolphinscheduler.server.builder.TaskExecutionContextBuilder; import org.apache.dolphinscheduler.server.master.config.MasterConfig; import org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThreadPool; import org.apache.dolphinscheduler.server.registry.HeartBeatTask; import org.apache.dolphinscheduler.server.utils.ProcessUtils; import org.apache.dolphinscheduler.service.process.ProcessService; import org.apache.dolphinscheduler.service.queue.entity.TaskExecutionContext; import org.apache.dolphinscheduler.service.registry.RegistryClient; import org.apache.commons.lang.StringUtils; import java.util.Collections; import java.util.Date; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.concurrent.Executors; import java.util.concurrent.ScheduledExecutorService; import java.util.concurrent.TimeUnit; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Component; import com.google.common.collect.Sets; /** * zookeeper master client * <p> * single instance */ @Component public class MasterRegistryClient { /** * logger */ private static final Logger logger = LoggerFactory.getLogger(MasterRegistryClient.class); /** * process service */ @Autowired private ProcessService processService; @Autowired private RegistryClient registryClient; /** * master config */ @Autowired private MasterConfig masterConfig; /** * heartbeat executor */ private ScheduledExecutorService heartBeatExecutor; @Autowired private WorkflowExecuteThreadPool workflowExecuteThreadPool; /** * master startup time, 
ms */ private long startupTime; private String localNodePath; public void init() { this.startupTime = System.currentTimeMillis(); this.heartBeatExecutor = Executors.newSingleThreadScheduledExecutor(new NamedThreadFactory("HeartBeatExecutor")); } public void start() { String nodeLock = Constants.REGISTRY_DOLPHINSCHEDULER_LOCK_FAILOVER_STARTUP_MASTERS; try { // create distributed lock with the root node path of the lock space as /dolphinscheduler/lock/failover/startup-masters registryClient.getLock(nodeLock); // master registry registry(); String registryPath = getMasterPath(); registryClient.handleDeadServer(Collections.singleton(registryPath), NodeType.MASTER, Constants.DELETE_OP); // init system node while (!registryClient.checkNodeExists(NetUtils.getHost(), NodeType.MASTER)) { ThreadUtils.sleep(SLEEP_TIME_MILLIS); } // self tolerant if (registryClient.getActiveMasterNum() == 1) { removeNodePath(null, NodeType.MASTER, true); removeNodePath(null, NodeType.WORKER, true); } registryClient.subscribe(REGISTRY_DOLPHINSCHEDULER_NODE, new MasterRegistryDataListener()); } catch (Exception e) { logger.error("master start up exception", e); } finally { registryClient.releaseLock(nodeLock); } } public void setRegistryStoppable(IStoppable stoppable) { registryClient.setStoppable(stoppable); } public void closeRegistry() { // TODO unsubscribe MasterRegistryDataListener deregister(); } /** * remove zookeeper node path * * @param path zookeeper node path * @param nodeType zookeeper node type * @param failover is failover */ public void removeNodePath(String path, NodeType nodeType, boolean failover) { logger.info("{} node deleted : {}", nodeType, path); String failoverPath = getFailoverLockPath(nodeType); try { registryClient.getLock(failoverPath); String serverHost = null; if (!StringUtils.isEmpty(path)) { serverHost = registryClient.getHostByEventDataPath(path); if (StringUtils.isEmpty(serverHost)) { logger.error("server down error: unknown path: {}", path); return; } // handle dead server registryClient.handleDeadServer(Collections.singleton(path), nodeType, Constants.ADD_OP); } //failover server if (failover) { failoverServerWhenDown(serverHost, nodeType); } } catch (Exception e) { logger.error("{} server failover failed.", nodeType); logger.error("failover exception ", e); } finally { registryClient.releaseLock(failoverPath); } } /** * failover server when server down * * @param serverHost server host * @param nodeType zookeeper node type */ private void failoverServerWhenDown(String serverHost, NodeType nodeType) { switch (nodeType) { case MASTER: failoverMaster(serverHost); break; case WORKER: failoverWorker(serverHost); break; default: break; } } /** * get failover lock path * * @param nodeType zookeeper node type * @return fail over lock path */ private String getFailoverLockPath(NodeType nodeType) { switch (nodeType) { case MASTER: return Constants.REGISTRY_DOLPHINSCHEDULER_LOCK_FAILOVER_MASTERS; case WORKER: return Constants.REGISTRY_DOLPHINSCHEDULER_LOCK_FAILOVER_WORKERS; default: return ""; } } /** * task needs failover if task start before worker starts * * @param taskInstance task instance * @return true if task instance need fail over */ private boolean checkTaskInstanceNeedFailover(TaskInstance taskInstance) { boolean taskNeedFailover = true; //now no host will execute this task instance,so no need to failover the task if (taskInstance.getHost() == null) { return false; } // if the worker node exists in zookeeper, we must check the task starts after the worker if 
(registryClient.checkNodeExists(taskInstance.getHost(), NodeType.WORKER)) { //if task start after worker starts, there is no need to failover the task. if (checkTaskAfterWorkerStart(taskInstance)) { taskNeedFailover = false; } } return taskNeedFailover; } /** * check task start after the worker server starts. * * @param taskInstance task instance * @return true if task instance start time after worker server start date */ private boolean checkTaskAfterWorkerStart(TaskInstance taskInstance) { if (StringUtils.isEmpty(taskInstance.getHost())) { return false; } Date workerServerStartDate = null; List<Server> workerServers = registryClient.getServerList(NodeType.WORKER); for (Server workerServer : workerServers) { if (taskInstance.getHost().equals(workerServer.getHost() + Constants.COLON + workerServer.getPort())) { workerServerStartDate = workerServer.getCreateTime(); break; } } if (workerServerStartDate != null) { return taskInstance.getStartTime().after(workerServerStartDate); } return false; } /** * failover worker tasks * <p> * 1. kill yarn job if there are yarn jobs in tasks. * 2. change task state from running to need failover. * 3. failover all tasks when workerHost is null * * @param workerHost worker host */ private void failoverWorker(String workerHost) { if (StringUtils.isEmpty(workerHost)) { return; } long startTime = System.currentTimeMillis(); List<TaskInstance> needFailoverTaskInstanceList = processService.queryNeedFailoverTaskInstances(workerHost); Map<Integer, ProcessInstance> processInstanceCacheMap = new HashMap<>(); logger.info("start worker[{}] failover, task list size:{}", workerHost, needFailoverTaskInstanceList.size()); for (TaskInstance taskInstance : needFailoverTaskInstanceList) { ProcessInstance processInstance = processInstanceCacheMap.get(taskInstance.getProcessInstanceId()); if (processInstance == null) { processInstance = processService.findProcessInstanceDetailById(taskInstance.getProcessInstanceId()); if (processInstance == null) { logger.error("failover task instance error, processInstance {} of taskInstance {} is null", taskInstance.getProcessInstanceId(), taskInstance.getId()); continue; } processInstanceCacheMap.put(processInstance.getId(), processInstance); taskInstance.setProcessInstance(processInstance); TaskExecutionContext taskExecutionContext = TaskExecutionContextBuilder.get() .buildTaskInstanceRelatedInfo(taskInstance) .buildProcessInstanceRelatedInfo(processInstance) .create(); // only kill yarn job if exists , the local thread has exited ProcessUtils.killYarnJob(taskExecutionContext); taskInstance.setState(ExecutionStatus.NEED_FAULT_TOLERANCE); processService.saveTaskInstance(taskInstance); StateEvent stateEvent = new StateEvent(); stateEvent.setTaskInstanceId(taskInstance.getId()); stateEvent.setType(StateEventType.TASK_STATE_CHANGE); stateEvent.setProcessInstanceId(processInstance.getId()); stateEvent.setExecutionStatus(taskInstance.getState()); workflowExecuteThreadPool.submitStateEvent(stateEvent); } // only failover the task owned myself if worker down. 
if (processInstance.getHost().equalsIgnoreCase(getLocalAddress())) { logger.info("failover task instance id: {}, process instance id: {}", taskInstance.getId(), taskInstance.getProcessInstanceId()); failoverTaskInstance(processInstance, taskInstance); } } logger.info("end worker[{}] failover, useTime:{}ms", workerHost, System.currentTimeMillis() - startTime); } /** * failover master * <p> * failover process instance and associated task instance * * @param masterHost master host */ private void failoverMaster(String masterHost) { if (StringUtils.isEmpty(masterHost)) { return; } long startTime = System.currentTimeMillis(); List<ProcessInstance> needFailoverProcessInstanceList = processService.queryNeedFailoverProcessInstances(masterHost); logger.info("start master[{}] failover, process list size:{}", masterHost, needFailoverProcessInstanceList.size()); for (ProcessInstance processInstance : needFailoverProcessInstanceList) { if (Constants.NULL.equals(processInstance.getHost())) { continue; } logger.info("failover process instance id: {}", processInstance.getId()); List<TaskInstance> validTaskInstanceList = processService.findValidTaskListByProcessId(processInstance.getId()); for (TaskInstance taskInstance : validTaskInstanceList) { if (Constants.NULL.equals(taskInstance.getHost())) { continue; } logger.info("failover task instance id: {}, process instance id: {}", taskInstance.getId(), taskInstance.getProcessInstanceId()); failoverTaskInstance(processInstance, taskInstance); } //updateProcessInstance host is null and insert into command processService.processNeedFailoverProcessInstances(processInstance); } logger.info("master[{}] failover end, useTime:{}ms", masterHost, System.currentTimeMillis() - startTime); } private void failoverTaskInstance(ProcessInstance processInstance, TaskInstance taskInstance) { if (taskInstance == null) { logger.error("failover task instance error, taskInstance is null"); return; } if (processInstance == null) { logger.error("failover task instance error, processInstance {} of taskInstance {} is null", taskInstance.getProcessInstanceId(), taskInstance.getId()); return; } if (!checkTaskInstanceNeedFailover(taskInstance)) { return; } taskInstance.setProcessInstance(processInstance); TaskExecutionContext taskExecutionContext = TaskExecutionContextBuilder.get() .buildTaskInstanceRelatedInfo(taskInstance) .buildProcessInstanceRelatedInfo(processInstance) .create(); // only kill yarn job if exists , the local thread has exited ProcessUtils.killYarnJob(taskExecutionContext); taskInstance.setState(ExecutionStatus.NEED_FAULT_TOLERANCE); processService.saveTaskInstance(taskInstance); StateEvent stateEvent = new StateEvent(); stateEvent.setTaskInstanceId(taskInstance.getId()); stateEvent.setType(StateEventType.TASK_STATE_CHANGE); stateEvent.setProcessInstanceId(processInstance.getId()); stateEvent.setExecutionStatus(taskInstance.getState()); workflowExecuteThreadPool.submitStateEvent(stateEvent); } /** * registry */ public void registry() { String address = NetUtils.getAddr(masterConfig.getListenPort()); localNodePath = getMasterPath(); int masterHeartbeatInterval = masterConfig.getHeartbeatInterval(); HeartBeatTask heartBeatTask = new HeartBeatTask(startupTime, masterConfig.getMaxCpuLoadAvg(), masterConfig.getReservedMemory(), Sets.newHashSet(getMasterPath()), Constants.MASTER_TYPE, registryClient); registryClient.persistEphemeral(localNodePath, heartBeatTask.getHeartBeatInfo()); registryClient.addConnectionStateListener(this::handleConnectionState); 
this.heartBeatExecutor.scheduleAtFixedRate(heartBeatTask, masterHeartbeatInterval, masterHeartbeatInterval, TimeUnit.SECONDS); logger.info("master node : {} registry to ZK successfully with heartBeatInterval : {}s", address, masterHeartbeatInterval); } public void handleConnectionState(ConnectionState state) { switch (state) { case CONNECTED: logger.debug("registry connection state is {}", state); break; case SUSPENDED: logger.warn("registry connection state is {}, ready to stop myself", state); registryClient.getStoppable().stop("registry connection state is SUSPENDED, stop myself"); break; case RECONNECTED: logger.debug("registry connection state is {}, clean the node info", state); registryClient.persistEphemeral(localNodePath, ""); break; case DISCONNECTED: logger.warn("registry connection state is {}, ready to stop myself", state); registryClient.getStoppable().stop("registry connection state is DISCONNECTED, stop myself"); break; default: } } public void deregister() { try { String address = getLocalAddress(); String localNodePath = getMasterPath(); registryClient.remove(localNodePath); logger.info("master node : {} unRegistry to register center.", address); heartBeatExecutor.shutdown(); logger.info("heartbeat executor shutdown"); registryClient.close(); } catch (Exception e) { logger.error("remove registry path exception ", e); } } /** * get master path */ public String getMasterPath() { String address = getLocalAddress(); return REGISTRY_DOLPHINSCHEDULER_MASTERS + "/" + address; } /** * get local address */ private String getLocalAddress() { return NetUtils.getAddr(masterConfig.getListenPort()); } }
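Both the "Client is not started" stack trace in the issue body and the `handleConnectionState` method above revolve around Curator connection-state transitions. Below is a stripped-down sketch of the same listener pattern, assuming `curator-framework` on the classpath; the connect string, the sleep, and the log-instead-of-stop action are placeholders, and the retry values loosely mirror the registry settings shown earlier in this document. The point it illustrates: SUSPENDED (or LOST, in raw Curator terms) triggers a self-stop, while RECONNECTED refreshes the ephemeral node; if the stop path closes the Curator client while background operations are still queued, the `IllegalStateException: Client is not started` from the issue can surface.

```
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.state.ConnectionState;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class ConnectionStateSketch {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory
                .newClient("localhost:2181", new ExponentialBackoffRetry(60, 5));
        // Same state machine as handleConnectionState above: stop on
        // SUSPENDED/LOST, refresh the ephemeral node on RECONNECTED.
        client.getConnectionStateListenable().addListener((c, newState) -> {
            if (newState == ConnectionState.SUSPENDED
                    || newState == ConnectionState.LOST) {
                // In the master this calls registryClient.getStoppable().stop(...).
                // If that stop closes the Curator client while a background
                // operation is still queued, Curator raises the
                // "Client is not started" IllegalStateException seen above.
                System.err.println("registry state " + newState + ", would stop");
            } else if (newState == ConnectionState.RECONNECTED) {
                System.out.println("reconnected, would re-create ephemeral node");
            }
        });
        client.start();
        Thread.sleep(10_000);
        client.close();
    }
}
```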
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,615
[Bug] [python] Examples have package import errors
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened Running `python ./examples/task_dependent_example.py` and `python ./examples/task_switch_example.py` raises errors caused by broken package imports. ### What you expected to happen The examples should run without errors. ### How to reproduce Run `python ./examples/task_dependent_example.py` and `python ./examples/task_switch_example.py`. ### Anything else _No response_ ### Version dev ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7615
https://github.com/apache/dolphinscheduler/pull/7617
120c1755a15fcf78df303a67edfa87db0a7ff495
fd6eb1f830dd60c5363971e735135afc06925380
"2021-12-24T11:38:54Z"
java
"2021-12-24T12:07:49Z"
dolphinscheduler-python/pydolphinscheduler/examples/task_dependent_example.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. r""" A example workflow for task dependent. This example will create two workflows named `task_dependent` and `task_dependent_external`. `task_dependent` is true workflow define and run task dependent, while `task_dependent_external` define outside workflow and task from dependent. After this script submit, we would get workflow as below: task_dependent_external: task_1 task_2 task_3 task_dependent: task_dependent(this task dependent on task_dependent_external.task_1 and task_dependent_external.task_2). """ from constants import ProcessDefinitionDefault from pydolphinscheduler.core.process_definition import ProcessDefinition from pydolphinscheduler.tasks.dependent import And, Dependent, DependentItem, Or from pydolphinscheduler.tasks.shell import Shell with ProcessDefinition( name="task_dependent_external", tenant="tenant_exists", ) as pd: task_1 = Shell(name="task_1", command="echo task 1") task_2 = Shell(name="task_2", command="echo task 2") task_3 = Shell(name="task_3", command="echo task 3") pd.submit() with ProcessDefinition( name="task_dependent", tenant="tenant_exists", ) as pd: task = Dependent( name="task_dependent", dependence=And( Or( DependentItem( project_name=ProcessDefinitionDefault.PROJECT, process_definition_name="task_dependent_external", dependent_task_name="task_1", ), DependentItem( project_name=ProcessDefinitionDefault.PROJECT, process_definition_name="task_dependent_external", dependent_task_name="task_2", ), ) ), ) pd.submit()
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,615
[Bug] [python] Examples have package import errors
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened Running `python ./examples/task_dependent_example.py` and `python ./examples/task_switch_example.py` raises errors caused by broken package imports. ### What you expected to happen The examples should run without errors. ### How to reproduce Run `python ./examples/task_dependent_example.py` and `python ./examples/task_switch_example.py`. ### Anything else _No response_ ### Version dev ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7615
https://github.com/apache/dolphinscheduler/pull/7617
120c1755a15fcf78df303a67edfa87db0a7ff495
fd6eb1f830dd60c5363971e735135afc06925380
"2021-12-24T11:38:54Z"
java
"2021-12-24T12:07:49Z"
dolphinscheduler-python/pydolphinscheduler/examples/task_switch_example.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. r""" A example workflow for task switch. This example will create four task in single workflow, with three shell task and one switch task. Task switch have one upstream which we declare explicit with syntax `parent >> switch`, and two downstream automatically set dependence by switch task by passing parameter `condition`. The graph of this workflow like: --> switch_child_1 / parent -> switch -> \ --> switch_child_2 . """ from tasks.switch import Branch, Default, Switch, SwitchCondition from pydolphinscheduler.core.process_definition import ProcessDefinition from pydolphinscheduler.tasks.shell import Shell with ProcessDefinition( name="task_dependent_external", tenant="tenant_exists", ) as pd: parent = Shell(name="parent", command="echo parent") switch_child_1 = Shell(name="switch_child_1", command="echo switch_child_1") switch_child_2 = Shell(name="switch_child_2", command="echo switch_child_2") switch_condition = SwitchCondition( Branch(condition="${var} > 1", task=switch_child_1), Default(task=switch_child_2), ) switch = Switch(name="switch", condition=switch_condition) parent >> switch pd.submit()
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,520
[Bug] [Master] Data too long for column 'task_params' at row 1
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened ``` [INFO] 2021-12-20 12:44:29.923 org.apache.dolphinscheduler.service.process.ProcessService:[1060] - start submit task : 依赖检查, instance id:176379, state: RUNNING_EXECUTION [ERROR] 2021-12-20 12:44:29.933 org.apache.dolphinscheduler.service.process.ProcessService:[1043] - task commit to mysql failed org.springframework.dao.DataIntegrityViolationException: ### Error updating database. Cause: com.mysql.cj.jdbc.exceptions.MysqlDataTruncation: Data truncation: Data too long for column 'task_params' at row 1 ### The error may exist in org/apache/dolphinscheduler/dao/mapper/TaskInstanceMapper.java (best guess) ### The error may involve org.apache.dolphinscheduler.dao.mapper.TaskInstanceMapper.insert-Inline ### The error occurred while setting parameters ### SQL: INSERT INTO t_ds_task_instance ( dry_run, flag, environment_code, pid, task_params, task_type, task_instance_priority, task_code, worker_group, state, process_instance_id, executor_id, alert_flag, first_submit_time, max_retry_times, retry_times, submit_time, name, task_definition_version, delay_time, retry_interval ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? ) ### Cause: com.mysql.cj.jdbc.exceptions.MysqlDataTruncation: Data truncation: Data too long for column 'task_params' at row 1 ; Data truncation: Data too long for column 'task_params' at row 1; nested exception is com.mysql.cj.jdbc.exceptions.MysqlDataTruncation: Data truncation: Data too long for column 'task_params' at row 1 at org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:104) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:70) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:79) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:79) at org.mybatis.spring.MyBatisExceptionTranslator.translateExceptionIfPossible(MyBatisExceptionTranslator.java:74) at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:440) at com.sun.proxy.$Proxy83.insert(Unknown Source) at org.mybatis.spring.SqlSessionTemplate.insert(SqlSessionTemplate.java:271) at com.baomidou.mybatisplus.core.override.MybatisMapperMethod.execute(MybatisMapperMethod.java:58) at com.baomidou.mybatisplus.core.override.MybatisMapperProxy.invoke(MybatisMapperProxy.java:61) at com.sun.proxy.$Proxy90.insert(Unknown Source) at org.apache.dolphinscheduler.service.process.ProcessService.createTaskInstance(ProcessService.java:1445) at org.apache.dolphinscheduler.service.process.ProcessService.saveTaskInstance(ProcessService.java:1434) at org.apache.dolphinscheduler.service.process.ProcessService.submitTaskInstanceToDB(ProcessService.java:1326) at org.apache.dolphinscheduler.service.process.ProcessService.submitTask(ProcessService.java:1063) at org.apache.dolphinscheduler.service.process.ProcessService.submitTask(ProcessService.java:1032) at org.apache.dolphinscheduler.service.process.ProcessService$$FastClassBySpringCGLIB$$ed138739.invoke(<generated>) at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218) at 
org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:689) at org.apache.dolphinscheduler.service.process.ProcessService$$EnhancerBySpringCGLIB$$b8f3698c.submitTask(<generated>) at org.apache.dolphinscheduler.server.master.runner.task.DependentTaskProcessor.submit(DependentTaskProcessor.java:86) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitTaskExec(WorkflowExecuteThread.java:620) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitStandByTask(WorkflowExecuteThread.java:1273) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitPostNode(WorkflowExecuteThread.java:888) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.startProcess(WorkflowExecuteThread.java:505) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.run(WorkflowExecuteThread.java:227) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) ``` ### What you expected to happen run job successfully ### How to reproduce above ### Anything else _No response_ ### Version 2.0.1-release ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
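A minimal sketch of the schema change this error points at, assuming the fix is to widen `task_params` from `text` (capped at 65,535 bytes in MySQL) to `longtext`, matching the `longtext` type that `t_ds_task_definition.task_params` already uses in the DDL below. The files recorded as updated for this fix are the `CREATE TABLE` DDL scripts; the `ALTER TABLE` form here is a hypothetical migration for an existing deployment:

```sql
-- Hypothetical in-place migration for a live MySQL schema; the table and
-- column names are taken verbatim from the error message above.
ALTER TABLE t_ds_task_instance
    MODIFY COLUMN `task_params` longtext;
```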
https://github.com/apache/dolphinscheduler/issues/7520
https://github.com/apache/dolphinscheduler/pull/7521
7a888c544c562b9320d786ad9700b2754746f2d3
c7d7eec67931009a9713bd370ab8b39c57f50219
"2021-12-21T06:10:57Z"
java
"2021-12-24T14:51:18Z"
dolphinscheduler-dao/src/main/resources/sql/dolphinscheduler_h2.sql
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ SET FOREIGN_KEY_CHECKS=0; SET REFERENTIAL_INTEGRITY FALSE; -- ---------------------------- -- Table structure for QRTZ_JOB_DETAILS -- ---------------------------- DROP TABLE IF EXISTS QRTZ_JOB_DETAILS CASCADE; CREATE TABLE QRTZ_JOB_DETAILS ( SCHED_NAME varchar(120) NOT NULL, JOB_NAME varchar(200) NOT NULL, JOB_GROUP varchar(200) NOT NULL, DESCRIPTION varchar(250) DEFAULT NULL, JOB_CLASS_NAME varchar(250) NOT NULL, IS_DURABLE boolean NOT NULL, IS_NONCONCURRENT boolean NOT NULL, IS_UPDATE_DATA boolean NOT NULL, REQUESTS_RECOVERY boolean NOT NULL, JOB_DATA blob, PRIMARY KEY (SCHED_NAME, JOB_NAME, JOB_GROUP) ); -- ---------------------------- -- Table structure for QRTZ_TRIGGERS -- ---------------------------- DROP TABLE IF EXISTS QRTZ_TRIGGERS CASCADE; CREATE TABLE QRTZ_TRIGGERS ( SCHED_NAME varchar(120) NOT NULL, TRIGGER_NAME varchar(200) NOT NULL, TRIGGER_GROUP varchar(200) NOT NULL, JOB_NAME varchar(200) NOT NULL, JOB_GROUP varchar(200) NOT NULL, DESCRIPTION varchar(250) DEFAULT NULL, NEXT_FIRE_TIME bigint(13) DEFAULT NULL, PREV_FIRE_TIME bigint(13) DEFAULT NULL, PRIORITY int(11) DEFAULT NULL, TRIGGER_STATE varchar(16) NOT NULL, TRIGGER_TYPE varchar(8) NOT NULL, START_TIME bigint(13) NOT NULL, END_TIME bigint(13) DEFAULT NULL, CALENDAR_NAME varchar(200) DEFAULT NULL, MISFIRE_INSTR smallint(2) DEFAULT NULL, JOB_DATA blob, PRIMARY KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP), CONSTRAINT QRTZ_TRIGGERS_ibfk_1 FOREIGN KEY (SCHED_NAME, JOB_NAME, JOB_GROUP) REFERENCES QRTZ_JOB_DETAILS (SCHED_NAME, JOB_NAME, JOB_GROUP) ); -- ---------------------------- -- Table structure for QRTZ_BLOB_TRIGGERS -- ---------------------------- DROP TABLE IF EXISTS QRTZ_BLOB_TRIGGERS CASCADE; CREATE TABLE QRTZ_BLOB_TRIGGERS ( SCHED_NAME varchar(120) NOT NULL, TRIGGER_NAME varchar(200) NOT NULL, TRIGGER_GROUP varchar(200) NOT NULL, BLOB_DATA blob, PRIMARY KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP), FOREIGN KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP) REFERENCES QRTZ_TRIGGERS (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP) ); -- ---------------------------- -- Records of QRTZ_BLOB_TRIGGERS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_CALENDARS -- ---------------------------- DROP TABLE IF EXISTS QRTZ_CALENDARS CASCADE; CREATE TABLE QRTZ_CALENDARS ( SCHED_NAME varchar(120) NOT NULL, CALENDAR_NAME varchar(200) NOT NULL, CALENDAR blob NOT NULL, PRIMARY KEY (SCHED_NAME, CALENDAR_NAME) ); -- ---------------------------- -- Records of QRTZ_CALENDARS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_CRON_TRIGGERS -- ---------------------------- DROP TABLE IF EXISTS QRTZ_CRON_TRIGGERS CASCADE; CREATE TABLE QRTZ_CRON_TRIGGERS ( SCHED_NAME varchar(120) NOT NULL, TRIGGER_NAME 
varchar(200) NOT NULL, TRIGGER_GROUP varchar(200) NOT NULL, CRON_EXPRESSION varchar(120) NOT NULL, TIME_ZONE_ID varchar(80) DEFAULT NULL, PRIMARY KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP), CONSTRAINT QRTZ_CRON_TRIGGERS_ibfk_1 FOREIGN KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP) REFERENCES QRTZ_TRIGGERS (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP) ); -- ---------------------------- -- Records of QRTZ_CRON_TRIGGERS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_FIRED_TRIGGERS -- ---------------------------- DROP TABLE IF EXISTS QRTZ_FIRED_TRIGGERS CASCADE; CREATE TABLE QRTZ_FIRED_TRIGGERS ( SCHED_NAME varchar(120) NOT NULL, ENTRY_ID varchar(200) NOT NULL, TRIGGER_NAME varchar(200) NOT NULL, TRIGGER_GROUP varchar(200) NOT NULL, INSTANCE_NAME varchar(200) NOT NULL, FIRED_TIME bigint(13) NOT NULL, SCHED_TIME bigint(13) NOT NULL, PRIORITY int(11) NOT NULL, STATE varchar(16) NOT NULL, JOB_NAME varchar(200) DEFAULT NULL, JOB_GROUP varchar(200) DEFAULT NULL, IS_NONCONCURRENT boolean DEFAULT NULL, REQUESTS_RECOVERY boolean DEFAULT NULL, PRIMARY KEY (SCHED_NAME, ENTRY_ID) ); -- ---------------------------- -- Records of QRTZ_FIRED_TRIGGERS -- ---------------------------- -- ---------------------------- -- Records of QRTZ_JOB_DETAILS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_LOCKS -- ---------------------------- DROP TABLE IF EXISTS QRTZ_LOCKS CASCADE; CREATE TABLE QRTZ_LOCKS ( SCHED_NAME varchar(120) NOT NULL, LOCK_NAME varchar(40) NOT NULL, PRIMARY KEY (SCHED_NAME, LOCK_NAME) ); -- ---------------------------- -- Records of QRTZ_LOCKS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_PAUSED_TRIGGER_GRPS -- ---------------------------- DROP TABLE IF EXISTS QRTZ_PAUSED_TRIGGER_GRPS CASCADE; CREATE TABLE QRTZ_PAUSED_TRIGGER_GRPS ( SCHED_NAME varchar(120) NOT NULL, TRIGGER_GROUP varchar(200) NOT NULL, PRIMARY KEY (SCHED_NAME, TRIGGER_GROUP) ); -- ---------------------------- -- Records of QRTZ_PAUSED_TRIGGER_GRPS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_SCHEDULER_STATE -- ---------------------------- DROP TABLE IF EXISTS QRTZ_SCHEDULER_STATE CASCADE; CREATE TABLE QRTZ_SCHEDULER_STATE ( SCHED_NAME varchar(120) NOT NULL, INSTANCE_NAME varchar(200) NOT NULL, LAST_CHECKIN_TIME bigint(13) NOT NULL, CHECKIN_INTERVAL bigint(13) NOT NULL, PRIMARY KEY (SCHED_NAME, INSTANCE_NAME) ); -- ---------------------------- -- Records of QRTZ_SCHEDULER_STATE -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_SIMPLE_TRIGGERS -- ---------------------------- DROP TABLE IF EXISTS QRTZ_SIMPLE_TRIGGERS CASCADE; CREATE TABLE QRTZ_SIMPLE_TRIGGERS ( SCHED_NAME varchar(120) NOT NULL, TRIGGER_NAME varchar(200) NOT NULL, TRIGGER_GROUP varchar(200) NOT NULL, REPEAT_COUNT bigint(7) NOT NULL, REPEAT_INTERVAL bigint(12) NOT NULL, TIMES_TRIGGERED bigint(10) NOT NULL, PRIMARY KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP), CONSTRAINT QRTZ_SIMPLE_TRIGGERS_ibfk_1 FOREIGN KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP) REFERENCES QRTZ_TRIGGERS (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP) ); -- ---------------------------- -- Records of QRTZ_SIMPLE_TRIGGERS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_SIMPROP_TRIGGERS -- ---------------------------- DROP TABLE IF EXISTS QRTZ_SIMPROP_TRIGGERS CASCADE; CREATE TABLE QRTZ_SIMPROP_TRIGGERS ( SCHED_NAME varchar(120) NOT NULL, 
TRIGGER_NAME varchar(200) NOT NULL, TRIGGER_GROUP varchar(200) NOT NULL, STR_PROP_1 varchar(512) DEFAULT NULL, STR_PROP_2 varchar(512) DEFAULT NULL, STR_PROP_3 varchar(512) DEFAULT NULL, INT_PROP_1 int(11) DEFAULT NULL, INT_PROP_2 int(11) DEFAULT NULL, LONG_PROP_1 bigint(20) DEFAULT NULL, LONG_PROP_2 bigint(20) DEFAULT NULL, DEC_PROP_1 decimal(13, 4) DEFAULT NULL, DEC_PROP_2 decimal(13, 4) DEFAULT NULL, BOOL_PROP_1 boolean DEFAULT NULL, BOOL_PROP_2 boolean DEFAULT NULL, PRIMARY KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP), CONSTRAINT QRTZ_SIMPROP_TRIGGERS_ibfk_1 FOREIGN KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP) REFERENCES QRTZ_TRIGGERS (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP) ); -- ---------------------------- -- Records of QRTZ_SIMPROP_TRIGGERS -- ---------------------------- -- ---------------------------- -- Records of QRTZ_TRIGGERS -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_access_token -- ---------------------------- DROP TABLE IF EXISTS t_ds_access_token CASCADE; CREATE TABLE t_ds_access_token ( id int(11) NOT NULL AUTO_INCREMENT, user_id int(11) DEFAULT NULL, token varchar(64) DEFAULT NULL, expire_time datetime DEFAULT NULL, create_time datetime DEFAULT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_access_token -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_alert -- ---------------------------- DROP TABLE IF EXISTS t_ds_alert CASCADE; CREATE TABLE t_ds_alert ( id int(11) NOT NULL AUTO_INCREMENT, title varchar(64) DEFAULT NULL, content text, alert_status tinyint(4) DEFAULT '0', log text, alertgroup_id int(11) DEFAULT NULL, create_time datetime DEFAULT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_alert -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_alertgroup -- ---------------------------- DROP TABLE IF EXISTS t_ds_alertgroup CASCADE; CREATE TABLE t_ds_alertgroup ( id int(11) NOT NULL AUTO_INCREMENT, alert_instance_ids varchar(255) DEFAULT NULL, create_user_id int(11) DEFAULT NULL, group_name varchar(255) DEFAULT NULL, description varchar(255) DEFAULT NULL, create_time datetime DEFAULT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id), UNIQUE KEY t_ds_alertgroup_name_un (group_name) ); -- ---------------------------- -- Records of t_ds_alertgroup -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_command -- ---------------------------- DROP TABLE IF EXISTS t_ds_command CASCADE; CREATE TABLE t_ds_command ( id int(11) NOT NULL AUTO_INCREMENT, command_type tinyint(4) DEFAULT NULL, process_definition_code bigint(20) DEFAULT NULL, command_param text, task_depend_type tinyint(4) DEFAULT NULL, failure_strategy tinyint(4) DEFAULT '0', warning_type tinyint(4) DEFAULT '0', warning_group_id int(11) DEFAULT NULL, schedule_time datetime DEFAULT NULL, start_time datetime DEFAULT NULL, executor_id int(11) DEFAULT NULL, update_time datetime DEFAULT NULL, process_instance_priority int(11) DEFAULT NULL, worker_group varchar(64), environment_code bigint(20) DEFAULT '-1', dry_run int NULL DEFAULT 0, process_instance_id int(11) DEFAULT 0, process_definition_version int(11) DEFAULT 0, PRIMARY KEY (id), KEY priority_id_index (process_instance_priority, id) ); -- ---------------------------- -- Records of t_ds_command -- ---------------------------- -- ---------------------------- -- 
Table structure for t_ds_datasource -- ---------------------------- DROP TABLE IF EXISTS t_ds_datasource CASCADE; CREATE TABLE t_ds_datasource ( id int(11) NOT NULL AUTO_INCREMENT, name varchar(64) NOT NULL, note varchar(255) DEFAULT NULL, type tinyint(4) NOT NULL, user_id int(11) NOT NULL, connection_params text NOT NULL, create_time datetime NOT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id), UNIQUE KEY t_ds_datasource_name_un (name, type) ); -- ---------------------------- -- Records of t_ds_datasource -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_error_command -- ---------------------------- DROP TABLE IF EXISTS t_ds_error_command CASCADE; CREATE TABLE t_ds_error_command ( id int(11) NOT NULL, command_type tinyint(4) DEFAULT NULL, executor_id int(11) DEFAULT NULL, process_definition_code bigint(20) DEFAULT NULL, command_param text, task_depend_type tinyint(4) DEFAULT NULL, failure_strategy tinyint(4) DEFAULT '0', warning_type tinyint(4) DEFAULT '0', warning_group_id int(11) DEFAULT NULL, schedule_time datetime DEFAULT NULL, start_time datetime DEFAULT NULL, update_time datetime DEFAULT NULL, process_instance_priority int(11) DEFAULT NULL, worker_group varchar(64), environment_code bigint(20) DEFAULT '-1', message text, dry_run int NULL DEFAULT 0, process_instance_id int(11) DEFAULT 0, process_definition_version int(11) DEFAULT 0, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_error_command -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_process_definition -- ---------------------------- DROP TABLE IF EXISTS t_ds_process_definition CASCADE; CREATE TABLE t_ds_process_definition ( id int(11) NOT NULL AUTO_INCREMENT, code bigint(20) NOT NULL, name varchar(255) DEFAULT NULL, version int(11) DEFAULT NULL, description text, project_code bigint(20) NOT NULL, release_state tinyint(4) DEFAULT NULL, user_id int(11) DEFAULT NULL, global_params text, flag tinyint(4) DEFAULT NULL, locations text, warning_group_id int(11) DEFAULT NULL, timeout int(11) DEFAULT '0', tenant_id int(11) NOT NULL DEFAULT '-1', execution_type tinyint(4) DEFAULT '0', create_time datetime NOT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id), UNIQUE KEY process_unique (name,project_code) USING BTREE, UNIQUE KEY code_unique (code) ); -- ---------------------------- -- Records of t_ds_process_definition -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_process_definition_log -- ---------------------------- DROP TABLE IF EXISTS t_ds_process_definition_log CASCADE; CREATE TABLE t_ds_process_definition_log ( id int(11) NOT NULL AUTO_INCREMENT, code bigint(20) NOT NULL, name varchar(200) DEFAULT NULL, version int(11) DEFAULT NULL, description text, project_code bigint(20) NOT NULL, release_state tinyint(4) DEFAULT NULL, user_id int(11) DEFAULT NULL, global_params text, flag tinyint(4) DEFAULT NULL, locations text, warning_group_id int(11) DEFAULT NULL, timeout int(11) DEFAULT '0', tenant_id int(11) NOT NULL DEFAULT '-1', execution_type tinyint(4) DEFAULT '0', operator int(11) DEFAULT NULL, operate_time datetime DEFAULT NULL, create_time datetime NOT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Table structure for t_ds_task_definition -- ---------------------------- DROP TABLE IF EXISTS t_ds_task_definition CASCADE; CREATE TABLE t_ds_task_definition ( id int(11) NOT NULL AUTO_INCREMENT, code bigint(20) NOT 
NULL, name varchar(200) DEFAULT NULL, version int(11) DEFAULT NULL, description text, project_code bigint(20) NOT NULL, user_id int(11) DEFAULT NULL, task_type varchar(50) NOT NULL, task_params longtext, flag tinyint(2) DEFAULT NULL, task_priority tinyint(4) DEFAULT NULL, worker_group varchar(200) DEFAULT NULL, environment_code bigint(20) DEFAULT '-1', fail_retry_times int(11) DEFAULT NULL, fail_retry_interval int(11) DEFAULT NULL, timeout_flag tinyint(2) DEFAULT '0', timeout_notify_strategy tinyint(4) DEFAULT NULL, timeout int(11) DEFAULT '0', delay_time int(11) DEFAULT '0', task_group_id int(11) DEFAULT NULL, task_group_priority tinyint(4) DEFAULT '0', resource_ids text, create_time datetime NOT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id, code) ); -- ---------------------------- -- Table structure for t_ds_task_definition_log -- ---------------------------- DROP TABLE IF EXISTS t_ds_task_definition_log CASCADE; CREATE TABLE t_ds_task_definition_log ( id int(11) NOT NULL AUTO_INCREMENT, code bigint(20) NOT NULL, name varchar(200) DEFAULT NULL, version int(11) DEFAULT NULL, description text, project_code bigint(20) NOT NULL, user_id int(11) DEFAULT NULL, task_type varchar(50) NOT NULL, task_params text, flag tinyint(2) DEFAULT NULL, task_priority tinyint(4) DEFAULT NULL, worker_group varchar(200) DEFAULT NULL, environment_code bigint(20) DEFAULT '-1', fail_retry_times int(11) DEFAULT NULL, fail_retry_interval int(11) DEFAULT NULL, timeout_flag tinyint(2) DEFAULT '0', timeout_notify_strategy tinyint(4) DEFAULT NULL, timeout int(11) DEFAULT '0', delay_time int(11) DEFAULT '0', resource_ids text, operator int(11) DEFAULT NULL, task_group_id int(11) DEFAULT NULL, task_group_priority tinyint(4) DEFAULT '0', operate_time datetime DEFAULT NULL, create_time datetime NOT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Table structure for t_ds_process_task_relation -- ---------------------------- DROP TABLE IF EXISTS t_ds_process_task_relation CASCADE; CREATE TABLE t_ds_process_task_relation ( id int(11) NOT NULL AUTO_INCREMENT, name varchar(200) DEFAULT NULL, process_definition_version int(11) DEFAULT NULL, project_code bigint(20) NOT NULL, process_definition_code bigint(20) NOT NULL, pre_task_code bigint(20) NOT NULL, pre_task_version int(11) NOT NULL, post_task_code bigint(20) NOT NULL, post_task_version int(11) NOT NULL, condition_type tinyint(2) DEFAULT NULL, condition_params text, create_time datetime NOT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Table structure for t_ds_process_task_relation_log -- ---------------------------- DROP TABLE IF EXISTS t_ds_process_task_relation_log CASCADE; CREATE TABLE t_ds_process_task_relation_log ( id int(11) NOT NULL AUTO_INCREMENT, name varchar(200) DEFAULT NULL, process_definition_version int(11) DEFAULT NULL, project_code bigint(20) NOT NULL, process_definition_code bigint(20) NOT NULL, pre_task_code bigint(20) NOT NULL, pre_task_version int(11) NOT NULL, post_task_code bigint(20) NOT NULL, post_task_version int(11) NOT NULL, condition_type tinyint(2) DEFAULT NULL, condition_params text, operator int(11) DEFAULT NULL, operate_time datetime DEFAULT NULL, create_time datetime NOT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Table structure for t_ds_process_instance -- ---------------------------- DROP TABLE IF EXISTS t_ds_process_instance CASCADE; CREATE TABLE t_ds_process_instance ( id 
int(11) NOT NULL AUTO_INCREMENT, name varchar(255) DEFAULT NULL, process_definition_version int(11) DEFAULT NULL, process_definition_code bigint(20) not NULL, state tinyint(4) DEFAULT NULL, recovery tinyint(4) DEFAULT NULL, start_time datetime DEFAULT NULL, end_time datetime DEFAULT NULL, run_times int(11) DEFAULT NULL, host varchar(135) DEFAULT NULL, command_type tinyint(4) DEFAULT NULL, command_param text, task_depend_type tinyint(4) DEFAULT NULL, max_try_times tinyint(4) DEFAULT '0', failure_strategy tinyint(4) DEFAULT '0', warning_type tinyint(4) DEFAULT '0', warning_group_id int(11) DEFAULT NULL, schedule_time datetime DEFAULT NULL, command_start_time datetime DEFAULT NULL, global_params text, flag tinyint(4) DEFAULT '1', update_time timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, is_sub_process int(11) DEFAULT '0', executor_id int(11) NOT NULL, history_cmd text, process_instance_priority int(11) DEFAULT NULL, worker_group varchar(64) DEFAULT NULL, environment_code bigint(20) DEFAULT '-1', timeout int(11) DEFAULT '0', next_process_instance_id int(11) DEFAULT '0', tenant_id int(11) NOT NULL DEFAULT '-1', var_pool longtext, dry_run int NULL DEFAULT 0, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_process_instance -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_project -- ---------------------------- DROP TABLE IF EXISTS t_ds_project CASCADE; CREATE TABLE t_ds_project ( id int(11) NOT NULL AUTO_INCREMENT, name varchar(100) DEFAULT NULL, code bigint(20) NOT NULL, description varchar(200) DEFAULT NULL, user_id int(11) DEFAULT NULL, flag tinyint(4) DEFAULT '1', create_time datetime NOT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_project -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_queue -- ---------------------------- DROP TABLE IF EXISTS t_ds_queue CASCADE; CREATE TABLE t_ds_queue ( id int(11) NOT NULL AUTO_INCREMENT, queue_name varchar(64) DEFAULT NULL, queue varchar(64) DEFAULT NULL, create_time datetime DEFAULT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_queue -- ---------------------------- INSERT INTO t_ds_queue VALUES ('1', 'default', 'default', null, null); -- ---------------------------- -- Table structure for t_ds_relation_datasource_user -- ---------------------------- DROP TABLE IF EXISTS t_ds_relation_datasource_user CASCADE; CREATE TABLE t_ds_relation_datasource_user ( id int(11) NOT NULL AUTO_INCREMENT, user_id int(11) NOT NULL, datasource_id int(11) DEFAULT NULL, perm int(11) DEFAULT '1', create_time datetime DEFAULT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_relation_datasource_user -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_relation_process_instance -- ---------------------------- DROP TABLE IF EXISTS t_ds_relation_process_instance CASCADE; CREATE TABLE t_ds_relation_process_instance ( id int(11) NOT NULL AUTO_INCREMENT, parent_process_instance_id int(11) DEFAULT NULL, parent_task_instance_id int(11) DEFAULT NULL, process_instance_id int(11) DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_relation_process_instance -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_relation_project_user -- 
---------------------------- DROP TABLE IF EXISTS t_ds_relation_project_user CASCADE; CREATE TABLE t_ds_relation_project_user ( id int(11) NOT NULL AUTO_INCREMENT, user_id int(11) NOT NULL, project_id int(11) DEFAULT NULL, perm int(11) DEFAULT '1', create_time datetime DEFAULT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_relation_project_user -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_relation_resources_user -- ---------------------------- DROP TABLE IF EXISTS t_ds_relation_resources_user CASCADE; CREATE TABLE t_ds_relation_resources_user ( id int(11) NOT NULL AUTO_INCREMENT, user_id int(11) NOT NULL, resources_id int(11) DEFAULT NULL, perm int(11) DEFAULT '1', create_time datetime DEFAULT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_relation_resources_user -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_relation_udfs_user -- ---------------------------- DROP TABLE IF EXISTS t_ds_relation_udfs_user CASCADE; CREATE TABLE t_ds_relation_udfs_user ( id int(11) NOT NULL AUTO_INCREMENT, user_id int(11) NOT NULL, udf_id int(11) DEFAULT NULL, perm int(11) DEFAULT '1', create_time datetime DEFAULT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Table structure for t_ds_resources -- ---------------------------- DROP TABLE IF EXISTS t_ds_resources CASCADE; CREATE TABLE t_ds_resources ( id int(11) NOT NULL AUTO_INCREMENT, alias varchar(64) DEFAULT NULL, file_name varchar(64) DEFAULT NULL, description varchar(255) DEFAULT NULL, user_id int(11) DEFAULT NULL, type tinyint(4) DEFAULT NULL, size bigint(20) DEFAULT NULL, create_time datetime DEFAULT NULL, update_time datetime DEFAULT NULL, pid int(11) DEFAULT NULL, full_name varchar(64) DEFAULT NULL, is_directory tinyint(4) DEFAULT NULL, PRIMARY KEY (id), UNIQUE KEY t_ds_resources_un (full_name, type) ); -- ---------------------------- -- Records of t_ds_resources -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_schedules -- ---------------------------- DROP TABLE IF EXISTS t_ds_schedules CASCADE; CREATE TABLE t_ds_schedules ( id int(11) NOT NULL AUTO_INCREMENT, process_definition_code bigint(20) NOT NULL, start_time datetime NOT NULL, end_time datetime NOT NULL, timezone_id varchar(40) DEFAULT NULL, crontab varchar(255) NOT NULL, failure_strategy tinyint(4) NOT NULL, user_id int(11) NOT NULL, release_state tinyint(4) NOT NULL, warning_type tinyint(4) NOT NULL, warning_group_id int(11) DEFAULT NULL, process_instance_priority int(11) DEFAULT NULL, worker_group varchar(64) DEFAULT '', environment_code bigint(20) DEFAULT '-1', create_time datetime NOT NULL, update_time datetime NOT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_schedules -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_session -- ---------------------------- DROP TABLE IF EXISTS t_ds_session CASCADE; CREATE TABLE t_ds_session ( id varchar(64) NOT NULL, user_id int(11) DEFAULT NULL, ip varchar(45) DEFAULT NULL, last_login_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_session -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_task_instance -- ---------------------------- DROP TABLE IF EXISTS t_ds_task_instance CASCADE; 
CREATE TABLE t_ds_task_instance ( id int(11) NOT NULL AUTO_INCREMENT, name varchar(255) DEFAULT NULL, task_type varchar(50) NOT NULL, task_code bigint(20) NOT NULL, task_definition_version int(11) DEFAULT NULL, process_instance_id int(11) DEFAULT NULL, state tinyint(4) DEFAULT NULL, submit_time datetime DEFAULT NULL, start_time datetime DEFAULT NULL, end_time datetime DEFAULT NULL, host varchar(135) DEFAULT NULL, execute_path varchar(200) DEFAULT NULL, log_path varchar(200) DEFAULT NULL, alert_flag tinyint(4) DEFAULT NULL, retry_times int(4) DEFAULT '0', pid int(4) DEFAULT NULL, app_link text, task_params text, flag tinyint(4) DEFAULT '1', retry_interval int(4) DEFAULT NULL, max_retry_times int(2) DEFAULT NULL, task_instance_priority int(11) DEFAULT NULL, worker_group varchar(64) DEFAULT NULL, environment_code bigint(20) DEFAULT '-1', environment_config text DEFAULT '', executor_id int(11) DEFAULT NULL, first_submit_time datetime DEFAULT NULL, delay_time int(4) DEFAULT '0', task_group_id int(11) DEFAULT NULL, var_pool longtext, dry_run int NULL DEFAULT 0, PRIMARY KEY (id), FOREIGN KEY (process_instance_id) REFERENCES t_ds_process_instance (id) ON DELETE CASCADE ); -- ---------------------------- -- Records of t_ds_task_instance -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_tenant -- ---------------------------- DROP TABLE IF EXISTS t_ds_tenant CASCADE; CREATE TABLE t_ds_tenant ( id int(11) NOT NULL AUTO_INCREMENT, tenant_code varchar(64) DEFAULT NULL, description varchar(255) DEFAULT NULL, queue_id int(11) DEFAULT NULL, create_time datetime DEFAULT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_tenant -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_udfs -- ---------------------------- DROP TABLE IF EXISTS t_ds_udfs CASCADE; CREATE TABLE t_ds_udfs ( id int(11) NOT NULL AUTO_INCREMENT, user_id int(11) NOT NULL, func_name varchar(100) NOT NULL, class_name varchar(255) NOT NULL, type tinyint(4) NOT NULL, arg_types varchar(255) DEFAULT NULL, database varchar(255) DEFAULT NULL, description varchar(255) DEFAULT NULL, resource_id int(11) NOT NULL, resource_name varchar(255) NOT NULL, create_time datetime NOT NULL, update_time datetime NOT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_udfs -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_user -- ---------------------------- DROP TABLE IF EXISTS t_ds_user CASCADE; CREATE TABLE t_ds_user ( id int(11) NOT NULL AUTO_INCREMENT, user_name varchar(64) DEFAULT NULL, user_password varchar(64) DEFAULT NULL, user_type tinyint(4) DEFAULT NULL, email varchar(64) DEFAULT NULL, phone varchar(11) DEFAULT NULL, tenant_id int(11) DEFAULT NULL, create_time datetime DEFAULT NULL, update_time datetime DEFAULT NULL, queue varchar(64) DEFAULT NULL, state int(1) DEFAULT 1, PRIMARY KEY (id), UNIQUE KEY user_name_unique (user_name) ); -- ---------------------------- -- Records of t_ds_user -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_worker_group -- ---------------------------- DROP TABLE IF EXISTS t_ds_worker_group CASCADE; CREATE TABLE t_ds_worker_group ( id bigint(11) NOT NULL AUTO_INCREMENT, name varchar(255) NOT NULL, addr_list text NULL DEFAULT NULL, create_time datetime NULL DEFAULT NULL, update_time datetime NULL DEFAULT NULL, PRIMARY KEY (id), UNIQUE KEY name_unique (name) ); -- 
---------------------------- -- Records of t_ds_worker_group -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_version -- ---------------------------- DROP TABLE IF EXISTS t_ds_version CASCADE; CREATE TABLE t_ds_version ( id int(11) NOT NULL AUTO_INCREMENT, version varchar(200) NOT NULL, PRIMARY KEY (id), UNIQUE KEY version_UNIQUE (version) ); -- ---------------------------- -- Records of t_ds_version -- ---------------------------- INSERT INTO t_ds_version VALUES ('1', '1.4.0'); -- ---------------------------- -- Records of t_ds_alertgroup -- ---------------------------- INSERT INTO t_ds_alertgroup(alert_instance_ids, create_user_id, group_name, description, create_time, update_time) VALUES ('1,2', 1, 'default admin warning group', 'default admin warning group', '2018-11-29 10:20:39', '2018-11-29 10:20:39'); -- ---------------------------- -- Records of t_ds_user -- ---------------------------- INSERT INTO t_ds_user VALUES ('1', 'admin', '7ad2410b2f4c074479a8937a28a22b8f', '0', 'xxx@qq.com', '', '0', '2018-03-27 15:48:50', '2018-10-24 17:40:22', null, 1); -- ---------------------------- -- Table structure for t_ds_plugin_define -- ---------------------------- DROP TABLE IF EXISTS t_ds_plugin_define CASCADE; CREATE TABLE t_ds_plugin_define ( id int NOT NULL AUTO_INCREMENT, plugin_name varchar(100) NOT NULL, plugin_type varchar(100) NOT NULL, plugin_params text, create_time timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, update_time timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, PRIMARY KEY (id), UNIQUE KEY t_ds_plugin_define_UN (plugin_name,plugin_type) ); -- ---------------------------- -- Table structure for t_ds_alert_plugin_instance -- ---------------------------- DROP TABLE IF EXISTS t_ds_alert_plugin_instance CASCADE; CREATE TABLE t_ds_alert_plugin_instance ( id int NOT NULL AUTO_INCREMENT, plugin_define_id int NOT NULL, plugin_instance_params text, create_time timestamp NULL DEFAULT CURRENT_TIMESTAMP, update_time timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, instance_name varchar(200) DEFAULT NULL, PRIMARY KEY (id) ); -- -- Table structure for table t_ds_environment -- DROP TABLE IF EXISTS t_ds_environment CASCADE; CREATE TABLE t_ds_environment ( id int NOT NULL AUTO_INCREMENT, code bigint(20) NOT NULL, name varchar(100) DEFAULT NULL, config text DEFAULT NULL, description text, operator int DEFAULT NULL, create_time timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, update_time timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, PRIMARY KEY (id), UNIQUE KEY environment_name_unique (name), UNIQUE KEY environment_code_unique (code) ); -- -- Table structure for table t_ds_environment_worker_group_relation -- DROP TABLE IF EXISTS t_ds_environment_worker_group_relation CASCADE; CREATE TABLE t_ds_environment_worker_group_relation ( id int NOT NULL AUTO_INCREMENT, environment_code bigint(20) NOT NULL, worker_group varchar(255) NOT NULL, operator int DEFAULT NULL, create_time timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, update_time timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, PRIMARY KEY (id), UNIQUE KEY environment_worker_group_unique (environment_code,worker_group) ); DROP TABLE IF EXISTS t_ds_task_group_queue; CREATE TABLE t_ds_task_group_queue ( id int(11) NOT NULL AUTO_INCREMENT , task_id int(11) DEFAULT NULL , task_name VARCHAR(100) DEFAULT NULL , group_id int(11) DEFAULT NULL , process_id int(11) DEFAULT NULL , priority int(8) DEFAULT '0' , status 
int(4) DEFAULT '-1' , force_start int(4) DEFAULT '0' , in_queue int(4) DEFAULT '0' , create_time datetime DEFAULT NULL , update_time datetime DEFAULT NULL , PRIMARY KEY (id) ); DROP TABLE IF EXISTS t_ds_task_group; CREATE TABLE t_ds_task_group ( id int(11) NOT NULL AUTO_INCREMENT , name varchar(100) DEFAULT NULL , description varchar(200) DEFAULT NULL , group_size int(11) NOT NULL , project_code bigint(20) DEFAULT '0', use_size int(11) DEFAULT '0' , user_id int(11) DEFAULT NULL , project_id int(11) DEFAULT NULL , status int(4) DEFAULT '1' , create_time datetime DEFAULT NULL , update_time datetime DEFAULT NULL , PRIMARY KEY(id) );
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,520
[Bug] [Master] Data too long for column 'task_params' at row 1
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened ``` [INFO] 2021-12-20 12:44:29.923 org.apache.dolphinscheduler.service.process.ProcessService:[1060] - start submit task : 依赖检查, instance id:176379, state: RUNNING_EXECUTION [ERROR] 2021-12-20 12:44:29.933 org.apache.dolphinscheduler.service.process.ProcessService:[1043] - task commit to mysql failed org.springframework.dao.DataIntegrityViolationException: ### Error updating database. Cause: com.mysql.cj.jdbc.exceptions.MysqlDataTruncation: Data truncation: Data too long for column 'task_params' at row 1 ### The error may exist in org/apache/dolphinscheduler/dao/mapper/TaskInstanceMapper.java (best guess) ### The error may involve org.apache.dolphinscheduler.dao.mapper.TaskInstanceMapper.insert-Inline ### The error occurred while setting parameters ### SQL: INSERT INTO t_ds_task_instance ( dry_run, flag, environment_code, pid, task_params, task_type, task_instance_priority, task_code, worker_group, state, process_instance_id, executor_id, alert_flag, first_submit_time, max_retry_times, retry_times, submit_time, name, task_definition_version, delay_time, retry_interval ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? ) ### Cause: com.mysql.cj.jdbc.exceptions.MysqlDataTruncation: Data truncation: Data too long for column 'task_params' at row 1 ; Data truncation: Data too long for column 'task_params' at row 1; nested exception is com.mysql.cj.jdbc.exceptions.MysqlDataTruncation: Data truncation: Data too long for column 'task_params' at row 1 at org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:104) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:70) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:79) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:79) at org.mybatis.spring.MyBatisExceptionTranslator.translateExceptionIfPossible(MyBatisExceptionTranslator.java:74) at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:440) at com.sun.proxy.$Proxy83.insert(Unknown Source) at org.mybatis.spring.SqlSessionTemplate.insert(SqlSessionTemplate.java:271) at com.baomidou.mybatisplus.core.override.MybatisMapperMethod.execute(MybatisMapperMethod.java:58) at com.baomidou.mybatisplus.core.override.MybatisMapperProxy.invoke(MybatisMapperProxy.java:61) at com.sun.proxy.$Proxy90.insert(Unknown Source) at org.apache.dolphinscheduler.service.process.ProcessService.createTaskInstance(ProcessService.java:1445) at org.apache.dolphinscheduler.service.process.ProcessService.saveTaskInstance(ProcessService.java:1434) at org.apache.dolphinscheduler.service.process.ProcessService.submitTaskInstanceToDB(ProcessService.java:1326) at org.apache.dolphinscheduler.service.process.ProcessService.submitTask(ProcessService.java:1063) at org.apache.dolphinscheduler.service.process.ProcessService.submitTask(ProcessService.java:1032) at org.apache.dolphinscheduler.service.process.ProcessService$$FastClassBySpringCGLIB$$ed138739.invoke(<generated>) at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218) at 
org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:689) at org.apache.dolphinscheduler.service.process.ProcessService$$EnhancerBySpringCGLIB$$b8f3698c.submitTask(<generated>) at org.apache.dolphinscheduler.server.master.runner.task.DependentTaskProcessor.submit(DependentTaskProcessor.java:86) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitTaskExec(WorkflowExecuteThread.java:620) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitStandByTask(WorkflowExecuteThread.java:1273) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitPostNode(WorkflowExecuteThread.java:888) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.startProcess(WorkflowExecuteThread.java:505) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.run(WorkflowExecuteThread.java:227) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) ``` ### What you expected to happen run job successfully ### How to reproduce above ### Anything else _No response_ ### Version 2.0.1-release ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7520
https://github.com/apache/dolphinscheduler/pull/7521
7a888c544c562b9320d786ad9700b2754746f2d3
c7d7eec67931009a9713bd370ab8b39c57f50219
"2021-12-21T06:10:57Z"
java
"2021-12-24T14:51:18Z"
dolphinscheduler-dao/src/main/resources/sql/dolphinscheduler_mysql.sql
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ SET FOREIGN_KEY_CHECKS=0; -- ---------------------------- -- Table structure for QRTZ_BLOB_TRIGGERS -- ---------------------------- DROP TABLE IF EXISTS `QRTZ_BLOB_TRIGGERS`; CREATE TABLE `QRTZ_BLOB_TRIGGERS` ( `SCHED_NAME` varchar(120) NOT NULL, `TRIGGER_NAME` varchar(200) NOT NULL, `TRIGGER_GROUP` varchar(200) NOT NULL, `BLOB_DATA` blob, PRIMARY KEY (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`), KEY `SCHED_NAME` (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`), CONSTRAINT `QRTZ_BLOB_TRIGGERS_ibfk_1` FOREIGN KEY (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) REFERENCES `QRTZ_TRIGGERS` (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; -- ---------------------------- -- Records of QRTZ_BLOB_TRIGGERS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_CALENDARS -- ---------------------------- DROP TABLE IF EXISTS `QRTZ_CALENDARS`; CREATE TABLE `QRTZ_CALENDARS` ( `SCHED_NAME` varchar(120) NOT NULL, `CALENDAR_NAME` varchar(200) NOT NULL, `CALENDAR` blob NOT NULL, PRIMARY KEY (`SCHED_NAME`,`CALENDAR_NAME`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; -- ---------------------------- -- Records of QRTZ_CALENDARS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_CRON_TRIGGERS -- ---------------------------- DROP TABLE IF EXISTS `QRTZ_CRON_TRIGGERS`; CREATE TABLE `QRTZ_CRON_TRIGGERS` ( `SCHED_NAME` varchar(120) NOT NULL, `TRIGGER_NAME` varchar(200) NOT NULL, `TRIGGER_GROUP` varchar(200) NOT NULL, `CRON_EXPRESSION` varchar(120) NOT NULL, `TIME_ZONE_ID` varchar(80) DEFAULT NULL, PRIMARY KEY (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`), CONSTRAINT `QRTZ_CRON_TRIGGERS_ibfk_1` FOREIGN KEY (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) REFERENCES `QRTZ_TRIGGERS` (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; -- ---------------------------- -- Records of QRTZ_CRON_TRIGGERS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_FIRED_TRIGGERS -- ---------------------------- DROP TABLE IF EXISTS `QRTZ_FIRED_TRIGGERS`; CREATE TABLE `QRTZ_FIRED_TRIGGERS` ( `SCHED_NAME` varchar(120) NOT NULL, `ENTRY_ID` varchar(200) NOT NULL, `TRIGGER_NAME` varchar(200) NOT NULL, `TRIGGER_GROUP` varchar(200) NOT NULL, `INSTANCE_NAME` varchar(200) NOT NULL, `FIRED_TIME` bigint(13) NOT NULL, `SCHED_TIME` bigint(13) NOT NULL, `PRIORITY` int(11) NOT NULL, `STATE` varchar(16) NOT NULL, `JOB_NAME` varchar(200) DEFAULT NULL, `JOB_GROUP` varchar(200) DEFAULT NULL, `IS_NONCONCURRENT` varchar(1) DEFAULT NULL, `REQUESTS_RECOVERY` varchar(1) DEFAULT NULL, PRIMARY KEY (`SCHED_NAME`,`ENTRY_ID`), KEY `IDX_QRTZ_FT_TRIG_INST_NAME` (`SCHED_NAME`,`INSTANCE_NAME`), KEY 
`IDX_QRTZ_FT_INST_JOB_REQ_RCVRY` (`SCHED_NAME`,`INSTANCE_NAME`,`REQUESTS_RECOVERY`), KEY `IDX_QRTZ_FT_J_G` (`SCHED_NAME`,`JOB_NAME`,`JOB_GROUP`), KEY `IDX_QRTZ_FT_JG` (`SCHED_NAME`,`JOB_GROUP`), KEY `IDX_QRTZ_FT_T_G` (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`), KEY `IDX_QRTZ_FT_TG` (`SCHED_NAME`,`TRIGGER_GROUP`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; -- ---------------------------- -- Records of QRTZ_FIRED_TRIGGERS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_JOB_DETAILS -- ---------------------------- DROP TABLE IF EXISTS `QRTZ_JOB_DETAILS`; CREATE TABLE `QRTZ_JOB_DETAILS` ( `SCHED_NAME` varchar(120) NOT NULL, `JOB_NAME` varchar(200) NOT NULL, `JOB_GROUP` varchar(200) NOT NULL, `DESCRIPTION` varchar(250) DEFAULT NULL, `JOB_CLASS_NAME` varchar(250) NOT NULL, `IS_DURABLE` varchar(1) NOT NULL, `IS_NONCONCURRENT` varchar(1) NOT NULL, `IS_UPDATE_DATA` varchar(1) NOT NULL, `REQUESTS_RECOVERY` varchar(1) NOT NULL, `JOB_DATA` blob, PRIMARY KEY (`SCHED_NAME`,`JOB_NAME`,`JOB_GROUP`), KEY `IDX_QRTZ_J_REQ_RECOVERY` (`SCHED_NAME`,`REQUESTS_RECOVERY`), KEY `IDX_QRTZ_J_GRP` (`SCHED_NAME`,`JOB_GROUP`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; -- ---------------------------- -- Records of QRTZ_JOB_DETAILS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_LOCKS -- ---------------------------- DROP TABLE IF EXISTS `QRTZ_LOCKS`; CREATE TABLE `QRTZ_LOCKS` ( `SCHED_NAME` varchar(120) NOT NULL, `LOCK_NAME` varchar(40) NOT NULL, PRIMARY KEY (`SCHED_NAME`,`LOCK_NAME`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; -- ---------------------------- -- Records of QRTZ_LOCKS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_PAUSED_TRIGGER_GRPS -- ---------------------------- DROP TABLE IF EXISTS `QRTZ_PAUSED_TRIGGER_GRPS`; CREATE TABLE `QRTZ_PAUSED_TRIGGER_GRPS` ( `SCHED_NAME` varchar(120) NOT NULL, `TRIGGER_GROUP` varchar(200) NOT NULL, PRIMARY KEY (`SCHED_NAME`,`TRIGGER_GROUP`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; -- ---------------------------- -- Records of QRTZ_PAUSED_TRIGGER_GRPS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_SCHEDULER_STATE -- ---------------------------- DROP TABLE IF EXISTS `QRTZ_SCHEDULER_STATE`; CREATE TABLE `QRTZ_SCHEDULER_STATE` ( `SCHED_NAME` varchar(120) NOT NULL, `INSTANCE_NAME` varchar(200) NOT NULL, `LAST_CHECKIN_TIME` bigint(13) NOT NULL, `CHECKIN_INTERVAL` bigint(13) NOT NULL, PRIMARY KEY (`SCHED_NAME`,`INSTANCE_NAME`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; -- ---------------------------- -- Records of QRTZ_SCHEDULER_STATE -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_SIMPLE_TRIGGERS -- ---------------------------- DROP TABLE IF EXISTS `QRTZ_SIMPLE_TRIGGERS`; CREATE TABLE `QRTZ_SIMPLE_TRIGGERS` ( `SCHED_NAME` varchar(120) NOT NULL, `TRIGGER_NAME` varchar(200) NOT NULL, `TRIGGER_GROUP` varchar(200) NOT NULL, `REPEAT_COUNT` bigint(7) NOT NULL, `REPEAT_INTERVAL` bigint(12) NOT NULL, `TIMES_TRIGGERED` bigint(10) NOT NULL, PRIMARY KEY (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`), CONSTRAINT `QRTZ_SIMPLE_TRIGGERS_ibfk_1` FOREIGN KEY (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) REFERENCES `QRTZ_TRIGGERS` (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; -- ---------------------------- -- Records of QRTZ_SIMPLE_TRIGGERS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_SIMPROP_TRIGGERS -- 
---------------------------- DROP TABLE IF EXISTS `QRTZ_SIMPROP_TRIGGERS`; CREATE TABLE `QRTZ_SIMPROP_TRIGGERS` ( `SCHED_NAME` varchar(120) NOT NULL, `TRIGGER_NAME` varchar(200) NOT NULL, `TRIGGER_GROUP` varchar(200) NOT NULL, `STR_PROP_1` varchar(512) DEFAULT NULL, `STR_PROP_2` varchar(512) DEFAULT NULL, `STR_PROP_3` varchar(512) DEFAULT NULL, `INT_PROP_1` int(11) DEFAULT NULL, `INT_PROP_2` int(11) DEFAULT NULL, `LONG_PROP_1` bigint(20) DEFAULT NULL, `LONG_PROP_2` bigint(20) DEFAULT NULL, `DEC_PROP_1` decimal(13,4) DEFAULT NULL, `DEC_PROP_2` decimal(13,4) DEFAULT NULL, `BOOL_PROP_1` varchar(1) DEFAULT NULL, `BOOL_PROP_2` varchar(1) DEFAULT NULL, PRIMARY KEY (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`), CONSTRAINT `QRTZ_SIMPROP_TRIGGERS_ibfk_1` FOREIGN KEY (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) REFERENCES `QRTZ_TRIGGERS` (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; -- ---------------------------- -- Records of QRTZ_SIMPROP_TRIGGERS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_TRIGGERS -- ---------------------------- DROP TABLE IF EXISTS `QRTZ_TRIGGERS`; CREATE TABLE `QRTZ_TRIGGERS` ( `SCHED_NAME` varchar(120) NOT NULL, `TRIGGER_NAME` varchar(200) NOT NULL, `TRIGGER_GROUP` varchar(200) NOT NULL, `JOB_NAME` varchar(200) NOT NULL, `JOB_GROUP` varchar(200) NOT NULL, `DESCRIPTION` varchar(250) DEFAULT NULL, `NEXT_FIRE_TIME` bigint(13) DEFAULT NULL, `PREV_FIRE_TIME` bigint(13) DEFAULT NULL, `PRIORITY` int(11) DEFAULT NULL, `TRIGGER_STATE` varchar(16) NOT NULL, `TRIGGER_TYPE` varchar(8) NOT NULL, `START_TIME` bigint(13) NOT NULL, `END_TIME` bigint(13) DEFAULT NULL, `CALENDAR_NAME` varchar(200) DEFAULT NULL, `MISFIRE_INSTR` smallint(2) DEFAULT NULL, `JOB_DATA` blob, PRIMARY KEY (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`), KEY `IDX_QRTZ_T_J` (`SCHED_NAME`,`JOB_NAME`,`JOB_GROUP`), KEY `IDX_QRTZ_T_JG` (`SCHED_NAME`,`JOB_GROUP`), KEY `IDX_QRTZ_T_C` (`SCHED_NAME`,`CALENDAR_NAME`), KEY `IDX_QRTZ_T_G` (`SCHED_NAME`,`TRIGGER_GROUP`), KEY `IDX_QRTZ_T_STATE` (`SCHED_NAME`,`TRIGGER_STATE`), KEY `IDX_QRTZ_T_N_STATE` (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`,`TRIGGER_STATE`), KEY `IDX_QRTZ_T_N_G_STATE` (`SCHED_NAME`,`TRIGGER_GROUP`,`TRIGGER_STATE`), KEY `IDX_QRTZ_T_NEXT_FIRE_TIME` (`SCHED_NAME`,`NEXT_FIRE_TIME`), KEY `IDX_QRTZ_T_NFT_ST` (`SCHED_NAME`,`TRIGGER_STATE`,`NEXT_FIRE_TIME`), KEY `IDX_QRTZ_T_NFT_MISFIRE` (`SCHED_NAME`,`MISFIRE_INSTR`,`NEXT_FIRE_TIME`), KEY `IDX_QRTZ_T_NFT_ST_MISFIRE` (`SCHED_NAME`,`MISFIRE_INSTR`,`NEXT_FIRE_TIME`,`TRIGGER_STATE`), KEY `IDX_QRTZ_T_NFT_ST_MISFIRE_GRP` (`SCHED_NAME`,`MISFIRE_INSTR`,`NEXT_FIRE_TIME`,`TRIGGER_GROUP`,`TRIGGER_STATE`), CONSTRAINT `QRTZ_TRIGGERS_ibfk_1` FOREIGN KEY (`SCHED_NAME`, `JOB_NAME`, `JOB_GROUP`) REFERENCES `QRTZ_JOB_DETAILS` (`SCHED_NAME`, `JOB_NAME`, `JOB_GROUP`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; -- ---------------------------- -- Records of QRTZ_TRIGGERS -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_access_token -- ---------------------------- DROP TABLE IF EXISTS `t_ds_access_token`; CREATE TABLE `t_ds_access_token` ( `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key', `user_id` int(11) DEFAULT NULL COMMENT 'user id', `token` varchar(64) DEFAULT NULL COMMENT 'token', `expire_time` datetime DEFAULT NULL COMMENT 'end time of token ', `create_time` datetime DEFAULT NULL COMMENT 'create time', `update_time` datetime DEFAULT NULL COMMENT 'update time', PRIMARY KEY (`id`) ) ENGINE=InnoDB 
AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_access_token
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_alert
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_alert`;
CREATE TABLE `t_ds_alert` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `title` varchar(64) DEFAULT NULL COMMENT 'title',
  `content` text COMMENT 'Message content (can be email, can be SMS. Mail is stored in JSON map, and SMS is string)',
  `alert_status` tinyint(4) DEFAULT '0' COMMENT '0:wait running,1:success,2:failed',
  `log` text COMMENT 'log',
  `alertgroup_id` int(11) DEFAULT NULL COMMENT 'alert group id',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_alert
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_alertgroup
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_alertgroup`;
CREATE TABLE `t_ds_alertgroup` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `alert_instance_ids` varchar(255) DEFAULT NULL COMMENT 'alert instance ids',
  `create_user_id` int(11) DEFAULT NULL COMMENT 'create user id',
  `group_name` varchar(255) DEFAULT NULL COMMENT 'group name',
  `description` varchar(255) DEFAULT NULL,
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `t_ds_alertgroup_name_un` (`group_name`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_alertgroup
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_command
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_command`;
CREATE TABLE `t_ds_command` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `command_type` tinyint(4) DEFAULT NULL COMMENT 'Command type: 0 start workflow, 1 start execution from current node, 2 resume fault-tolerant workflow, 3 resume pause process, 4 start execution from failed node, 5 complement, 6 schedule, 7 rerun, 8 pause, 9 stop, 10 resume waiting thread',
  `process_definition_code` bigint(20) NOT NULL COMMENT 'process definition code',
  `process_definition_version` int(11) DEFAULT '0' COMMENT 'process definition version',
  `process_instance_id` int(11) DEFAULT '0' COMMENT 'process instance id',
  `command_param` text COMMENT 'json command parameters',
  `task_depend_type` tinyint(4) DEFAULT NULL COMMENT 'Node dependency type: 0 current node, 1 forward, 2 backward',
  `failure_strategy` tinyint(4) DEFAULT '0' COMMENT 'Failed policy: 0 end, 1 continue',
  `warning_type` tinyint(4) DEFAULT '0' COMMENT 'Alarm type: 0 is not sent, 1 process is sent successfully, 2 process is sent failed, 3 process is sent successfully and all failures are sent',
  `warning_group_id` int(11) DEFAULT NULL COMMENT 'warning group',
  `schedule_time` datetime DEFAULT NULL COMMENT 'schedule time',
  `start_time` datetime DEFAULT NULL COMMENT 'start time',
  `executor_id` int(11) DEFAULT NULL COMMENT 'executor id',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  `process_instance_priority` int(11) DEFAULT NULL COMMENT 'process instance priority: 0 Highest,1 High,2 Medium,3 Low,4 Lowest',
  `worker_group` varchar(64) COMMENT 'worker group',
  `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code',
  `dry_run` tinyint(4) DEFAULT '0' COMMENT 'dry run flag:0 normal, 1 dry run',
  PRIMARY KEY (`id`),
  KEY `priority_id_index` (`process_instance_priority`,`id`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_command
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_datasource
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_datasource`;
CREATE TABLE `t_ds_datasource` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `name` varchar(64) NOT NULL COMMENT 'data source name',
  `note` varchar(255) DEFAULT NULL COMMENT 'description',
  `type` tinyint(4) NOT NULL COMMENT 'data source type: 0:mysql,1:postgresql,2:hive,3:spark',
  `user_id` int(11) NOT NULL COMMENT 'the creator id',
  `connection_params` text NOT NULL COMMENT 'json connection params',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `t_ds_datasource_name_un` (`name`, `type`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_datasource
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_error_command
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_error_command`;
CREATE TABLE `t_ds_error_command` (
  `id` int(11) NOT NULL COMMENT 'key',
  `command_type` tinyint(4) DEFAULT NULL COMMENT 'command type',
  `executor_id` int(11) DEFAULT NULL COMMENT 'executor id',
  `process_definition_code` bigint(20) NOT NULL COMMENT 'process definition code',
  `process_definition_version` int(11) DEFAULT '0' COMMENT 'process definition version',
  `process_instance_id` int(11) DEFAULT '0' COMMENT 'process instance id: 0',
  `command_param` text COMMENT 'json command parameters',
  `task_depend_type` tinyint(4) DEFAULT NULL COMMENT 'task depend type',
  `failure_strategy` tinyint(4) DEFAULT '0' COMMENT 'failure strategy',
  `warning_type` tinyint(4) DEFAULT '0' COMMENT 'warning type',
  `warning_group_id` int(11) DEFAULT NULL COMMENT 'warning group id',
  `schedule_time` datetime DEFAULT NULL COMMENT 'scheduler time',
  `start_time` datetime DEFAULT NULL COMMENT 'start time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  `process_instance_priority` int(11) DEFAULT NULL COMMENT 'process instance priority, 0 Highest,1 High,2 Medium,3 Low,4 Lowest',
  `worker_group` varchar(64) COMMENT 'worker group',
  `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code',
  `message` text COMMENT 'message',
  `dry_run` tinyint(4) DEFAULT '0' COMMENT 'dry run flag: 0 normal, 1 dry run',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ROW_FORMAT=DYNAMIC;

-- ----------------------------
-- Records of t_ds_error_command
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_process_definition
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_process_definition`;
CREATE TABLE `t_ds_process_definition` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id',
  `code` bigint(20) NOT NULL COMMENT 'encoding',
  `name` varchar(255) DEFAULT NULL COMMENT 'process definition name',
  `version` int(11) DEFAULT '0' COMMENT 'process definition version',
  `description` text COMMENT 'description',
  `project_code` bigint(20) NOT NULL COMMENT 'project code',
  `release_state` tinyint(4) DEFAULT NULL COMMENT 'process definition release state:0:offline,1:online',
  `user_id` int(11) DEFAULT NULL COMMENT 'process definition creator id',
  `global_params` text COMMENT 'global parameters',
  `flag` tinyint(4) DEFAULT NULL COMMENT '0 not available, 1 available',
  `locations` text COMMENT 'Node location information',
  `warning_group_id` int(11) DEFAULT NULL COMMENT 'alert group id',
  `timeout` int(11) DEFAULT '0' COMMENT 'time out, unit: minute',
  `tenant_id` int(11) NOT NULL DEFAULT '-1' COMMENT 'tenant id',
  `execution_type` tinyint(4) DEFAULT '0' COMMENT 'execution_type 0:parallel,1:serial wait,2:serial discard,3:serial priority',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`,`code`),
  UNIQUE KEY `process_unique` (`name`,`project_code`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_process_definition
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_process_definition_log
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_process_definition_log`;
CREATE TABLE `t_ds_process_definition_log` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id',
  `code` bigint(20) NOT NULL COMMENT 'encoding',
  `name` varchar(200) DEFAULT NULL COMMENT 'process definition name',
  `version` int(11) DEFAULT '0' COMMENT 'process definition version',
  `description` text COMMENT 'description',
  `project_code` bigint(20) NOT NULL COMMENT 'project code',
  `release_state` tinyint(4) DEFAULT NULL COMMENT 'process definition release state:0:offline,1:online',
  `user_id` int(11) DEFAULT NULL COMMENT 'process definition creator id',
  `global_params` text COMMENT 'global parameters',
  `flag` tinyint(4) DEFAULT NULL COMMENT '0 not available, 1 available',
  `locations` text COMMENT 'Node location information',
  `warning_group_id` int(11) DEFAULT NULL COMMENT 'alert group id',
  `timeout` int(11) DEFAULT '0' COMMENT 'time out, unit: minute',
  `tenant_id` int(11) NOT NULL DEFAULT '-1' COMMENT 'tenant id',
  `execution_type` tinyint(4) DEFAULT '0' COMMENT 'execution_type 0:parallel,1:serial wait,2:serial discard,3:serial priority',
  `operator` int(11) DEFAULT NULL COMMENT 'operator user id',
  `operate_time` datetime DEFAULT NULL COMMENT 'operate time',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_task_definition
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_task_definition`;
CREATE TABLE `t_ds_task_definition` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id',
  `code` bigint(20) NOT NULL COMMENT 'encoding',
  `name` varchar(200) DEFAULT NULL COMMENT 'task definition name',
  `version` int(11) DEFAULT '0' COMMENT 'task definition version',
  `description` text COMMENT 'description',
  `project_code` bigint(20) NOT NULL COMMENT 'project code',
  `user_id` int(11) DEFAULT NULL COMMENT 'task definition creator id',
  `task_type` varchar(50) NOT NULL COMMENT 'task type',
  `task_params` longtext COMMENT 'job custom parameters',
  `flag` tinyint(2) DEFAULT NULL COMMENT '0 not available, 1 available',
  `task_priority` tinyint(4) DEFAULT NULL COMMENT 'job priority',
  `worker_group` varchar(200) DEFAULT NULL COMMENT 'worker grouping',
  `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code',
  `fail_retry_times` int(11) DEFAULT NULL COMMENT 'number of failed retries',
  `fail_retry_interval` int(11) DEFAULT NULL COMMENT 'failed retry interval',
  `timeout_flag` tinyint(2) DEFAULT '0' COMMENT 'timeout flag:0 close, 1 open',
  `timeout_notify_strategy` tinyint(4) DEFAULT NULL COMMENT 'timeout notification policy: 0 warning, 1 fail',
  `timeout` int(11) DEFAULT '0' COMMENT 'timeout length, unit: minute',
  `delay_time` int(11) DEFAULT '0' COMMENT 'delay execution time, unit: minute',
  `resource_ids` text COMMENT 'resource id, separated by comma',
  `task_group_id` int(11) DEFAULT NULL COMMENT 'task group id',
  `task_group_priority` tinyint(4) DEFAULT 1 COMMENT 'task group priority',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`,`code`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_task_definition_log
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_task_definition_log`;
CREATE TABLE `t_ds_task_definition_log` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id',
  `code` bigint(20) NOT NULL COMMENT 'encoding',
  `name` varchar(200) DEFAULT NULL COMMENT 'task definition name',
  `version` int(11) DEFAULT '0' COMMENT 'task definition version',
  `description` text COMMENT 'description',
  `project_code` bigint(20) NOT NULL COMMENT 'project code',
  `user_id` int(11) DEFAULT NULL COMMENT 'task definition creator id',
  `task_type` varchar(50) NOT NULL COMMENT 'task type',
  `task_params` longtext COMMENT 'job custom parameters',
  `flag` tinyint(2) DEFAULT NULL COMMENT '0 not available, 1 available',
  `task_priority` tinyint(4) DEFAULT NULL COMMENT 'job priority',
  `worker_group` varchar(200) DEFAULT NULL COMMENT 'worker grouping',
  `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code',
  `fail_retry_times` int(11) DEFAULT NULL COMMENT 'number of failed retries',
  `fail_retry_interval` int(11) DEFAULT NULL COMMENT 'failed retry interval',
  `timeout_flag` tinyint(2) DEFAULT '0' COMMENT 'timeout flag:0 close, 1 open',
  `timeout_notify_strategy` tinyint(4) DEFAULT NULL COMMENT 'timeout notification policy: 0 warning, 1 fail',
  `timeout` int(11) DEFAULT '0' COMMENT 'timeout length, unit: minute',
  `delay_time` int(11) DEFAULT '0' COMMENT 'delay execution time, unit: minute',
  `resource_ids` text DEFAULT NULL COMMENT 'resource id, separated by comma',
  `operator` int(11) DEFAULT NULL COMMENT 'operator user id',
  `task_group_id` int(11) DEFAULT NULL COMMENT 'task group id',
  `task_group_priority` tinyint(4) DEFAULT 1 COMMENT 'task group priority',
  `operate_time` datetime DEFAULT NULL COMMENT 'operate time',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_process_task_relation
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_process_task_relation`;
CREATE TABLE `t_ds_process_task_relation` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id',
  `name` varchar(200) DEFAULT NULL COMMENT 'relation name',
  `project_code` bigint(20) NOT NULL COMMENT 'project code',
  `process_definition_code` bigint(20) NOT NULL COMMENT 'process code',
  `process_definition_version` int(11) NOT NULL COMMENT 'process version',
  `pre_task_code` bigint(20) NOT NULL COMMENT 'pre task code',
  `pre_task_version` int(11) NOT NULL COMMENT 'pre task version',
  `post_task_code` bigint(20) NOT NULL COMMENT 'post task code',
  `post_task_version` int(11) NOT NULL COMMENT 'post task version',
  `condition_type` tinyint(2) DEFAULT NULL COMMENT 'condition type: 0 none, 1 judge, 2 delay',
  `condition_params` text COMMENT 'condition params(json)',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_process_task_relation_log
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_process_task_relation_log`;
CREATE TABLE `t_ds_process_task_relation_log` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id',
  `name` varchar(200) DEFAULT NULL COMMENT 'relation name',
  `project_code` bigint(20) NOT NULL COMMENT 'project code',
  `process_definition_code` bigint(20) NOT NULL COMMENT 'process code',
  `process_definition_version` int(11) NOT NULL COMMENT 'process version',
  `pre_task_code` bigint(20) NOT NULL COMMENT 'pre task code',
  `pre_task_version` int(11) NOT NULL COMMENT 'pre task version',
  `post_task_code` bigint(20) NOT NULL COMMENT 'post task code',
  `post_task_version` int(11) NOT NULL COMMENT 'post task version',
  `condition_type` tinyint(2) DEFAULT NULL COMMENT 'condition type: 0 none, 1 judge, 2 delay',
  `condition_params` text COMMENT 'condition params(json)',
  `operator` int(11) DEFAULT NULL COMMENT 'operator user id',
  `operate_time` datetime DEFAULT NULL COMMENT 'operate time',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_process_instance
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_process_instance`;
CREATE TABLE `t_ds_process_instance` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `name` varchar(255) DEFAULT NULL COMMENT 'process instance name',
  `process_definition_code` bigint(20) NOT NULL COMMENT 'process definition code',
  `process_definition_version` int(11) DEFAULT '0' COMMENT 'process definition version',
  `state` tinyint(4) DEFAULT NULL COMMENT 'process instance Status: 0 commit succeeded, 1 running, 2 prepare to pause, 3 pause, 4 prepare to stop, 5 stop, 6 fail, 7 succeed, 8 need fault tolerance, 9 kill, 10 wait for thread, 11 wait for dependency to complete',
  `recovery` tinyint(4) DEFAULT NULL COMMENT 'process instance failover flag:0:normal,1:failover instance',
  `start_time` datetime DEFAULT NULL COMMENT 'process instance start time',
  `end_time` datetime DEFAULT NULL COMMENT 'process instance end time',
  `run_times` int(11) DEFAULT NULL COMMENT 'process instance run times',
  `host` varchar(135) DEFAULT NULL COMMENT 'process instance host',
  `command_type` tinyint(4) DEFAULT NULL COMMENT 'command type',
  `command_param` text COMMENT 'json command parameters',
  `task_depend_type` tinyint(4) DEFAULT NULL COMMENT 'task depend type. 0: only current node,1:before the node,2:later nodes',
  `max_try_times` tinyint(4) DEFAULT '0' COMMENT 'max try times',
  `failure_strategy` tinyint(4) DEFAULT '0' COMMENT 'failure strategy. 0:end the process when node failed,1:continue running the other nodes when node failed',
  `warning_type` tinyint(4) DEFAULT '0' COMMENT 'warning type. 0:no warning,1:warning if process success,2:warning if process failed,3:warning if success',
  `warning_group_id` int(11) DEFAULT NULL COMMENT 'warning group id',
  `schedule_time` datetime DEFAULT NULL COMMENT 'schedule time',
  `command_start_time` datetime DEFAULT NULL COMMENT 'command start time',
  `global_params` text COMMENT 'global parameters',
  `flag` tinyint(4) DEFAULT '1' COMMENT 'flag',
  `update_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `is_sub_process` int(11) DEFAULT '0' COMMENT 'flag, whether the process is sub process',
  `executor_id` int(11) NOT NULL COMMENT 'executor id',
  `history_cmd` text COMMENT 'history commands of process instance operation',
  `process_instance_priority` int(11) DEFAULT NULL COMMENT 'process instance priority. 0 Highest,1 High,2 Medium,3 Low,4 Lowest',
  `worker_group` varchar(64) DEFAULT NULL COMMENT 'worker group id',
  `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code',
  `timeout` int(11) DEFAULT '0' COMMENT 'time out',
  `tenant_id` int(11) NOT NULL DEFAULT '-1' COMMENT 'tenant id',
  `var_pool` longtext COMMENT 'var_pool',
  `dry_run` tinyint(4) DEFAULT '0' COMMENT 'dry run flag:0 normal, 1 dry run',
  `next_process_instance_id` int(11) DEFAULT '0' COMMENT 'serial queue next processInstanceId',
  PRIMARY KEY (`id`),
  KEY `process_instance_index` (`process_definition_code`,`id`) USING BTREE,
  KEY `start_time_index` (`start_time`,`end_time`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_process_instance
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_project
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_project`;
CREATE TABLE `t_ds_project` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `name` varchar(100) DEFAULT NULL COMMENT 'project name',
  `code` bigint(20) NOT NULL COMMENT 'encoding',
  `description` varchar(200) DEFAULT NULL,
  `user_id` int(11) DEFAULT NULL COMMENT 'creator id',
  `flag` tinyint(4) DEFAULT '1' COMMENT '0 not available, 1 available',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`),
  KEY `user_id_index` (`user_id`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_project
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_queue
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_queue`;
CREATE TABLE `t_ds_queue` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `queue_name` varchar(64) DEFAULT NULL COMMENT 'queue name',
  `queue` varchar(64) DEFAULT NULL COMMENT 'yarn queue name',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_queue
-- ----------------------------
INSERT INTO `t_ds_queue` VALUES ('1', 'default', 'default', null, null);

-- ----------------------------
-- Table structure for t_ds_relation_datasource_user
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_relation_datasource_user`;
CREATE TABLE `t_ds_relation_datasource_user` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `user_id` int(11) NOT NULL COMMENT 'user id',
  `datasource_id` int(11) DEFAULT NULL COMMENT 'data source id',
  `perm` int(11) DEFAULT '1' COMMENT 'limits of authority',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_relation_datasource_user
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_relation_process_instance
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_relation_process_instance`;
CREATE TABLE `t_ds_relation_process_instance` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `parent_process_instance_id` int(11) DEFAULT NULL COMMENT 'parent process instance id',
  `parent_task_instance_id` int(11) DEFAULT NULL COMMENT 'parent task instance id',
  `process_instance_id` int(11) DEFAULT NULL COMMENT 'child process instance id',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_relation_process_instance
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_relation_project_user
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_relation_project_user`;
CREATE TABLE `t_ds_relation_project_user` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `user_id` int(11) NOT NULL COMMENT 'user id',
  `project_id` int(11) DEFAULT NULL COMMENT 'project id',
  `perm` int(11) DEFAULT '1' COMMENT 'limits of authority',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`),
  KEY `user_id_index` (`user_id`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_relation_project_user
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_relation_resources_user
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_relation_resources_user`;
CREATE TABLE `t_ds_relation_resources_user` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `user_id` int(11) NOT NULL COMMENT 'user id',
  `resources_id` int(11) DEFAULT NULL COMMENT 'resource id',
  `perm` int(11) DEFAULT '1' COMMENT 'limits of authority',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_relation_resources_user
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_relation_udfs_user
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_relation_udfs_user`;
CREATE TABLE `t_ds_relation_udfs_user` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `user_id` int(11) NOT NULL COMMENT 'user id',
  `udf_id` int(11) DEFAULT NULL COMMENT 'udf id',
  `perm` int(11) DEFAULT '1' COMMENT 'limits of authority',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_resources
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_resources`;
CREATE TABLE `t_ds_resources` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `alias` varchar(64) DEFAULT NULL COMMENT 'alias',
  `file_name` varchar(64) DEFAULT NULL COMMENT 'file name',
  `description` varchar(255) DEFAULT NULL,
  `user_id` int(11) DEFAULT NULL COMMENT 'user id',
  `type` tinyint(4) DEFAULT NULL COMMENT 'resource type,0:FILE,1:UDF',
  `size` bigint(20) DEFAULT NULL COMMENT 'resource size',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  `pid` int(11) DEFAULT NULL,
  `full_name` varchar(64) DEFAULT NULL,
  `is_directory` tinyint(4) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `t_ds_resources_un` (`full_name`,`type`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_resources
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_schedules
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_schedules`;
CREATE TABLE `t_ds_schedules` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `process_definition_code` bigint(20) NOT NULL COMMENT 'process definition code',
  `start_time` datetime NOT NULL COMMENT 'start time',
  `end_time` datetime NOT NULL COMMENT 'end time',
  `timezone_id` varchar(40) DEFAULT NULL COMMENT 'schedule timezone id',
  `crontab` varchar(255) NOT NULL COMMENT 'crontab description',
  `failure_strategy` tinyint(4) NOT NULL COMMENT 'failure strategy. 0:end,1:continue',
  `user_id` int(11) NOT NULL COMMENT 'user id',
  `release_state` tinyint(4) NOT NULL COMMENT 'release state. 0:offline,1:online',
  `warning_type` tinyint(4) NOT NULL COMMENT 'Alarm type: 0 is not sent, 1 process is sent successfully, 2 process is sent failed, 3 process is sent successfully and all failures are sent',
  `warning_group_id` int(11) DEFAULT NULL COMMENT 'alert group id',
  `process_instance_priority` int(11) DEFAULT NULL COMMENT 'process instance priority:0 Highest,1 High,2 Medium,3 Low,4 Lowest',
  `worker_group` varchar(64) DEFAULT '' COMMENT 'worker group id',
  `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_schedules
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_session
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_session`;
CREATE TABLE `t_ds_session` (
  `id` varchar(64) NOT NULL COMMENT 'key',
  `user_id` int(11) DEFAULT NULL COMMENT 'user id',
  `ip` varchar(45) DEFAULT NULL COMMENT 'ip',
  `last_login_time` datetime DEFAULT NULL COMMENT 'last login time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_session
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_task_instance
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_task_instance`;
CREATE TABLE `t_ds_task_instance` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `name` varchar(255) DEFAULT NULL COMMENT 'task name',
  `task_type` varchar(50) NOT NULL COMMENT 'task type',
  `task_code` bigint(20) NOT NULL COMMENT 'task definition code',
  `task_definition_version` int(11) DEFAULT '0' COMMENT 'task definition version',
  `process_instance_id` int(11) DEFAULT NULL COMMENT 'process instance id',
  `state` tinyint(4) DEFAULT NULL COMMENT 'Status: 0 commit succeeded, 1 running, 2 prepare to pause, 3 pause, 4 prepare to stop, 5 stop, 6 fail, 7 succeed, 8 need fault tolerance, 9 kill, 10 wait for thread, 11 wait for dependency to complete',
  `submit_time` datetime DEFAULT NULL COMMENT 'task submit time',
  `start_time` datetime DEFAULT NULL COMMENT 'task start time',
  `end_time` datetime DEFAULT NULL COMMENT 'task end time',
  `host` varchar(135) DEFAULT NULL COMMENT 'host of task running on',
  `execute_path` varchar(200) DEFAULT NULL COMMENT 'task execute path in the host',
  `log_path` varchar(200) DEFAULT NULL COMMENT 'task log path',
  `alert_flag` tinyint(4) DEFAULT NULL COMMENT 'whether alert',
  `retry_times` int(4) DEFAULT '0' COMMENT 'task retry times',
  `pid` int(4) DEFAULT NULL COMMENT 'pid of task',
  `app_link` text COMMENT 'yarn app id',
  `task_params` text COMMENT 'job custom parameters',
  `flag` tinyint(4) DEFAULT '1' COMMENT '0 not available, 1 available',
  `retry_interval` int(4) DEFAULT NULL COMMENT 'retry interval when task failed',
  `max_retry_times` int(2) DEFAULT NULL COMMENT 'max retry times',
  `task_instance_priority` int(11) DEFAULT NULL COMMENT 'task instance priority:0 Highest,1 High,2 Medium,3 Low,4 Lowest',
  `worker_group` varchar(64) DEFAULT NULL COMMENT 'worker group id',
  `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code',
  `environment_config` text COMMENT 'this config contains many environment variables config',
  `executor_id` int(11) DEFAULT NULL,
  `first_submit_time` datetime DEFAULT NULL COMMENT 'task first submit time',
  `delay_time` int(4) DEFAULT '0' COMMENT 'task delay execution time',
  `var_pool` longtext COMMENT 'var_pool',
  `task_group_id` int(11) DEFAULT NULL COMMENT 'task group id',
  `dry_run` tinyint(4) DEFAULT '0' COMMENT 'dry run flag: 0 normal, 1 dry run',
  PRIMARY KEY (`id`),
  KEY `process_instance_id` (`process_instance_id`) USING BTREE,
  CONSTRAINT `foreign_key_instance_id` FOREIGN KEY (`process_instance_id`) REFERENCES `t_ds_process_instance` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_task_instance
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_tenant
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_tenant`;
CREATE TABLE `t_ds_tenant` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `tenant_code` varchar(64) DEFAULT NULL COMMENT 'tenant code',
  `description` varchar(255) DEFAULT NULL,
  `queue_id` int(11) DEFAULT NULL COMMENT 'queue id',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_tenant
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_udfs
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_udfs`;
CREATE TABLE `t_ds_udfs` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `user_id` int(11) NOT NULL COMMENT 'user id',
  `func_name` varchar(100) NOT NULL COMMENT 'UDF function name',
  `class_name` varchar(255) NOT NULL COMMENT 'class of udf',
  `type` tinyint(4) NOT NULL COMMENT 'Udf function type',
  `arg_types` varchar(255) DEFAULT NULL COMMENT 'arguments types',
  `database` varchar(255) DEFAULT NULL COMMENT 'database',
  `description` varchar(255) DEFAULT NULL,
  `resource_id` int(11) NOT NULL COMMENT 'resource id',
  `resource_name` varchar(255) NOT NULL COMMENT 'resource name',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_udfs
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_user
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_user`;
CREATE TABLE `t_ds_user` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'user id',
  `user_name` varchar(64) DEFAULT NULL COMMENT 'user name',
  `user_password` varchar(64) DEFAULT NULL COMMENT 'user password',
  `user_type` tinyint(4) DEFAULT NULL COMMENT 'user type, 0:administrator,1:ordinary user',
  `email` varchar(64) DEFAULT NULL COMMENT 'email',
  `phone` varchar(11) DEFAULT NULL COMMENT 'phone',
  `tenant_id` int(11) DEFAULT NULL COMMENT 'tenant id',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  `queue` varchar(64) DEFAULT NULL COMMENT 'queue',
  `state` tinyint(4) DEFAULT '1' COMMENT 'state 0:disable 1:enable',
  PRIMARY KEY (`id`),
  UNIQUE KEY `user_name_unique` (`user_name`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_user
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_worker_group
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_worker_group`;
CREATE TABLE `t_ds_worker_group` (
  `id` bigint(11) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `name` varchar(255) NOT NULL COMMENT 'worker group name',
  `addr_list` text NULL DEFAULT NULL COMMENT 'worker addr list. split by [,]',
  `create_time` datetime NULL DEFAULT NULL COMMENT 'create time',
  `update_time` datetime NULL DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `name_unique` (`name`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_worker_group
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_version
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_version`;
CREATE TABLE `t_ds_version` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `version` varchar(200) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `version_UNIQUE` (`version`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8 COMMENT='version';

-- ----------------------------
-- Records of t_ds_version
-- ----------------------------
INSERT INTO `t_ds_version` VALUES ('1', '2.0.2');

-- ----------------------------
-- Records of t_ds_alertgroup
-- ----------------------------
INSERT INTO `t_ds_alertgroup`(alert_instance_ids, create_user_id, group_name, description, create_time, update_time)
VALUES ("1,2", 1, 'default admin warning group', 'default admin warning group', '2018-11-29 10:20:39', '2018-11-29 10:20:39');

-- ----------------------------
-- Records of t_ds_user
-- ----------------------------
INSERT INTO `t_ds_user` VALUES ('1', 'admin', '7ad2410b2f4c074479a8937a28a22b8f', '0', 'xxx@qq.com', '', '0', '2018-03-27 15:48:50', '2018-10-24 17:40:22', null, 1);

-- ----------------------------
-- Table structure for t_ds_plugin_define
-- ----------------------------
SET sql_mode=(SELECT REPLACE(@@sql_mode,'ONLY_FULL_GROUP_BY',''));
DROP TABLE IF EXISTS `t_ds_plugin_define`;
CREATE TABLE `t_ds_plugin_define` (
  `id` int NOT NULL AUTO_INCREMENT,
  `plugin_name` varchar(100) NOT NULL COMMENT 'the name of plugin eg: email',
  `plugin_type` varchar(100) NOT NULL COMMENT 'plugin type. alert=alert plugin, job=job plugin',
  `plugin_params` text COMMENT 'plugin params',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `t_ds_plugin_define_UN` (`plugin_name`,`plugin_type`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_alert_plugin_instance
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_alert_plugin_instance`;
CREATE TABLE `t_ds_alert_plugin_instance` (
  `id` int NOT NULL AUTO_INCREMENT,
  `plugin_define_id` int NOT NULL,
  `plugin_instance_params` text COMMENT 'plugin instance params. Also contain the params value which user input in web ui.',
  `create_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
  `update_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `instance_name` varchar(200) DEFAULT NULL COMMENT 'alert instance name',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_environment
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_environment`;
CREATE TABLE `t_ds_environment` (
  `id` bigint(11) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `code` bigint(20) DEFAULT NULL COMMENT 'encoding',
  `name` varchar(100) NOT NULL COMMENT 'environment name',
  `config` text NULL DEFAULT NULL COMMENT 'this config contains many environment variables config',
  `description` text NULL DEFAULT NULL COMMENT 'the details',
  `operator` int(11) DEFAULT NULL COMMENT 'operator user id',
  `create_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
  `update_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `environment_name_unique` (`name`),
  UNIQUE KEY `environment_code_unique` (`code`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_environment_worker_group_relation
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_environment_worker_group_relation`;
CREATE TABLE `t_ds_environment_worker_group_relation` (
  `id` bigint(11) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `environment_code` bigint(20) NOT NULL COMMENT 'environment code',
  `worker_group` varchar(255) NOT NULL COMMENT 'worker group id',
  `operator` int(11) DEFAULT NULL COMMENT 'operator user id',
  `create_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
  `update_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `environment_worker_group_unique` (`environment_code`,`worker_group`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_task_group_queue
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_task_group_queue`;
CREATE TABLE `t_ds_task_group_queue` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `task_id` int(11) DEFAULT NULL COMMENT 'task instance id',
  `task_name` varchar(100) DEFAULT NULL COMMENT 'task instance name',
  `group_id` int(11) DEFAULT NULL COMMENT 'task group id',
  `process_id` int(11) DEFAULT NULL COMMENT 'process instance id',
  `priority` int(8) DEFAULT '0' COMMENT 'priority',
  `status` tinyint(4) DEFAULT '-1' COMMENT '-1: waiting 1: running 2: finished',
  `force_start` tinyint(4) DEFAULT '0' COMMENT 'is force start 0 NO, 1 YES',
  `in_queue` tinyint(4) DEFAULT '0' COMMENT 'ready to get the queue by other task finish 0 NO, 1 YES',
  `create_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
  `update_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_task_group
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_task_group`;
CREATE TABLE `t_ds_task_group` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `name` varchar(100) DEFAULT NULL COMMENT 'task_group name',
  `description` varchar(200) DEFAULT NULL,
  `group_size` int(11) NOT NULL COMMENT 'group size',
  `use_size` int(11) DEFAULT '0' COMMENT 'used size',
  `user_id` int(11) DEFAULT NULL COMMENT 'creator id',
  `project_code` bigint(20) DEFAULT 0 COMMENT 'project code',
  `status` tinyint(4) DEFAULT '1' COMMENT '0 not available, 1 available',
  `create_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
  `update_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
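Worth noting in the schema above: `t_ds_task_instance.task_params` is declared as `text` (capped at 65,535 bytes in MySQL), while `t_ds_task_definition.task_params` is already `longtext`; the issue records that follow trip over exactly this limit. A minimal, self-contained sketch of the failure mode (the table name `repro_task_params` is made up for illustration):

```sql
-- MySQL `text` holds at most 65,535 bytes; in strict mode an oversized
-- value raises an error instead of being silently truncated.
CREATE TABLE repro_task_params (v text) ENGINE=InnoDB DEFAULT CHARSET=utf8;
SET SESSION sql_mode = 'STRICT_TRANS_TABLES';
-- Fails with: ERROR 1406 (22001): Data too long for column 'v' at row 1
INSERT INTO repro_task_params (v) VALUES (REPEAT('x', 70000));
```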
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,520
[Bug] [Master] Data too long for column 'task_params' at row 1
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened ``` [INFO] 2021-12-20 12:44:29.923 org.apache.dolphinscheduler.service.process.ProcessService:[1060] - start submit task : 依赖检查, instance id:176379, state: RUNNING_EXECUTION [ERROR] 2021-12-20 12:44:29.933 org.apache.dolphinscheduler.service.process.ProcessService:[1043] - task commit to mysql failed org.springframework.dao.DataIntegrityViolationException: ### Error updating database. Cause: com.mysql.cj.jdbc.exceptions.MysqlDataTruncation: Data truncation: Data too long for column 'task_params' at row 1 ### The error may exist in org/apache/dolphinscheduler/dao/mapper/TaskInstanceMapper.java (best guess) ### The error may involve org.apache.dolphinscheduler.dao.mapper.TaskInstanceMapper.insert-Inline ### The error occurred while setting parameters ### SQL: INSERT INTO t_ds_task_instance ( dry_run, flag, environment_code, pid, task_params, task_type, task_instance_priority, task_code, worker_group, state, process_instance_id, executor_id, alert_flag, first_submit_time, max_retry_times, retry_times, submit_time, name, task_definition_version, delay_time, retry_interval ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? ) ### Cause: com.mysql.cj.jdbc.exceptions.MysqlDataTruncation: Data truncation: Data too long for column 'task_params' at row 1 ; Data truncation: Data too long for column 'task_params' at row 1; nested exception is com.mysql.cj.jdbc.exceptions.MysqlDataTruncation: Data truncation: Data too long for column 'task_params' at row 1 at org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:104) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:70) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:79) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:79) at org.mybatis.spring.MyBatisExceptionTranslator.translateExceptionIfPossible(MyBatisExceptionTranslator.java:74) at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:440) at com.sun.proxy.$Proxy83.insert(Unknown Source) at org.mybatis.spring.SqlSessionTemplate.insert(SqlSessionTemplate.java:271) at com.baomidou.mybatisplus.core.override.MybatisMapperMethod.execute(MybatisMapperMethod.java:58) at com.baomidou.mybatisplus.core.override.MybatisMapperProxy.invoke(MybatisMapperProxy.java:61) at com.sun.proxy.$Proxy90.insert(Unknown Source) at org.apache.dolphinscheduler.service.process.ProcessService.createTaskInstance(ProcessService.java:1445) at org.apache.dolphinscheduler.service.process.ProcessService.saveTaskInstance(ProcessService.java:1434) at org.apache.dolphinscheduler.service.process.ProcessService.submitTaskInstanceToDB(ProcessService.java:1326) at org.apache.dolphinscheduler.service.process.ProcessService.submitTask(ProcessService.java:1063) at org.apache.dolphinscheduler.service.process.ProcessService.submitTask(ProcessService.java:1032) at org.apache.dolphinscheduler.service.process.ProcessService$$FastClassBySpringCGLIB$$ed138739.invoke(<generated>) at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218) at 
org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:689) at org.apache.dolphinscheduler.service.process.ProcessService$$EnhancerBySpringCGLIB$$b8f3698c.submitTask(<generated>) at org.apache.dolphinscheduler.server.master.runner.task.DependentTaskProcessor.submit(DependentTaskProcessor.java:86) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitTaskExec(WorkflowExecuteThread.java:620) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitStandByTask(WorkflowExecuteThread.java:1273) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitPostNode(WorkflowExecuteThread.java:888) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.startProcess(WorkflowExecuteThread.java:505) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.run(WorkflowExecuteThread.java:227) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) ``` ### What you expected to happen run job successfully ### How to reproduce above ### Anything else _No response_ ### Version 2.0.1-release ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7520
https://github.com/apache/dolphinscheduler/pull/7521
7a888c544c562b9320d786ad9700b2754746f2d3
c7d7eec67931009a9713bd370ab8b39c57f50219
"2021-12-21T06:10:57Z"
java
"2021-12-24T14:51:18Z"
dolphinscheduler-dao/src/main/resources/sql/upgrade/2.1.0_schema/mysql/dolphinscheduler_ddl.sql
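The record above references the MySQL 2.1.0 upgrade DDL without showing its content. A hedged sketch of the change this issue implies — widening `t_ds_task_instance.task_params` to the `longtext` type already used on `t_ds_task_definition`; the statements actually merged in the linked PR may differ:

```sql
-- Assumed migration: widen task_params so large serialized parameters
-- (e.g. dependence definitions) no longer exceed the 65,535-byte text cap.
ALTER TABLE `t_ds_task_instance`
    MODIFY COLUMN `task_params` longtext COMMENT 'job custom parameters';
```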
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,520
[Bug] [Master] Data too long for column 'task_params' at row 1
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened ``` [INFO] 2021-12-20 12:44:29.923 org.apache.dolphinscheduler.service.process.ProcessService:[1060] - start submit task : 依赖检查, instance id:176379, state: RUNNING_EXECUTION [ERROR] 2021-12-20 12:44:29.933 org.apache.dolphinscheduler.service.process.ProcessService:[1043] - task commit to mysql failed org.springframework.dao.DataIntegrityViolationException: ### Error updating database. Cause: com.mysql.cj.jdbc.exceptions.MysqlDataTruncation: Data truncation: Data too long for column 'task_params' at row 1 ### The error may exist in org/apache/dolphinscheduler/dao/mapper/TaskInstanceMapper.java (best guess) ### The error may involve org.apache.dolphinscheduler.dao.mapper.TaskInstanceMapper.insert-Inline ### The error occurred while setting parameters ### SQL: INSERT INTO t_ds_task_instance ( dry_run, flag, environment_code, pid, task_params, task_type, task_instance_priority, task_code, worker_group, state, process_instance_id, executor_id, alert_flag, first_submit_time, max_retry_times, retry_times, submit_time, name, task_definition_version, delay_time, retry_interval ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? ) ### Cause: com.mysql.cj.jdbc.exceptions.MysqlDataTruncation: Data truncation: Data too long for column 'task_params' at row 1 ; Data truncation: Data too long for column 'task_params' at row 1; nested exception is com.mysql.cj.jdbc.exceptions.MysqlDataTruncation: Data truncation: Data too long for column 'task_params' at row 1 at org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:104) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:70) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:79) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:79) at org.mybatis.spring.MyBatisExceptionTranslator.translateExceptionIfPossible(MyBatisExceptionTranslator.java:74) at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:440) at com.sun.proxy.$Proxy83.insert(Unknown Source) at org.mybatis.spring.SqlSessionTemplate.insert(SqlSessionTemplate.java:271) at com.baomidou.mybatisplus.core.override.MybatisMapperMethod.execute(MybatisMapperMethod.java:58) at com.baomidou.mybatisplus.core.override.MybatisMapperProxy.invoke(MybatisMapperProxy.java:61) at com.sun.proxy.$Proxy90.insert(Unknown Source) at org.apache.dolphinscheduler.service.process.ProcessService.createTaskInstance(ProcessService.java:1445) at org.apache.dolphinscheduler.service.process.ProcessService.saveTaskInstance(ProcessService.java:1434) at org.apache.dolphinscheduler.service.process.ProcessService.submitTaskInstanceToDB(ProcessService.java:1326) at org.apache.dolphinscheduler.service.process.ProcessService.submitTask(ProcessService.java:1063) at org.apache.dolphinscheduler.service.process.ProcessService.submitTask(ProcessService.java:1032) at org.apache.dolphinscheduler.service.process.ProcessService$$FastClassBySpringCGLIB$$ed138739.invoke(<generated>) at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218) at 
org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:689) at org.apache.dolphinscheduler.service.process.ProcessService$$EnhancerBySpringCGLIB$$b8f3698c.submitTask(<generated>) at org.apache.dolphinscheduler.server.master.runner.task.DependentTaskProcessor.submit(DependentTaskProcessor.java:86) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitTaskExec(WorkflowExecuteThread.java:620) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitStandByTask(WorkflowExecuteThread.java:1273) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitPostNode(WorkflowExecuteThread.java:888) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.startProcess(WorkflowExecuteThread.java:505) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.run(WorkflowExecuteThread.java:227) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) ``` ### What you expected to happen run job successfully ### How to reproduce above ### Anything else _No response_ ### Version 2.0.1-release ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7520
https://github.com/apache/dolphinscheduler/pull/7521
7a888c544c562b9320d786ad9700b2754746f2d3
c7d7eec67931009a9713bd370ab8b39c57f50219
"2021-12-21T06:10:57Z"
java
"2021-12-24T14:51:18Z"
dolphinscheduler-dao/src/main/resources/sql/upgrade/2.1.0_schema/mysql/dolphinscheduler_dml.sql
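After a migration like the sketch above, the new column definition can be verified from `information_schema`. This query is illustrative only (the database name `dolphinscheduler` is an assumption) and is not the content of the referenced DML script:

```sql
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'dolphinscheduler'   -- assumed database name
  AND TABLE_NAME = 't_ds_task_instance'
  AND COLUMN_NAME = 'task_params';
-- Expect DATA_TYPE = 'longtext' (CHARACTER_MAXIMUM_LENGTH = 4294967295) once upgraded.
```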
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,520
[Bug] [Master] Data too long for column 'task_params' at row 1
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened ``` [INFO] 2021-12-20 12:44:29.923 org.apache.dolphinscheduler.service.process.ProcessService:[1060] - start submit task : 依赖检查, instance id:176379, state: RUNNING_EXECUTION [ERROR] 2021-12-20 12:44:29.933 org.apache.dolphinscheduler.service.process.ProcessService:[1043] - task commit to mysql failed org.springframework.dao.DataIntegrityViolationException: ### Error updating database. Cause: com.mysql.cj.jdbc.exceptions.MysqlDataTruncation: Data truncation: Data too long for column 'task_params' at row 1 ### The error may exist in org/apache/dolphinscheduler/dao/mapper/TaskInstanceMapper.java (best guess) ### The error may involve org.apache.dolphinscheduler.dao.mapper.TaskInstanceMapper.insert-Inline ### The error occurred while setting parameters ### SQL: INSERT INTO t_ds_task_instance ( dry_run, flag, environment_code, pid, task_params, task_type, task_instance_priority, task_code, worker_group, state, process_instance_id, executor_id, alert_flag, first_submit_time, max_retry_times, retry_times, submit_time, name, task_definition_version, delay_time, retry_interval ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? ) ### Cause: com.mysql.cj.jdbc.exceptions.MysqlDataTruncation: Data truncation: Data too long for column 'task_params' at row 1 ; Data truncation: Data too long for column 'task_params' at row 1; nested exception is com.mysql.cj.jdbc.exceptions.MysqlDataTruncation: Data truncation: Data too long for column 'task_params' at row 1 at org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:104) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:70) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:79) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:79) at org.mybatis.spring.MyBatisExceptionTranslator.translateExceptionIfPossible(MyBatisExceptionTranslator.java:74) at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:440) at com.sun.proxy.$Proxy83.insert(Unknown Source) at org.mybatis.spring.SqlSessionTemplate.insert(SqlSessionTemplate.java:271) at com.baomidou.mybatisplus.core.override.MybatisMapperMethod.execute(MybatisMapperMethod.java:58) at com.baomidou.mybatisplus.core.override.MybatisMapperProxy.invoke(MybatisMapperProxy.java:61) at com.sun.proxy.$Proxy90.insert(Unknown Source) at org.apache.dolphinscheduler.service.process.ProcessService.createTaskInstance(ProcessService.java:1445) at org.apache.dolphinscheduler.service.process.ProcessService.saveTaskInstance(ProcessService.java:1434) at org.apache.dolphinscheduler.service.process.ProcessService.submitTaskInstanceToDB(ProcessService.java:1326) at org.apache.dolphinscheduler.service.process.ProcessService.submitTask(ProcessService.java:1063) at org.apache.dolphinscheduler.service.process.ProcessService.submitTask(ProcessService.java:1032) at org.apache.dolphinscheduler.service.process.ProcessService$$FastClassBySpringCGLIB$$ed138739.invoke(<generated>) at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218) at 
org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:689) at org.apache.dolphinscheduler.service.process.ProcessService$$EnhancerBySpringCGLIB$$b8f3698c.submitTask(<generated>) at org.apache.dolphinscheduler.server.master.runner.task.DependentTaskProcessor.submit(DependentTaskProcessor.java:86) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitTaskExec(WorkflowExecuteThread.java:620) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitStandByTask(WorkflowExecuteThread.java:1273) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitPostNode(WorkflowExecuteThread.java:888) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.startProcess(WorkflowExecuteThread.java:505) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.run(WorkflowExecuteThread.java:227) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) ``` ### What you expected to happen run job successfully ### How to reproduce above ### Anything else _No response_ ### Version 2.0.1-release ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7520
https://github.com/apache/dolphinscheduler/pull/7521
7a888c544c562b9320d786ad9700b2754746f2d3
c7d7eec67931009a9713bd370ab8b39c57f50219
"2021-12-21T06:10:57Z"
java
"2021-12-24T14:51:18Z"
dolphinscheduler-dao/src/main/resources/sql/upgrade/2.1.0_schema/postgresql/dolphinscheduler_ddl.sql
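On PostgreSQL, `text` is unbounded, so the corresponding upgrade script may not need to widen this column at all; if a type change were still required, it would take the shape below. This is a sketch of the general form, not the actual script content:

```sql
-- PostgreSQL text has no length limit, so this is likely a no-op for the
-- truncation reported here; shown only as the shape such a change takes.
ALTER TABLE t_ds_task_instance
    ALTER COLUMN task_params TYPE text;
```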
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,520
[Bug] [Master] Data too long for column 'task_params' at row 1
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened ``` [INFO] 2021-12-20 12:44:29.923 org.apache.dolphinscheduler.service.process.ProcessService:[1060] - start submit task : 依赖检查, instance id:176379, state: RUNNING_EXECUTION [ERROR] 2021-12-20 12:44:29.933 org.apache.dolphinscheduler.service.process.ProcessService:[1043] - task commit to mysql failed org.springframework.dao.DataIntegrityViolationException: ### Error updating database. Cause: com.mysql.cj.jdbc.exceptions.MysqlDataTruncation: Data truncation: Data too long for column 'task_params' at row 1 ### The error may exist in org/apache/dolphinscheduler/dao/mapper/TaskInstanceMapper.java (best guess) ### The error may involve org.apache.dolphinscheduler.dao.mapper.TaskInstanceMapper.insert-Inline ### The error occurred while setting parameters ### SQL: INSERT INTO t_ds_task_instance ( dry_run, flag, environment_code, pid, task_params, task_type, task_instance_priority, task_code, worker_group, state, process_instance_id, executor_id, alert_flag, first_submit_time, max_retry_times, retry_times, submit_time, name, task_definition_version, delay_time, retry_interval ) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ? ) ### Cause: com.mysql.cj.jdbc.exceptions.MysqlDataTruncation: Data truncation: Data too long for column 'task_params' at row 1 ; Data truncation: Data too long for column 'task_params' at row 1; nested exception is com.mysql.cj.jdbc.exceptions.MysqlDataTruncation: Data truncation: Data too long for column 'task_params' at row 1 at org.springframework.jdbc.support.SQLStateSQLExceptionTranslator.doTranslate(SQLStateSQLExceptionTranslator.java:104) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:70) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:79) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:79) at org.mybatis.spring.MyBatisExceptionTranslator.translateExceptionIfPossible(MyBatisExceptionTranslator.java:74) at org.mybatis.spring.SqlSessionTemplate$SqlSessionInterceptor.invoke(SqlSessionTemplate.java:440) at com.sun.proxy.$Proxy83.insert(Unknown Source) at org.mybatis.spring.SqlSessionTemplate.insert(SqlSessionTemplate.java:271) at com.baomidou.mybatisplus.core.override.MybatisMapperMethod.execute(MybatisMapperMethod.java:58) at com.baomidou.mybatisplus.core.override.MybatisMapperProxy.invoke(MybatisMapperProxy.java:61) at com.sun.proxy.$Proxy90.insert(Unknown Source) at org.apache.dolphinscheduler.service.process.ProcessService.createTaskInstance(ProcessService.java:1445) at org.apache.dolphinscheduler.service.process.ProcessService.saveTaskInstance(ProcessService.java:1434) at org.apache.dolphinscheduler.service.process.ProcessService.submitTaskInstanceToDB(ProcessService.java:1326) at org.apache.dolphinscheduler.service.process.ProcessService.submitTask(ProcessService.java:1063) at org.apache.dolphinscheduler.service.process.ProcessService.submitTask(ProcessService.java:1032) at org.apache.dolphinscheduler.service.process.ProcessService$$FastClassBySpringCGLIB$$ed138739.invoke(<generated>) at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218) at 
org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:689) at org.apache.dolphinscheduler.service.process.ProcessService$$EnhancerBySpringCGLIB$$b8f3698c.submitTask(<generated>) at org.apache.dolphinscheduler.server.master.runner.task.DependentTaskProcessor.submit(DependentTaskProcessor.java:86) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitTaskExec(WorkflowExecuteThread.java:620) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitStandByTask(WorkflowExecuteThread.java:1273) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.submitPostNode(WorkflowExecuteThread.java:888) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.startProcess(WorkflowExecuteThread.java:505) at org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThread.run(WorkflowExecuteThread.java:227) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) ``` ### What you expected to happen run job successfully ### How to reproduce above ### Anything else _No response_ ### Version 2.0.1-release ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7520
https://github.com/apache/dolphinscheduler/pull/7521
7a888c544c562b9320d786ad9700b2754746f2d3
c7d7eec67931009a9713bd370ab8b39c57f50219
"2021-12-21T06:10:57Z"
java
"2021-12-24T14:51:18Z"
dolphinscheduler-dao/src/main/resources/sql/upgrade/2.1.0_schema/postgresql/dolphinscheduler_dml.sql
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,576
[Feature][Master] Optimize complement task's date
### Search before asking

- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.

### Description

![](https://ftp.bmp.ovh/imgs/2021/12/3b6c71080f2d3cf3.png)

The current complement task's date range is a left-closed, right-open time interval (startDate <= N < endDate). From a user's point of view, what you see should be what you get, so that users do not need to translate between interval conventions. I suggest changing the left-closed, right-open interval to a left-closed, right-closed interval (startDate <= N <= endDate).

### Use case

_No response_

### Related issues

_No response_

### Are you willing to submit a PR?

- [X] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7576
https://github.com/apache/dolphinscheduler/pull/7585
f450b7ef28ef97a9fc6caa6a6978b94e659a8e7f
3d9d91ccc37f9dc2f6d6081892f8e18acff82fe5
"2021-12-23T07:27:57Z"
java
"2021-12-25T04:22:22Z"
dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/ExecutorServiceImpl.java
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.dolphinscheduler.api.service.impl;

import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_END_DATE;
import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_START_DATE;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_START_NODES;
import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_START_PARAMS;
import static org.apache.dolphinscheduler.common.Constants.MAX_TASK_TIMEOUT;

import org.apache.dolphinscheduler.api.enums.ExecuteType;
import org.apache.dolphinscheduler.api.enums.Status;
import org.apache.dolphinscheduler.api.service.ExecutorService;
import org.apache.dolphinscheduler.api.service.MonitorService;
import org.apache.dolphinscheduler.api.service.ProjectService;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.CommandType;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.enums.FailureStrategy;
import org.apache.dolphinscheduler.common.enums.Flag;
import org.apache.dolphinscheduler.common.enums.Priority;
import org.apache.dolphinscheduler.common.enums.ReleaseState;
import org.apache.dolphinscheduler.common.enums.RunMode;
import org.apache.dolphinscheduler.common.enums.TaskDependType;
import org.apache.dolphinscheduler.common.enums.TaskGroupQueueStatus;
import org.apache.dolphinscheduler.common.enums.WarningType;
import org.apache.dolphinscheduler.common.model.Server;
import org.apache.dolphinscheduler.common.utils.DateUtils;
import org.apache.dolphinscheduler.common.utils.JSONUtils;
import org.apache.dolphinscheduler.dao.entity.Command;
import org.apache.dolphinscheduler.dao.entity.ProcessDefinition;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.dao.entity.Project;
import org.apache.dolphinscheduler.dao.entity.Schedule;
import org.apache.dolphinscheduler.dao.entity.TaskGroupQueue;
import org.apache.dolphinscheduler.dao.entity.Tenant;
import org.apache.dolphinscheduler.dao.entity.User;
import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionMapper;
import org.apache.dolphinscheduler.dao.mapper.ProcessInstanceMapper;
import org.apache.dolphinscheduler.dao.mapper.ProjectMapper;
import org.apache.dolphinscheduler.remote.command.StateEventChangeCommand;
import org.apache.dolphinscheduler.remote.processor.StateEventCallbackService;
import org.apache.dolphinscheduler.service.process.ProcessService;
import org.apache.dolphinscheduler.service.quartz.cron.CronUtils;

import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.collections.MapUtils;
import org.apache.commons.lang.StringUtils;

import java.util.ArrayList;
import java.util.Date;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import com.fasterxml.jackson.core.type.TypeReference;

/**
 * executor service impl
 */
@Service
public class ExecutorServiceImpl extends BaseServiceImpl implements ExecutorService {

    private static final Logger logger = LoggerFactory.getLogger(ExecutorServiceImpl.class);

    @Autowired
    private ProjectMapper projectMapper;

    @Autowired
    private ProjectService projectService;

    @Autowired
    private ProcessDefinitionMapper processDefinitionMapper;

    @Autowired
    private MonitorService monitorService;

    @Autowired
    private ProcessInstanceMapper processInstanceMapper;

    @Autowired
    private ProcessService processService;

    @Autowired
    StateEventCallbackService stateEventCallbackService;

    /**
     * execute process instance
     *
     * @param loginUser login user
     * @param projectCode project code
     * @param processDefinitionCode process definition code
     * @param cronTime cron time
     * @param commandType command type
     * @param failureStrategy failure strategy
     * @param startNodeList start nodelist
     * @param taskDependType node dependency type
     * @param warningType warning type
     * @param warningGroupId notify group id
     * @param processInstancePriority process instance priority
     * @param workerGroup worker group name
     * @param environmentCode environment code
     * @param runMode run mode
     * @param timeout timeout
     * @param startParams the global param values which pass to new process instance
     * @param expectedParallelismNumber the expected parallelism number when execute complement in parallel mode
     * @return execute process instance code
     */
    @Override
    public Map<String, Object> execProcessInstance(User loginUser, long projectCode,
                                                   long processDefinitionCode, String cronTime, CommandType commandType,
                                                   FailureStrategy failureStrategy, String startNodeList,
                                                   TaskDependType taskDependType, WarningType warningType, int warningGroupId,
                                                   RunMode runMode,
                                                   Priority processInstancePriority, String workerGroup, Long environmentCode, Integer timeout,
                                                   Map<String, String> startParams, Integer expectedParallelismNumber,
                                                   int dryRun) {
        Project project = projectMapper.queryByCode(projectCode);
        //check user access for project
        Map<String, Object> result = projectService.checkProjectAndAuth(loginUser, project, projectCode);
        if (result.get(Constants.STATUS) != Status.SUCCESS) {
            return result;
        }
        // timeout is invalid
        if (timeout <= 0 || timeout > MAX_TASK_TIMEOUT) {
            putMsg(result, Status.TASK_TIMEOUT_PARAMS_ERROR);
            return result;
        }

        // check process define release state
        ProcessDefinition processDefinition = processDefinitionMapper.queryByCode(processDefinitionCode);
        result = checkProcessDefinitionValid(projectCode, processDefinition, processDefinitionCode);
        if (result.get(Constants.STATUS) != Status.SUCCESS) {
            return result;
        }

        if (!checkTenantSuitable(processDefinition)) {
            logger.error("there is not any valid tenant for the process definition: id:{},name:{}, ",
                    processDefinition.getId(), processDefinition.getName());
            putMsg(result, Status.TENANT_NOT_SUITABLE);
            return result;
        }

        // check master exists
        if (!checkMasterExists(result)) {
            return result;
        }

        /**
         * create command
         */
        int create = this.createCommand(commandType, processDefinition.getCode(),
                taskDependType, failureStrategy, startNodeList, cronTime, warningType, loginUser.getId(),
                warningGroupId, runMode, processInstancePriority, workerGroup, environmentCode, startParams,
                expectedParallelismNumber, dryRun);

        if (create > 0) {
            processDefinition.setWarningGroupId(warningGroupId);
            processDefinitionMapper.updateById(processDefinition);
            putMsg(result, Status.SUCCESS);
        } else {
            putMsg(result, Status.START_PROCESS_INSTANCE_ERROR);
        }
        return result;
    }

    /**
     * check whether master exists
     *
     * @param result result
     * @return master exists return true , otherwise return false
     */
    private boolean checkMasterExists(Map<String, Object> result) {
        // check master server exists
        List<Server> masterServers = monitorService.getServerListFromRegistry(true);

        // no master
        if (masterServers.isEmpty()) {
            putMsg(result, Status.MASTER_NOT_EXISTS);
            return false;
        }
        return true;
    }

    /**
     * check whether the process definition can be executed
     *
     * @param projectCode project code
     * @param processDefinition process definition
     * @param processDefineCode process definition code
     * @return check result code
     */
    @Override
    public Map<String, Object> checkProcessDefinitionValid(long projectCode, ProcessDefinition processDefinition, long processDefineCode) {
        Map<String, Object> result = new HashMap<>();
        if (processDefinition == null || projectCode != processDefinition.getProjectCode()) {
            // check process definition exists
            putMsg(result, Status.PROCESS_DEFINE_NOT_EXIST, processDefineCode);
        } else if (processDefinition.getReleaseState() != ReleaseState.ONLINE) {
            // check process definition online
            putMsg(result, Status.PROCESS_DEFINE_NOT_RELEASE, processDefineCode);
        } else {
            result.put(Constants.STATUS, Status.SUCCESS);
        }
        return result;
    }

    /**
     * do action to process instance:pause, stop, repeat, recover from pause, recover from stop
     *
     * @param loginUser login user
     * @param projectCode project code
     * @param processInstanceId process instance id
     * @param executeType execute type
     * @return execute result code
     */
    @Override
    public Map<String, Object> execute(User loginUser, long projectCode, Integer processInstanceId, ExecuteType executeType) {
        Project project = projectMapper.queryByCode(projectCode);
        //check user access for project
        Map<String, Object> result = projectService.checkProjectAndAuth(loginUser, project, projectCode);
        if (result.get(Constants.STATUS) != Status.SUCCESS) {
            return result;
        }

        // check master exists
        if (!checkMasterExists(result)) {
            return result;
        }

        ProcessInstance processInstance = processService.findProcessInstanceDetailById(processInstanceId);
        if (processInstance == null) {
            putMsg(result, Status.PROCESS_INSTANCE_NOT_EXIST, processInstanceId);
            return result;
        }

        ProcessDefinition processDefinition = processService.findProcessDefinition(processInstance.getProcessDefinitionCode(),
                processInstance.getProcessDefinitionVersion());
        if (executeType != ExecuteType.STOP && executeType != ExecuteType.PAUSE) {
            result = checkProcessDefinitionValid(projectCode, processDefinition, processInstance.getProcessDefinitionCode());
            if (result.get(Constants.STATUS) != Status.SUCCESS) {
                return result;
            }
        }

        result = checkExecuteType(processInstance, executeType);
        if (result.get(Constants.STATUS) != Status.SUCCESS) {
            return result;
        }
        if (!checkTenantSuitable(processDefinition)) {
            logger.error("there is not any valid tenant for the process definition: id:{},name:{}, ",
                    processDefinition.getId(), processDefinition.getName());
            putMsg(result, Status.TENANT_NOT_SUITABLE);
        }

        //get the startParams user specified at the first starting while repeat running is needed
        Map<String, Object> commandMap = JSONUtils.parseObject(processInstance.getCommandParam(), new TypeReference<Map<String, Object>>() {});
        String startParams = null;
        if (MapUtils.isNotEmpty(commandMap) && executeType == ExecuteType.REPEAT_RUNNING) {
            Object startParamsJson = commandMap.get(Constants.CMD_PARAM_START_PARAMS);
            if (startParamsJson != null) {
                startParams = startParamsJson.toString();
            }
        }

        switch (executeType) {
            case REPEAT_RUNNING:
                result = insertCommand(loginUser, processInstanceId, processDefinition.getCode(), processDefinition.getVersion(), CommandType.REPEAT_RUNNING, startParams);
                break;
            case RECOVER_SUSPENDED_PROCESS:
                result = insertCommand(loginUser, processInstanceId, processDefinition.getCode(), processDefinition.getVersion(), CommandType.RECOVER_SUSPENDED_PROCESS, startParams);
                break;
            case START_FAILURE_TASK_PROCESS:
                result = insertCommand(loginUser, processInstanceId, processDefinition.getCode(), processDefinition.getVersion(), CommandType.START_FAILURE_TASK_PROCESS, startParams);
                break;
            case STOP:
                if (processInstance.getState() == ExecutionStatus.READY_STOP) {
                    putMsg(result, Status.PROCESS_INSTANCE_ALREADY_CHANGED, processInstance.getName(), processInstance.getState());
                } else {
                    result = updateProcessInstancePrepare(processInstance, CommandType.STOP, ExecutionStatus.READY_STOP);
                }
                break;
            case PAUSE:
                if (processInstance.getState() == ExecutionStatus.READY_PAUSE) {
                    putMsg(result, Status.PROCESS_INSTANCE_ALREADY_CHANGED, processInstance.getName(), processInstance.getState());
                } else {
                    result = updateProcessInstancePrepare(processInstance, CommandType.PAUSE, ExecutionStatus.READY_PAUSE);
                }
                break;
            default:
                logger.error("unknown execute type : {}", executeType);
                putMsg(result, Status.REQUEST_PARAMS_NOT_VALID_ERROR, "unknown execute type");
                break;
        }
        return result;
    }

    /**
     * check tenant suitable
     *
     * @param processDefinition process definition
     * @return true if tenant suitable, otherwise return false
     */
    private boolean checkTenantSuitable(ProcessDefinition processDefinition) {
        Tenant tenant = processService.getTenantForProcess(processDefinition.getTenantId(),
                processDefinition.getUserId());
        return tenant != null;
    }

    /**
     * Check the state of process instance and the type of operation match
     *
     * @param processInstance process instance
     * @param executeType execute type
     * @return check result code
     */
    private Map<String, Object> checkExecuteType(ProcessInstance processInstance, ExecuteType executeType) {
        Map<String, Object> result = new HashMap<>();
        ExecutionStatus executionStatus = processInstance.getState();
        boolean checkResult = false;
        switch (executeType) {
            case PAUSE:
            case STOP:
                if (executionStatus.typeIsRunning()) {
                    checkResult = true;
                }
                break;
            case REPEAT_RUNNING:
                if (executionStatus.typeIsFinished()) {
                    checkResult = true;
                }
                break;
            case START_FAILURE_TASK_PROCESS:
                if (executionStatus.typeIsFailure()) {
                    checkResult = true;
                }
                break;
            case RECOVER_SUSPENDED_PROCESS:
                if (executionStatus.typeIsPause() || executionStatus.typeIsCancel()) {
                    checkResult = true;
                }
                break;
            default:
                break;
        }
        if (!checkResult) {
            putMsg(result, Status.PROCESS_INSTANCE_STATE_OPERATION_ERROR, processInstance.getName(), executionStatus.toString(), executeType.toString());
        } else {
            putMsg(result, Status.SUCCESS);
        }
        return result;
    }

    /**
     * prepare to update process instance command type and status
     *
     * @param processInstance process instance
     * @param commandType command type
     * @param executionStatus execute status
     * @return update result
     */
    private Map<String, Object> updateProcessInstancePrepare(ProcessInstance processInstance, CommandType commandType, ExecutionStatus executionStatus) {
        Map<String, Object> result = new HashMap<>();

        processInstance.setCommandType(commandType);
        processInstance.addHistoryCmd(commandType);
        processInstance.setState(executionStatus);
        int update = processService.updateProcessInstance(processInstance);

        // determine whether the process is normal
        if (update > 0) {
            String host = processInstance.getHost();
            String address = host.split(":")[0];
            int port = Integer.parseInt(host.split(":")[1]);
            StateEventChangeCommand stateEventChangeCommand = new StateEventChangeCommand(
                    processInstance.getId(), 0, processInstance.getState(), processInstance.getId(), 0
            );
            stateEventCallbackService.sendResult(address, port, stateEventChangeCommand.convert2Command());
            putMsg(result, Status.SUCCESS);
        } else {
            putMsg(result, Status.EXECUTE_PROCESS_INSTANCE_ERROR);
        }
        return result;
    }

    /**
     * prepare to update process instance command type and status
     *
     * @param processInstance process instance
     * @return update result
     */
    private Map<String, Object> forceStartTaskInstance(ProcessInstance processInstance, int taskId) {
        Map<String, Object> result = new HashMap<>();
        TaskGroupQueue taskGroupQueue = processService.loadTaskGroupQueue(taskId);
        if (taskGroupQueue.getStatus() != TaskGroupQueueStatus.WAIT_QUEUE) {
            putMsg(result, Status.TASK_GROUP_QUEUE_ALREADY_START);
            return result;
        }
        taskGroupQueue.setForceStart(Flag.YES.getCode());
        processService.updateTaskGroupQueue(taskGroupQueue);
        processService.sendStartTask2Master(processInstance, taskId,
                org.apache.dolphinscheduler.remote.command.CommandType.TASK_FORCE_STATE_EVENT_REQUEST);
        putMsg(result, Status.SUCCESS);
        return result;
    }

    /**
     * insert command, used in the implementation of the page, re run, recovery (pause / failure) execution
     *
     * @param loginUser login user
     * @param instanceId instance id
     * @param processDefinitionCode process definition code
     * @param processVersion
     * @param commandType command type
     * @return insert result code
     */
    private Map<String, Object> insertCommand(User loginUser, Integer instanceId, long processDefinitionCode, int processVersion, CommandType commandType, String startParams) {
        Map<String, Object> result = new HashMap<>();

        //To add startParams only when repeat running is needed
        Map<String, Object> cmdParam = new HashMap<>();
        cmdParam.put(CMD_PARAM_RECOVER_PROCESS_ID_STRING, instanceId);
        if (!StringUtils.isEmpty(startParams)) {
            cmdParam.put(CMD_PARAM_START_PARAMS, startParams);
        }

        Command command = new Command();
        command.setCommandType(commandType);
        command.setProcessDefinitionCode(processDefinitionCode);
        command.setCommandParam(JSONUtils.toJsonString(cmdParam));
        command.setExecutorId(loginUser.getId());
        command.setProcessDefinitionVersion(processVersion);
        command.setProcessInstanceId(instanceId);

        if (!processService.verifyIsNeedCreateCommand(command)) {
            putMsg(result, Status.PROCESS_INSTANCE_EXECUTING_COMMAND, processDefinitionCode);
            return result;
        }

        int create = processService.createCommand(command);

        if (create > 0) {
            putMsg(result, Status.SUCCESS);
        } else {
            putMsg(result, Status.EXECUTE_PROCESS_INSTANCE_ERROR);
        }

        return result;
    }

    /**
     * check if sub processes are offline before starting process definition
     *
     * @param processDefinitionCode process definition code
     * @return check result code
     */
    @Override
    public Map<String, Object> startCheckByProcessDefinedCode(long processDefinitionCode) {
        Map<String, Object> result = new HashMap<>();

        ProcessDefinition processDefinition = processDefinitionMapper.queryByCode(processDefinitionCode);

        if (processDefinition == null) {
            logger.error("process definition is not found");
            putMsg(result, Status.REQUEST_PARAMS_NOT_VALID_ERROR, "processDefinitionCode");
            return result;
        }

        List<Long> codes = new ArrayList<>();
        processService.recurseFindSubProcess(processDefinition.getCode(), codes);
        if (!codes.isEmpty()) {
            List<ProcessDefinition> processDefinitionList = processDefinitionMapper.queryByCodes(codes);
            if (processDefinitionList != null) {
                for (ProcessDefinition processDefinitionTmp : processDefinitionList) {
                    /**
                     * if there is no online process, exit directly
                     */
                    if (processDefinitionTmp.getReleaseState() != ReleaseState.ONLINE) {
                        putMsg(result, Status.PROCESS_DEFINE_NOT_RELEASE, processDefinitionTmp.getName());
                        logger.info("not release process definition id: {} , name : {}", processDefinitionTmp.getId(), processDefinitionTmp.getName());
                        return result;
                    }
                }
            }
        }
        putMsg(result, Status.SUCCESS);
        return result;
    }

    /**
     * create command
     *
     * @param commandType commandType
     * @param processDefineCode processDefineCode
     * @param nodeDep nodeDep
     * @param failureStrategy failureStrategy
     * @param startNodeList startNodeList
     * @param schedule schedule
     * @param warningType warningType
     * @param executorId executorId
     * @param warningGroupId warningGroupId
     * @param runMode runMode
     * @param processInstancePriority processInstancePriority
     * @param workerGroup workerGroup
     * @param environmentCode environmentCode
     * @return command id
     */
    private int createCommand(CommandType commandType, long processDefineCode,
                              TaskDependType nodeDep, FailureStrategy failureStrategy,
                              String startNodeList, String schedule, WarningType warningType,
                              int executorId, int warningGroupId,
                              RunMode runMode, Priority processInstancePriority, String workerGroup, Long environmentCode,
                              Map<String, String> startParams, Integer expectedParallelismNumber, int dryRun) {

        /**
         * instantiate command schedule instance
         */
        Command command = new Command();

        Map<String, String> cmdParam = new HashMap<>();
        if (commandType == null) {
            command.setCommandType(CommandType.START_PROCESS);
        } else {
            command.setCommandType(commandType);
        }
        command.setProcessDefinitionCode(processDefineCode);
        if (nodeDep != null) {
            command.setTaskDependType(nodeDep);
        }
        if (failureStrategy != null) {
            command.setFailureStrategy(failureStrategy);
        }
        if (!StringUtils.isEmpty(startNodeList)) {
            cmdParam.put(CMD_PARAM_START_NODES, startNodeList);
        }
        if (warningType != null) {
            command.setWarningType(warningType);
        }
        if (startParams != null && startParams.size() > 0) {
            cmdParam.put(CMD_PARAM_START_PARAMS, JSONUtils.toJsonString(startParams));
        }
        command.setCommandParam(JSONUtils.toJsonString(cmdParam));
        command.setExecutorId(executorId);
        command.setWarningGroupId(warningGroupId);
        command.setProcessInstancePriority(processInstancePriority);
        command.setWorkerGroup(workerGroup);
        command.setEnvironmentCode(environmentCode);
        command.setDryRun(dryRun);
        ProcessDefinition processDefinition = processService.findProcessDefinitionByCode(processDefineCode);
        if (processDefinition != null) {
            command.setProcessDefinitionVersion(processDefinition.getVersion());
        }
        command.setProcessInstanceId(0);

        Date start = null;
        Date end = null;
        if (!StringUtils.isEmpty(schedule)) {
            String[] interval = schedule.split(",");
            if (interval.length == 2) {
                start = DateUtils.getScheduleDate(interval[0]);
                end = DateUtils.getScheduleDate(interval[1]);
                if (start.after(end)) {
                    logger.info("complement data error, wrong date start:{} and end date:{} ",
                            start, end
                    );
                    return 0;
                }
            }
        }

        // determine whether to complement
        if (commandType == CommandType.COMPLEMENT_DATA) {
            if (start == null || end == null) {
                return 0;
            }
            return createComplementCommandList(start, end, runMode, command, expectedParallelismNumber);
        } else {
            command.setCommandParam(JSONUtils.toJsonString(cmdParam));
            return processService.createCommand(command);
        }
    }

    /**
     * create complement command
     * close left open right
     *
     * @param start
     * @param end
     * @param runMode
     * @return
     */
    private int createComplementCommandList(Date start, Date end, RunMode runMode, Command command, Integer expectedParallelismNumber) {
        int createCount = 0;
        runMode = (runMode == null) ? RunMode.RUN_MODE_SERIAL : runMode;
        Map<String, String> cmdParam = JSONUtils.toMap(command.getCommandParam());
        switch (runMode) {
            case RUN_MODE_SERIAL: {
                cmdParam.put(CMDPARAM_COMPLEMENT_DATA_START_DATE, DateUtils.dateToString(start));
                cmdParam.put(CMDPARAM_COMPLEMENT_DATA_END_DATE, DateUtils.dateToString(end));
                command.setCommandParam(JSONUtils.toJsonString(cmdParam));
                createCount = processService.createCommand(command);
                break;
            }
            case RUN_MODE_PARALLEL: {
                LinkedList<Date> listDate = new LinkedList<>();
                List<Schedule> schedules = processService.queryReleaseSchedulerListByProcessDefinitionCode(command.getProcessDefinitionCode());
                listDate.addAll(CronUtils.getSelfFireDateList(start, end, schedules));
                createCount = listDate.size();
                if (!CollectionUtils.isEmpty(listDate)) {
                    if (expectedParallelismNumber != null && expectedParallelismNumber != 0) {
                        createCount = Math.min(listDate.size(), expectedParallelismNumber);
                    }
                    logger.info("In parallel mode, current expectedParallelismNumber:{}", createCount);

                    listDate.addLast(end);
                    int chunkSize = listDate.size() / createCount;
                    for (int i = 0; i < createCount; i++) {
                        int rangeStart = i == 0 ? i : (i * chunkSize);
                        int rangeEnd = i == createCount - 1 ? listDate.size() - 1 : rangeStart + chunkSize;
                        cmdParam.put(CMDPARAM_COMPLEMENT_DATA_START_DATE, DateUtils.dateToString(listDate.get(rangeStart)));
                        cmdParam.put(CMDPARAM_COMPLEMENT_DATA_END_DATE, DateUtils.dateToString(listDate.get(rangeEnd)));
                        command.setCommandParam(JSONUtils.toJsonString(cmdParam));
                        processService.createCommand(command);
                    }
                }
                break;
            }
            default:
                break;
        }
        logger.info("create complement command count: {}", createCount);
        return createCount;
    }
}
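The range-splitting arithmetic in `createComplementCommandList` (the `RUN_MODE_PARALLEL` branch) is the part that the companion test `testCreateComplementToParallel` below pins down. A minimal standalone sketch of just that arithmetic, with illustrative names rather than the project's API, and assuming a positive `expected_parallelism` for simplicity:

```python
# Minimal sketch of the range-splitting used by createComplementCommandList
# (RUN_MODE_PARALLEL branch). Mirrors the Java logic above; names are illustrative.

def split_date_ranges(list_date, expected_parallelism, end):
    """Split fire dates into (start, end) pairs, one per complement command."""
    create_count = min(len(list_date), expected_parallelism)
    list_date = list_date + [end]  # corresponds to listDate.addLast(end)
    chunk_size = len(list_date) // create_count
    ranges = []
    for i in range(create_count):
        range_start = 0 if i == 0 else i * chunk_size
        # the last chunk always ends at the final (appended) element
        range_end = len(list_date) - 1 if i == create_count - 1 else range_start + chunk_size
        ranges.append((list_date[range_start], list_date[range_end]))
    return ranges

# Matches the expectation in testCreateComplementToParallel below:
# fire dates 0..3, end marker 4, parallelism 3 -> (0, 1), (1, 2), (2, 4)
print(split_date_ranges([0, 1, 2, 3], 3, 4))  # [(0, 1), (1, 2), (2, 4)]
```

Note how consecutive ranges share their boundary date; with the half-open interval semantics of `getSelfFireDateList`, the shared boundary is executed by exactly one of the two commands.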
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,576
[Feature][Master] Optimize complement task's date
### Search before asking

- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.

### Description

![](https://ftp.bmp.ovh/imgs/2021/12/3b6c71080f2d3cf3.png)

The current complement task's date range is a left-closed, right-open time interval (startDate <= N < endDate). From a user's point of view, what you see should be what you get, so that users do not need to translate between interval conventions. I suggest changing the left-closed, right-open interval to a left-closed, right-closed interval (startDate <= N <= endDate).

### Use case

_No response_

### Related issues

_No response_

### Are you willing to submit a PR?

- [X] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7576
https://github.com/apache/dolphinscheduler/pull/7585
f450b7ef28ef97a9fc6caa6a6978b94e659a8e7f
3d9d91ccc37f9dc2f6d6081892f8e18acff82fe5
"2021-12-23T07:27:57Z"
java
"2021-12-25T04:22:22Z"
dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/service/ExecutorServiceTest.java
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.dolphinscheduler.api.service;

import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

import org.apache.dolphinscheduler.api.enums.ExecuteType;
import org.apache.dolphinscheduler.api.enums.Status;
import org.apache.dolphinscheduler.api.service.impl.ExecutorServiceImpl;
import org.apache.dolphinscheduler.api.service.impl.ProjectServiceImpl;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.CommandType;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.enums.Priority;
import org.apache.dolphinscheduler.common.enums.ReleaseState;
import org.apache.dolphinscheduler.common.enums.RunMode;
import org.apache.dolphinscheduler.common.model.Server;
import org.apache.dolphinscheduler.dao.entity.Command;
import org.apache.dolphinscheduler.dao.entity.ProcessDefinition;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.dao.entity.Project;
import org.apache.dolphinscheduler.dao.entity.Schedule;
import org.apache.dolphinscheduler.dao.entity.Tenant;
import org.apache.dolphinscheduler.dao.entity.User;
import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionMapper;
import org.apache.dolphinscheduler.dao.mapper.ProjectMapper;
import org.apache.dolphinscheduler.service.process.ProcessService;

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;

import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.Mockito;
import org.mockito.junit.MockitoJUnitRunner;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * executor service 2 test
 */
@RunWith(MockitoJUnitRunner.Silent.class)
public class ExecutorServiceTest {

    private static final Logger logger = LoggerFactory.getLogger(ExecutorServiceTest.class);

    @InjectMocks
    private ExecutorServiceImpl executorService;

    @Mock
    private ProcessService processService;

    @Mock
    private ProcessDefinitionMapper processDefinitionMapper;

    @Mock
    private ProjectMapper projectMapper;

    @Mock
    private ProjectServiceImpl projectService;

    @Mock
    private MonitorService monitorService;

    private int processDefinitionId = 1;

    private long processDefinitionCode = 1L;

    private int processInstanceId = 1;

    private int tenantId = 1;

    private int userId = 1;

    private ProcessDefinition processDefinition = new ProcessDefinition();

    private ProcessInstance processInstance = new ProcessInstance();

    private User loginUser = new User();

    private long projectCode = 1L;

    private String projectName = "projectName";

    private Project project = new Project();

    private String cronTime;

    @Before
    public void init() {
        // user
        loginUser.setId(userId);

        // processDefinition
        processDefinition.setId(processDefinitionId);
        processDefinition.setReleaseState(ReleaseState.ONLINE);
        processDefinition.setTenantId(tenantId);
        processDefinition.setUserId(userId);
        processDefinition.setVersion(1);
        processDefinition.setCode(1L);
        processDefinition.setProjectCode(projectCode);

        // processInstance
        processInstance.setId(processInstanceId);
        processInstance.setState(ExecutionStatus.FAILURE);
        processInstance.setExecutorId(userId);
        processInstance.setTenantId(tenantId);
        processInstance.setProcessDefinitionVersion(1);
        processInstance.setProcessDefinitionCode(1L);

        // project
        project.setCode(projectCode);
        project.setName(projectName);

        // cronRangeTime
        cronTime = "2020-01-01 00:00:00,2020-01-31 23:00:00";

        // mock
        Mockito.when(projectMapper.queryByCode(projectCode)).thenReturn(project);
        Mockito.when(projectService.checkProjectAndAuth(loginUser, project, projectCode)).thenReturn(checkProjectAndAuth());
        Mockito.when(processDefinitionMapper.queryByCode(processDefinitionCode)).thenReturn(processDefinition);
        Mockito.when(processService.getTenantForProcess(tenantId, userId)).thenReturn(new Tenant());
        Mockito.when(processService.createCommand(any(Command.class))).thenReturn(1);
        Mockito.when(monitorService.getServerListFromRegistry(true)).thenReturn(getMasterServersList());
        Mockito.when(processService.findProcessInstanceDetailById(processInstanceId)).thenReturn(processInstance);
        Mockito.when(processService.findProcessDefinition(1L, 1)).thenReturn(processDefinition);
    }

    /**
     * not complement
     */
    @Test
    public void testNoComplement() {
        Mockito.when(processService.queryReleaseSchedulerListByProcessDefinitionCode(processDefinitionCode)).thenReturn(zeroSchedulerList());
        Map<String, Object> result = executorService.execProcessInstance(loginUser, projectCode,
                processDefinitionCode, cronTime, CommandType.START_PROCESS,
                null, null,
                null, null, 0,
                RunMode.RUN_MODE_SERIAL,
                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 100L, 10, null, 0, Constants.DRY_RUN_FLAG_NO);
        Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
        verify(processService, times(1)).createCommand(any(Command.class));
    }

    /**
     * not complement
     */
    @Test
    public void testComplementWithStartNodeList() {
        Mockito.when(processService.queryReleaseSchedulerListByProcessDefinitionCode(processDefinitionCode)).thenReturn(zeroSchedulerList());
        Map<String, Object> result = executorService.execProcessInstance(loginUser, projectCode,
                processDefinitionCode, cronTime, CommandType.START_PROCESS,
                null, "n1,n2",
                null, null, 0,
                RunMode.RUN_MODE_SERIAL,
                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 100L, 110, null, 0, Constants.DRY_RUN_FLAG_NO);
        Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
        verify(processService, times(1)).createCommand(any(Command.class));
    }

    /**
     * date error
     */
    @Test
    public void testDateError() {
        Mockito.when(processService.queryReleaseSchedulerListByProcessDefinitionCode(processDefinitionCode)).thenReturn(zeroSchedulerList());
        Map<String, Object> result = executorService.execProcessInstance(loginUser, projectCode,
                processDefinitionCode, "2020-01-31 23:00:00,2020-01-01 00:00:00", CommandType.COMPLEMENT_DATA,
                null, null,
                null, null, 0,
                RunMode.RUN_MODE_SERIAL,
                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 100L, 110, null, 0, Constants.DRY_RUN_FLAG_NO);
        Assert.assertEquals(Status.START_PROCESS_INSTANCE_ERROR, result.get(Constants.STATUS));
        verify(processService, times(0)).createCommand(any(Command.class));
    }

    /**
     * serial
     */
    @Test
    public void testSerial() {
        Mockito.when(processService.queryReleaseSchedulerListByProcessDefinitionCode(processDefinitionCode)).thenReturn(zeroSchedulerList());
        Map<String, Object> result = executorService.execProcessInstance(loginUser, projectCode,
                processDefinitionCode, cronTime, CommandType.COMPLEMENT_DATA,
                null, null,
                null, null, 0,
                RunMode.RUN_MODE_SERIAL,
                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 100L, 110, null, 0, Constants.DRY_RUN_FLAG_NO);
        Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
        verify(processService, times(1)).createCommand(any(Command.class));
    }

    /**
     * without schedule
     */
    @Test
    public void testParallelWithOutSchedule() {
        Mockito.when(processService.queryReleaseSchedulerListByProcessDefinitionCode(processDefinitionCode)).thenReturn(zeroSchedulerList());
        Map<String, Object> result = executorService.execProcessInstance(loginUser, projectCode,
                processDefinitionCode, cronTime, CommandType.COMPLEMENT_DATA,
                null, null,
                null, null, 0,
                RunMode.RUN_MODE_PARALLEL,
                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 100L, 110, null, 0, Constants.DRY_RUN_FLAG_NO);
        Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
        verify(processService, times(31)).createCommand(any(Command.class));
    }

    /**
     * with schedule
     */
    @Test
    public void testParallelWithSchedule() {
        Mockito.when(processService.queryReleaseSchedulerListByProcessDefinitionCode(processDefinitionCode)).thenReturn(oneSchedulerList());
        Map<String, Object> result = executorService.execProcessInstance(loginUser, projectCode,
                processDefinitionCode, cronTime, CommandType.COMPLEMENT_DATA,
                null, null,
                null, null, 0,
                RunMode.RUN_MODE_PARALLEL,
                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 100L, 110, null, 15, Constants.DRY_RUN_FLAG_NO);
        Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
        verify(processService, times(15)).createCommand(any(Command.class));
    }

    @Test
    public void testNoMasterServers() {
        Mockito.when(monitorService.getServerListFromRegistry(true)).thenReturn(new ArrayList<>());

        Map<String, Object> result = executorService.execProcessInstance(loginUser, projectCode,
                processDefinitionCode, cronTime, CommandType.COMPLEMENT_DATA,
                null, null,
                null, null, 0,
                RunMode.RUN_MODE_PARALLEL,
                Priority.LOW, Constants.DEFAULT_WORKER_GROUP, 100L, 110, null, 0, Constants.DRY_RUN_FLAG_NO);
        Assert.assertEquals(result.get(Constants.STATUS), Status.MASTER_NOT_EXISTS);
    }

    @Test
    public void testExecuteRepeatRunning() {
        Mockito.when(processService.verifyIsNeedCreateCommand(any(Command.class))).thenReturn(true);

        Map<String, Object> result = executorService.execute(loginUser, projectCode, processInstanceId, ExecuteType.REPEAT_RUNNING);
        Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
    }

    @Test
    public void testStartCheckByProcessDefinedCode() {
        List<Long> ids = new ArrayList<>();
        ids.add(1L);
        Mockito.doNothing().when(processService).recurseFindSubProcess(1, ids);

        List<ProcessDefinition> processDefinitionList = new ArrayList<>();
        ProcessDefinition processDefinition = new ProcessDefinition();
        processDefinition.setId(1);
        processDefinition.setReleaseState(ReleaseState.ONLINE);
        processDefinitionList.add(processDefinition);
        Mockito.when(processDefinitionMapper.queryDefinitionListByIdList(new Integer[ids.size()]))
                .thenReturn(processDefinitionList);

        Map<String, Object> result = executorService.startCheckByProcessDefinedCode(1L);
        Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
    }

    private List<Server> getMasterServersList() {
        List<Server> masterServerList = new ArrayList<>();
        Server masterServer1 = new Server();
        masterServer1.setId(1);
        masterServer1.setHost("192.168.220.188");
        masterServer1.setPort(1121);
        masterServerList.add(masterServer1);

        Server masterServer2 = new Server();
        masterServer2.setId(2);
        masterServer2.setHost("192.168.220.189");
        masterServer2.setPort(1122);
        masterServerList.add(masterServer2);

        return masterServerList;
    }

    private List zeroSchedulerList() {
        return Collections.EMPTY_LIST;
    }

    private List<Schedule> oneSchedulerList() {
        List<Schedule> schedulerList = new LinkedList<>();
        Schedule schedule = new Schedule();
        schedule.setCrontab("0 0 0 1/2 * ?");
        schedulerList.add(schedule);
        return schedulerList;
    }

    private Map<String, Object> checkProjectAndAuth() {
        Map<String, Object> result = new HashMap<>();
        result.put(Constants.STATUS, Status.SUCCESS);
        return result;
    }

    @Test
    public void testCreateComplementToParallel() {
        List<String> result = new ArrayList<>();
        int expectedParallelismNumber = 3;
        LinkedList<Integer> listDate = new LinkedList<>();
        listDate.add(0);
        listDate.add(1);
        listDate.add(2);
        listDate.add(3);

        int createCount = Math.min(listDate.size(), expectedParallelismNumber);
        logger.info("In parallel mode, current expectedParallelismNumber:{}", createCount);

        listDate.addLast(4);
        int chunkSize = listDate.size() / createCount;

        for (int i = 0; i < createCount; i++) {
            int rangeStart = i == 0 ? i : (i * chunkSize);
            int rangeEnd = i == createCount - 1 ? listDate.size() - 1 : rangeStart + chunkSize;
            logger.info("rangeStart:{}, rangeEnd:{}", rangeStart, rangeEnd);
            result.add(listDate.get(rangeStart) + "," + listDate.get(rangeEnd));
        }

        Assert.assertEquals("0,1", result.get(0));
        Assert.assertEquals("1,2", result.get(1));
        Assert.assertEquals("2,4", result.get(2));
    }
}
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,576
[Feature][Master] Optimize complement task's date
### Search before asking

- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement.

### Description

![](https://ftp.bmp.ovh/imgs/2021/12/3b6c71080f2d3cf3.png)

The current complement task's date range is a left-closed, right-open time interval (startDate <= N < endDate). From a user's point of view, what you see should be what you get, so that users do not need to translate between interval conventions. I suggest changing the left-closed, right-open interval to a left-closed, right-closed interval (startDate <= N <= endDate).

### Use case

_No response_

### Related issues

_No response_

### Are you willing to submit a PR?

- [X] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7576
https://github.com/apache/dolphinscheduler/pull/7585
f450b7ef28ef97a9fc6caa6a6978b94e659a8e7f
3d9d91ccc37f9dc2f6d6081892f8e18acff82fe5
"2021-12-23T07:27:57Z"
java
"2021-12-25T04:22:22Z"
dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/quartz/cron/CronUtils.java
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.dolphinscheduler.service.quartz.cron;

import static org.apache.dolphinscheduler.service.quartz.cron.CycleFactory.day;
import static org.apache.dolphinscheduler.service.quartz.cron.CycleFactory.hour;
import static org.apache.dolphinscheduler.service.quartz.cron.CycleFactory.min;
import static org.apache.dolphinscheduler.service.quartz.cron.CycleFactory.month;
import static org.apache.dolphinscheduler.service.quartz.cron.CycleFactory.week;

import static com.cronutils.model.CronType.QUARTZ;

import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.CycleEnum;
import org.apache.dolphinscheduler.common.thread.Stopper;
import org.apache.dolphinscheduler.common.utils.DateUtils;
import org.apache.dolphinscheduler.dao.entity.Schedule;

import org.apache.commons.collections.CollectionUtils;

import java.text.ParseException;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.Collections;
import java.util.Date;
import java.util.GregorianCalendar;
import java.util.List;

import org.quartz.CronExpression;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.cronutils.model.Cron;
import com.cronutils.model.definition.CronDefinitionBuilder;
import com.cronutils.parser.CronParser;

/**
 * cron utils
 */
public class CronUtils {

    private CronUtils() {
        throw new IllegalStateException("CronUtils class");
    }

    private static final Logger logger = LoggerFactory.getLogger(CronUtils.class);

    private static final CronParser QUARTZ_CRON_PARSER = new CronParser(CronDefinitionBuilder.instanceDefinitionFor(QUARTZ));

    /**
     * parse to cron
     *
     * @param cronExpression cron expression, never null
     * @return Cron instance, corresponding to cron expression received
     */
    public static Cron parse2Cron(String cronExpression) {
        return QUARTZ_CRON_PARSER.parse(cronExpression);
    }

    /**
     * build a new CronExpression based on the string cronExpression
     *
     * @param cronExpression String representation of the cron expression the new object should represent
     * @return CronExpression
     * @throws ParseException if the string expression cannot be parsed into a valid
     */
    public static CronExpression parse2CronExpression(String cronExpression) throws ParseException {
        return new CronExpression(cronExpression);
    }

    /**
     * get max cycle
     *
     * @param cron cron
     * @return CycleEnum
     */
    public static CycleEnum getMaxCycle(Cron cron) {
        return min(cron).addCycle(hour(cron)).addCycle(day(cron)).addCycle(week(cron)).addCycle(month(cron)).getCycle();
    }

    /**
     * get min cycle
     *
     * @param cron cron
     * @return CycleEnum
     */
    public static CycleEnum getMiniCycle(Cron cron) {
        return min(cron).addCycle(hour(cron)).addCycle(day(cron)).addCycle(week(cron)).addCycle(month(cron)).getMiniCycle();
    }

    /**
     * get max cycle
     *
     * @param crontab crontab
     * @return CycleEnum
     */
    public static CycleEnum getMaxCycle(String crontab) {
        return getMaxCycle(parse2Cron(crontab));
    }

    /**
     * gets all scheduled times for a period of time based on not self dependency
     *
     * @param startTime startTime
     * @param endTime endTime
     * @param cronExpression cronExpression
     * @return date list
     */
    public static List<Date> getFireDateList(Date startTime, Date endTime, CronExpression cronExpression) {
        List<Date> dateList = new ArrayList<>();

        while (Stopper.isRunning()) {
            startTime = cronExpression.getNextValidTimeAfter(startTime);
            if (startTime.after(endTime)) {
                break;
            }
            dateList.add(startTime);
        }

        return dateList;
    }

    /**
     * gets expect scheduled times for a period of time based on self dependency
     *
     * @param startTime startTime
     * @param endTime endTime
     * @param cronExpression cronExpression
     * @param fireTimes fireTimes
     * @return date list
     */
    public static List<Date> getSelfFireDateList(Date startTime, Date endTime, CronExpression cronExpression, int fireTimes) {
        List<Date> dateList = new ArrayList<>();
        while (fireTimes > 0) {
            startTime = cronExpression.getNextValidTimeAfter(startTime);
            if (startTime.after(endTime) || startTime.equals(endTime)) {
                break;
            }
            dateList.add(startTime);
            fireTimes--;
        }

        return dateList;
    }

    /**
     * gets all scheduled times for a period of time based on self dependency
     *
     * @param startTime startTime
     * @param endTime endTime
     * @param cronExpression cronExpression
     * @return date list
     */
    public static List<Date> getSelfFireDateList(Date startTime, Date endTime, CronExpression cronExpression) {
        List<Date> dateList = new ArrayList<>();

        while (Stopper.isRunning()) {
            startTime = cronExpression.getNextValidTimeAfter(startTime);
            if (startTime.after(endTime) || startTime.equals(endTime)) {
                break;
            }
            dateList.add(startTime);
        }

        return dateList;
    }

    /**
     * gets all scheduled times for a period of time based on self dependency
     * if schedulers is empty then default scheduler = 1 day
     */
    public static List<Date> getSelfFireDateList(final Date startTime, final Date endTime, final List<Schedule> schedules) {
        List<Date> result = new ArrayList<>();
        if (startTime.equals(endTime)) {
            result.add(startTime);
            return result;
        }

        // support left closed and right open time interval (startDate <= N < endDate)
        Date from = new Date(startTime.getTime() - Constants.SECOND_TIME_MILLIS);
        Date to = new Date(endTime.getTime() - Constants.SECOND_TIME_MILLIS);

        List<Schedule> listSchedule = new ArrayList<>();
        listSchedule.addAll(schedules);
        if (CollectionUtils.isEmpty(listSchedule)) {
            Schedule schedule = new Schedule();
            schedule.setCrontab(Constants.DEFAULT_CRON_STRING);
            listSchedule.add(schedule);
        }
        for (Schedule schedule : listSchedule) {
            result.addAll(CronUtils.getSelfFireDateList(from, to, schedule.getCrontab()));
        }
        return result;
    }

    /**
     * gets all scheduled times for a period of time based on self dependency
     *
     * @param startTime startTime
     * @param endTime endTime
     * @param cron cron
     * @return date list
     */
    public static List<Date> getSelfFireDateList(Date startTime, Date endTime, String cron) {
        CronExpression cronExpression = null;
        try {
            cronExpression = parse2CronExpression(cron);
        } catch (ParseException e) {
            logger.error(e.getMessage(), e);
            return Collections.emptyList();
        }
        return getSelfFireDateList(startTime, endTime, cronExpression);
    }

    /**
     * get expiration time
     *
     * @param startTime startTime
     * @param cycleEnum cycleEnum
     * @return date
     */
    public static Date getExpirationTime(Date startTime, CycleEnum cycleEnum) {
        Date maxExpirationTime = null;
        Date startTimeMax = null;
        try {
            startTimeMax = getEndTime(startTime);

            Calendar calendar = Calendar.getInstance();
            calendar.setTime(startTime);
            switch (cycleEnum) {
                case HOUR:
                    calendar.add(Calendar.HOUR, 1);
                    break;
                case DAY:
                    calendar.add(Calendar.DATE, 1);
                    break;
                case WEEK:
                    calendar.add(Calendar.DATE, 1);
                    break;
                case MONTH:
                    calendar.add(Calendar.DATE, 1);
                    break;
                default:
                    logger.error("Dependent process definition's cycleEnum is {},not support!!", cycleEnum);
                    break;
            }
            maxExpirationTime = calendar.getTime();
        } catch (Exception e) {
            logger.error(e.getMessage(), e);
        }
        return DateUtils.compare(startTimeMax, maxExpirationTime) ? maxExpirationTime : startTimeMax;
    }

    /**
     * get the end time of the day by value of date
     *
     * @return date
     */
    private static Date getEndTime(Date date) {
        Calendar end = new GregorianCalendar();
        end.setTime(date);
        end.set(Calendar.HOUR_OF_DAY, 23);
        end.set(Calendar.MINUTE, 59);
        end.set(Calendar.SECOND, 59);
        end.set(Calendar.MILLISECOND, 999);
        return end.getTime();
    }
}
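The one-second shift in `getSelfFireDateList(Date, Date, List<Schedule>)` above is what produces the left-closed, right-open window this issue asks to change. A minimal Python sketch of that mechanism, using integers as second-granularity timestamps and a stand-in for Quartz's `getNextValidTimeAfter` (both are illustrative assumptions, not the project's API):

```python
# Minimal sketch of the interval handling in CronUtils.getSelfFireDateList.
# Integers stand in for timestamps; `period` stands in for a cron schedule
# whose fire times are the multiples of `period`.

def next_fire_after(t, period):
    """Next multiple of `period` strictly after t (stand-in for nextValidTimeAfter)."""
    return (t // period + 1) * period

def self_fire_dates(start, end, period):
    # support left closed and right open time interval (startDate <= N < endDate):
    # shifting both bounds back one unit makes the strictly-after iterator
    # include `start` itself while still excluding `end`.
    t, to = start - 1, end - 1
    dates = []
    while True:
        t = next_fire_after(t, period)
        if t >= to:
            break
        dates.append(t)
    return dates

# A schedule firing every 5 units over [0, 10): 0 and 5 fire, 10 does not.
print(self_fire_dates(0, 10, 5))  # [0, 5]
```

Here `self_fire_dates(0, 10, 5)` returns `[0, 5]`: the end bound itself never fires, which is exactly the right-open behaviour the issue proposes turning into a closed interval.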
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,627
[Bug] [python] Task relation missing when run tutorial
### Search before asking

- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.

### What happened

After I run `python tutorial.py` it should produce a DAG like

```
                  --> task_child_one
                /                     \
task_parent -->                        --> task_union
                \                     /
                  --> task_child_two
```

but what I really got is

```
                  --> task_child_one
                /                     \
task_parent -->                        --> task_union
                \
                  --> task_child_two
```

### What you expected to happen

My DAG should look like

```
                  --> task_child_one
                /                     \
task_parent -->                        --> task_union
                \                     /
                  --> task_child_two
```

### How to reproduce

Run `python tutorial.py`

### Anything else

_No response_

### Version

dev

### Are you willing to submit PR?

- [X] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7627
https://github.com/apache/dolphinscheduler/pull/7628
0d861fe46af595156342b56ef2edf3a2f24a0e4c
b1afd99c9e4ba5f4fc3bb3ee3fbce5fe64cbc201
"2021-12-26T05:23:25Z"
java
"2021-12-26T07:44:30Z"
dolphinscheduler-python/pydolphinscheduler/src/pydolphinscheduler/constants.py
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.

"""Constants for pydolphinscheduler."""


class ProcessDefinitionReleaseState:
    """Constants for :class:`pydolphinscheduler.core.process_definition.ProcessDefinition` release state."""

    ONLINE: str = "ONLINE"
    OFFLINE: str = "OFFLINE"


class ProcessDefinitionDefault:
    """Constants default value for :class:`pydolphinscheduler.core.process_definition.ProcessDefinition`."""

    PROJECT: str = "project-pydolphin"
    TENANT: str = "tenant_pydolphin"
    USER: str = "userPythonGateway"
    # TODO simple set password same as username
    USER_PWD: str = "userPythonGateway"
    USER_EMAIL: str = "userPythonGateway@dolphinscheduler.com"
    USER_PHONE: str = "11111111111"
    USER_STATE: int = 1
    QUEUE: str = "queuePythonGateway"
    WORKER_GROUP: str = "default"
    TIME_ZONE: str = "Asia/Shanghai"


class TaskPriority(str):
    """Constants for task priority."""

    HIGHEST = "HIGHEST"
    HIGH = "HIGH"
    MEDIUM = "MEDIUM"
    LOW = "LOW"
    LOWEST = "LOWEST"


class TaskFlag(str):
    """Constants for task flag."""

    YES = "YES"
    NO = "NO"


class TaskTimeoutFlag(str):
    """Constants for task timeout flag."""

    CLOSE = "CLOSE"


class TaskType(str):
    """Constants for task type, it will also show you which kind we support up to now."""

    SHELL = "SHELL"
    HTTP = "HTTP"
    PYTHON = "PYTHON"
    SQL = "SQL"
    SUB_PROCESS = "SUB_PROCESS"
    PROCEDURE = "PROCEDURE"
    DATAX = "DATAX"
    DEPENDENT = "DEPENDENT"
    CONDITIONS = "CONDITIONS"
    SWITCH = "SWITCH"


class DefaultTaskCodeNum(str):
    """Constants and default value for default task code number."""

    DEFAULT = 1


class JavaGatewayDefault(str):
    """Constants and default value for java gateway."""

    RESULT_MESSAGE_KEYWORD = "msg"
    RESULT_MESSAGE_SUCCESS = "success"

    RESULT_STATUS_KEYWORD = "status"
    RESULT_STATUS_SUCCESS = "SUCCESS"

    RESULT_DATA = "data"


class Delimiter(str):
    """Constants for delimiter."""

    BAR = "-"
    DASH = "/"
    COLON = ":"
    UNDERSCORE = "_"


class Time(str):
    """Constants for date."""

    FMT_STD_DATE = "%Y-%m-%d"
    LEN_STD_DATE = 10

    FMT_DASH_DATE = "%Y/%m/%d"

    FMT_SHORT_DATE = "%Y%m%d"
    LEN_SHORT_DATE = 8

    FMT_STD_TIME = "%H:%M:%S"
    FMT_NO_COLON_TIME = "%H%M%S"
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,627
[Bug] [python] Task relation missing when run tutorial
### Search before asking

- [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues.

### What happened

After I run `python tutorial.py` it should produce a DAG like

```
                  --> task_child_one
                /                     \
task_parent -->                        --> task_union
                \                     /
                  --> task_child_two
```

but what I really got is

```
                  --> task_child_one
                /                     \
task_parent -->                        --> task_union
                \
                  --> task_child_two
```

### What you expected to happen

My DAG should look like

```
                  --> task_child_one
                /                     \
task_parent -->                        --> task_union
                \                     /
                  --> task_child_two
```

### How to reproduce

Run `python tutorial.py`

### Anything else

_No response_

### Version

dev

### Are you willing to submit PR?

- [X] Yes I am willing to submit a PR!

### Code of Conduct

- [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7627
https://github.com/apache/dolphinscheduler/pull/7628
0d861fe46af595156342b56ef2edf3a2f24a0e4c
b1afd99c9e4ba5f4fc3bb3ee3fbce5fe64cbc201
"2021-12-26T05:23:25Z"
java
"2021-12-26T07:44:30Z"
dolphinscheduler-python/pydolphinscheduler/src/pydolphinscheduler/core/task.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. """DolphinScheduler Task and TaskRelation object.""" import logging from typing import Dict, List, Optional, Sequence, Set, Tuple, Union from pydolphinscheduler.constants import ( ProcessDefinitionDefault, TaskFlag, TaskPriority, TaskTimeoutFlag, ) from pydolphinscheduler.core.base import Base from pydolphinscheduler.core.process_definition import ( ProcessDefinition, ProcessDefinitionContext, ) from pydolphinscheduler.java_gateway import launch_gateway class TaskRelation(Base): """TaskRelation object, describe the relation of exactly two tasks.""" _DEFINE_ATTR = { "pre_task_code", "post_task_code", } _DEFAULT_ATTR = { "name": "", "preTaskVersion": 1, "postTaskVersion": 1, "conditionType": 0, "conditionParams": {}, } def __init__( self, pre_task_code: int, post_task_code: int, name: Optional[str] = None, ): super().__init__(name) self.pre_task_code = pre_task_code self.post_task_code = post_task_code def __hash__(self): return hash(f"{self.post_task_code}, {self.post_task_code}") class Task(Base): """Task object, parent class for all exactly task type.""" _DEFINE_ATTR = { "name", "code", "version", "task_type", "task_params", "description", "flag", "task_priority", "worker_group", "delay_time", "fail_retry_times", "fail_retry_interval", "timeout_flag", "timeout_notify_strategy", "timeout", } _task_custom_attr: set = set() DEFAULT_CONDITION_RESULT = {"successNode": [""], "failedNode": [""]} def __init__( self, name: str, task_type: str, description: Optional[str] = None, flag: Optional[str] = TaskFlag.YES, task_priority: Optional[str] = TaskPriority.MEDIUM, worker_group: Optional[str] = ProcessDefinitionDefault.WORKER_GROUP, delay_time: Optional[int] = 0, fail_retry_times: Optional[int] = 0, fail_retry_interval: Optional[int] = 1, timeout_flag: Optional[int] = TaskTimeoutFlag.CLOSE, timeout_notify_strategy: Optional = None, timeout: Optional[int] = 0, process_definition: Optional[ProcessDefinition] = None, local_params: Optional[List] = None, resource_list: Optional[List] = None, dependence: Optional[Dict] = None, wait_start_timeout: Optional[Dict] = None, condition_result: Optional[Dict] = None, ): super().__init__(name, description) self.task_type = task_type self.flag = flag self.task_priority = task_priority self.worker_group = worker_group self.fail_retry_times = fail_retry_times self.fail_retry_interval = fail_retry_interval self.delay_time = delay_time self.timeout_flag = timeout_flag self.timeout_notify_strategy = timeout_notify_strategy self.timeout = timeout self._process_definition = None self.process_definition: ProcessDefinition = ( process_definition or ProcessDefinitionContext.get() ) self._upstream_task_codes: Set[int] = set() self._downstream_task_codes: Set[int] = set() self._task_relation: Set[TaskRelation] 
= set() # move attribute code and version after _process_definition and process_definition declare self.code, self.version = self.gen_code_and_version() # Add task to process definition, maybe we could put into property process_definition latter if ( self.process_definition is not None and self.code not in self.process_definition.tasks ): self.process_definition.add_task(self) else: logging.warning( "Task code %d already in process definition, prohibit re-add task.", self.code, ) # Attribute for task param self.local_params = local_params or [] self.resource_list = resource_list or [] self.dependence = dependence or {} self.wait_start_timeout = wait_start_timeout or {} self.condition_result = condition_result or self.DEFAULT_CONDITION_RESULT @property def process_definition(self) -> Optional[ProcessDefinition]: """Get attribute process_definition.""" return self._process_definition @process_definition.setter def process_definition(self, process_definition: Optional[ProcessDefinition]): """Set attribute process_definition.""" self._process_definition = process_definition @property def task_params(self) -> Optional[Dict]: """Get task parameter object. Will get result to combine _task_custom_attr and custom_attr. """ custom_attr = { "local_params", "resource_list", "dependence", "wait_start_timeout", "condition_result", } custom_attr |= self._task_custom_attr return self.get_define_custom(custom_attr=custom_attr) def __hash__(self): return hash(self.code) def __lshift__(self, other: Union["Task", Sequence["Task"]]): """Implement Task << Task.""" self.set_upstream(other) return other def __rshift__(self, other: Union["Task", Sequence["Task"]]): """Implement Task >> Task.""" self.set_downstream(other) return other def __rrshift__(self, other: Union["Task", Sequence["Task"]]): """Call for Task >> [Task] because list don't have __rshift__ operators.""" self.__lshift__(other) return self def __rlshift__(self, other: Union["Task", Sequence["Task"]]): """Call for Task << [Task] because list don't have __lshift__ operators.""" self.__rshift__(other) return self def _set_deps( self, tasks: Union["Task", Sequence["Task"]], upstream: bool = True ) -> None: """ Set parameter tasks dependent to current task. it is a wrapper for :func:`set_upstream` and :func:`set_downstream`. """ if not isinstance(tasks, Sequence): tasks = [tasks] for task in tasks: if upstream: self._upstream_task_codes.add(task.code) task._downstream_task_codes.add(self.code) if self._process_definition: task_relation = TaskRelation( pre_task_code=task.code, post_task_code=self.code, ) self.process_definition._task_relations.add(task_relation) else: self._downstream_task_codes.add(task.code) task._upstream_task_codes.add(self.code) if self._process_definition: task_relation = TaskRelation( pre_task_code=self.code, post_task_code=task.code, ) self.process_definition._task_relations.add(task_relation) def set_upstream(self, tasks: Union["Task", Sequence["Task"]]) -> None: """Set parameter tasks as upstream to current task.""" self._set_deps(tasks, upstream=True) def set_downstream(self, tasks: Union["Task", Sequence["Task"]]) -> None: """Set parameter tasks as downstream to current task.""" self._set_deps(tasks, upstream=False) # TODO code should better generate in bulk mode when :ref: processDefinition run submit or start def gen_code_and_version(self) -> Tuple: """ Generate task code and version from java gateway. 
If the task name does not exist in the process definition yet, the java gateway will generate a new code with version equal to 0; otherwise it will return the existing code and version. """ # TODO get code from specific project process definition and task name gateway = launch_gateway() result = gateway.entry_point.getCodeAndVersion( self.process_definition._project, self.name ) # result = gateway.entry_point.genTaskCodeList(DefaultTaskCodeNum.DEFAULT) # gateway_result_checker(result) return result.get("code"), result.get("version")
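A note on the relation bookkeeping in the file above: `TaskRelation.__hash__` hashes `self.post_task_code` twice instead of the `(pre, post)` pair, so two relations that share a downstream task hash identically — which lines up with the missing `task_child_two -> task_union` edge reported in the next record. Below is a minimal, hedged sketch of pair-based hashing in plain Python; `PairRelation` is an illustrative class, not the actual pydolphinscheduler patch:

```
# A minimal sketch, not the shipped fix: key the hash on the (pre, post)
# pair so relations that share a downstream task stay distinct in a set.
class PairRelation:
    def __init__(self, pre_task_code: int, post_task_code: int):
        self.pre_task_code = pre_task_code
        self.post_task_code = post_task_code

    def __hash__(self):
        return hash((self.pre_task_code, self.post_task_code))

    def __eq__(self, other):
        return (self.pre_task_code, self.post_task_code) == (
            other.pre_task_code,
            other.post_task_code,
        )


relations = {PairRelation(1, 3), PairRelation(2, 3)}
assert len(relations) == 2  # a post-code-only key plus matching equality would collapse these to one
```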
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,627
[Bug] [python] Task relation missing when run tutorial
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened After I run `python tutorial.py` it should create a DAG like ``` --> task_child_one / \ task_parent --> --> task_union \ / --> task_child_two ``` but what I really got is ``` --> task_child_one / \ task_parent --> --> task_union \ --> task_child_two ``` ### What you expected to happen My DAG should look like ``` --> task_child_one / \ task_parent --> --> task_union \ / --> task_child_two ``` ### How to reproduce Run `python tutorial.py` ### Anything else _No response_ ### Version dev ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7627
https://github.com/apache/dolphinscheduler/pull/7628
0d861fe46af595156342b56ef2edf3a2f24a0e4c
b1afd99c9e4ba5f4fc3bb3ee3fbce5fe64cbc201
"2021-12-26T05:23:25Z"
java
"2021-12-26T07:44:30Z"
dolphinscheduler-python/pydolphinscheduler/tests/core/test_task.py
# Licensed to the Apache Software Foundation (ASF) under one # or more contributor license agreements. See the NOTICE file # distributed with this work for additional information # regarding copyright ownership. The ASF licenses this file # to you under the Apache License, Version 2.0 (the # "License"); you may not use this file except in compliance # with the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, # software distributed under the License is distributed on an # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY # KIND, either express or implied. See the License for the # specific language governing permissions and limitations # under the License. """Test Task class function.""" from unittest.mock import patch import pytest from pydolphinscheduler.core.task import Task, TaskRelation from tests.testing.task import Task as testTask @pytest.mark.parametrize( "attr, expect", [ ( dict(), { "localParams": [], "resourceList": [], "dependence": {}, "waitStartTimeout": {}, "conditionResult": {"successNode": [""], "failedNode": [""]}, }, ), ( { "local_params": ["foo", "bar"], "resource_list": ["foo", "bar"], "dependence": {"foo", "bar"}, "wait_start_timeout": {"foo", "bar"}, "condition_result": {"foo": ["bar"]}, }, { "localParams": ["foo", "bar"], "resourceList": ["foo", "bar"], "dependence": {"foo", "bar"}, "waitStartTimeout": {"foo", "bar"}, "conditionResult": {"foo": ["bar"]}, }, ), ], ) def test_property_task_params(attr, expect): """Test class task property.""" task = testTask( "test-property-task-params", "test-task", **attr, ) assert expect == task.task_params def test_task_relation_to_dict(): """Test TaskRelation object function to_dict.""" pre_task_code = 123 post_task_code = 456 expect = { "name": "", "preTaskCode": pre_task_code, "postTaskCode": post_task_code, "preTaskVersion": 1, "postTaskVersion": 1, "conditionType": 0, "conditionParams": {}, } task_param = TaskRelation( pre_task_code=pre_task_code, post_task_code=post_task_code ) assert task_param.get_define() == expect def test_task_get_define(): """Test Task object function get_define.""" code = 123 version = 1 name = "test_task_get_define" task_type = "test_task_get_define_type" expect = { "code": code, "name": name, "version": version, "description": None, "delayTime": 0, "taskType": task_type, "taskParams": { "resourceList": [], "localParams": [], "dependence": {}, "conditionResult": {"successNode": [""], "failedNode": [""]}, "waitStartTimeout": {}, }, "flag": "YES", "taskPriority": "MEDIUM", "workerGroup": "default", "failRetryTimes": 0, "failRetryInterval": 1, "timeoutFlag": "CLOSE", "timeoutNotifyStrategy": None, "timeout": 0, } with patch( "pydolphinscheduler.core.task.Task.gen_code_and_version", return_value=(code, version), ): task = Task(name=name, task_type=task_type) assert task.get_define() == expect @pytest.mark.parametrize("shift", ["<<", ">>"]) def test_two_tasks_shift(shift: str): """Test bit operator between tasks. Here we test both `>>` and `<<` bit operator. """ upstream = testTask(name="upstream", task_type=shift) downstream = testTask(name="downstream", task_type=shift) if shift == "<<": downstream << upstream elif shift == ">>": upstream >> downstream else: assert False, f"Unexpect bit operator type {shift}." 
assert ( 1 == len(upstream._downstream_task_codes) and downstream.code in upstream._downstream_task_codes ), "Task downstream attributes error: downstream codes size or specific code check failed." assert ( 1 == len(downstream._upstream_task_codes) and upstream.code in downstream._upstream_task_codes ), "Task upstream attributes error: upstream codes size or specific code check failed." @pytest.mark.parametrize( "dep_expr, flag", [ ("task << tasks", "upstream"), ("tasks << task", "downstream"), ("task >> tasks", "downstream"), ("tasks >> task", "upstream"), ], ) def test_tasks_list_shift(dep_expr: str, flag: str): """Test bit operators between a task and a sequence of tasks. Here we test both `>>` and `<<` bit operators. """ reverse_dict = { "upstream": "downstream", "downstream": "upstream", } task_type = "dep_task_and_tasks" task = testTask(name="upstream", task_type=task_type) tasks = [ testTask(name="downstream1", task_type=task_type), testTask(name="downstream2", task_type=task_type), ] # Use the built-in function eval to simplify the test cases and reduce duplicate code eval(dep_expr) direction_attr = f"_{flag}_task_codes" reverse_direction_attr = f"_{reverse_dict[flag]}_task_codes" assert 2 == len(getattr(task, direction_attr)) assert all(t.code in getattr(task, direction_attr) for t in tasks) assert all([1 == len(getattr(t, reverse_direction_attr)) for t in tasks]) assert all([task.code in getattr(t, reverse_direction_attr) for t in tasks])
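For readers skimming the tests above, here is a hedged, self-contained sketch of the bookkeeping that `>>` and `<<` are expected to perform; `MiniTask` is a hypothetical stand-in for the real `Task`, which carries far more state and talks to the java gateway:

```
# Illustrative only: mirror the _upstream/_downstream code-set updates
# that the bit operators are tested for above.
class MiniTask:
    _next_code = 0

    def __init__(self, name: str):
        self.name = name
        MiniTask._next_code += 1
        self.code = MiniTask._next_code
        self._upstream_task_codes = set()
        self._downstream_task_codes = set()

    def __rshift__(self, other):
        tasks = other if isinstance(other, list) else [other]
        for task in tasks:
            self._downstream_task_codes.add(task.code)
            task._upstream_task_codes.add(self.code)
        return other


parent = MiniTask("parent")
children = [MiniTask("child_one"), MiniTask("child_two")]
parent >> children
assert all(parent.code in c._upstream_task_codes for c in children)
assert {c.code for c in children} <= parent._downstream_task_codes
```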
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,564
[Bug] [UI] Datasource's domain name too long to fill
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened ![](https://ftp.bmp.ovh/imgs/2021/12/98eb1e26b1bccdc1.png) ### What you expected to happen The domain name can be entered successfully. ### How to reproduce The domain name of AWS RDS and AWS aurora is very long. ### Anything else _No response_ ### Version 2.0.1 ### Are you willing to submit PR? - [x] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7564
https://github.com/apache/dolphinscheduler/pull/7625
b1afd99c9e4ba5f4fc3bb3ee3fbce5fe64cbc201
82db009781cbd9d2d8e1dbd8fc567f296de27496
"2021-12-23T02:25:21Z"
java
"2021-12-26T11:45:15Z"
dolphinscheduler-ui/src/js/conf/home/pages/datasource/pages/list/_source/createDataSource.vue
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ <template> <div class="datasource-popup-model"> <div class="content-p"> <div class="create-datasource-model"> <m-list-box-f> <template slot="name"><strong>*</strong>{{$t('Datasource')}}</template> <template slot="content" size="small"> <el-select style="width: 100%;" v-model="type" :disabled="this.item.id"> <el-option v-for="item in datasourceTypeList" :key="item.value" :value="item.value" :label="item.label"> </el-option> </el-select> </template> </m-list-box-f> <m-list-box-f> <template slot="name"><strong>*</strong>{{$t('Datasource Name')}}</template> <template slot="content"> <el-input type="input" v-model="name" maxlength="60" size="small" :placeholder="$t('Please enter datasource name')"> </el-input> </template> </m-list-box-f> <m-list-box-f> <template slot="name">{{$t('Description')}}</template> <template slot="content"> <el-input type="textarea" v-model="note" size="small" :placeholder="$t('Please enter description')"> </el-input> </template> </m-list-box-f> <m-list-box-f> <template slot="name"><strong>*</strong>{{$t('IP')}}</template> <template slot="content"> <el-input type="input" v-model="host" maxlength="60" size="small" :placeholder="$t('Please enter IP')"> </el-input> </template> </m-list-box-f> <m-list-box-f> <template slot="name"><strong>*</strong>{{$t('Port')}}</template> <template slot="content"> <el-input type="input" v-model="port" size="small" :placeholder="$t('Please enter port')"> </el-input> </template> </m-list-box-f> <m-list-box-f :class="{hidden:showPrincipal}"> <template slot="name"><strong>*</strong>Principal</template> <template slot="content"> <el-input type="input" v-model="principal" size="small" :placeholder="$t('Please enter Principal')"> </el-input> </template> </m-list-box-f> <m-list-box-f :class="{hidden:showPrincipal}"> <template slot="name">krb5.conf</template> <template slot="content"> <el-input type="input" v-model="javaSecurityKrb5Conf" size="small" :placeholder="$t('Please enter the kerberos authentication parameter java.security.krb5.conf')"> </el-input> </template> </m-list-box-f> <m-list-box-f :class="{hidden:showPrincipal}"> <template slot="name">keytab.username</template> <template slot="content"> <el-input type="input" v-model="loginUserKeytabUsername" size="small" :placeholder="$t('Please enter the kerberos authentication parameter login.user.keytab.username')"> </el-input> </template> </m-list-box-f> <m-list-box-f :class="{hidden:showPrincipal}"> <template slot="name">keytab.path</template> <template slot="content"> <el-input type="input" v-model="loginUserKeytabPath" size="small" :placeholder="$t('Please enter the kerberos authentication parameter login.user.keytab.path')"> </el-input> </template> </m-list-box-f> <m-list-box-f> <template 
slot="name"><strong>*</strong>{{$t('User Name')}}</template> <template slot="content"> <el-input type="input" v-model="userName" maxlength="60" size="small" :placeholder="$t('Please enter user name')"> </el-input> </template> </m-list-box-f> <m-list-box-f> <template slot="name">{{$t('Password')}}</template> <template slot="content"> <el-input type="password" v-model="password" size="small" :placeholder="$t('Please enter your password')"> </el-input> </template> </m-list-box-f> <m-list-box-f> <template slot="name"><strong :class="{hidden:showDatabase}">*</strong>{{$t('Database Name')}}</template> <template slot="content"> <el-input type="input" v-model="database" maxlength="60" size="small" :placeholder="$t('Please enter database name')"> </el-input> </template> </m-list-box-f> <m-list-box-f v-if="showConnectType"> <template slot="name"><strong>*</strong>{{$t('Oracle Connect Type')}}</template> <template slot="content"> <el-radio-group v-model="connectType" size="small" style="vertical-align: sub;"> <el-radio :label="'ORACLE_SERVICE_NAME'">{{$t('Oracle Service Name')}}</el-radio> <el-radio :label="'ORACLE_SID'">{{$t('Oracle SID')}}</el-radio> </el-radio-group> </template> </m-list-box-f> <m-list-box-f> <template slot="name">{{$t('jdbc connect parameters')}}</template> <template slot="content"> <el-input type="textarea" v-model="other" :autosize="{minRows:2}" size="small" :placeholder="_rtOtherPlaceholder()"> </el-input> </template> </m-list-box-f> </div> </div> <div class="bottom-p"> <el-button type="text" ize="mini" @click="_close()"> {{$t('Cancel')}} </el-button> <el-button type="success" size="mini" round @click="_testConnect()" :loading="testLoading">{{testLoading ? $t('Loading...') : $t('Test Connect')}}</el-button> <el-button type="primary" size="mini" round :loading="spinnerLoading" @click="_ok()">{{spinnerLoading ? $t('Loading...') :item ? 
`${$t('Edit')}` : `${$t('Submit')}`}} </el-button> </div> </div> </template> <script> import i18n from '@/module/i18n' import store from '@/conf/home/store' import { isJson } from '@/module/util/util' import mListBoxF from '@/module/components/listBoxF/listBoxF' export default { name: 'create-datasource', data () { return { store, // btn loading spinnerLoading: false, // Data source type type: 'MYSQL', // name name: '', // description note: '', // host host: '', // port port: '', // data storage name database: '', // principal principal: '', // java.security.krb5.conf javaSecurityKrb5Conf: '', // login.user.keytab.username loginUserKeytabUsername: '', // login.user.keytab.path loginUserKeytabPath: '', // database username userName: '', // Database password password: '', // Database connect type connectType: '', // Jdbc connection parameter other: '', // btn test loading testLoading: false, showPrincipal: true, showDatabase: false, showConnectType: false, isShowPrincipal: true, prePortMapper: {}, datasourceTypeList: [ { value: 'MYSQL', label: 'MYSQL' }, { value: 'POSTGRESQL', label: 'POSTGRESQL' }, { value: 'HIVE', label: 'HIVE/IMPALA' }, { value: 'SPARK', label: 'SPARK' }, { value: 'CLICKHOUSE', label: 'CLICKHOUSE' }, { value: 'ORACLE', label: 'ORACLE' }, { value: 'SQLSERVER', label: 'SQLSERVER' }, { value: 'DB2', label: 'DB2' }, { value: 'PRESTO', label: 'PRESTO' } ] } }, props: { item: Object }, methods: { _rtOtherPlaceholder () { return `${i18n.$t('Please enter format')} {"key1":"value1","key2":"value2"...} ${i18n.$t('connection parameter')}` }, /** * submit */ _ok () { if (this._verification()) { this._verifName().then(res => { this._submit() }) } }, /** * close */ _close () { this.$emit('close') }, /** * return param */ _rtParam () { return { type: this.type, name: this.name, note: this.note, host: this.host, port: this.port, database: this.database, principal: this.principal, javaSecurityKrb5Conf: this.javaSecurityKrb5Conf, loginUserKeytabUsername: this.loginUserKeytabUsername, loginUserKeytabPath: this.loginUserKeytabPath, userName: this.userName, password: this.password, connectType: this.connectType, other: this.other === '' ? 
null : JSON.parse(this.other) } }, /** * test connect */ _testConnect () { if (this._verification()) { this.testLoading = true this.store.dispatch('datasource/connectDatasources', this._rtParam()).then(res => { setTimeout(() => { this.$message.success(res.msg) this.testLoading = false }, 800) }).catch(e => { this.$message.error(e.msg || '') this.testLoading = false }) } }, /** * Verify that the data source name exists */ _verifName () { return new Promise((resolve, reject) => { if (this.name === this.item.name) { resolve() return } this.store.dispatch('datasource/verifyName', { name: this.name }).then(res => { resolve() }).catch(e => { this.$message.error(e.msg || '') reject(e) }) }) }, /** * verification */ _verification () { if (!this.name) { this.$message.warning(`${i18n.$t('Please enter resource name')}`) return false } if (!this.host) { this.$message.warning(`${i18n.$t('Please enter IP/hostname')}`) return false } if (!this.port) { this.$message.warning(`${i18n.$t('Please enter port')}`) return false } if (!this.userName) { this.$message.warning(`${i18n.$t('Please enter user name')}`) return false } if (!this.database && this.showDatabase === false) { this.$message.warning(`${i18n.$t('Please enter database name')}`) return false } if (this.other) { if (!isJson(this.other)) { this.$message.warning(`${i18n.$t('jdbc connection parameters is not a correct JSON format')}`) return false } } return true }, /** * submit => add/update */ _submit () { this.spinnerLoading = true let param = this._rtParam() // edit if (this.item) { param.id = this.item.id } this.store.dispatch(`datasource/${this.item ? 'updateDatasource' : 'createDatasources'}`, param).then(res => { this.$message.success(res.msg) this.spinnerLoading = false this.$emit('onUpdate') }).catch(e => { this.$message.error(e.msg || '') this.spinnerLoading = false }) }, /** * Get modified data */ _getEditDatasource () { this.store.dispatch('datasource/getEditDatasource', { id: this.item.id }).then(res => { this.type = res.type this.name = res.name this.note = res.note this.host = res.host // When in Editpage, Prevent default value overwrite backfill value setTimeout(() => { this.port = res.port }, 0) this.principal = res.principal this.javaSecurityKrb5Conf = res.javaSecurityKrb5Conf this.loginUserKeytabUsername = res.loginUserKeytabUsername this.loginUserKeytabPath = res.loginUserKeytabPath this.database = res.database this.userName = res.userName this.password = res.password this.connectType = res.connectType this.other = res.other === null ? '' : JSON.stringify(res.other) }).catch(e => { this.$message.error(e.msg || '') }) }, /** * Set default port for each type. 
*/ _setDefaultValues (value) { // Default type is MYSQL let type = this.type || 'MYSQL' let defaultPort = this._getDefaultPort(type) // Backfill the previous input from memcache let mapperPort = this.prePortMapper[type] this.port = mapperPort || defaultPort }, /** * Get default port by type */ _getDefaultPort (type) { let defaultPort = '' switch (type) { case 'MYSQL': defaultPort = '3306' break case 'POSTGRESQL': defaultPort = '5432' break case 'HIVE': defaultPort = '10000' break case 'SPARK': defaultPort = '10015' break case 'CLICKHOUSE': defaultPort = '8123' break case 'ORACLE': defaultPort = '1521' break case 'SQLSERVER': defaultPort = '1433' break case 'DB2': defaultPort = '50000' break case 'PRESTO': defaultPort = '8080' break default: break } return defaultPort } }, created () { // Backfill if (this.item.id) { this._getEditDatasource() } this._setDefaultValues() }, watch: { type (value) { if (value === 'POSTGRESQL') { this.showDatabase = true } else { this.showDatabase = false } if (value === 'ORACLE' && !this.item.id) { this.showConnectType = true this.connectType = 'ORACLE_SERVICE_NAME' } else if (value === 'ORACLE' && this.item.id) { this.showConnectType = true } else { this.showConnectType = false } // Set default port for each type datasource this._setDefaultValues(value) return new Promise((resolve, reject) => { this.store.dispatch('datasource/getKerberosStartupState').then(res => { this.isShowPrincipal = res if ((value === 'HIVE' || value === 'SPARK') && this.isShowPrincipal === true) { this.showPrincipal = false } else { this.showPrincipal = true } }).catch(e => { this.$message.error(e.msg || '') reject(e) }) }) }, /** * Cache the previous input port for each type datasource * @param value */ port (value) { this.prePortMapper[this.type] = value } }, mounted () { }, components: { mListBoxF } } </script> <style lang="scss" rel="stylesheet/scss"> .datasource-popup-model { background: #fff; border-radius: 3px; .top-p { height: 70px; line-height: 70px; border-radius: 3px 3px 0 0; padding: 0 20px; >span { font-size: 20px; } } .bottom-p { text-align: right; height: 72px; line-height: 72px; border-radius: 0 0 3px 3px; padding: 0 20px; } .content-p { min-width: 850px; min-height: 100px; .list-box-f { .text { width: 166px; } .cont { width: calc(100% - 186px); } } } .radio-label-last { margin-left: 0px !important; } } </style>
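The `_getDefaultPort` switch in the component above maps each datasource type to its default port. As a quick reference, here is the same mapping restated as a Python dict — a sketch for readability, with the port numbers copied from the component, not a DolphinScheduler API:

```
# Default ports copied from the _getDefaultPort switch above.
DEFAULT_PORTS = {
    "MYSQL": "3306",
    "POSTGRESQL": "5432",
    "HIVE": "10000",
    "SPARK": "10015",
    "CLICKHOUSE": "8123",
    "ORACLE": "1521",
    "SQLSERVER": "1433",
    "DB2": "50000",
    "PRESTO": "8080",
}


def default_port(datasource_type: str) -> str:
    """Return the default port, or '' for unknown types, as the switch does."""
    return DEFAULT_PORTS.get(datasource_type, "")


assert default_port("MYSQL") == "3306"
assert default_port("UNKNOWN") == ""
```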
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,629
[Bug] [Script] The dolphinscheduler-daemon.sh status result is wrong.
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened The dolphinscheduler-daemon.sh status result is wrong. ### What you expected to happen The dolphinscheduler-daemon.sh status result is correct. ### How to reproduce Create multiple dolphinscheduler clusters on the same server, each in a different directory. ### Anything else _No response_ ### Version 2.0.1 ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7629
https://github.com/apache/dolphinscheduler/pull/7630
82db009781cbd9d2d8e1dbd8fc567f296de27496
2d73083e87e7ab2e117e9512437e3ab4cd2be6a4
"2021-12-26T09:58:38Z"
java
"2021-12-26T11:48:59Z"
script/dolphinscheduler-daemon.sh
#!/bin/sh # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # usage="Usage: dolphinscheduler-daemon.sh (start|stop|status) <api-server|master-server|worker-server|alert-server|standalone-server> " # if no args specified, show usage if [ $# -le 1 ]; then echo $usage exit 1 fi startStop=$1 shift command=$1 shift echo "Begin $startStop $command......" BIN_DIR=`dirname $0` BIN_DIR=`cd "$BIN_DIR"; pwd` DOLPHINSCHEDULER_HOME=`cd "$BIN_DIR/.."; pwd` source "${DOLPHINSCHEDULER_HOME}/bin/env/dolphinscheduler_env.sh" export HOSTNAME=`hostname` export DOLPHINSCHEDULER_LOG_DIR=$DOLPHINSCHEDULER_HOME/$command/logs export STOP_TIMEOUT=5 if [ ! -d "$DOLPHINSCHEDULER_LOG_DIR" ]; then mkdir $DOLPHINSCHEDULER_LOG_DIR fi log=$DOLPHINSCHEDULER_HOME/$command-$HOSTNAME.out pid=$DOLPHINSCHEDULER_HOME/$command/pid cd $DOLPHINSCHEDULER_HOME/$command if [ "$command" = "api-server" ]; then : elif [ "$command" = "master-server" ]; then : elif [ "$command" = "worker-server" ]; then : elif [ "$command" = "alert-server" ]; then : elif [ "$command" = "logger-server" ]; then : elif [ "$command" = "standalone-server" ]; then : else echo "Error: No command named '$command' was found." exit 1 fi case $startStop in (start) echo starting $command, logging to $DOLPHINSCHEDULER_LOG_DIR nohup "$DOLPHINSCHEDULER_HOME/$command/bin/start.sh" > $log 2>&1 & echo $! > $pid ;; (stop) if [ -f $pid ]; then TARGET_PID=`cat $pid` if kill -0 $TARGET_PID > /dev/null 2>&1; then echo stopping $command pkill -P $TARGET_PID sleep $STOP_TIMEOUT if kill -0 $TARGET_PID > /dev/null 2>&1; then echo "$command did not stop gracefully after $STOP_TIMEOUT seconds: killing with kill -9" pkill -P -9 $TARGET_PID fi else echo no $command to stop fi rm -f $pid else echo no $command to stop fi ;; (status) # more details about the status can be added later serverCount=`ps -ef |grep "$CLASS" |grep -v "grep" |wc -l` state="STOP" # font color - red state="[ \033[1;31m $state \033[0m ]" if [[ $serverCount -gt 0 ]];then state="RUNNING" # font color - green state="[ \033[1;32m $state \033[0m ]" fi echo -e "$command $state" ;; (*) echo $usage exit 1 ;; esac echo "End $startStop $command."
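Note that the `status` branch in the script above greps `ps -ef` for `$CLASS`, a variable this script never sets, and such a grep would match processes from every deployment on the host anyway — consistent with the wrong-status report in this record. Below is a hedged Python sketch of a per-deployment check based on the pid file the script already writes; `server_state` is a hypothetical helper, not the merged fix:

```
import os


def server_state(pid_file: str) -> str:
    """Check only this deployment: read its pid file and probe the process
    with signal 0 instead of grepping ps output shared by all clusters."""
    try:
        with open(pid_file) as f:
            pid = int(f.read().strip())
    except (FileNotFoundError, ValueError):
        return "STOP"
    try:
        os.kill(pid, 0)  # signal 0 sends nothing; it only tests existence
    except ProcessLookupError:
        return "STOP"
    except PermissionError:
        pass  # process exists but belongs to another user
    return "RUNNING"


# e.g. server_state("/opt/dolphinscheduler/master-server/pid")
```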
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,605
[Feature][UI Next] Add card component.
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Add card component. ### Use case _No response_ ### Related issues [#7332](https://github.com/apache/dolphinscheduler/issues/7332) ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7605
https://github.com/apache/dolphinscheduler/pull/7635
2d73083e87e7ab2e117e9512437e3ab4cd2be6a4
36dd43737764a2e81875ca2f7b0831cc4b5709dc
"2021-12-24T02:51:25Z"
java
"2021-12-27T01:42:11Z"
dolphinscheduler-ui-next/src/components/card/index.tsx
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import { defineComponent } from 'vue' import { NCard } from 'naive-ui' const headerStyle = { borderBottom: '1px solid var(--border-color)', } const Card = defineComponent({ name: 'Card', props: { title: { type: String, required: true, }, }, render() { const { title, $slots } = this return ( <NCard title={title} size='small' headerStyle={headerStyle}> {$slots} </NCard> ) }, }) export default Card
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,605
[Feature][UI Next] Add card component.
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Add card component. ### Use case _No response_ ### Related issues [#7332](https://github.com/apache/dolphinscheduler/issues/7332) ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7605
https://github.com/apache/dolphinscheduler/pull/7635
2d73083e87e7ab2e117e9512437e3ab4cd2be6a4
36dd43737764a2e81875ca2f7b0831cc4b5709dc
"2021-12-24T02:51:25Z"
java
"2021-12-27T01:42:11Z"
dolphinscheduler-ui-next/src/components/card/types.ts
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ interface Props { title: string } export default Props
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,605
[Feature][UI Next] Add card component.
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Add card component. ### Use case _No response_ ### Related issues [#7332](https://github.com/apache/dolphinscheduler/issues/7332) ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7605
https://github.com/apache/dolphinscheduler/pull/7635
2d73083e87e7ab2e117e9512437e3ab4cd2be6a4
36dd43737764a2e81875ca2f7b0831cc4b5709dc
"2021-12-24T02:51:25Z"
java
"2021-12-27T01:42:11Z"
dolphinscheduler-ui-next/src/components/chart/index.ts
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import { getCurrentInstance, onMounted, onBeforeUnmount, watch, } from 'vue' import { useThemeStore } from '@/store/theme/theme' import { throttle } from 'echarts' import { useI18n } from 'vue-i18n' import type { Ref } from 'vue' import type { ECharts } from 'echarts' import type { ECBasicOption } from 'echarts/types/dist/shared' function initChart<Opt extends ECBasicOption>( domRef: Ref<HTMLDivElement | null>, option: Opt ): ECharts | null { let chart: ECharts | null = null const themeStore = useThemeStore() const { locale } = useI18n() const globalProperties = getCurrentInstance()?.appContext.config.globalProperties const init = () => { chart = globalProperties?.echarts.init( domRef.value, themeStore.darkTheme ? 'dark-bold' : 'macarons' ) chart && chart.setOption(option) } const resize = throttle(() => { chart && chart.resize() }, 20) watch( () => themeStore.darkTheme, () => { chart?.dispose() init() } ) watch( () => locale.value, () => { chart?.dispose() init() } ) onMounted(() => { init() addEventListener('resize', resize) }) onBeforeUnmount(() => { removeEventListener('resize', resize) }) return chart } export default initChart
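The `initChart` hook in the file above throttles the window resize handler to roughly one call per 20 ms using echarts' own `throttle`. The idea in a hedged Python sketch — a simple leading-edge throttle for illustration, not the echarts implementation:

```
import time


def throttle(wait: float):
    """Allow at most one call per `wait` seconds; calls that arrive
    sooner are silently dropped (leading-edge variant)."""
    def decorator(fn):
        last = 0.0

        def wrapped(*args, **kwargs):
            nonlocal last
            now = time.monotonic()
            if now - last >= wait:
                last = now
                return fn(*args, **kwargs)

        return wrapped
    return decorator


@throttle(0.02)  # mirrors the 20 ms used for chart.resize() above
def on_resize():
    print("resize chart")
```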
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,259
[Bug] [deploy] When starting and stopping all servers, the api service prompt information is inaccurate
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened When using stop-all.sh or start-all.sh to start and stop all services, the api-server prints a wrong prompt (it is labelled as a worker server). ![Snipaste_2021-12-08_10-58-56](https://user-images.githubusercontent.com/56599784/145140615-b9eb6b0d-6006-4255-8744-7484656e644c.png) The problem also exists in the dev branch. ### What you expected to happen When starting and stopping the api-server service, the scripts should print that the api-server is stopping/starting. ### How to reproduce Execute stop-all.sh or start-all.sh ### Anything else _No response_ ### Version 1.3.9 ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7259
https://github.com/apache/dolphinscheduler/pull/7634
36dd43737764a2e81875ca2f7b0831cc4b5709dc
5e9679f1b2c787739228b2f32f41a2ae451145c1
"2021-12-08T03:02:29Z"
java
"2021-12-27T03:05:26Z"
script/start-all.sh
#!/bin/sh # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # workDir=`dirname $0` workDir=`cd ${workDir};pwd` source ${workDir}/env/install_env.sh workersGroupMap=() workersGroup=(${workers//,/ }) for workerGroup in ${workersGroup[@]} do echo $workerGroup; worker=`echo $workerGroup|awk -F':' '{print $1}'` groupName=`echo $workerGroup|awk -F':' '{print $2}'` workersGroupMap+=([$worker]=$groupName) done mastersHost=(${masters//,/ }) for master in ${mastersHost[@]} do echo "$master master server is starting" ssh -p $sshPort $master "cd $installPath/; sh bin/dolphinscheduler-daemon.sh start master-server;" done for worker in ${!workersGroupMap[*]} do echo "$worker worker server is starting" ssh -p $sshPort $worker "cd $installPath/; sh bin/dolphinscheduler-daemon.sh start worker-server;" ssh -p $sshPort $worker "cd $installPath/; sh bin/dolphinscheduler-daemon.sh start logger-server;" done ssh -p $sshPort $alertServer "cd $installPath/; sh bin/dolphinscheduler-daemon.sh start alert-server;" apiServersHost=(${apiServers//,/ }) for apiServer in ${apiServersHost[@]} do echo "$apiServer worker server is starting" ssh -p $sshPort $apiServer "cd $installPath/; sh bin/dolphinscheduler-daemon.sh start api-server;" done # query server status echo "query server status" cd $installPath/; sh bin/status-all.sh
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,259
[Bug] [deploy] When starting and stopping all servers, the api service prompt information is inaccurate
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened When using stop-all.sh or start-all.sh to start and stop all services, the api-server prints a wrong prompt (it is labelled as a worker server). ![Snipaste_2021-12-08_10-58-56](https://user-images.githubusercontent.com/56599784/145140615-b9eb6b0d-6006-4255-8744-7484656e644c.png) The problem also exists in the dev branch. ### What you expected to happen When starting and stopping the api-server service, the scripts should print that the api-server is stopping/starting. ### How to reproduce Execute stop-all.sh or start-all.sh ### Anything else _No response_ ### Version 1.3.9 ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7259
https://github.com/apache/dolphinscheduler/pull/7634
36dd43737764a2e81875ca2f7b0831cc4b5709dc
5e9679f1b2c787739228b2f32f41a2ae451145c1
"2021-12-08T03:02:29Z"
java
"2021-12-27T03:05:26Z"
script/stop-all.sh
#!/bin/sh # # Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # workDir=`dirname $0` workDir=`cd ${workDir};pwd` source ${workDir}/env/install_env.sh declare -A workersGroupMap=() workersGroup=(${workers//,/ }) for workerGroup in ${workersGroup[@]} do echo $workerGroup; worker=`echo $workerGroup|awk -F':' '{print $1}'` groupName=`echo $workerGroup|awk -F':' '{print $2}'` workersGroupMap+=([$worker]=$groupName) done mastersHost=(${masters//,/ }) for master in ${mastersHost[@]} do echo "$master master server is stopping" ssh -p $sshPort $master "cd $installPath/; sh bin/dolphinscheduler-daemon.sh stop master-server;" done for worker in ${!workersGroupMap[*]} do echo "$worker worker server is stopping" ssh -p $sshPort $worker "cd $installPath/; sh bin/dolphinscheduler-daemon.sh stop worker-server;" ssh -p $sshPort $worker "cd $installPath/; sh bin/dolphinscheduler-daemon.sh stop logger-server;" done ssh -p $sshPort $alertServer "cd $installPath/; sh bin/dolphinscheduler-daemon.sh stop alert-server;" apiServersHost=(${apiServers//,/ }) for apiServer in ${apiServersHost[@]} do echo "$apiServer worker server is stopping" ssh -p $sshPort $apiServer "cd $installPath/; sh bin/dolphinscheduler-daemon.sh stop api-server;" done
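Two details stand out in the pair of scripts above: start-all.sh omits the `declare -A` that stop-all.sh uses for `workersGroupMap` (without it, bash treats the `host:group` keys as arithmetic expressions), and both scripts echo "worker server is starting/stopping" for the api servers — exactly the mislabelling this issue reports. A hedged Python sketch of the host-list parsing plus correctly labelled messages; `parse_workers` and `announce` are illustrative helpers, not the shipped fix:

```
# Illustrative only: parse the comma-separated lists the way the shell
# scripts do and print an accurately labelled message per service.
def parse_workers(workers: str) -> dict:
    """'host1:default,host2:gpu' -> {'host1': 'default', 'host2': 'gpu'}"""
    mapping = {}
    for item in workers.split(","):
        host, _, group = item.partition(":")
        mapping[host] = group
    return mapping


def announce(hosts: str, service: str, action: str) -> None:
    for host in hosts.split(","):
        print(f"{host} {service} is {action}")


assert parse_workers("ds1:default,ds2:gpu") == {"ds1": "default", "ds2": "gpu"}
announce("ds1,ds2", "api server", "starting")  # 'ds1 api server is starting', ...
```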
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,569
[Feature][Dao] Optimize Dependent node loading times
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Optimize Dependent node loading times ### Use case _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7569
https://github.com/apache/dolphinscheduler/pull/7626
2d2cc35ccae54f6e89532d92944b8df60721b92d
8b292132c83374b6364abf4d40505593760c27c2
"2021-12-23T05:47:35Z"
java
"2021-12-27T10:05:56Z"
dolphinscheduler-dao/src/main/resources/sql/dolphinscheduler_mysql.sql
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ SET FOREIGN_KEY_CHECKS=0; -- ---------------------------- -- Table structure for QRTZ_BLOB_TRIGGERS -- ---------------------------- DROP TABLE IF EXISTS `QRTZ_BLOB_TRIGGERS`; CREATE TABLE `QRTZ_BLOB_TRIGGERS` ( `SCHED_NAME` varchar(120) NOT NULL, `TRIGGER_NAME` varchar(200) NOT NULL, `TRIGGER_GROUP` varchar(200) NOT NULL, `BLOB_DATA` blob, PRIMARY KEY (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`), KEY `SCHED_NAME` (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`), CONSTRAINT `QRTZ_BLOB_TRIGGERS_ibfk_1` FOREIGN KEY (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) REFERENCES `QRTZ_TRIGGERS` (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; -- ---------------------------- -- Records of QRTZ_BLOB_TRIGGERS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_CALENDARS -- ---------------------------- DROP TABLE IF EXISTS `QRTZ_CALENDARS`; CREATE TABLE `QRTZ_CALENDARS` ( `SCHED_NAME` varchar(120) NOT NULL, `CALENDAR_NAME` varchar(200) NOT NULL, `CALENDAR` blob NOT NULL, PRIMARY KEY (`SCHED_NAME`,`CALENDAR_NAME`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; -- ---------------------------- -- Records of QRTZ_CALENDARS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_CRON_TRIGGERS -- ---------------------------- DROP TABLE IF EXISTS `QRTZ_CRON_TRIGGERS`; CREATE TABLE `QRTZ_CRON_TRIGGERS` ( `SCHED_NAME` varchar(120) NOT NULL, `TRIGGER_NAME` varchar(200) NOT NULL, `TRIGGER_GROUP` varchar(200) NOT NULL, `CRON_EXPRESSION` varchar(120) NOT NULL, `TIME_ZONE_ID` varchar(80) DEFAULT NULL, PRIMARY KEY (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`), CONSTRAINT `QRTZ_CRON_TRIGGERS_ibfk_1` FOREIGN KEY (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) REFERENCES `QRTZ_TRIGGERS` (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; -- ---------------------------- -- Records of QRTZ_CRON_TRIGGERS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_FIRED_TRIGGERS -- ---------------------------- DROP TABLE IF EXISTS `QRTZ_FIRED_TRIGGERS`; CREATE TABLE `QRTZ_FIRED_TRIGGERS` ( `SCHED_NAME` varchar(120) NOT NULL, `ENTRY_ID` varchar(200) NOT NULL, `TRIGGER_NAME` varchar(200) NOT NULL, `TRIGGER_GROUP` varchar(200) NOT NULL, `INSTANCE_NAME` varchar(200) NOT NULL, `FIRED_TIME` bigint(13) NOT NULL, `SCHED_TIME` bigint(13) NOT NULL, `PRIORITY` int(11) NOT NULL, `STATE` varchar(16) NOT NULL, `JOB_NAME` varchar(200) DEFAULT NULL, `JOB_GROUP` varchar(200) DEFAULT NULL, `IS_NONCONCURRENT` varchar(1) DEFAULT NULL, `REQUESTS_RECOVERY` varchar(1) DEFAULT NULL, PRIMARY KEY (`SCHED_NAME`,`ENTRY_ID`), KEY `IDX_QRTZ_FT_TRIG_INST_NAME` (`SCHED_NAME`,`INSTANCE_NAME`), KEY 
`IDX_QRTZ_FT_INST_JOB_REQ_RCVRY` (`SCHED_NAME`,`INSTANCE_NAME`,`REQUESTS_RECOVERY`), KEY `IDX_QRTZ_FT_J_G` (`SCHED_NAME`,`JOB_NAME`,`JOB_GROUP`), KEY `IDX_QRTZ_FT_JG` (`SCHED_NAME`,`JOB_GROUP`), KEY `IDX_QRTZ_FT_T_G` (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`), KEY `IDX_QRTZ_FT_TG` (`SCHED_NAME`,`TRIGGER_GROUP`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; -- ---------------------------- -- Records of QRTZ_FIRED_TRIGGERS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_JOB_DETAILS -- ---------------------------- DROP TABLE IF EXISTS `QRTZ_JOB_DETAILS`; CREATE TABLE `QRTZ_JOB_DETAILS` ( `SCHED_NAME` varchar(120) NOT NULL, `JOB_NAME` varchar(200) NOT NULL, `JOB_GROUP` varchar(200) NOT NULL, `DESCRIPTION` varchar(250) DEFAULT NULL, `JOB_CLASS_NAME` varchar(250) NOT NULL, `IS_DURABLE` varchar(1) NOT NULL, `IS_NONCONCURRENT` varchar(1) NOT NULL, `IS_UPDATE_DATA` varchar(1) NOT NULL, `REQUESTS_RECOVERY` varchar(1) NOT NULL, `JOB_DATA` blob, PRIMARY KEY (`SCHED_NAME`,`JOB_NAME`,`JOB_GROUP`), KEY `IDX_QRTZ_J_REQ_RECOVERY` (`SCHED_NAME`,`REQUESTS_RECOVERY`), KEY `IDX_QRTZ_J_GRP` (`SCHED_NAME`,`JOB_GROUP`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; -- ---------------------------- -- Records of QRTZ_JOB_DETAILS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_LOCKS -- ---------------------------- DROP TABLE IF EXISTS `QRTZ_LOCKS`; CREATE TABLE `QRTZ_LOCKS` ( `SCHED_NAME` varchar(120) NOT NULL, `LOCK_NAME` varchar(40) NOT NULL, PRIMARY KEY (`SCHED_NAME`,`LOCK_NAME`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; -- ---------------------------- -- Records of QRTZ_LOCKS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_PAUSED_TRIGGER_GRPS -- ---------------------------- DROP TABLE IF EXISTS `QRTZ_PAUSED_TRIGGER_GRPS`; CREATE TABLE `QRTZ_PAUSED_TRIGGER_GRPS` ( `SCHED_NAME` varchar(120) NOT NULL, `TRIGGER_GROUP` varchar(200) NOT NULL, PRIMARY KEY (`SCHED_NAME`,`TRIGGER_GROUP`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; -- ---------------------------- -- Records of QRTZ_PAUSED_TRIGGER_GRPS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_SCHEDULER_STATE -- ---------------------------- DROP TABLE IF EXISTS `QRTZ_SCHEDULER_STATE`; CREATE TABLE `QRTZ_SCHEDULER_STATE` ( `SCHED_NAME` varchar(120) NOT NULL, `INSTANCE_NAME` varchar(200) NOT NULL, `LAST_CHECKIN_TIME` bigint(13) NOT NULL, `CHECKIN_INTERVAL` bigint(13) NOT NULL, PRIMARY KEY (`SCHED_NAME`,`INSTANCE_NAME`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; -- ---------------------------- -- Records of QRTZ_SCHEDULER_STATE -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_SIMPLE_TRIGGERS -- ---------------------------- DROP TABLE IF EXISTS `QRTZ_SIMPLE_TRIGGERS`; CREATE TABLE `QRTZ_SIMPLE_TRIGGERS` ( `SCHED_NAME` varchar(120) NOT NULL, `TRIGGER_NAME` varchar(200) NOT NULL, `TRIGGER_GROUP` varchar(200) NOT NULL, `REPEAT_COUNT` bigint(7) NOT NULL, `REPEAT_INTERVAL` bigint(12) NOT NULL, `TIMES_TRIGGERED` bigint(10) NOT NULL, PRIMARY KEY (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`), CONSTRAINT `QRTZ_SIMPLE_TRIGGERS_ibfk_1` FOREIGN KEY (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) REFERENCES `QRTZ_TRIGGERS` (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; -- ---------------------------- -- Records of QRTZ_SIMPLE_TRIGGERS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_SIMPROP_TRIGGERS -- 
---------------------------- DROP TABLE IF EXISTS `QRTZ_SIMPROP_TRIGGERS`; CREATE TABLE `QRTZ_SIMPROP_TRIGGERS` ( `SCHED_NAME` varchar(120) NOT NULL, `TRIGGER_NAME` varchar(200) NOT NULL, `TRIGGER_GROUP` varchar(200) NOT NULL, `STR_PROP_1` varchar(512) DEFAULT NULL, `STR_PROP_2` varchar(512) DEFAULT NULL, `STR_PROP_3` varchar(512) DEFAULT NULL, `INT_PROP_1` int(11) DEFAULT NULL, `INT_PROP_2` int(11) DEFAULT NULL, `LONG_PROP_1` bigint(20) DEFAULT NULL, `LONG_PROP_2` bigint(20) DEFAULT NULL, `DEC_PROP_1` decimal(13,4) DEFAULT NULL, `DEC_PROP_2` decimal(13,4) DEFAULT NULL, `BOOL_PROP_1` varchar(1) DEFAULT NULL, `BOOL_PROP_2` varchar(1) DEFAULT NULL, PRIMARY KEY (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`), CONSTRAINT `QRTZ_SIMPROP_TRIGGERS_ibfk_1` FOREIGN KEY (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) REFERENCES `QRTZ_TRIGGERS` (`SCHED_NAME`, `TRIGGER_NAME`, `TRIGGER_GROUP`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; -- ---------------------------- -- Records of QRTZ_SIMPROP_TRIGGERS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_TRIGGERS -- ---------------------------- DROP TABLE IF EXISTS `QRTZ_TRIGGERS`; CREATE TABLE `QRTZ_TRIGGERS` ( `SCHED_NAME` varchar(120) NOT NULL, `TRIGGER_NAME` varchar(200) NOT NULL, `TRIGGER_GROUP` varchar(200) NOT NULL, `JOB_NAME` varchar(200) NOT NULL, `JOB_GROUP` varchar(200) NOT NULL, `DESCRIPTION` varchar(250) DEFAULT NULL, `NEXT_FIRE_TIME` bigint(13) DEFAULT NULL, `PREV_FIRE_TIME` bigint(13) DEFAULT NULL, `PRIORITY` int(11) DEFAULT NULL, `TRIGGER_STATE` varchar(16) NOT NULL, `TRIGGER_TYPE` varchar(8) NOT NULL, `START_TIME` bigint(13) NOT NULL, `END_TIME` bigint(13) DEFAULT NULL, `CALENDAR_NAME` varchar(200) DEFAULT NULL, `MISFIRE_INSTR` smallint(2) DEFAULT NULL, `JOB_DATA` blob, PRIMARY KEY (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`), KEY `IDX_QRTZ_T_J` (`SCHED_NAME`,`JOB_NAME`,`JOB_GROUP`), KEY `IDX_QRTZ_T_JG` (`SCHED_NAME`,`JOB_GROUP`), KEY `IDX_QRTZ_T_C` (`SCHED_NAME`,`CALENDAR_NAME`), KEY `IDX_QRTZ_T_G` (`SCHED_NAME`,`TRIGGER_GROUP`), KEY `IDX_QRTZ_T_STATE` (`SCHED_NAME`,`TRIGGER_STATE`), KEY `IDX_QRTZ_T_N_STATE` (`SCHED_NAME`,`TRIGGER_NAME`,`TRIGGER_GROUP`,`TRIGGER_STATE`), KEY `IDX_QRTZ_T_N_G_STATE` (`SCHED_NAME`,`TRIGGER_GROUP`,`TRIGGER_STATE`), KEY `IDX_QRTZ_T_NEXT_FIRE_TIME` (`SCHED_NAME`,`NEXT_FIRE_TIME`), KEY `IDX_QRTZ_T_NFT_ST` (`SCHED_NAME`,`TRIGGER_STATE`,`NEXT_FIRE_TIME`), KEY `IDX_QRTZ_T_NFT_MISFIRE` (`SCHED_NAME`,`MISFIRE_INSTR`,`NEXT_FIRE_TIME`), KEY `IDX_QRTZ_T_NFT_ST_MISFIRE` (`SCHED_NAME`,`MISFIRE_INSTR`,`NEXT_FIRE_TIME`,`TRIGGER_STATE`), KEY `IDX_QRTZ_T_NFT_ST_MISFIRE_GRP` (`SCHED_NAME`,`MISFIRE_INSTR`,`NEXT_FIRE_TIME`,`TRIGGER_GROUP`,`TRIGGER_STATE`), CONSTRAINT `QRTZ_TRIGGERS_ibfk_1` FOREIGN KEY (`SCHED_NAME`, `JOB_NAME`, `JOB_GROUP`) REFERENCES `QRTZ_JOB_DETAILS` (`SCHED_NAME`, `JOB_NAME`, `JOB_GROUP`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; -- ---------------------------- -- Records of QRTZ_TRIGGERS -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_access_token -- ---------------------------- DROP TABLE IF EXISTS `t_ds_access_token`; CREATE TABLE `t_ds_access_token` ( `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key', `user_id` int(11) DEFAULT NULL COMMENT 'user id', `token` varchar(64) DEFAULT NULL COMMENT 'token', `expire_time` datetime DEFAULT NULL COMMENT 'end time of token ', `create_time` datetime DEFAULT NULL COMMENT 'create time', `update_time` datetime DEFAULT NULL COMMENT 'update time', PRIMARY KEY (`id`) ) ENGINE=InnoDB 
AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_access_token
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_alert
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_alert`;
CREATE TABLE `t_ds_alert` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `title` varchar(64) DEFAULT NULL COMMENT 'title',
  `content` text COMMENT 'Message content (can be email, can be SMS. Mail is stored in JSON map, and SMS is string)',
  `alert_status` tinyint(4) DEFAULT '0' COMMENT '0:wait running,1:success,2:failed',
  `log` text COMMENT 'log',
  `alertgroup_id` int(11) DEFAULT NULL COMMENT 'alert group id',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_alert
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_alertgroup
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_alertgroup`;
CREATE TABLE `t_ds_alertgroup`(
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `alert_instance_ids` varchar(255) DEFAULT NULL COMMENT 'alert instance ids',
  `create_user_id` int(11) DEFAULT NULL COMMENT 'create user id',
  `group_name` varchar(255) DEFAULT NULL COMMENT 'group name',
  `description` varchar(255) DEFAULT NULL,
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `t_ds_alertgroup_name_un` (`group_name`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_alertgroup
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_command
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_command`;
CREATE TABLE `t_ds_command` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `command_type` tinyint(4) DEFAULT NULL COMMENT 'Command type: 0 start workflow, 1 start execution from current node, 2 resume fault-tolerant workflow, 3 resume pause process, 4 start execution from failed node, 5 complement, 6 schedule, 7 rerun, 8 pause, 9 stop, 10 resume waiting thread',
  `process_definition_code` bigint(20) NOT NULL COMMENT 'process definition code',
  `process_definition_version` int(11) DEFAULT '0' COMMENT 'process definition version',
  `process_instance_id` int(11) DEFAULT '0' COMMENT 'process instance id',
  `command_param` text COMMENT 'json command parameters',
  `task_depend_type` tinyint(4) DEFAULT NULL COMMENT 'Node dependency type: 0 current node, 1 forward, 2 backward',
  `failure_strategy` tinyint(4) DEFAULT '0' COMMENT 'Failed policy: 0 end, 1 continue',
  `warning_type` tinyint(4) DEFAULT '0' COMMENT 'Alarm type: 0 is not sent, 1 process is sent successfully, 2 process is sent failed, 3 process is sent successfully and all failures are sent',
  `warning_group_id` int(11) DEFAULT NULL COMMENT 'warning group',
  `schedule_time` datetime DEFAULT NULL COMMENT 'schedule time',
  `start_time` datetime DEFAULT NULL COMMENT 'start time',
  `executor_id` int(11) DEFAULT NULL COMMENT 'executor id',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  `process_instance_priority` int(11) DEFAULT NULL COMMENT 'process instance priority: 0 Highest,1 High,2 Medium,3 Low,4 Lowest',
  `worker_group` varchar(64) COMMENT 'worker group',
  `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code',
  `dry_run` tinyint(4) DEFAULT '0' COMMENT 'dry run flag:0 normal, 1 dry run',
  PRIMARY KEY (`id`),
  KEY `priority_id_index` (`process_instance_priority`,`id`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_command
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_datasource
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_datasource`;
CREATE TABLE `t_ds_datasource` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `name` varchar(64) NOT NULL COMMENT 'data source name',
  `note` varchar(255) DEFAULT NULL COMMENT 'description',
  `type` tinyint(4) NOT NULL COMMENT 'data source type: 0:mysql,1:postgresql,2:hive,3:spark',
  `user_id` int(11) NOT NULL COMMENT 'the creator id',
  `connection_params` text NOT NULL COMMENT 'json connection params',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `t_ds_datasource_name_un` (`name`, `type`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_datasource
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_error_command
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_error_command`;
CREATE TABLE `t_ds_error_command` (
  `id` int(11) NOT NULL COMMENT 'key',
  `command_type` tinyint(4) DEFAULT NULL COMMENT 'command type',
  `executor_id` int(11) DEFAULT NULL COMMENT 'executor id',
  `process_definition_code` bigint(20) NOT NULL COMMENT 'process definition code',
  `process_definition_version` int(11) DEFAULT '0' COMMENT 'process definition version',
  `process_instance_id` int(11) DEFAULT '0' COMMENT 'process instance id: 0',
  `command_param` text COMMENT 'json command parameters',
  `task_depend_type` tinyint(4) DEFAULT NULL COMMENT 'task depend type',
  `failure_strategy` tinyint(4) DEFAULT '0' COMMENT 'failure strategy',
  `warning_type` tinyint(4) DEFAULT '0' COMMENT 'warning type',
  `warning_group_id` int(11) DEFAULT NULL COMMENT 'warning group id',
  `schedule_time` datetime DEFAULT NULL COMMENT 'scheduler time',
  `start_time` datetime DEFAULT NULL COMMENT 'start time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  `process_instance_priority` int(11) DEFAULT NULL COMMENT 'process instance priority, 0 Highest,1 High,2 Medium,3 Low,4 Lowest',
  `worker_group` varchar(64) COMMENT 'worker group',
  `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code',
  `message` text COMMENT 'message',
  `dry_run` tinyint(4) DEFAULT '0' COMMENT 'dry run flag: 0 normal, 1 dry run',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ROW_FORMAT=DYNAMIC;

-- ----------------------------
-- Records of t_ds_error_command
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_process_definition
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_process_definition`;
CREATE TABLE `t_ds_process_definition` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id',
  `code` bigint(20) NOT NULL COMMENT 'encoding',
  `name` varchar(255) DEFAULT NULL COMMENT 'process definition name',
  `version` int(11) DEFAULT '0' COMMENT 'process definition version',
  `description` text COMMENT 'description',
  `project_code` bigint(20) NOT NULL COMMENT 'project code',
  `release_state` tinyint(4) DEFAULT NULL COMMENT 'process definition release state:0:offline,1:online',
  `user_id` int(11) DEFAULT NULL COMMENT 'process definition creator id',
  `global_params` text COMMENT 'global parameters',
  `flag` tinyint(4) DEFAULT NULL COMMENT '0 not available, 1 available',
  `locations` text COMMENT 'Node location information',
  `warning_group_id` int(11) DEFAULT NULL COMMENT 'alert group id',
  `timeout` int(11) DEFAULT '0' COMMENT 'time out, unit: minute',
  `tenant_id` int(11) NOT NULL DEFAULT '-1' COMMENT 'tenant id',
  `execution_type` tinyint(4) DEFAULT '0' COMMENT 'execution_type 0:parallel,1:serial wait,2:serial discard,3:serial priority',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`,`code`),
  UNIQUE KEY `process_unique` (`name`,`project_code`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_process_definition
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_process_definition_log
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_process_definition_log`;
CREATE TABLE `t_ds_process_definition_log` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id',
  `code` bigint(20) NOT NULL COMMENT 'encoding',
  `name` varchar(200) DEFAULT NULL COMMENT 'process definition name',
  `version` int(11) DEFAULT '0' COMMENT 'process definition version',
  `description` text COMMENT 'description',
  `project_code` bigint(20) NOT NULL COMMENT 'project code',
  `release_state` tinyint(4) DEFAULT NULL COMMENT 'process definition release state:0:offline,1:online',
  `user_id` int(11) DEFAULT NULL COMMENT 'process definition creator id',
  `global_params` text COMMENT 'global parameters',
  `flag` tinyint(4) DEFAULT NULL COMMENT '0 not available, 1 available',
  `locations` text COMMENT 'Node location information',
  `warning_group_id` int(11) DEFAULT NULL COMMENT 'alert group id',
  `timeout` int(11) DEFAULT '0' COMMENT 'time out,unit: minute',
  `tenant_id` int(11) NOT NULL DEFAULT '-1' COMMENT 'tenant id',
  `execution_type` tinyint(4) DEFAULT '0' COMMENT 'execution_type 0:parallel,1:serial wait,2:serial discard,3:serial priority',
  `operator` int(11) DEFAULT NULL COMMENT 'operator user id',
  `operate_time` datetime DEFAULT NULL COMMENT 'operate time',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_task_definition
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_task_definition`;
CREATE TABLE `t_ds_task_definition` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id',
  `code` bigint(20) NOT NULL COMMENT 'encoding',
  `name` varchar(200) DEFAULT NULL COMMENT 'task definition name',
  `version` int(11) DEFAULT '0' COMMENT 'task definition version',
  `description` text COMMENT 'description',
  `project_code` bigint(20) NOT NULL COMMENT 'project code',
  `user_id` int(11) DEFAULT NULL COMMENT 'task definition creator id',
  `task_type` varchar(50) NOT NULL COMMENT 'task type',
  `task_params` longtext COMMENT 'job custom parameters',
  `flag` tinyint(2) DEFAULT NULL COMMENT '0 not available, 1 available',
  `task_priority` tinyint(4) DEFAULT NULL COMMENT 'job priority',
  `worker_group` varchar(200) DEFAULT NULL COMMENT 'worker grouping',
  `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code',
  `fail_retry_times` int(11) DEFAULT NULL COMMENT 'number of failed retries',
  `fail_retry_interval` int(11) DEFAULT NULL COMMENT 'failed retry interval',
  `timeout_flag` tinyint(2) DEFAULT '0' COMMENT 'timeout flag:0 close, 1 open',
  `timeout_notify_strategy` tinyint(4) DEFAULT NULL COMMENT 'timeout notification policy: 0 warning, 1 fail',
  `timeout` int(11) DEFAULT '0' COMMENT 'timeout length,unit: minute',
  `delay_time` int(11) DEFAULT '0' COMMENT 'delay execution time,unit: minute',
  `resource_ids` text COMMENT 'resource id, separated by comma',
  `task_group_id` int(11) DEFAULT NULL COMMENT 'task group id',
  `task_group_priority` tinyint(4) DEFAULT 1 COMMENT 'task group priority',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`,`code`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_task_definition_log
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_task_definition_log`;
CREATE TABLE `t_ds_task_definition_log` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id',
  `code` bigint(20) NOT NULL COMMENT 'encoding',
  `name` varchar(200) DEFAULT NULL COMMENT 'task definition name',
  `version` int(11) DEFAULT '0' COMMENT 'task definition version',
  `description` text COMMENT 'description',
  `project_code` bigint(20) NOT NULL COMMENT 'project code',
  `user_id` int(11) DEFAULT NULL COMMENT 'task definition creator id',
  `task_type` varchar(50) NOT NULL COMMENT 'task type',
  `task_params` longtext COMMENT 'job custom parameters',
  `flag` tinyint(2) DEFAULT NULL COMMENT '0 not available, 1 available',
  `task_priority` tinyint(4) DEFAULT NULL COMMENT 'job priority',
  `worker_group` varchar(200) DEFAULT NULL COMMENT 'worker grouping',
  `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code',
  `fail_retry_times` int(11) DEFAULT NULL COMMENT 'number of failed retries',
  `fail_retry_interval` int(11) DEFAULT NULL COMMENT 'failed retry interval',
  `timeout_flag` tinyint(2) DEFAULT '0' COMMENT 'timeout flag:0 close, 1 open',
  `timeout_notify_strategy` tinyint(4) DEFAULT NULL COMMENT 'timeout notification policy: 0 warning, 1 fail',
  `timeout` int(11) DEFAULT '0' COMMENT 'timeout length,unit: minute',
  `delay_time` int(11) DEFAULT '0' COMMENT 'delay execution time,unit: minute',
  `resource_ids` text DEFAULT NULL COMMENT 'resource id, separated by comma',
  `operator` int(11) DEFAULT NULL COMMENT 'operator user id',
  `task_group_id` int(11) DEFAULT NULL COMMENT 'task group id',
  `task_group_priority` tinyint(4) DEFAULT 1 COMMENT 'task group priority',
  `operate_time` datetime DEFAULT NULL COMMENT 'operate time',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_process_task_relation
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_process_task_relation`;
CREATE TABLE `t_ds_process_task_relation` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id',
  `name` varchar(200) DEFAULT NULL COMMENT 'relation name',
  `project_code` bigint(20) NOT NULL COMMENT 'project code',
  `process_definition_code` bigint(20) NOT NULL COMMENT 'process code',
  `process_definition_version` int(11) NOT NULL COMMENT 'process version',
  `pre_task_code` bigint(20) NOT NULL COMMENT 'pre task code',
  `pre_task_version` int(11) NOT NULL COMMENT 'pre task version',
  `post_task_code` bigint(20) NOT NULL COMMENT 'post task code',
  `post_task_version` int(11) NOT NULL COMMENT 'post task version',
  `condition_type` tinyint(2) DEFAULT NULL COMMENT 'condition type: 0 none, 1 judge, 2 delay',
  `condition_params` text COMMENT 'condition params(json)',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_process_task_relation_log
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_process_task_relation_log`;
CREATE TABLE `t_ds_process_task_relation_log` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id',
  `name` varchar(200) DEFAULT NULL COMMENT 'relation name',
  `project_code` bigint(20) NOT NULL COMMENT 'project code',
  `process_definition_code` bigint(20) NOT NULL COMMENT 'process code',
  `process_definition_version` int(11) NOT NULL COMMENT 'process version',
  `pre_task_code` bigint(20) NOT NULL COMMENT 'pre task code',
  `pre_task_version` int(11) NOT NULL COMMENT 'pre task version',
  `post_task_code` bigint(20) NOT NULL COMMENT 'post task code',
  `post_task_version` int(11) NOT NULL COMMENT 'post task version',
  `condition_type` tinyint(2) DEFAULT NULL COMMENT 'condition type: 0 none, 1 judge, 2 delay',
  `condition_params` text COMMENT 'condition params(json)',
  `operator` int(11) DEFAULT NULL COMMENT 'operator user id',
  `operate_time` datetime DEFAULT NULL COMMENT 'operate time',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_process_instance
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_process_instance`;
CREATE TABLE `t_ds_process_instance` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `name` varchar(255) DEFAULT NULL COMMENT 'process instance name',
  `process_definition_code` bigint(20) NOT NULL COMMENT 'process definition code',
  `process_definition_version` int(11) DEFAULT '0' COMMENT 'process definition version',
  `state` tinyint(4) DEFAULT NULL COMMENT 'process instance Status: 0 commit succeeded, 1 running, 2 prepare to pause, 3 pause, 4 prepare to stop, 5 stop, 6 fail, 7 succeed, 8 need fault tolerance, 9 kill, 10 wait for thread, 11 wait for dependency to complete',
  `recovery` tinyint(4) DEFAULT NULL COMMENT 'process instance failover flag:0:normal,1:failover instance',
  `start_time` datetime DEFAULT NULL COMMENT 'process instance start time',
  `end_time` datetime DEFAULT NULL COMMENT 'process instance end time',
  `run_times` int(11) DEFAULT NULL COMMENT 'process instance run times',
  `host` varchar(135) DEFAULT NULL COMMENT 'process instance host',
  `command_type` tinyint(4) DEFAULT NULL COMMENT 'command type',
  `command_param` text COMMENT 'json command parameters',
  `task_depend_type` tinyint(4) DEFAULT NULL COMMENT 'task depend type. 0: only current node,1:before the node,2:later nodes',
  `max_try_times` tinyint(4) DEFAULT '0' COMMENT 'max try times',
  `failure_strategy` tinyint(4) DEFAULT '0' COMMENT 'failure strategy. 0:end the process when node failed,1:continue running the other nodes when node failed',
  `warning_type` tinyint(4) DEFAULT '0' COMMENT 'warning type. 0:no warning,1:warning if process success,2:warning if process failed,3:warning if success',
  `warning_group_id` int(11) DEFAULT NULL COMMENT 'warning group id',
  `schedule_time` datetime DEFAULT NULL COMMENT 'schedule time',
  `command_start_time` datetime DEFAULT NULL COMMENT 'command start time',
  `global_params` text COMMENT 'global parameters',
  `flag` tinyint(4) DEFAULT '1' COMMENT 'flag',
  `update_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `is_sub_process` int(11) DEFAULT '0' COMMENT 'flag, whether the process is sub process',
  `executor_id` int(11) NOT NULL COMMENT 'executor id',
  `history_cmd` text COMMENT 'history commands of process instance operation',
  `process_instance_priority` int(11) DEFAULT NULL COMMENT 'process instance priority. 0 Highest,1 High,2 Medium,3 Low,4 Lowest',
  `worker_group` varchar(64) DEFAULT NULL COMMENT 'worker group id',
  `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code',
  `timeout` int(11) DEFAULT '0' COMMENT 'time out',
  `tenant_id` int(11) NOT NULL DEFAULT '-1' COMMENT 'tenant id',
  `var_pool` longtext COMMENT 'var_pool',
  `dry_run` tinyint(4) DEFAULT '0' COMMENT 'dry run flag:0 normal, 1 dry run',
  `next_process_instance_id` int(11) DEFAULT '0' COMMENT 'serial queue next processInstanceId',
  PRIMARY KEY (`id`),
  KEY `process_instance_index` (`process_definition_code`,`id`) USING BTREE,
  KEY `start_time_index` (`start_time`,`end_time`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_process_instance
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_project
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_project`;
CREATE TABLE `t_ds_project` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `name` varchar(100) DEFAULT NULL COMMENT 'project name',
  `code` bigint(20) NOT NULL COMMENT 'encoding',
  `description` varchar(200) DEFAULT NULL,
  `user_id` int(11) DEFAULT NULL COMMENT 'creator id',
  `flag` tinyint(4) DEFAULT '1' COMMENT '0 not available, 1 available',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`),
  KEY `user_id_index` (`user_id`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_project
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_queue
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_queue`;
CREATE TABLE `t_ds_queue` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `queue_name` varchar(64) DEFAULT NULL COMMENT 'queue name',
  `queue` varchar(64) DEFAULT NULL COMMENT 'yarn queue name',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_queue
-- ----------------------------
INSERT INTO `t_ds_queue` VALUES ('1', 'default', 'default', null, null);

-- ----------------------------
-- Table structure for t_ds_relation_datasource_user
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_relation_datasource_user`;
CREATE TABLE `t_ds_relation_datasource_user` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `user_id` int(11) NOT NULL COMMENT 'user id',
  `datasource_id` int(11) DEFAULT NULL COMMENT 'data source id',
  `perm` int(11) DEFAULT '1' COMMENT 'limits of authority',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_relation_datasource_user
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_relation_process_instance
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_relation_process_instance`;
CREATE TABLE `t_ds_relation_process_instance` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `parent_process_instance_id` int(11) DEFAULT NULL COMMENT 'parent process instance id',
  `parent_task_instance_id` int(11) DEFAULT NULL COMMENT 'parent task instance id',
  `process_instance_id` int(11) DEFAULT NULL COMMENT 'child process instance id',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_relation_process_instance
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_relation_project_user
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_relation_project_user`;
CREATE TABLE `t_ds_relation_project_user` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `user_id` int(11) NOT NULL COMMENT 'user id',
  `project_id` int(11) DEFAULT NULL COMMENT 'project id',
  `perm` int(11) DEFAULT '1' COMMENT 'limits of authority',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`),
  KEY `user_id_index` (`user_id`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_relation_project_user
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_relation_resources_user
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_relation_resources_user`;
CREATE TABLE `t_ds_relation_resources_user` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `user_id` int(11) NOT NULL COMMENT 'user id',
  `resources_id` int(11) DEFAULT NULL COMMENT 'resource id',
  `perm` int(11) DEFAULT '1' COMMENT 'limits of authority',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_relation_resources_user
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_relation_udfs_user
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_relation_udfs_user`;
CREATE TABLE `t_ds_relation_udfs_user` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `user_id` int(11) NOT NULL COMMENT 'user id',
  `udf_id` int(11) DEFAULT NULL COMMENT 'udf id',
  `perm` int(11) DEFAULT '1' COMMENT 'limits of authority',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_resources
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_resources`;
CREATE TABLE `t_ds_resources` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `alias` varchar(64) DEFAULT NULL COMMENT 'alias',
  `file_name` varchar(64) DEFAULT NULL COMMENT 'file name',
  `description` varchar(255) DEFAULT NULL,
  `user_id` int(11) DEFAULT NULL COMMENT 'user id',
  `type` tinyint(4) DEFAULT NULL COMMENT 'resource type,0:FILE,1:UDF',
  `size` bigint(20) DEFAULT NULL COMMENT 'resource size',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  `pid` int(11) DEFAULT NULL,
  `full_name` varchar(64) DEFAULT NULL,
  `is_directory` tinyint(4) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `t_ds_resources_un` (`full_name`,`type`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_resources
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_schedules
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_schedules`;
CREATE TABLE `t_ds_schedules` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `process_definition_code` bigint(20) NOT NULL COMMENT 'process definition code',
  `start_time` datetime NOT NULL COMMENT 'start time',
  `end_time` datetime NOT NULL COMMENT 'end time',
  `timezone_id` varchar(40) DEFAULT NULL COMMENT 'schedule timezone id',
  `crontab` varchar(255) NOT NULL COMMENT 'crontab description',
  `failure_strategy` tinyint(4) NOT NULL COMMENT 'failure strategy. 0:end,1:continue',
  `user_id` int(11) NOT NULL COMMENT 'user id',
  `release_state` tinyint(4) NOT NULL COMMENT 'release state. 0:offline,1:online',
  `warning_type` tinyint(4) NOT NULL COMMENT 'Alarm type: 0 is not sent, 1 process is sent successfully, 2 process is sent failed, 3 process is sent successfully and all failures are sent',
  `warning_group_id` int(11) DEFAULT NULL COMMENT 'alert group id',
  `process_instance_priority` int(11) DEFAULT NULL COMMENT 'process instance priority:0 Highest,1 High,2 Medium,3 Low,4 Lowest',
  `worker_group` varchar(64) DEFAULT '' COMMENT 'worker group id',
  `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_schedules
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_session
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_session`;
CREATE TABLE `t_ds_session` (
  `id` varchar(64) NOT NULL COMMENT 'key',
  `user_id` int(11) DEFAULT NULL COMMENT 'user id',
  `ip` varchar(45) DEFAULT NULL COMMENT 'ip',
  `last_login_time` datetime DEFAULT NULL COMMENT 'last login time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_session
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_task_instance
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_task_instance`;
CREATE TABLE `t_ds_task_instance` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `name` varchar(255) DEFAULT NULL COMMENT 'task name',
  `task_type` varchar(50) NOT NULL COMMENT 'task type',
  `task_code` bigint(20) NOT NULL COMMENT 'task definition code',
  `task_definition_version` int(11) DEFAULT '0' COMMENT 'task definition version',
  `process_instance_id` int(11) DEFAULT NULL COMMENT 'process instance id',
  `state` tinyint(4) DEFAULT NULL COMMENT 'Status: 0 commit succeeded, 1 running, 2 prepare to pause, 3 pause, 4 prepare to stop, 5 stop, 6 fail, 7 succeed, 8 need fault tolerance, 9 kill, 10 wait for thread, 11 wait for dependency to complete',
  `submit_time` datetime DEFAULT NULL COMMENT 'task submit time',
  `start_time` datetime DEFAULT NULL COMMENT 'task start time',
  `end_time` datetime DEFAULT NULL COMMENT 'task end time',
  `host` varchar(135) DEFAULT NULL COMMENT 'host of task running on',
  `execute_path` varchar(200) DEFAULT NULL COMMENT 'task execute path in the host',
  `log_path` varchar(200) DEFAULT NULL COMMENT 'task log path',
  `alert_flag` tinyint(4) DEFAULT NULL COMMENT 'whether alert',
  `retry_times` int(4) DEFAULT '0' COMMENT 'task retry times',
  `pid` int(4) DEFAULT NULL COMMENT 'pid of task',
  `app_link` text COMMENT 'yarn app id',
  `task_params` longtext COMMENT 'job custom parameters',
  `flag` tinyint(4) DEFAULT '1' COMMENT '0 not available, 1 available',
  `retry_interval` int(4) DEFAULT NULL COMMENT 'retry interval when task failed',
  `max_retry_times` int(2) DEFAULT NULL COMMENT 'max retry times',
  `task_instance_priority` int(11) DEFAULT NULL COMMENT 'task instance priority:0 Highest,1 High,2 Medium,3 Low,4 Lowest',
  `worker_group` varchar(64) DEFAULT NULL COMMENT 'worker group id',
  `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code',
  `environment_config` text COMMENT 'this config contains many environment variables config',
  `executor_id` int(11) DEFAULT NULL,
  `first_submit_time` datetime DEFAULT NULL COMMENT 'task first submit time',
  `delay_time` int(4) DEFAULT '0' COMMENT 'task delay execution time',
  `var_pool` longtext COMMENT 'var_pool',
  `task_group_id` int(11) DEFAULT NULL COMMENT 'task group id',
  `dry_run` tinyint(4) DEFAULT '0' COMMENT 'dry run flag: 0 normal, 1 dry run',
  PRIMARY KEY (`id`),
  KEY `process_instance_id` (`process_instance_id`) USING BTREE,
  CONSTRAINT `foreign_key_instance_id` FOREIGN KEY (`process_instance_id`) REFERENCES `t_ds_process_instance` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_task_instance
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_tenant
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_tenant`;
CREATE TABLE `t_ds_tenant` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `tenant_code` varchar(64) DEFAULT NULL COMMENT 'tenant code',
  `description` varchar(255) DEFAULT NULL,
  `queue_id` int(11) DEFAULT NULL COMMENT 'queue id',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_tenant
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_udfs
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_udfs`;
CREATE TABLE `t_ds_udfs` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `user_id` int(11) NOT NULL COMMENT 'user id',
  `func_name` varchar(100) NOT NULL COMMENT 'UDF function name',
  `class_name` varchar(255) NOT NULL COMMENT 'class of udf',
  `type` tinyint(4) NOT NULL COMMENT 'Udf function type',
  `arg_types` varchar(255) DEFAULT NULL COMMENT 'arguments types',
  `database` varchar(255) DEFAULT NULL COMMENT 'data base',
  `description` varchar(255) DEFAULT NULL,
  `resource_id` int(11) NOT NULL COMMENT 'resource id',
  `resource_name` varchar(255) NOT NULL COMMENT 'resource name',
  `create_time` datetime NOT NULL COMMENT 'create time',
  `update_time` datetime NOT NULL COMMENT 'update time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_udfs
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_user
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_user`;
CREATE TABLE `t_ds_user` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'user id',
  `user_name` varchar(64) DEFAULT NULL COMMENT 'user name',
  `user_password` varchar(64) DEFAULT NULL COMMENT 'user password',
  `user_type` tinyint(4) DEFAULT NULL COMMENT 'user type, 0:administrator,1:ordinary user',
  `email` varchar(64) DEFAULT NULL COMMENT 'email',
  `phone` varchar(11) DEFAULT NULL COMMENT 'phone',
  `tenant_id` int(11) DEFAULT NULL COMMENT 'tenant id',
  `create_time` datetime DEFAULT NULL COMMENT 'create time',
  `update_time` datetime DEFAULT NULL COMMENT 'update time',
  `queue` varchar(64) DEFAULT NULL COMMENT 'queue',
  `state` tinyint(4) DEFAULT '1' COMMENT 'state 0:disable 1:enable',
  PRIMARY KEY (`id`),
  UNIQUE KEY `user_name_unique` (`user_name`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_user
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_worker_group
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_worker_group`;
CREATE TABLE `t_ds_worker_group` (
  `id` bigint(11) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `name` varchar(255) NOT NULL COMMENT 'worker group name',
  `addr_list` text NULL DEFAULT NULL COMMENT 'worker addr list. split by [,]',
  `create_time` datetime NULL DEFAULT NULL COMMENT 'create time',
  `update_time` datetime NULL DEFAULT NULL COMMENT 'update time',
  PRIMARY KEY (`id`),
  UNIQUE KEY `name_unique` (`name`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_ds_worker_group
-- ----------------------------

-- ----------------------------
-- Table structure for t_ds_version
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_version`;
CREATE TABLE `t_ds_version` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `version` varchar(200) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `version_UNIQUE` (`version`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8 COMMENT='version';

-- ----------------------------
-- Records of t_ds_version
-- ----------------------------
INSERT INTO `t_ds_version` VALUES ('1', '2.0.2');

-- ----------------------------
-- Records of t_ds_alertgroup
-- ----------------------------
INSERT INTO `t_ds_alertgroup`(alert_instance_ids, create_user_id, group_name, description, create_time, update_time)
VALUES ("1,2", 1, 'default admin warning group', 'default admin warning group', '2018-11-29 10:20:39', '2018-11-29 10:20:39');

-- ----------------------------
-- Records of t_ds_user
-- ----------------------------
INSERT INTO `t_ds_user` VALUES ('1', 'admin', '7ad2410b2f4c074479a8937a28a22b8f', '0', 'xxx@qq.com', '', '0', '2018-03-27 15:48:50', '2018-10-24 17:40:22', null, 1);

-- ----------------------------
-- Table structure for t_ds_plugin_define
-- ----------------------------
SET sql_mode=(SELECT REPLACE(@@sql_mode,'ONLY_FULL_GROUP_BY',''));
DROP TABLE IF EXISTS `t_ds_plugin_define`;
CREATE TABLE `t_ds_plugin_define` (
  `id` int NOT NULL AUTO_INCREMENT,
  `plugin_name` varchar(100) NOT NULL COMMENT 'the name of plugin eg: email',
  `plugin_type` varchar(100) NOT NULL COMMENT 'plugin type. alert=alert plugin, job=job plugin',
  `plugin_params` text COMMENT 'plugin params',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `t_ds_plugin_define_UN` (`plugin_name`,`plugin_type`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_alert_plugin_instance
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_alert_plugin_instance`;
CREATE TABLE `t_ds_alert_plugin_instance` (
  `id` int NOT NULL AUTO_INCREMENT,
  `plugin_define_id` int NOT NULL,
  `plugin_instance_params` text COMMENT 'plugin instance params. Also contain the params value which user input in web ui.',
  `create_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
  `update_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `instance_name` varchar(200) DEFAULT NULL COMMENT 'alert instance name',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_environment
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_environment`;
CREATE TABLE `t_ds_environment` (
  `id` bigint(11) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `code` bigint(20) DEFAULT NULL COMMENT 'encoding',
  `name` varchar(100) NOT NULL COMMENT 'environment name',
  `config` text NULL DEFAULT NULL COMMENT 'this config contains many environment variables config',
  `description` text NULL DEFAULT NULL COMMENT 'the details',
  `operator` int(11) DEFAULT NULL COMMENT 'operator user id',
  `create_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
  `update_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `environment_name_unique` (`name`),
  UNIQUE KEY `environment_code_unique` (`code`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_environment_worker_group_relation
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_environment_worker_group_relation`;
CREATE TABLE `t_ds_environment_worker_group_relation` (
  `id` bigint(11) NOT NULL AUTO_INCREMENT COMMENT 'id',
  `environment_code` bigint(20) NOT NULL COMMENT 'environment code',
  `worker_group` varchar(255) NOT NULL COMMENT 'worker group id',
  `operator` int(11) DEFAULT NULL COMMENT 'operator user id',
  `create_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
  `update_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `environment_worker_group_unique` (`environment_code`,`worker_group`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_task_group_queue
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_task_group_queue`;
CREATE TABLE `t_ds_task_group_queue` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `task_id` int(11) DEFAULT NULL COMMENT 'task instance id',
  `task_name` varchar(100) DEFAULT NULL COMMENT 'task instance name',
  `group_id` int(11) DEFAULT NULL COMMENT 'task group id',
  `process_id` int(11) DEFAULT NULL COMMENT 'process instance id',
  `priority` int(8) DEFAULT '0' COMMENT 'priority',
  `status` tinyint(4) DEFAULT '-1' COMMENT '-1: waiting 1: running 2: finished',
  `force_start` tinyint(4) DEFAULT '0' COMMENT 'is force start 0 NO ,1 YES',
  `in_queue` tinyint(4) DEFAULT '0' COMMENT 'ready to get the queue by other task finish 0 NO ,1 YES',
  `create_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
  `update_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Table structure for t_ds_task_group
-- ----------------------------
DROP TABLE IF EXISTS `t_ds_task_group`;
CREATE TABLE `t_ds_task_group` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'key',
  `name` varchar(100) DEFAULT NULL COMMENT 'task_group name',
  `description` varchar(200) DEFAULT NULL,
  `group_size` int(11) NOT NULL COMMENT 'group size',
  `use_size` int(11) DEFAULT '0' COMMENT 'used size',
  `user_id` int(11) DEFAULT NULL COMMENT 'creator id',
  `project_code` bigint(20) DEFAULT 0 COMMENT 'project code',
  `status` tinyint(4) DEFAULT '1' COMMENT '0 not available, 1 available',
  `create_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
  `update_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
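-- Illustration (editor's sketch, not part of the shipped schema or its seed
-- data): how the two task-group tables above relate. Table and column names
-- come from the DDL just defined; every literal value below is hypothetical.
-- A group that caps concurrency at 10 slots:
--   INSERT INTO `t_ds_task_group` (`name`, `description`, `group_size`, `use_size`, `user_id`, `project_code`, `status`)
--   VALUES ('etl_group', 'limit concurrent ETL tasks', 10, 0, 1, 0, 1);
-- A task instance queued against that group (status -1 = waiting, per the column comment):
--   INSERT INTO `t_ds_task_group_queue` (`task_id`, `task_name`, `group_id`, `process_id`, `priority`, `status`, `force_start`, `in_queue`)
--   VALUES (1001, 'extract_orders', 1, 2001, 0, -1, 0, 0);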
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,569
[Feature][Dao] Optimize Dependent node loading times
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Optimize Dependent node loading times ### Use case _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
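The issue body itself carries no implementation detail. As a hedged sketch only, and not necessarily the change that actually shipped in the linked PR, an optimization of this kind typically lets dependent-node checks resolve task definitions by code without a sequential scan, for example:

-- Hypothetical supporting index (illustrative; the index name and column choice are assumptions, not taken from the PR):
CREATE INDEX idx_task_definition_log_code_version ON t_ds_task_definition_log (code, version);

Whether the real fix took this form or instead reworked the mapper queries is determined by the PR diff itself.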
https://github.com/apache/dolphinscheduler/issues/7569
https://github.com/apache/dolphinscheduler/pull/7626
2d2cc35ccae54f6e89532d92944b8df60721b92d
8b292132c83374b6364abf4d40505593760c27c2
"2021-12-23T05:47:35Z"
java
"2021-12-27T10:05:56Z"
dolphinscheduler-dao/src/main/resources/sql/dolphinscheduler_postgresql.sql
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ DROP TABLE IF EXISTS QRTZ_FIRED_TRIGGERS; DROP TABLE IF EXISTS QRTZ_PAUSED_TRIGGER_GRPS; DROP TABLE IF EXISTS QRTZ_SCHEDULER_STATE; DROP TABLE IF EXISTS QRTZ_LOCKS; DROP TABLE IF EXISTS QRTZ_SIMPLE_TRIGGERS; DROP TABLE IF EXISTS QRTZ_SIMPROP_TRIGGERS; DROP TABLE IF EXISTS QRTZ_CRON_TRIGGERS; DROP TABLE IF EXISTS QRTZ_BLOB_TRIGGERS; DROP TABLE IF EXISTS QRTZ_TRIGGERS; DROP TABLE IF EXISTS QRTZ_JOB_DETAILS; DROP TABLE IF EXISTS QRTZ_CALENDARS; CREATE TABLE QRTZ_JOB_DETAILS ( SCHED_NAME character varying(120) NOT NULL, JOB_NAME character varying(200) NOT NULL, JOB_GROUP character varying(200) NOT NULL, DESCRIPTION character varying(250) NULL, JOB_CLASS_NAME character varying(250) NOT NULL, IS_DURABLE boolean NOT NULL, IS_NONCONCURRENT boolean NOT NULL, IS_UPDATE_DATA boolean NOT NULL, REQUESTS_RECOVERY boolean NOT NULL, JOB_DATA bytea NULL ); alter table QRTZ_JOB_DETAILS add primary key(SCHED_NAME,JOB_NAME,JOB_GROUP); CREATE TABLE QRTZ_TRIGGERS ( SCHED_NAME character varying(120) NOT NULL, TRIGGER_NAME character varying(200) NOT NULL, TRIGGER_GROUP character varying(200) NOT NULL, JOB_NAME character varying(200) NOT NULL, JOB_GROUP character varying(200) NOT NULL, DESCRIPTION character varying(250) NULL, NEXT_FIRE_TIME BIGINT NULL, PREV_FIRE_TIME BIGINT NULL, PRIORITY INTEGER NULL, TRIGGER_STATE character varying(16) NOT NULL, TRIGGER_TYPE character varying(8) NOT NULL, START_TIME BIGINT NOT NULL, END_TIME BIGINT NULL, CALENDAR_NAME character varying(200) NULL, MISFIRE_INSTR SMALLINT NULL, JOB_DATA bytea NULL ) ; alter table QRTZ_TRIGGERS add primary key(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP); CREATE TABLE QRTZ_SIMPLE_TRIGGERS ( SCHED_NAME character varying(120) NOT NULL, TRIGGER_NAME character varying(200) NOT NULL, TRIGGER_GROUP character varying(200) NOT NULL, REPEAT_COUNT BIGINT NOT NULL, REPEAT_INTERVAL BIGINT NOT NULL, TIMES_TRIGGERED BIGINT NOT NULL ) ; alter table QRTZ_SIMPLE_TRIGGERS add primary key(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP); CREATE TABLE QRTZ_CRON_TRIGGERS ( SCHED_NAME character varying(120) NOT NULL, TRIGGER_NAME character varying(200) NOT NULL, TRIGGER_GROUP character varying(200) NOT NULL, CRON_EXPRESSION character varying(120) NOT NULL, TIME_ZONE_ID character varying(80) ) ; alter table QRTZ_CRON_TRIGGERS add primary key(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP); CREATE TABLE QRTZ_SIMPROP_TRIGGERS ( SCHED_NAME character varying(120) NOT NULL, TRIGGER_NAME character varying(200) NOT NULL, TRIGGER_GROUP character varying(200) NOT NULL, STR_PROP_1 character varying(512) NULL, STR_PROP_2 character varying(512) NULL, STR_PROP_3 character varying(512) NULL, INT_PROP_1 INT NULL, INT_PROP_2 INT NULL, LONG_PROP_1 BIGINT NULL, LONG_PROP_2 BIGINT NULL, DEC_PROP_1 NUMERIC(13,4) NULL, DEC_PROP_2 NUMERIC(13,4) NULL, 
BOOL_PROP_1 boolean NULL, BOOL_PROP_2 boolean NULL ) ; alter table QRTZ_SIMPROP_TRIGGERS add primary key(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP); CREATE TABLE QRTZ_BLOB_TRIGGERS ( SCHED_NAME character varying(120) NOT NULL, TRIGGER_NAME character varying(200) NOT NULL, TRIGGER_GROUP character varying(200) NOT NULL, BLOB_DATA bytea NULL ) ; alter table QRTZ_BLOB_TRIGGERS add primary key(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP); CREATE TABLE QRTZ_CALENDARS ( SCHED_NAME character varying(120) NOT NULL, CALENDAR_NAME character varying(200) NOT NULL, CALENDAR bytea NOT NULL ) ; alter table QRTZ_CALENDARS add primary key(SCHED_NAME,CALENDAR_NAME); CREATE TABLE QRTZ_PAUSED_TRIGGER_GRPS ( SCHED_NAME character varying(120) NOT NULL, TRIGGER_GROUP character varying(200) NOT NULL ) ; alter table QRTZ_PAUSED_TRIGGER_GRPS add primary key(SCHED_NAME,TRIGGER_GROUP); CREATE TABLE QRTZ_FIRED_TRIGGERS ( SCHED_NAME character varying(120) NOT NULL, ENTRY_ID character varying(200) NOT NULL, TRIGGER_NAME character varying(200) NOT NULL, TRIGGER_GROUP character varying(200) NOT NULL, INSTANCE_NAME character varying(200) NOT NULL, FIRED_TIME BIGINT NOT NULL, SCHED_TIME BIGINT NOT NULL, PRIORITY INTEGER NOT NULL, STATE character varying(16) NOT NULL, JOB_NAME character varying(200) NULL, JOB_GROUP character varying(200) NULL, IS_NONCONCURRENT boolean NULL, REQUESTS_RECOVERY boolean NULL ) ; alter table QRTZ_FIRED_TRIGGERS add primary key(SCHED_NAME,ENTRY_ID); CREATE TABLE QRTZ_SCHEDULER_STATE ( SCHED_NAME character varying(120) NOT NULL, INSTANCE_NAME character varying(200) NOT NULL, LAST_CHECKIN_TIME BIGINT NOT NULL, CHECKIN_INTERVAL BIGINT NOT NULL ) ; alter table QRTZ_SCHEDULER_STATE add primary key(SCHED_NAME,INSTANCE_NAME); CREATE TABLE QRTZ_LOCKS ( SCHED_NAME character varying(120) NOT NULL, LOCK_NAME character varying(40) NOT NULL ) ; alter table QRTZ_LOCKS add primary key(SCHED_NAME,LOCK_NAME); CREATE INDEX IDX_QRTZ_J_REQ_RECOVERY ON QRTZ_JOB_DETAILS(SCHED_NAME,REQUESTS_RECOVERY); CREATE INDEX IDX_QRTZ_J_GRP ON QRTZ_JOB_DETAILS(SCHED_NAME,JOB_GROUP); CREATE INDEX IDX_QRTZ_T_J ON QRTZ_TRIGGERS(SCHED_NAME,JOB_NAME,JOB_GROUP); CREATE INDEX IDX_QRTZ_T_JG ON QRTZ_TRIGGERS(SCHED_NAME,JOB_GROUP); CREATE INDEX IDX_QRTZ_T_C ON QRTZ_TRIGGERS(SCHED_NAME,CALENDAR_NAME); CREATE INDEX IDX_QRTZ_T_G ON QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_GROUP); CREATE INDEX IDX_QRTZ_T_STATE ON QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_STATE); CREATE INDEX IDX_QRTZ_T_N_STATE ON QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP,TRIGGER_STATE); CREATE INDEX IDX_QRTZ_T_N_G_STATE ON QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_GROUP,TRIGGER_STATE); CREATE INDEX IDX_QRTZ_T_NEXT_FIRE_TIME ON QRTZ_TRIGGERS(SCHED_NAME,NEXT_FIRE_TIME); CREATE INDEX IDX_QRTZ_T_NFT_ST ON QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_STATE,NEXT_FIRE_TIME); CREATE INDEX IDX_QRTZ_T_NFT_MISFIRE ON QRTZ_TRIGGERS(SCHED_NAME,MISFIRE_INSTR,NEXT_FIRE_TIME); CREATE INDEX IDX_QRTZ_T_NFT_ST_MISFIRE ON QRTZ_TRIGGERS(SCHED_NAME,MISFIRE_INSTR,NEXT_FIRE_TIME,TRIGGER_STATE); CREATE INDEX IDX_QRTZ_T_NFT_ST_MISFIRE_GRP ON QRTZ_TRIGGERS(SCHED_NAME,MISFIRE_INSTR,NEXT_FIRE_TIME,TRIGGER_GROUP,TRIGGER_STATE); CREATE INDEX IDX_QRTZ_FT_TRIG_INST_NAME ON QRTZ_FIRED_TRIGGERS(SCHED_NAME,INSTANCE_NAME); CREATE INDEX IDX_QRTZ_FT_INST_JOB_REQ_RCVRY ON QRTZ_FIRED_TRIGGERS(SCHED_NAME,INSTANCE_NAME,REQUESTS_RECOVERY); CREATE INDEX IDX_QRTZ_FT_J_G ON QRTZ_FIRED_TRIGGERS(SCHED_NAME,JOB_NAME,JOB_GROUP); CREATE INDEX IDX_QRTZ_FT_JG ON QRTZ_FIRED_TRIGGERS(SCHED_NAME,JOB_GROUP); CREATE INDEX IDX_QRTZ_FT_T_G ON 
QRTZ_FIRED_TRIGGERS(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP); CREATE INDEX IDX_QRTZ_FT_TG ON QRTZ_FIRED_TRIGGERS(SCHED_NAME,TRIGGER_GROUP); -- -- Table structure for table t_ds_access_token -- DROP TABLE IF EXISTS t_ds_access_token; CREATE TABLE t_ds_access_token ( id int NOT NULL , user_id int DEFAULT NULL , token varchar(64) DEFAULT NULL , expire_time timestamp DEFAULT NULL , create_time timestamp DEFAULT NULL , update_time timestamp DEFAULT NULL , PRIMARY KEY (id) ) ; -- -- Table structure for table t_ds_alert -- DROP TABLE IF EXISTS t_ds_alert; CREATE TABLE t_ds_alert ( id int NOT NULL , title varchar(64) DEFAULT NULL , content text , alert_status int DEFAULT '0' , log text , alertgroup_id int DEFAULT NULL , create_time timestamp DEFAULT NULL , update_time timestamp DEFAULT NULL , PRIMARY KEY (id) ) ; -- -- Table structure for table t_ds_alertgroup -- DROP TABLE IF EXISTS t_ds_alertgroup; CREATE TABLE t_ds_alertgroup( id int NOT NULL, alert_instance_ids varchar (255) DEFAULT NULL, create_user_id int4 DEFAULT NULL, group_name varchar(255) DEFAULT NULL, description varchar(255) DEFAULT NULL, create_time timestamp DEFAULT NULL, update_time timestamp DEFAULT NULL, PRIMARY KEY (id), CONSTRAINT t_ds_alertgroup_name_un UNIQUE (group_name) ) ; -- -- Table structure for table t_ds_command -- DROP TABLE IF EXISTS t_ds_command; CREATE TABLE t_ds_command ( id int NOT NULL , command_type int DEFAULT NULL , process_definition_code bigint NOT NULL , command_param text , task_depend_type int DEFAULT NULL , failure_strategy int DEFAULT '0' , warning_type int DEFAULT '0' , warning_group_id int DEFAULT NULL , schedule_time timestamp DEFAULT NULL , start_time timestamp DEFAULT NULL , executor_id int DEFAULT NULL , update_time timestamp DEFAULT NULL , process_instance_priority int DEFAULT NULL , worker_group varchar(64), environment_code bigint DEFAULT '-1', dry_run int DEFAULT '0' , process_instance_id int DEFAULT 0, process_definition_version int DEFAULT 0, PRIMARY KEY (id) ) ; create index priority_id_index on t_ds_command (process_instance_priority,id); -- -- Table structure for table t_ds_datasource -- DROP TABLE IF EXISTS t_ds_datasource; CREATE TABLE t_ds_datasource ( id int NOT NULL , name varchar(64) NOT NULL , note varchar(255) DEFAULT NULL , type int NOT NULL , user_id int NOT NULL , connection_params text NOT NULL , create_time timestamp NOT NULL , update_time timestamp DEFAULT NULL , PRIMARY KEY (id), CONSTRAINT t_ds_datasource_name_un UNIQUE (name, type) ) ; -- -- Table structure for table t_ds_error_command -- DROP TABLE IF EXISTS t_ds_error_command; CREATE TABLE t_ds_error_command ( id int NOT NULL , command_type int DEFAULT NULL , process_definition_code bigint NOT NULL , command_param text , task_depend_type int DEFAULT NULL , failure_strategy int DEFAULT '0' , warning_type int DEFAULT '0' , warning_group_id int DEFAULT NULL , schedule_time timestamp DEFAULT NULL , start_time timestamp DEFAULT NULL , executor_id int DEFAULT NULL , update_time timestamp DEFAULT NULL , process_instance_priority int DEFAULT NULL , worker_group varchar(64), environment_code bigint DEFAULT '-1', dry_run int DEFAULT '0' , message text , process_instance_id int DEFAULT 0, process_definition_version int DEFAULT 0, PRIMARY KEY (id) ); -- -- Table structure for table t_ds_master_server -- -- -- Table structure for table t_ds_process_definition -- DROP TABLE IF EXISTS t_ds_process_definition; CREATE TABLE t_ds_process_definition ( id int NOT NULL , code bigint NOT NULL, name varchar(255) DEFAULT NULL , version int NOT 
NULL , description text , project_code bigint DEFAULT NULL , release_state int DEFAULT NULL , user_id int DEFAULT NULL , global_params text , locations text , warning_group_id int DEFAULT NULL , flag int DEFAULT NULL , timeout int DEFAULT '0' , tenant_id int DEFAULT '-1' , execution_type int DEFAULT '0', create_time timestamp DEFAULT NULL , update_time timestamp DEFAULT NULL , PRIMARY KEY (id) , CONSTRAINT process_definition_unique UNIQUE (name, project_code) ) ; create index process_definition_index on t_ds_process_definition (code,id); DROP TABLE IF EXISTS t_ds_process_definition_log; CREATE TABLE t_ds_process_definition_log ( id int NOT NULL , code bigint NOT NULL, name varchar(255) DEFAULT NULL , version int NOT NULL , description text , project_code bigint DEFAULT NULL , release_state int DEFAULT NULL , user_id int DEFAULT NULL , global_params text , locations text , warning_group_id int DEFAULT NULL , flag int DEFAULT NULL , timeout int DEFAULT '0' , tenant_id int DEFAULT '-1' , execution_type int DEFAULT '0', operator int DEFAULT NULL , operate_time timestamp DEFAULT NULL , create_time timestamp DEFAULT NULL , update_time timestamp DEFAULT NULL , PRIMARY KEY (id) ) ; DROP TABLE IF EXISTS t_ds_task_definition; CREATE TABLE t_ds_task_definition ( id int NOT NULL , code bigint NOT NULL, name varchar(255) DEFAULT NULL , version int NOT NULL , description text , project_code bigint DEFAULT NULL , user_id int DEFAULT NULL , task_type varchar(50) DEFAULT NULL , task_params text , flag int DEFAULT NULL , task_priority int DEFAULT NULL , worker_group varchar(255) DEFAULT NULL , environment_code bigint DEFAULT '-1', fail_retry_times int DEFAULT NULL , fail_retry_interval int DEFAULT NULL , timeout_flag int DEFAULT NULL , timeout_notify_strategy int DEFAULT NULL , timeout int DEFAULT '0' , delay_time int DEFAULT '0' , task_group_id int DEFAULT NULL, task_group_priority int(4) DEFAULT '0', resource_ids text , create_time timestamp DEFAULT NULL , update_time timestamp DEFAULT NULL , PRIMARY KEY (id) ) ; create index task_definition_index on t_ds_task_definition (project_code,id); DROP TABLE IF EXISTS t_ds_task_definition_log; CREATE TABLE t_ds_task_definition_log ( id int NOT NULL , code bigint NOT NULL, name varchar(255) DEFAULT NULL , version int NOT NULL , description text , project_code bigint DEFAULT NULL , user_id int DEFAULT NULL , task_type varchar(50) DEFAULT NULL , task_params text , flag int DEFAULT NULL , task_priority int DEFAULT NULL , worker_group varchar(255) DEFAULT NULL , environment_code bigint DEFAULT '-1', fail_retry_times int DEFAULT NULL , fail_retry_interval int DEFAULT NULL , timeout_flag int DEFAULT NULL , timeout_notify_strategy int DEFAULT NULL , timeout int DEFAULT '0' , delay_time int DEFAULT '0' , resource_ids text , operator int DEFAULT NULL , task_group_id int DEFAULT NULL, task_group_priority int(4) DEFAULT '0', operate_time timestamp DEFAULT NULL , create_time timestamp DEFAULT NULL , update_time timestamp DEFAULT NULL , PRIMARY KEY (id) ) ; DROP TABLE IF EXISTS t_ds_process_task_relation; CREATE TABLE t_ds_process_task_relation ( id int NOT NULL , name varchar(255) DEFAULT NULL , project_code bigint DEFAULT NULL , process_definition_code bigint DEFAULT NULL , process_definition_version int DEFAULT NULL , pre_task_code bigint DEFAULT NULL , pre_task_version int DEFAULT '0' , post_task_code bigint DEFAULT NULL , post_task_version int DEFAULT '0' , condition_type int DEFAULT NULL , condition_params text , create_time timestamp DEFAULT NULL , update_time timestamp 
DEFAULT NULL , PRIMARY KEY (id) ) ; DROP TABLE IF EXISTS t_ds_process_task_relation_log; CREATE TABLE t_ds_process_task_relation_log ( id int NOT NULL , name varchar(255) DEFAULT NULL , project_code bigint DEFAULT NULL , process_definition_code bigint DEFAULT NULL , process_definition_version int DEFAULT NULL , pre_task_code bigint DEFAULT NULL , pre_task_version int DEFAULT '0' , post_task_code bigint DEFAULT NULL , post_task_version int DEFAULT '0' , condition_type int DEFAULT NULL , condition_params text , operator int DEFAULT NULL , operate_time timestamp DEFAULT NULL , create_time timestamp DEFAULT NULL , update_time timestamp DEFAULT NULL , PRIMARY KEY (id) ) ; -- -- Table structure for table t_ds_process_instance -- DROP TABLE IF EXISTS t_ds_process_instance; CREATE TABLE t_ds_process_instance ( id int NOT NULL , name varchar(255) DEFAULT NULL , process_definition_code bigint DEFAULT NULL , process_definition_version int DEFAULT NULL , state int DEFAULT NULL , recovery int DEFAULT NULL , start_time timestamp DEFAULT NULL , end_time timestamp DEFAULT NULL , run_times int DEFAULT NULL , host varchar(135) DEFAULT NULL , command_type int DEFAULT NULL , command_param text , task_depend_type int DEFAULT NULL , max_try_times int DEFAULT '0' , failure_strategy int DEFAULT '0' , warning_type int DEFAULT '0' , warning_group_id int DEFAULT NULL , schedule_time timestamp DEFAULT NULL , command_start_time timestamp DEFAULT NULL , global_params text , process_instance_json text , flag int DEFAULT '1' , update_time timestamp NULL , is_sub_process int DEFAULT '0' , executor_id int NOT NULL , history_cmd text , dependence_schedule_times text , process_instance_priority int DEFAULT NULL , worker_group varchar(64) , environment_code bigint DEFAULT '-1', timeout int DEFAULT '0' , tenant_id int NOT NULL DEFAULT '-1' , var_pool text , dry_run int DEFAULT '0' , next_process_instance_id int DEFAULT '0', PRIMARY KEY (id) ) ; create index process_instance_index on t_ds_process_instance (process_definition_code,id); create index start_time_index on t_ds_process_instance (start_time,end_time); -- -- Table structure for table t_ds_project -- DROP TABLE IF EXISTS t_ds_project; CREATE TABLE t_ds_project ( id int NOT NULL , name varchar(100) DEFAULT NULL , code bigint NOT NULL, description varchar(200) DEFAULT NULL , user_id int DEFAULT NULL , flag int DEFAULT '1' , create_time timestamp DEFAULT CURRENT_TIMESTAMP , update_time timestamp DEFAULT CURRENT_TIMESTAMP , PRIMARY KEY (id) ) ; create index user_id_index on t_ds_project (user_id); -- -- Table structure for table t_ds_queue -- DROP TABLE IF EXISTS t_ds_queue; CREATE TABLE t_ds_queue ( id int NOT NULL , queue_name varchar(64) DEFAULT NULL , queue varchar(64) DEFAULT NULL , create_time timestamp DEFAULT NULL , update_time timestamp DEFAULT NULL , PRIMARY KEY (id) ); -- -- Table structure for table t_ds_relation_datasource_user -- DROP TABLE IF EXISTS t_ds_relation_datasource_user; CREATE TABLE t_ds_relation_datasource_user ( id int NOT NULL , user_id int NOT NULL , datasource_id int DEFAULT NULL , perm int DEFAULT '1' , create_time timestamp DEFAULT NULL , update_time timestamp DEFAULT NULL , PRIMARY KEY (id) ) ; ; -- -- Table structure for table t_ds_relation_process_instance -- DROP TABLE IF EXISTS t_ds_relation_process_instance; CREATE TABLE t_ds_relation_process_instance ( id int NOT NULL , parent_process_instance_id int DEFAULT NULL , parent_task_instance_id int DEFAULT NULL , process_instance_id int DEFAULT NULL , PRIMARY KEY (id) ) ; -- -- Table 
structure for table t_ds_relation_project_user -- DROP TABLE IF EXISTS t_ds_relation_project_user; CREATE TABLE t_ds_relation_project_user ( id int NOT NULL , user_id int NOT NULL , project_id int DEFAULT NULL , perm int DEFAULT '1' , create_time timestamp DEFAULT NULL , update_time timestamp DEFAULT NULL , PRIMARY KEY (id) ) ; create index relation_project_user_id_index on t_ds_relation_project_user (user_id); -- -- Table structure for table t_ds_relation_resources_user -- DROP TABLE IF EXISTS t_ds_relation_resources_user; CREATE TABLE t_ds_relation_resources_user ( id int NOT NULL , user_id int NOT NULL , resources_id int DEFAULT NULL , perm int DEFAULT '1' , create_time timestamp DEFAULT NULL , update_time timestamp DEFAULT NULL , PRIMARY KEY (id) ) ; -- -- Table structure for table t_ds_relation_udfs_user -- DROP TABLE IF EXISTS t_ds_relation_udfs_user; CREATE TABLE t_ds_relation_udfs_user ( id int NOT NULL , user_id int NOT NULL , udf_id int DEFAULT NULL , perm int DEFAULT '1' , create_time timestamp DEFAULT NULL , update_time timestamp DEFAULT NULL , PRIMARY KEY (id) ) ; ; -- -- Table structure for table t_ds_resources -- DROP TABLE IF EXISTS t_ds_resources; CREATE TABLE t_ds_resources ( id int NOT NULL , alias varchar(64) DEFAULT NULL , file_name varchar(64) DEFAULT NULL , description varchar(255) DEFAULT NULL , user_id int DEFAULT NULL , type int DEFAULT NULL , size bigint DEFAULT NULL , create_time timestamp DEFAULT NULL , update_time timestamp DEFAULT NULL , pid int, full_name varchar(64), is_directory int, PRIMARY KEY (id), CONSTRAINT t_ds_resources_un UNIQUE (full_name, type) ) ; -- -- Table structure for table t_ds_schedules -- DROP TABLE IF EXISTS t_ds_schedules; CREATE TABLE t_ds_schedules ( id int NOT NULL , process_definition_code bigint NOT NULL , start_time timestamp NOT NULL , end_time timestamp NOT NULL , timezone_id varchar(40) default NULL , crontab varchar(255) NOT NULL , failure_strategy int NOT NULL , user_id int NOT NULL , release_state int NOT NULL , warning_type int NOT NULL , warning_group_id int DEFAULT NULL , process_instance_priority int DEFAULT NULL , worker_group varchar(64), environment_code bigint DEFAULT '-1', create_time timestamp NOT NULL , update_time timestamp NOT NULL , PRIMARY KEY (id) ); -- -- Table structure for table t_ds_session -- DROP TABLE IF EXISTS t_ds_session; CREATE TABLE t_ds_session ( id varchar(64) NOT NULL , user_id int DEFAULT NULL , ip varchar(45) DEFAULT NULL , last_login_time timestamp DEFAULT NULL , PRIMARY KEY (id) ); -- -- Table structure for table t_ds_task_instance -- DROP TABLE IF EXISTS t_ds_task_instance; CREATE TABLE t_ds_task_instance ( id int NOT NULL , name varchar(255) DEFAULT NULL , task_type varchar(50) DEFAULT NULL , task_code bigint NOT NULL, task_definition_version int DEFAULT NULL , process_instance_id int DEFAULT NULL , state int DEFAULT NULL , submit_time timestamp DEFAULT NULL , start_time timestamp DEFAULT NULL , end_time timestamp DEFAULT NULL , host varchar(135) DEFAULT NULL , execute_path varchar(200) DEFAULT NULL , log_path varchar(200) DEFAULT NULL , alert_flag int DEFAULT NULL , retry_times int DEFAULT '0' , pid int DEFAULT NULL , app_link text , task_params text , flag int DEFAULT '1' , retry_interval int DEFAULT NULL , max_retry_times int DEFAULT NULL , task_instance_priority int DEFAULT NULL , worker_group varchar(64), environment_code bigint DEFAULT '-1', environment_config text, executor_id int DEFAULT NULL , first_submit_time timestamp DEFAULT NULL , delay_time int DEFAULT '0' , task_group_id 
int DEFAULT NULL, var_pool text , dry_run int DEFAULT '0' , PRIMARY KEY (id), CONSTRAINT foreign_key_instance_id FOREIGN KEY(process_instance_id) REFERENCES t_ds_process_instance(id) ON DELETE CASCADE ) ; -- -- Table structure for table t_ds_tenant -- DROP TABLE IF EXISTS t_ds_tenant; CREATE TABLE t_ds_tenant ( id int NOT NULL , tenant_code varchar(64) DEFAULT NULL , description varchar(255) DEFAULT NULL , queue_id int DEFAULT NULL , create_time timestamp DEFAULT NULL , update_time timestamp DEFAULT NULL , PRIMARY KEY (id) ) ; -- -- Table structure for table t_ds_udfs -- DROP TABLE IF EXISTS t_ds_udfs; CREATE TABLE t_ds_udfs ( id int NOT NULL , user_id int NOT NULL , func_name varchar(100) NOT NULL , class_name varchar(255) NOT NULL , type int NOT NULL , arg_types varchar(255) DEFAULT NULL , database varchar(255) DEFAULT NULL , description varchar(255) DEFAULT NULL , resource_id int NOT NULL , resource_name varchar(255) NOT NULL , create_time timestamp NOT NULL , update_time timestamp NOT NULL , PRIMARY KEY (id) ) ; -- -- Table structure for table t_ds_user -- DROP TABLE IF EXISTS t_ds_user; CREATE TABLE t_ds_user ( id int NOT NULL , user_name varchar(64) DEFAULT NULL , user_password varchar(64) DEFAULT NULL , user_type int DEFAULT NULL , email varchar(64) DEFAULT NULL , phone varchar(11) DEFAULT NULL , tenant_id int DEFAULT NULL , create_time timestamp DEFAULT NULL , update_time timestamp DEFAULT NULL , queue varchar(64) DEFAULT NULL , state int DEFAULT 1 , PRIMARY KEY (id) ); comment on column t_ds_user.state is 'state 0:disable 1:enable'; -- -- Table structure for table t_ds_version -- DROP TABLE IF EXISTS t_ds_version; CREATE TABLE t_ds_version ( id int NOT NULL , version varchar(200) NOT NULL, PRIMARY KEY (id) ) ; create index version_index on t_ds_version(version); -- -- Table structure for table t_ds_worker_group -- DROP TABLE IF EXISTS t_ds_worker_group; CREATE TABLE t_ds_worker_group ( id bigint NOT NULL , name varchar(255) NOT NULL , addr_list text DEFAULT NULL , create_time timestamp DEFAULT NULL , update_time timestamp DEFAULT NULL , PRIMARY KEY (id) , CONSTRAINT name_unique UNIQUE (name) ) ; -- -- Table structure for table t_ds_worker_server -- DROP TABLE IF EXISTS t_ds_worker_server; CREATE TABLE t_ds_worker_server ( id int NOT NULL , host varchar(45) DEFAULT NULL , port int DEFAULT NULL , zk_directory varchar(64) DEFAULT NULL , res_info varchar(255) DEFAULT NULL , create_time timestamp DEFAULT NULL , last_heartbeat_time timestamp DEFAULT NULL , PRIMARY KEY (id) ) ; DROP SEQUENCE IF EXISTS t_ds_access_token_id_sequence; CREATE SEQUENCE t_ds_access_token_id_sequence; ALTER TABLE t_ds_access_token ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_access_token_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_alert_id_sequence; CREATE SEQUENCE t_ds_alert_id_sequence; ALTER TABLE t_ds_alert ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_alert_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_alertgroup_id_sequence; CREATE SEQUENCE t_ds_alertgroup_id_sequence; ALTER TABLE t_ds_alertgroup ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_alertgroup_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_command_id_sequence; CREATE SEQUENCE t_ds_command_id_sequence; ALTER TABLE t_ds_command ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_command_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_datasource_id_sequence; CREATE SEQUENCE t_ds_datasource_id_sequence; ALTER TABLE t_ds_datasource ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_datasource_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_process_definition_id_sequence; CREATE SEQUENCE 
t_ds_process_definition_id_sequence; ALTER TABLE t_ds_process_definition ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_process_definition_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_process_definition_log_id_sequence; CREATE SEQUENCE t_ds_process_definition_log_id_sequence; ALTER TABLE t_ds_process_definition_log ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_process_definition_log_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_task_definition_id_sequence; CREATE SEQUENCE t_ds_task_definition_id_sequence; ALTER TABLE t_ds_task_definition ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_task_definition_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_task_definition_log_id_sequence; CREATE SEQUENCE t_ds_task_definition_log_id_sequence; ALTER TABLE t_ds_task_definition_log ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_task_definition_log_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_process_task_relation_id_sequence; CREATE SEQUENCE t_ds_process_task_relation_id_sequence; ALTER TABLE t_ds_process_task_relation ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_process_task_relation_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_process_task_relation_log_id_sequence; CREATE SEQUENCE t_ds_process_task_relation_log_id_sequence; ALTER TABLE t_ds_process_task_relation_log ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_process_task_relation_log_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_process_instance_id_sequence; CREATE SEQUENCE t_ds_process_instance_id_sequence; ALTER TABLE t_ds_process_instance ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_process_instance_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_project_id_sequence; CREATE SEQUENCE t_ds_project_id_sequence; ALTER TABLE t_ds_project ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_project_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_queue_id_sequence; CREATE SEQUENCE t_ds_queue_id_sequence; ALTER TABLE t_ds_queue ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_queue_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_relation_datasource_user_id_sequence; CREATE SEQUENCE t_ds_relation_datasource_user_id_sequence; ALTER TABLE t_ds_relation_datasource_user ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_relation_datasource_user_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_relation_process_instance_id_sequence; CREATE SEQUENCE t_ds_relation_process_instance_id_sequence; ALTER TABLE t_ds_relation_process_instance ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_relation_process_instance_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_relation_project_user_id_sequence; CREATE SEQUENCE t_ds_relation_project_user_id_sequence; ALTER TABLE t_ds_relation_project_user ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_relation_project_user_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_relation_resources_user_id_sequence; CREATE SEQUENCE t_ds_relation_resources_user_id_sequence; ALTER TABLE t_ds_relation_resources_user ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_relation_resources_user_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_relation_udfs_user_id_sequence; CREATE SEQUENCE t_ds_relation_udfs_user_id_sequence; ALTER TABLE t_ds_relation_udfs_user ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_relation_udfs_user_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_resources_id_sequence; CREATE SEQUENCE t_ds_resources_id_sequence; ALTER TABLE t_ds_resources ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_resources_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_schedules_id_sequence; CREATE SEQUENCE t_ds_schedules_id_sequence; ALTER TABLE t_ds_schedules ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_schedules_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_task_instance_id_sequence; 
CREATE SEQUENCE t_ds_task_instance_id_sequence; ALTER TABLE t_ds_task_instance ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_task_instance_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_tenant_id_sequence; CREATE SEQUENCE t_ds_tenant_id_sequence; ALTER TABLE t_ds_tenant ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_tenant_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_udfs_id_sequence; CREATE SEQUENCE t_ds_udfs_id_sequence; ALTER TABLE t_ds_udfs ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_udfs_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_user_id_sequence; CREATE SEQUENCE t_ds_user_id_sequence; ALTER TABLE t_ds_user ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_user_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_version_id_sequence; CREATE SEQUENCE t_ds_version_id_sequence; ALTER TABLE t_ds_version ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_version_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_worker_group_id_sequence; CREATE SEQUENCE t_ds_worker_group_id_sequence; ALTER TABLE t_ds_worker_group ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_worker_group_id_sequence'); DROP SEQUENCE IF EXISTS t_ds_worker_server_id_sequence; CREATE SEQUENCE t_ds_worker_server_id_sequence; ALTER TABLE t_ds_worker_server ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_worker_server_id_sequence'); -- Records of t_ds_user, user : admin, password : dolphinscheduler123 INSERT INTO t_ds_user(user_name, user_password, user_type, email, phone, tenant_id, state, create_time, update_time) VALUES ('admin', '7ad2410b2f4c074479a8937a28a22b8f', '0', 'xxx@qq.com', '', '0', 1, '2018-03-27 15:48:50', '2018-10-24 17:40:22'); -- Records of t_ds_alertgroup, default admin warning group INSERT INTO t_ds_alertgroup(alert_instance_ids, create_user_id, group_name, description, create_time, update_time) VALUES ('1,2', 1, 'default admin warning group', 'default admin warning group', '2018-11-29 10:20:39', '2018-11-29 10:20:39'); -- Records of t_ds_queue, default queue name : default INSERT INTO t_ds_queue(queue_name, queue, create_time, update_time) VALUES ('default', 'default', '2018-11-29 10:22:33', '2018-11-29 10:22:33'); -- Records of t_ds_version, default version : 1.4.0 INSERT INTO t_ds_version(version) VALUES ('1.4.0'); -- -- Table structure for table t_ds_plugin_define -- DROP TABLE IF EXISTS t_ds_plugin_define; CREATE TABLE t_ds_plugin_define ( id serial NOT NULL, plugin_name varchar(100) NOT NULL, plugin_type varchar(100) NOT NULL, plugin_params text NULL, create_time timestamp NULL, update_time timestamp NULL, CONSTRAINT t_ds_plugin_define_pk PRIMARY KEY (id), CONSTRAINT t_ds_plugin_define_un UNIQUE (plugin_name, plugin_type) ); -- -- Table structure for table t_ds_alert_plugin_instance -- DROP TABLE IF EXISTS t_ds_alert_plugin_instance; CREATE TABLE t_ds_alert_plugin_instance ( id serial NOT NULL, plugin_define_id int4 NOT NULL, plugin_instance_params text NULL, create_time timestamp NULL, update_time timestamp NULL, instance_name varchar(200) NULL, CONSTRAINT t_ds_alert_plugin_instance_pk PRIMARY KEY (id) ); -- -- Table structure for table t_ds_environment -- DROP TABLE IF EXISTS t_ds_environment; CREATE TABLE t_ds_environment ( id serial NOT NULL, code bigint NOT NULL, name varchar(100) DEFAULT NULL, config text DEFAULT NULL, description text, operator int DEFAULT NULL, create_time timestamp DEFAULT NULL, update_time timestamp DEFAULT NULL, PRIMARY KEY (id), CONSTRAINT environment_name_unique UNIQUE (name), CONSTRAINT environment_code_unique UNIQUE (code) ); -- -- Table structure for table t_ds_environment_worker_group_relation -- DROP TABLE IF EXISTS
t_ds_environment_worker_group_relation; CREATE TABLE t_ds_environment_worker_group_relation ( id serial NOT NULL, environment_code bigint NOT NULL, worker_group varchar(255) NOT NULL, operator int DEFAULT NULL, create_time timestamp DEFAULT NULL, update_time timestamp DEFAULT NULL, PRIMARY KEY (id) , CONSTRAINT environment_worker_group_unique UNIQUE (environment_code,worker_group) ); DROP TABLE IF EXISTS t_ds_task_group_queue; CREATE TABLE t_ds_task_group_queue ( id serial NOT NULL, task_id int DEFAULT NULL , task_name VARCHAR(100) DEFAULT NULL , group_id int DEFAULT NULL , process_id int DEFAULT NULL , priority int DEFAULT '0' , status int DEFAULT '-1' , force_start int DEFAULT '0' , in_queue int DEFAULT '0' , create_time timestamp DEFAULT NULL , update_time timestamp DEFAULT NULL , PRIMARY KEY (id) ); DROP TABLE IF EXISTS t_ds_task_group; CREATE TABLE t_ds_task_group ( id serial NOT NULL, name varchar(100) DEFAULT NULL , description varchar(200) DEFAULT NULL , group_size int NOT NULL , project_code bigint DEFAULT '0' , use_size int DEFAULT '0' , user_id int DEFAULT NULL , project_id int DEFAULT NULL , status int DEFAULT '1' , create_time timestamp DEFAULT NULL , update_time timestamp DEFAULT NULL , PRIMARY KEY(id) );
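The bootstrap script above both creates the schema and seeds a handful of rows: an admin user, a default alert group, a default queue, and the version marker. A minimal smoke test after applying it, assuming a fresh PostgreSQL database (the script does not create the database itself, so the connection target is an assumption here):

SELECT version FROM t_ds_version;                  -- expect '1.4.0'
SELECT user_name, user_type, state FROM t_ds_user; -- expect the seeded 'admin' row
SELECT queue_name, queue FROM t_ds_queue;          -- expect the 'default' queue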
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,569
[Feature][Dao] Optimize Dependent node loading times
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Optimize Dependent node loading times ### Use case _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7569
https://github.com/apache/dolphinscheduler/pull/7626
2d2cc35ccae54f6e89532d92944b8df60721b92d
8b292132c83374b6364abf4d40505593760c27c2
"2021-12-23T05:47:35Z"
java
"2021-12-27T10:05:56Z"
dolphinscheduler-dao/src/main/resources/sql/upgrade/2.1.0_schema/mysql/dolphinscheduler_ddl.sql
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ ALTER TABLE `t_ds_task_instance` MODIFY COLUMN `task_params` longtext COMMENT 'job custom parameters' AFTER `app_link`;
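The upgrade above changes t_ds_task_instance.task_params to longtext; the AFTER app_link clause only pins the column position and does not affect the data. One hedged way to confirm the change after running the migration, assuming the default schema name dolphinscheduler (the schema name is an assumption, not part of the script):

SELECT DATA_TYPE, COLUMN_COMMENT
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'dolphinscheduler'
  AND TABLE_NAME = 't_ds_task_instance'
  AND COLUMN_NAME = 'task_params';   -- DATA_TYPE should read 'longtext'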
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,569
[Feature][Dao] Optimize Dependent node loading times
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Optimize Dependent node loading times ### Use case _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7569
https://github.com/apache/dolphinscheduler/pull/7626
2d2cc35ccae54f6e89532d92944b8df60721b92d
8b292132c83374b6364abf4d40505593760c27c2
"2021-12-23T05:47:35Z"
java
"2021-12-27T10:05:56Z"
dolphinscheduler-dao/src/main/resources/sql/upgrade/2.1.0_schema/mysql/dolphinscheduler_dml.sql
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,569
[Feature][Dao] Optimize Dependent node loading times
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Optimize Dependent node loading times ### Use case _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7569
https://github.com/apache/dolphinscheduler/pull/7626
2d2cc35ccae54f6e89532d92944b8df60721b92d
8b292132c83374b6364abf4d40505593760c27c2
"2021-12-23T05:47:35Z"
java
"2021-12-27T10:05:56Z"
dolphinscheduler-dao/src/main/resources/sql/upgrade/2.1.0_schema/postgresql/dolphinscheduler_ddl.sql
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */
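This PostgreSQL DDL upgrade file ships with only the license header. For orientation, a sketch (explicitly not part of the shipped file) of what a PostgreSQL counterpart of the MySQL change above could look like: PostgreSQL has no column-ordering clause equivalent to MySQL's AFTER, and its text type is already unbounded, so the closest analogue would be a plain type change plus a column comment:

ALTER TABLE t_ds_task_instance ALTER COLUMN task_params TYPE text;
COMMENT ON COLUMN t_ds_task_instance.task_params IS 'job custom parameters';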
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,569
[Feature][Dao] Optimize Dependent node loading times
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description Optimize Dependent node loading times ### Use case _No response_ ### Related issues _No response_ ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7569
https://github.com/apache/dolphinscheduler/pull/7626
2d2cc35ccae54f6e89532d92944b8df60721b92d
8b292132c83374b6364abf4d40505593760c27c2
"2021-12-23T05:47:35Z"
java
"2021-12-27T10:05:56Z"
dolphinscheduler-dao/src/main/resources/sql/upgrade/2.1.0_schema/postgresql/dolphinscheduler_dml.sql
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,658
[Bug] [ApiServer] workflow copy error
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened Copying a workflow fails with an error. ### What you expected to happen The workflow can be copied successfully. ### How to reproduce Create a simple workflow with a shell task, then copy it. ### Anything else _No response_ ### Version dev ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7658
https://github.com/apache/dolphinscheduler/pull/7659
b2e2de7a5e7c172cc5d4923783f6e5e53a83da44
d75f22781be2bc33a399ded7aac5fb2665de8bcf
"2021-12-28T02:59:22Z"
java
"2021-12-28T03:25:18Z"
dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.dolphinscheduler.service.process; import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_END_DATE; import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_START_DATE; import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_EMPTY_SUB_PROCESS; import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_FATHER_PARAMS; import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING; import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_SUB_PROCESS; import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_SUB_PROCESS_DEFINE_CODE; import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_SUB_PROCESS_PARENT_INSTANCE_ID; import static org.apache.dolphinscheduler.common.Constants.LOCAL_PARAMS; import static java.util.stream.Collectors.toSet; import org.apache.dolphinscheduler.common.Constants; import org.apache.dolphinscheduler.common.enums.AuthorizationType; import org.apache.dolphinscheduler.common.enums.CommandType; import org.apache.dolphinscheduler.common.enums.Direct; import org.apache.dolphinscheduler.common.enums.ExecutionStatus; import org.apache.dolphinscheduler.common.enums.FailureStrategy; import org.apache.dolphinscheduler.common.enums.Flag; import org.apache.dolphinscheduler.common.enums.ReleaseState; import org.apache.dolphinscheduler.common.enums.TaskDependType; import org.apache.dolphinscheduler.common.enums.TaskGroupQueueStatus; import org.apache.dolphinscheduler.common.enums.TimeoutFlag; import org.apache.dolphinscheduler.common.enums.WarningType; import org.apache.dolphinscheduler.common.graph.DAG; import org.apache.dolphinscheduler.common.model.DateInterval; import org.apache.dolphinscheduler.common.model.TaskNode; import org.apache.dolphinscheduler.common.model.TaskNodeRelation; import org.apache.dolphinscheduler.common.process.ProcessDag; import org.apache.dolphinscheduler.common.process.Property; import org.apache.dolphinscheduler.common.process.ResourceInfo; import org.apache.dolphinscheduler.common.task.AbstractParameters; import org.apache.dolphinscheduler.common.task.TaskTimeoutParameter; import org.apache.dolphinscheduler.common.task.subprocess.SubProcessParameters; import org.apache.dolphinscheduler.common.utils.CodeGenerateUtils; import org.apache.dolphinscheduler.common.utils.CodeGenerateUtils.CodeGenerateException; import org.apache.dolphinscheduler.common.utils.DateUtils; import org.apache.dolphinscheduler.common.utils.JSONUtils; import org.apache.dolphinscheduler.common.utils.ParameterUtils; import org.apache.dolphinscheduler.common.utils.PropertyUtils; import org.apache.dolphinscheduler.common.utils.TaskParametersUtils; import 
org.apache.dolphinscheduler.dao.entity.Command; import org.apache.dolphinscheduler.dao.entity.DagData; import org.apache.dolphinscheduler.dao.entity.DataSource; import org.apache.dolphinscheduler.dao.entity.Environment; import org.apache.dolphinscheduler.dao.entity.ErrorCommand; import org.apache.dolphinscheduler.dao.entity.ProcessDefinition; import org.apache.dolphinscheduler.dao.entity.ProcessDefinitionLog; import org.apache.dolphinscheduler.dao.entity.ProcessInstance; import org.apache.dolphinscheduler.dao.entity.ProcessInstanceMap; import org.apache.dolphinscheduler.dao.entity.ProcessTaskRelation; import org.apache.dolphinscheduler.dao.entity.ProcessTaskRelationLog; import org.apache.dolphinscheduler.dao.entity.Project; import org.apache.dolphinscheduler.dao.entity.ProjectUser; import org.apache.dolphinscheduler.dao.entity.Resource; import org.apache.dolphinscheduler.dao.entity.Schedule; import org.apache.dolphinscheduler.dao.entity.TaskDefinition; import org.apache.dolphinscheduler.dao.entity.TaskDefinitionLog; import org.apache.dolphinscheduler.dao.entity.TaskGroup; import org.apache.dolphinscheduler.dao.entity.TaskGroupQueue; import org.apache.dolphinscheduler.dao.entity.TaskInstance; import org.apache.dolphinscheduler.dao.entity.Tenant; import org.apache.dolphinscheduler.dao.entity.UdfFunc; import org.apache.dolphinscheduler.dao.entity.User; import org.apache.dolphinscheduler.dao.mapper.CommandMapper; import org.apache.dolphinscheduler.dao.mapper.DataSourceMapper; import org.apache.dolphinscheduler.dao.mapper.EnvironmentMapper; import org.apache.dolphinscheduler.dao.mapper.ErrorCommandMapper; import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionLogMapper; import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionMapper; import org.apache.dolphinscheduler.dao.mapper.ProcessInstanceMapMapper; import org.apache.dolphinscheduler.dao.mapper.ProcessInstanceMapper; import org.apache.dolphinscheduler.dao.mapper.ProcessTaskRelationLogMapper; import org.apache.dolphinscheduler.dao.mapper.ProcessTaskRelationMapper; import org.apache.dolphinscheduler.dao.mapper.ProjectMapper; import org.apache.dolphinscheduler.dao.mapper.ResourceMapper; import org.apache.dolphinscheduler.dao.mapper.ResourceUserMapper; import org.apache.dolphinscheduler.dao.mapper.ScheduleMapper; import org.apache.dolphinscheduler.dao.mapper.TaskDefinitionLogMapper; import org.apache.dolphinscheduler.dao.mapper.TaskDefinitionMapper; import org.apache.dolphinscheduler.dao.mapper.TaskGroupMapper; import org.apache.dolphinscheduler.dao.mapper.TaskGroupQueueMapper; import org.apache.dolphinscheduler.dao.mapper.TaskInstanceMapper; import org.apache.dolphinscheduler.dao.mapper.TenantMapper; import org.apache.dolphinscheduler.dao.mapper.UdfFuncMapper; import org.apache.dolphinscheduler.dao.mapper.UserMapper; import org.apache.dolphinscheduler.dao.utils.DagHelper; import org.apache.dolphinscheduler.remote.command.StateEventChangeCommand; import org.apache.dolphinscheduler.remote.command.TaskEventChangeCommand; import org.apache.dolphinscheduler.remote.processor.StateEventCallbackService; import org.apache.dolphinscheduler.remote.utils.Host; import org.apache.dolphinscheduler.service.bean.SpringApplicationContext; import org.apache.dolphinscheduler.service.exceptions.ServiceException; import org.apache.dolphinscheduler.service.log.LogClientService; import org.apache.dolphinscheduler.service.quartz.cron.CronUtils; import org.apache.dolphinscheduler.spi.enums.ResourceType; import 
org.apache.commons.collections.CollectionUtils; import org.apache.commons.lang.StringUtils; import java.util.ArrayList; import java.util.Arrays; import java.util.Date; import java.util.EnumMap; import java.util.HashMap; import java.util.HashSet; import java.util.List; import java.util.Map; import java.util.Map.Entry; import java.util.Objects; import java.util.Set; import java.util.stream.Collectors; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Component; import org.springframework.transaction.annotation.Transactional; import com.fasterxml.jackson.core.type.TypeReference; import com.fasterxml.jackson.databind.node.ObjectNode; import com.google.common.collect.Lists; /** * process relative dao that some mappers in this. */ @Component public class ProcessService { private final Logger logger = LoggerFactory.getLogger(getClass()); private final int[] stateArray = new int[]{ExecutionStatus.SUBMITTED_SUCCESS.ordinal(), ExecutionStatus.RUNNING_EXECUTION.ordinal(), ExecutionStatus.DELAY_EXECUTION.ordinal(), ExecutionStatus.READY_PAUSE.ordinal(), ExecutionStatus.READY_STOP.ordinal()}; @Autowired private UserMapper userMapper; @Autowired private ProcessDefinitionMapper processDefineMapper; @Autowired private ProcessDefinitionLogMapper processDefineLogMapper; @Autowired private ProcessInstanceMapper processInstanceMapper; @Autowired private DataSourceMapper dataSourceMapper; @Autowired private ProcessInstanceMapMapper processInstanceMapMapper; @Autowired private TaskInstanceMapper taskInstanceMapper; @Autowired private CommandMapper commandMapper; @Autowired private ScheduleMapper scheduleMapper; @Autowired private UdfFuncMapper udfFuncMapper; @Autowired private ResourceMapper resourceMapper; @Autowired private ResourceUserMapper resourceUserMapper; @Autowired private ErrorCommandMapper errorCommandMapper; @Autowired private TenantMapper tenantMapper; @Autowired private ProjectMapper projectMapper; @Autowired private TaskDefinitionMapper taskDefinitionMapper; @Autowired private TaskDefinitionLogMapper taskDefinitionLogMapper; @Autowired private ProcessTaskRelationMapper processTaskRelationMapper; @Autowired private ProcessTaskRelationLogMapper processTaskRelationLogMapper; @Autowired StateEventCallbackService stateEventCallbackService; @Autowired private EnvironmentMapper environmentMapper; @Autowired private TaskGroupQueueMapper taskGroupQueueMapper; @Autowired private TaskGroupMapper taskGroupMapper; /** * handle Command (construct ProcessInstance from Command) , wrapped in transaction * * @param logger logger * @param host host * @param command found command * @return process instance */ @Transactional public ProcessInstance handleCommand(Logger logger, String host, Command command) { ProcessInstance processInstance = constructProcessInstance(command, host); // cannot construct process instance, return null if (processInstance == null) { logger.error("scan command, command parameter is error: {}", command); moveToErrorCommand(command, "process instance is null"); return null; } processInstance.setCommandType(command.getCommandType()); processInstance.addHistoryCmd(command.getCommandType()); //if the processDefination is serial ProcessDefinition processDefinition = this.findProcessDefinition(processInstance.getProcessDefinitionCode(), processInstance.getProcessDefinitionVersion()); if (processDefinition.getExecutionType().typeIsSerial()) { saveSerialProcess(processInstance, 
processDefinition); if (processInstance.getState() != ExecutionStatus.SUBMITTED_SUCCESS) { setSubProcessParam(processInstance); deleteCommandWithCheck(command.getId()); return null; } } else { saveProcessInstance(processInstance); } setSubProcessParam(processInstance); deleteCommandWithCheck(command.getId()); return processInstance; } private void saveSerialProcess(ProcessInstance processInstance, ProcessDefinition processDefinition) { processInstance.setState(ExecutionStatus.SERIAL_WAIT); saveProcessInstance(processInstance); //serial wait //when we get the running instance(or waiting instance) only get the priority instance(by id) if (processDefinition.getExecutionType().typeIsSerialWait()) { while (true) { List<ProcessInstance> runningProcessInstances = this.processInstanceMapper.queryByProcessDefineCodeAndStatusAndNextId(processInstance.getProcessDefinitionCode(), Constants.RUNNING_PROCESS_STATE, processInstance.getId()); if (CollectionUtils.isEmpty(runningProcessInstances)) { processInstance.setState(ExecutionStatus.SUBMITTED_SUCCESS); saveProcessInstance(processInstance); return; } ProcessInstance runningProcess = runningProcessInstances.get(0); if (this.processInstanceMapper.updateNextProcessIdById(processInstance.getId(), runningProcess.getId())) { return; } } } else if (processDefinition.getExecutionType().typeIsSerialDiscard()) { List<ProcessInstance> runningProcessInstances = this.processInstanceMapper.queryByProcessDefineCodeAndStatusAndNextId(processInstance.getProcessDefinitionCode(), Constants.RUNNING_PROCESS_STATE, processInstance.getId()); if (CollectionUtils.isEmpty(runningProcessInstances)) { processInstance.setState(ExecutionStatus.STOP); saveProcessInstance(processInstance); } } else if (processDefinition.getExecutionType().typeIsSerialPriority()) { List<ProcessInstance> runningProcessInstances = this.processInstanceMapper.queryByProcessDefineCodeAndStatusAndNextId(processInstance.getProcessDefinitionCode(), Constants.RUNNING_PROCESS_STATE, processInstance.getId()); if (CollectionUtils.isNotEmpty(runningProcessInstances)) { for (ProcessInstance info : runningProcessInstances) { info.setCommandType(CommandType.STOP); info.addHistoryCmd(CommandType.STOP); info.setState(ExecutionStatus.READY_STOP); int update = updateProcessInstance(info); // determine whether the process is normal if (update > 0) { String host = info.getHost(); String address = host.split(":")[0]; int port = Integer.parseInt(host.split(":")[1]); StateEventChangeCommand stateEventChangeCommand = new StateEventChangeCommand( info.getId(), 0, info.getState(), info.getId(), 0 ); try { stateEventCallbackService.sendResult(address, port, stateEventChangeCommand.convert2Command()); } catch (Exception e) { logger.error("sendResultError"); } } } } } } /** * save error command, and delete original command * * @param command command * @param message message */ public void moveToErrorCommand(Command command, String message) { ErrorCommand errorCommand = new ErrorCommand(command, message); this.errorCommandMapper.insert(errorCommand); this.commandMapper.deleteById(command.getId()); } /** * set process waiting thread * * @param command command * @param processInstance processInstance * @return process instance */ private ProcessInstance setWaitingThreadProcess(Command command, ProcessInstance processInstance) { processInstance.setState(ExecutionStatus.WAITING_THREAD); if (command.getCommandType() != CommandType.RECOVER_WAITING_THREAD) { processInstance.addHistoryCmd(command.getCommandType()); } 
saveProcessInstance(processInstance); this.setSubProcessParam(processInstance); createRecoveryWaitingThreadCommand(command, processInstance); return null; } /** * insert one command * * @param command command * @return create result */ public int createCommand(Command command) { int result = 0; if (command != null) { result = commandMapper.insert(command); } return result; } /** * get command page */ public List<Command> findCommandPage(int pageSize, int pageNumber) { return commandMapper.queryCommandPage(pageSize, pageNumber * pageSize); } /** * check the input command exists in queue list * * @param command command * @return create command result */ public boolean verifyIsNeedCreateCommand(Command command) { boolean isNeedCreate = true; EnumMap<CommandType, Integer> cmdTypeMap = new EnumMap<>(CommandType.class); cmdTypeMap.put(CommandType.REPEAT_RUNNING, 1); cmdTypeMap.put(CommandType.RECOVER_SUSPENDED_PROCESS, 1); cmdTypeMap.put(CommandType.START_FAILURE_TASK_PROCESS, 1); CommandType commandType = command.getCommandType(); if (cmdTypeMap.containsKey(commandType)) { ObjectNode cmdParamObj = JSONUtils.parseObject(command.getCommandParam()); int processInstanceId = cmdParamObj.path(CMD_PARAM_RECOVER_PROCESS_ID_STRING).asInt(); List<Command> commands = commandMapper.selectList(null); // for all commands for (Command tmpCommand : commands) { if (cmdTypeMap.containsKey(tmpCommand.getCommandType())) { ObjectNode tempObj = JSONUtils.parseObject(tmpCommand.getCommandParam()); if (tempObj != null && processInstanceId == tempObj.path(CMD_PARAM_RECOVER_PROCESS_ID_STRING).asInt()) { isNeedCreate = false; break; } } } } return isNeedCreate; } /** * find process instance detail by id * * @param processId processId * @return process instance */ public ProcessInstance findProcessInstanceDetailById(int processId) { return processInstanceMapper.queryDetailById(processId); } /** * get task node list by definitionId */ public List<TaskDefinition> getTaskNodeListByDefinition(long defineCode) { ProcessDefinition processDefinition = processDefineMapper.queryByCode(defineCode); if (processDefinition == null) { logger.error("process define not exists"); return Lists.newArrayList(); } List<ProcessTaskRelationLog> processTaskRelations = processTaskRelationLogMapper.queryByProcessCodeAndVersion(processDefinition.getCode(), processDefinition.getVersion()); Set<TaskDefinition> taskDefinitionSet = new HashSet<>(); for (ProcessTaskRelationLog processTaskRelation : processTaskRelations) { if (processTaskRelation.getPostTaskCode() > 0) { taskDefinitionSet.add(new TaskDefinition(processTaskRelation.getPostTaskCode(), processTaskRelation.getPostTaskVersion())); } } if (taskDefinitionSet.isEmpty()) { return Lists.newArrayList(); } List<TaskDefinitionLog> taskDefinitionLogs = taskDefinitionLogMapper.queryByTaskDefinitions(taskDefinitionSet); return Lists.newArrayList(taskDefinitionLogs); } /** * find process instance by id * * @param processId processId * @return process instance */ public ProcessInstance findProcessInstanceById(int processId) { return processInstanceMapper.selectById(processId); } /** * find process define by id. * * @param processDefinitionId processDefinitionId * @return process definition */ public ProcessDefinition findProcessDefineById(int processDefinitionId) { return processDefineMapper.selectById(processDefinitionId); } /** * find process define by code and version. 
* * @param processDefinitionCode processDefinitionCode * @return process definition */ public ProcessDefinition findProcessDefinition(Long processDefinitionCode, int version) { ProcessDefinition processDefinition = processDefineMapper.queryByCode(processDefinitionCode); if (processDefinition == null || processDefinition.getVersion() != version) { processDefinition = processDefineLogMapper.queryByDefinitionCodeAndVersion(processDefinitionCode, version); if (processDefinition != null) { processDefinition.setId(0); } } return processDefinition; } /** * find process define by code. * * @param processDefinitionCode processDefinitionCode * @return process definition */ public ProcessDefinition findProcessDefinitionByCode(Long processDefinitionCode) { return processDefineMapper.queryByCode(processDefinitionCode); } /** * delete work process instance by id * * @param processInstanceId processInstanceId * @return delete process instance result */ public int deleteWorkProcessInstanceById(int processInstanceId) { return processInstanceMapper.deleteById(processInstanceId); } /** * delete all sub process by parent instance id * * @param processInstanceId processInstanceId * @return delete all sub process instance result */ public int deleteAllSubWorkProcessByParentId(int processInstanceId) { List<Integer> subProcessIdList = processInstanceMapMapper.querySubIdListByParentId(processInstanceId); for (Integer subId : subProcessIdList) { deleteAllSubWorkProcessByParentId(subId); deleteWorkProcessMapByParentId(subId); removeTaskLogFile(subId); deleteWorkProcessInstanceById(subId); } return 1; } /** * remove task log file * * @param processInstanceId processInstanceId */ public void removeTaskLogFile(Integer processInstanceId) { List<TaskInstance> taskInstanceList = findValidTaskListByProcessId(processInstanceId); if (CollectionUtils.isEmpty(taskInstanceList)) { return; } try (LogClientService logClient = new LogClientService()) { for (TaskInstance taskInstance : taskInstanceList) { String taskLogPath = taskInstance.getLogPath(); if (StringUtils.isEmpty(taskInstance.getHost())) { continue; } int port = PropertyUtils.getInt(Constants.RPC_PORT, 50051); String ip = ""; try { ip = Host.of(taskInstance.getHost()).getIp(); } catch (Exception e) { // compatible old version ip = taskInstance.getHost(); } // remove task log from loggerserver logClient.removeTaskLog(ip, port, taskLogPath); } } } /** * recursive query sub process definition id by parent id. * * @param parentCode parentCode * @param ids ids */ public void recurseFindSubProcess(long parentCode, List<Long> ids) { List<TaskDefinition> taskNodeList = this.getTaskNodeListByDefinition(parentCode); if (taskNodeList != null && !taskNodeList.isEmpty()) { for (TaskDefinition taskNode : taskNodeList) { String parameter = taskNode.getTaskParams(); ObjectNode parameterJson = JSONUtils.parseObject(parameter); if (parameterJson.get(CMD_PARAM_SUB_PROCESS_DEFINE_CODE) != null) { SubProcessParameters subProcessParam = JSONUtils.parseObject(parameter, SubProcessParameters.class); ids.add(subProcessParam.getProcessDefinitionCode()); recurseFindSubProcess(subProcessParam.getProcessDefinitionCode(), ids); } } } } /** * create recovery waiting thread command when thread pool is not enough for the process instance. * sub work process instance need not to create recovery command. * create recovery waiting thread command and delete origin command at the same time. 
* if the recovery command is exists, only update the field update_time * * @param originCommand originCommand * @param processInstance processInstance */ public void createRecoveryWaitingThreadCommand(Command originCommand, ProcessInstance processInstance) { // sub process doesnot need to create wait command if (processInstance.getIsSubProcess() == Flag.YES) { if (originCommand != null) { commandMapper.deleteById(originCommand.getId()); } return; } Map<String, String> cmdParam = new HashMap<>(); cmdParam.put(Constants.CMD_PARAM_RECOVERY_WAITING_THREAD, String.valueOf(processInstance.getId())); // process instance quit by "waiting thread" state if (originCommand == null) { Command command = new Command( CommandType.RECOVER_WAITING_THREAD, processInstance.getTaskDependType(), processInstance.getFailureStrategy(), processInstance.getExecutorId(), processInstance.getProcessDefinition().getCode(), JSONUtils.toJsonString(cmdParam), processInstance.getWarningType(), processInstance.getWarningGroupId(), processInstance.getScheduleTime(), processInstance.getWorkerGroup(), processInstance.getEnvironmentCode(), processInstance.getProcessInstancePriority(), processInstance.getDryRun(), processInstance.getId(), processInstance.getProcessDefinitionVersion() ); saveCommand(command); return; } // update the command time if current command if recover from waiting if (originCommand.getCommandType() == CommandType.RECOVER_WAITING_THREAD) { originCommand.setUpdateTime(new Date()); saveCommand(originCommand); } else { // delete old command and create new waiting thread command commandMapper.deleteById(originCommand.getId()); originCommand.setId(0); originCommand.setCommandType(CommandType.RECOVER_WAITING_THREAD); originCommand.setUpdateTime(new Date()); originCommand.setCommandParam(JSONUtils.toJsonString(cmdParam)); originCommand.setProcessInstancePriority(processInstance.getProcessInstancePriority()); saveCommand(originCommand); } } /** * get schedule time from command * * @param command command * @param cmdParam cmdParam map * @return date */ private Date getScheduleTime(Command command, Map<String, String> cmdParam) { Date scheduleTime = command.getScheduleTime(); if (scheduleTime == null && cmdParam != null && cmdParam.containsKey(CMDPARAM_COMPLEMENT_DATA_START_DATE)) { Date start = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_START_DATE)); Date end = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_END_DATE)); List<Schedule> schedules = queryReleaseSchedulerListByProcessDefinitionCode(command.getProcessDefinitionCode()); List<Date> complementDateList = CronUtils.getSelfFireDateList(start, end, schedules); if (complementDateList.size() > 0) { scheduleTime = complementDateList.get(0); } else { logger.error("set scheduler time error: complement date list is empty, command: {}", command.toString()); } } return scheduleTime; } /** * generate a new work process instance from command. 
* * @param processDefinition processDefinition * @param command command * @param cmdParam cmdParam map * @return process instance */ private ProcessInstance generateNewProcessInstance(ProcessDefinition processDefinition, Command command, Map<String, String> cmdParam) { ProcessInstance processInstance = new ProcessInstance(processDefinition); processInstance.setProcessDefinitionCode(processDefinition.getCode()); processInstance.setProcessDefinitionVersion(processDefinition.getVersion()); processInstance.setState(ExecutionStatus.RUNNING_EXECUTION); processInstance.setRecovery(Flag.NO); processInstance.setStartTime(new Date()); processInstance.setRunTimes(1); processInstance.setMaxTryTimes(0); processInstance.setCommandParam(command.getCommandParam()); processInstance.setCommandType(command.getCommandType()); processInstance.setIsSubProcess(Flag.NO); processInstance.setTaskDependType(command.getTaskDependType()); processInstance.setFailureStrategy(command.getFailureStrategy()); processInstance.setExecutorId(command.getExecutorId()); WarningType warningType = command.getWarningType() == null ? WarningType.NONE : command.getWarningType(); processInstance.setWarningType(warningType); Integer warningGroupId = command.getWarningGroupId() == null ? 0 : command.getWarningGroupId(); processInstance.setWarningGroupId(warningGroupId); processInstance.setDryRun(command.getDryRun()); if (command.getScheduleTime() != null) { processInstance.setScheduleTime(command.getScheduleTime()); } processInstance.setCommandStartTime(command.getStartTime()); processInstance.setLocations(processDefinition.getLocations()); // reset global params while there are start parameters setGlobalParamIfCommanded(processDefinition, cmdParam); // curing global params processInstance.setGlobalParams(ParameterUtils.curingGlobalParams( processDefinition.getGlobalParamMap(), processDefinition.getGlobalParamList(), getCommandTypeIfComplement(processInstance, command), processInstance.getScheduleTime())); // set process instance priority processInstance.setProcessInstancePriority(command.getProcessInstancePriority()); String workerGroup = StringUtils.isBlank(command.getWorkerGroup()) ? Constants.DEFAULT_WORKER_GROUP : command.getWorkerGroup(); processInstance.setWorkerGroup(workerGroup); processInstance.setEnvironmentCode(Objects.isNull(command.getEnvironmentCode()) ? 
-1 : command.getEnvironmentCode()); processInstance.setTimeout(processDefinition.getTimeout()); processInstance.setTenantId(processDefinition.getTenantId()); return processInstance; } private void setGlobalParamIfCommanded(ProcessDefinition processDefinition, Map<String, String> cmdParam) { // get start params from command param Map<String, String> startParamMap = new HashMap<>(); if (cmdParam != null && cmdParam.containsKey(Constants.CMD_PARAM_START_PARAMS)) { String startParamJson = cmdParam.get(Constants.CMD_PARAM_START_PARAMS); startParamMap = JSONUtils.toMap(startParamJson); } Map<String, String> fatherParamMap = new HashMap<>(); if (cmdParam != null && cmdParam.containsKey(Constants.CMD_PARAM_FATHER_PARAMS)) { String fatherParamJson = cmdParam.get(Constants.CMD_PARAM_FATHER_PARAMS); fatherParamMap = JSONUtils.toMap(fatherParamJson); } startParamMap.putAll(fatherParamMap); // set start param into global params if (startParamMap.size() > 0 && processDefinition.getGlobalParamMap() != null) { for (Map.Entry<String, String> param : processDefinition.getGlobalParamMap().entrySet()) { String val = startParamMap.get(param.getKey()); if (val != null) { param.setValue(val); } } } } /** * get process tenant * there is tenant id in definition, use the tenant of the definition. * if there is not tenant id in the definiton or the tenant not exist * use definition creator's tenant. * * @param tenantId tenantId * @param userId userId * @return tenant */ public Tenant getTenantForProcess(int tenantId, int userId) { Tenant tenant = null; if (tenantId >= 0) { tenant = tenantMapper.queryById(tenantId); } if (userId == 0) { return null; } if (tenant == null) { User user = userMapper.selectById(userId); tenant = tenantMapper.queryById(user.getTenantId()); } return tenant; } /** * get an environment * use the code of the environment to find a environment. * * @param environmentCode environmentCode * @return Environment */ public Environment findEnvironmentByCode(Long environmentCode) { Environment environment = null; if (environmentCode >= 0) { environment = environmentMapper.queryByEnvironmentCode(environmentCode); } return environment; } /** * check command parameters is valid * * @param command command * @param cmdParam cmdParam map * @return whether command param is valid */ private Boolean checkCmdParam(Command command, Map<String, String> cmdParam) { if (command.getTaskDependType() == TaskDependType.TASK_ONLY || command.getTaskDependType() == TaskDependType.TASK_PRE) { if (cmdParam == null || !cmdParam.containsKey(Constants.CMD_PARAM_START_NODES) || cmdParam.get(Constants.CMD_PARAM_START_NODES).isEmpty()) { logger.error("command node depend type is {}, but start nodes is null ", command.getTaskDependType()); return false; } } return true; } /** * construct process instance according to one command. * * @param command command * @param host host * @return process instance */ private ProcessInstance constructProcessInstance(Command command, String host) { ProcessInstance processInstance; ProcessDefinition processDefinition; CommandType commandType = command.getCommandType(); processDefinition = this.findProcessDefinition(command.getProcessDefinitionCode(), command.getProcessDefinitionVersion()); if (processDefinition == null) { logger.error("cannot find the work process define! 
define code : {}", command.getProcessDefinitionCode()); return null; } Map<String, String> cmdParam = JSONUtils.toMap(command.getCommandParam()); int processInstanceId = command.getProcessInstanceId(); if (processInstanceId == 0) { processInstance = generateNewProcessInstance(processDefinition, command, cmdParam); } else { processInstance = this.findProcessInstanceDetailById(processInstanceId); if (processInstance == null) { return processInstance; } } if (cmdParam != null) { CommandType commandTypeIfComplement = getCommandTypeIfComplement(processInstance, command); // reset global params while repeat running is needed by cmdParam if (commandTypeIfComplement == CommandType.REPEAT_RUNNING) { setGlobalParamIfCommanded(processDefinition, cmdParam); } // Recalculate global parameters after rerun. processInstance.setGlobalParams(ParameterUtils.curingGlobalParams( processDefinition.getGlobalParamMap(), processDefinition.getGlobalParamList(), commandTypeIfComplement, processInstance.getScheduleTime())); processInstance.setProcessDefinition(processDefinition); } //reset command parameter if (processInstance.getCommandParam() != null) { Map<String, String> processCmdParam = JSONUtils.toMap(processInstance.getCommandParam()); for (Map.Entry<String, String> entry : processCmdParam.entrySet()) { if (!cmdParam.containsKey(entry.getKey())) { cmdParam.put(entry.getKey(), entry.getValue()); } } } // reset command parameter if sub process if (cmdParam != null && cmdParam.containsKey(Constants.CMD_PARAM_SUB_PROCESS)) { processInstance.setCommandParam(command.getCommandParam()); } if (Boolean.FALSE.equals(checkCmdParam(command, cmdParam))) { logger.error("command parameter check failed!"); return null; } if (command.getScheduleTime() != null) { processInstance.setScheduleTime(command.getScheduleTime()); } processInstance.setHost(host); ExecutionStatus runStatus = ExecutionStatus.RUNNING_EXECUTION; int runTime = processInstance.getRunTimes(); switch (commandType) { case START_PROCESS: break; case START_FAILURE_TASK_PROCESS: // find failed tasks and init these tasks List<Integer> failedList = this.findTaskIdByInstanceState(processInstance.getId(), ExecutionStatus.FAILURE); List<Integer> toleranceList = this.findTaskIdByInstanceState(processInstance.getId(), ExecutionStatus.NEED_FAULT_TOLERANCE); List<Integer> killedList = this.findTaskIdByInstanceState(processInstance.getId(), ExecutionStatus.KILL); cmdParam.remove(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING); failedList.addAll(killedList); failedList.addAll(toleranceList); for (Integer taskId : failedList) { initTaskInstance(this.findTaskInstanceById(taskId)); } cmdParam.put(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING, String.join(Constants.COMMA, convertIntListToString(failedList))); processInstance.setCommandParam(JSONUtils.toJsonString(cmdParam)); processInstance.setRunTimes(runTime + 1); break; case START_CURRENT_TASK_PROCESS: break; case RECOVER_WAITING_THREAD: break; case RECOVER_SUSPENDED_PROCESS: // find pause tasks and init task's state cmdParam.remove(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING); List<Integer> suspendedNodeList = this.findTaskIdByInstanceState(processInstance.getId(), ExecutionStatus.PAUSE); List<Integer> stopNodeList = findTaskIdByInstanceState(processInstance.getId(), ExecutionStatus.KILL); suspendedNodeList.addAll(stopNodeList); for (Integer taskId : suspendedNodeList) { // initialize the pause state initTaskInstance(this.findTaskInstanceById(taskId)); } cmdParam.put(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING, 
String.join(",", convertIntListToString(suspendedNodeList))); processInstance.setCommandParam(JSONUtils.toJsonString(cmdParam)); processInstance.setRunTimes(runTime + 1); break; case RECOVER_TOLERANCE_FAULT_PROCESS: // recover tolerance fault process processInstance.setRecovery(Flag.YES); runStatus = processInstance.getState(); break; case COMPLEMENT_DATA: // delete all the valid tasks when complement data if id is not null if (processInstance.getId() != 0) { List<TaskInstance> taskInstanceList = this.findValidTaskListByProcessId(processInstance.getId()); for (TaskInstance taskInstance : taskInstanceList) { taskInstance.setFlag(Flag.NO); this.updateTaskInstance(taskInstance); } } break; case REPEAT_RUNNING: // delete the recover task names from command parameter if (cmdParam.containsKey(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING)) { cmdParam.remove(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING); processInstance.setCommandParam(JSONUtils.toJsonString(cmdParam)); } // delete all the valid tasks when repeat running List<TaskInstance> validTaskList = findValidTaskListByProcessId(processInstance.getId()); for (TaskInstance taskInstance : validTaskList) { taskInstance.setFlag(Flag.NO); updateTaskInstance(taskInstance); } processInstance.setStartTime(new Date()); processInstance.setEndTime(null); processInstance.setRunTimes(runTime + 1); initComplementDataParam(processDefinition, processInstance, cmdParam); break; case SCHEDULER: break; default: break; } processInstance.setState(runStatus); return processInstance; } /** * get process definition by command * If it is a fault-tolerant command, get the specified version of ProcessDefinition through ProcessInstance * Otherwise, get the latest version of ProcessDefinition * * @return ProcessDefinition */ private ProcessDefinition getProcessDefinitionByCommand(long processDefinitionCode, Map<String, String> cmdParam) { if (cmdParam != null) { int processInstanceId = 0; if (cmdParam.containsKey(Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING)) { processInstanceId = Integer.parseInt(cmdParam.get(Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING)); } else if (cmdParam.containsKey(Constants.CMD_PARAM_SUB_PROCESS)) { processInstanceId = Integer.parseInt(cmdParam.get(Constants.CMD_PARAM_SUB_PROCESS)); } else if (cmdParam.containsKey(Constants.CMD_PARAM_RECOVERY_WAITING_THREAD)) { processInstanceId = Integer.parseInt(cmdParam.get(Constants.CMD_PARAM_RECOVERY_WAITING_THREAD)); } if (processInstanceId != 0) { ProcessInstance processInstance = this.findProcessInstanceDetailById(processInstanceId); if (processInstance == null) { return null; } return processDefineLogMapper.queryByDefinitionCodeAndVersion( processInstance.getProcessDefinitionCode(), processInstance.getProcessDefinitionVersion()); } } return processDefineMapper.queryByCode(processDefinitionCode); } /** * return complement data if the process start with complement data * * @param processInstance processInstance * @param command command * @return command type */ private CommandType getCommandTypeIfComplement(ProcessInstance processInstance, Command command) { if (CommandType.COMPLEMENT_DATA == processInstance.getCmdTypeIfComplement()) { return CommandType.COMPLEMENT_DATA; } else { return command.getCommandType(); } } /** * initialize complement data parameters * * @param processDefinition processDefinition * @param processInstance processInstance * @param cmdParam cmdParam */ private void initComplementDataParam(ProcessDefinition processDefinition, ProcessInstance processInstance, Map<String, String> 
cmdParam) { if (!processInstance.isComplementData()) { return; } Date start = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_START_DATE)); Date end = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_END_DATE)); List<Schedule> listSchedules = queryReleaseSchedulerListByProcessDefinitionCode(processInstance.getProcessDefinitionCode()); List<Date> complementDate = CronUtils.getSelfFireDateList(start, end, listSchedules); if (complementDate.size() > 0 && Flag.NO == processInstance.getIsSubProcess()) { processInstance.setScheduleTime(complementDate.get(0)); } processInstance.setGlobalParams(ParameterUtils.curingGlobalParams( processDefinition.getGlobalParamMap(), processDefinition.getGlobalParamList(), CommandType.COMPLEMENT_DATA, processInstance.getScheduleTime())); } /** * set sub work process parameters. * handle sub work process instance, update relation table and command parameters * set sub work process flag, extends parent work process command parameters * * @param subProcessInstance subProcessInstance */ public void setSubProcessParam(ProcessInstance subProcessInstance) { String cmdParam = subProcessInstance.getCommandParam(); if (StringUtils.isEmpty(cmdParam)) { return; } Map<String, String> paramMap = JSONUtils.toMap(cmdParam); // write sub process id into cmd param. if (paramMap.containsKey(CMD_PARAM_SUB_PROCESS) && CMD_PARAM_EMPTY_SUB_PROCESS.equals(paramMap.get(CMD_PARAM_SUB_PROCESS))) { paramMap.remove(CMD_PARAM_SUB_PROCESS); paramMap.put(CMD_PARAM_SUB_PROCESS, String.valueOf(subProcessInstance.getId())); subProcessInstance.setCommandParam(JSONUtils.toJsonString(paramMap)); subProcessInstance.setIsSubProcess(Flag.YES); this.saveProcessInstance(subProcessInstance); } // copy parent instance user def params to sub process.. String parentInstanceId = paramMap.get(CMD_PARAM_SUB_PROCESS_PARENT_INSTANCE_ID); if (StringUtils.isNotEmpty(parentInstanceId)) { ProcessInstance parentInstance = findProcessInstanceDetailById(Integer.parseInt(parentInstanceId)); if (parentInstance != null) { subProcessInstance.setGlobalParams( joinGlobalParams(parentInstance.getGlobalParams(), subProcessInstance.getGlobalParams())); this.saveProcessInstance(subProcessInstance); } else { logger.error("sub process command params error, cannot find parent instance: {} ", cmdParam); } } ProcessInstanceMap processInstanceMap = JSONUtils.parseObject(cmdParam, ProcessInstanceMap.class); if (processInstanceMap == null || processInstanceMap.getParentProcessInstanceId() == 0) { return; } // update sub process id to process map table processInstanceMap.setProcessInstanceId(subProcessInstance.getId()); this.updateWorkProcessInstanceMap(processInstanceMap); } /** * join parent global params into sub process. * only the keys doesn't in sub process global would be joined. 
* * @param parentGlobalParams parentGlobalParams * @param subGlobalParams subGlobalParams * @return global params join */ private String joinGlobalParams(String parentGlobalParams, String subGlobalParams) { List<Property> parentPropertyList = JSONUtils.toList(parentGlobalParams, Property.class); List<Property> subPropertyList = JSONUtils.toList(subGlobalParams, Property.class); Map<String, String> subMap = subPropertyList.stream().collect(Collectors.toMap(Property::getProp, Property::getValue)); for (Property parent : parentPropertyList) { if (!subMap.containsKey(parent.getProp())) { subPropertyList.add(parent); } } return JSONUtils.toJsonString(subPropertyList); } /** * initialize task instance * * @param taskInstance taskInstance */ private void initTaskInstance(TaskInstance taskInstance) { if (!taskInstance.isSubProcess() && (taskInstance.getState().typeIsCancel() || taskInstance.getState().typeIsFailure())) { taskInstance.setFlag(Flag.NO); updateTaskInstance(taskInstance); return; } taskInstance.setState(ExecutionStatus.SUBMITTED_SUCCESS); updateTaskInstance(taskInstance); } /** * retry submit task to db */ public TaskInstance submitTaskWithRetry(ProcessInstance processInstance, TaskInstance taskInstance, int commitRetryTimes, int commitInterval) { int retryTimes = 1; TaskInstance task = null; while (retryTimes <= commitRetryTimes) { try { // submit task to db task = SpringApplicationContext.getBean(ProcessService.class).submitTask(processInstance, taskInstance); if (task != null && task.getId() != 0) { break; } logger.error("task commit to db failed , taskId {} has already retry {} times, please check the database", taskInstance.getId(), retryTimes); Thread.sleep(commitInterval); } catch (Exception e) { logger.error("task commit to mysql failed", e); } retryTimes += 1; } return task; } /** * submit task to db * submit sub process to command * * @param processInstance processInstance * @param taskInstance taskInstance * @return task instance */ @Transactional(rollbackFor = Exception.class) public TaskInstance submitTask(ProcessInstance processInstance, TaskInstance taskInstance) { logger.info("start submit task : {}, instance id:{}, state: {}", taskInstance.getName(), taskInstance.getProcessInstanceId(), processInstance.getState()); //submit to db TaskInstance task = submitTaskInstanceToDB(taskInstance, processInstance); if (task == null) { logger.error("end submit task to db error, task name:{}, process id:{} state: {} ", taskInstance.getName(), taskInstance.getProcessInstance(), processInstance.getState()); return null; } if (!task.getState().typeIsFinished()) { createSubWorkProcess(processInstance, task); } logger.info("end submit task to db successfully:{} {} state:{} complete, instance id:{} state: {} ", taskInstance.getId(), taskInstance.getName(), task.getState(), processInstance.getId(), processInstance.getState()); return task; } /** * set work process instance map * consider o * repeat running does not generate new sub process instance * set map {parent instance id, task instance id, 0(child instance id)} * * @param parentInstance parentInstance * @param parentTask parentTask * @return process instance map */ private ProcessInstanceMap setProcessInstanceMap(ProcessInstance parentInstance, TaskInstance parentTask) { ProcessInstanceMap processMap = findWorkProcessMapByParent(parentInstance.getId(), parentTask.getId()); if (processMap != null) { return processMap; } if (parentInstance.getCommandType() == CommandType.REPEAT_RUNNING) { // update current task id to map processMap = 
findPreviousTaskProcessMap(parentInstance, parentTask); if (processMap != null) { processMap.setParentTaskInstanceId(parentTask.getId()); updateWorkProcessInstanceMap(processMap); return processMap; } } // new task processMap = new ProcessInstanceMap(); processMap.setParentProcessInstanceId(parentInstance.getId()); processMap.setParentTaskInstanceId(parentTask.getId()); createWorkProcessInstanceMap(processMap); return processMap; } /** * find previous task work process map. * * @param parentProcessInstance parentProcessInstance * @param parentTask parentTask * @return process instance map */ private ProcessInstanceMap findPreviousTaskProcessMap(ProcessInstance parentProcessInstance, TaskInstance parentTask) { Integer preTaskId = 0; List<TaskInstance> preTaskList = this.findPreviousTaskListByWorkProcessId(parentProcessInstance.getId()); for (TaskInstance task : preTaskList) { if (task.getName().equals(parentTask.getName())) { preTaskId = task.getId(); ProcessInstanceMap map = findWorkProcessMapByParent(parentProcessInstance.getId(), preTaskId); if (map != null) { return map; } } } logger.info("sub process instance is not found,parent task:{},parent instance:{}", parentTask.getId(), parentProcessInstance.getId()); return null; } /** * create sub work process command * * @param parentProcessInstance parentProcessInstance * @param task task */ public void createSubWorkProcess(ProcessInstance parentProcessInstance, TaskInstance task) { if (!task.isSubProcess()) { return; } //check create sub work flow firstly ProcessInstanceMap instanceMap = findWorkProcessMapByParent(parentProcessInstance.getId(), task.getId()); if (null != instanceMap && CommandType.RECOVER_TOLERANCE_FAULT_PROCESS == parentProcessInstance.getCommandType()) { // recover failover tolerance would not create a new command when the sub command already have been created return; } instanceMap = setProcessInstanceMap(parentProcessInstance, task); ProcessInstance childInstance = null; if (instanceMap.getProcessInstanceId() != 0) { childInstance = findProcessInstanceById(instanceMap.getProcessInstanceId()); } Command subProcessCommand = createSubProcessCommand(parentProcessInstance, childInstance, instanceMap, task); updateSubProcessDefinitionByParent(parentProcessInstance, subProcessCommand.getProcessDefinitionCode()); initSubInstanceState(childInstance); createCommand(subProcessCommand); logger.info("sub process command created: {} ", subProcessCommand); } /** * complement data needs transform parent parameter to child. 
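* (the parent's complement date range, CMDPARAM_COMPLEMENT_DATA_START_DATE / CMDPARAM_COMPLEMENT_DATA_END_DATE, is copied into the sub process command params, and parent params are passed down under CMD_PARAM_FATHER_PARAMS, as the method body below shows)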
*/ private String getSubWorkFlowParam(ProcessInstanceMap instanceMap, ProcessInstance parentProcessInstance, Map<String, String> fatherParams) { // set sub work process command String processMapStr = JSONUtils.toJsonString(instanceMap); Map<String, String> cmdParam = JSONUtils.toMap(processMapStr); if (parentProcessInstance.isComplementData()) { Map<String, String> parentParam = JSONUtils.toMap(parentProcessInstance.getCommandParam()); String endTime = parentParam.get(CMDPARAM_COMPLEMENT_DATA_END_DATE); String startTime = parentParam.get(CMDPARAM_COMPLEMENT_DATA_START_DATE); cmdParam.put(CMDPARAM_COMPLEMENT_DATA_END_DATE, endTime); cmdParam.put(CMDPARAM_COMPLEMENT_DATA_START_DATE, startTime); processMapStr = JSONUtils.toJsonString(cmdParam); } if (fatherParams.size() != 0) { cmdParam.put(CMD_PARAM_FATHER_PARAMS, JSONUtils.toJsonString(fatherParams)); processMapStr = JSONUtils.toJsonString(cmdParam); } return processMapStr; } public Map<String, String> getGlobalParamMap(String globalParams) { List<Property> propList; Map<String, String> globalParamMap = new HashMap<>(); if (StringUtils.isNotEmpty(globalParams)) { propList = JSONUtils.toList(globalParams, Property.class); globalParamMap = propList.stream().collect(Collectors.toMap(Property::getProp, Property::getValue)); } return globalParamMap; } /** * create sub work process command */ public Command createSubProcessCommand(ProcessInstance parentProcessInstance, ProcessInstance childInstance, ProcessInstanceMap instanceMap, TaskInstance task) { CommandType commandType = getSubCommandType(parentProcessInstance, childInstance); Map<String, String> subProcessParam = JSONUtils.toMap(task.getTaskParams()); long childDefineCode = 0L; if (subProcessParam.containsKey(Constants.CMD_PARAM_SUB_PROCESS_DEFINE_CODE)) { childDefineCode = Long.parseLong(subProcessParam.get(Constants.CMD_PARAM_SUB_PROCESS_DEFINE_CODE)); } ProcessDefinition subProcessDefinition = processDefineMapper.queryByCode(childDefineCode); Object localParams = subProcessParam.get(Constants.LOCAL_PARAMS); List<Property> allParam = JSONUtils.toList(JSONUtils.toJsonString(localParams), Property.class); Map<String, String> globalMap = this.getGlobalParamMap(parentProcessInstance.getGlobalParams()); Map<String, String> fatherParams = new HashMap<>(); if (CollectionUtils.isNotEmpty(allParam)) { for (Property info : allParam) { fatherParams.put(info.getProp(), globalMap.get(info.getProp())); } } String processParam = getSubWorkFlowParam(instanceMap, parentProcessInstance, fatherParams); int subProcessInstanceId = childInstance == null ? 
0 : childInstance.getId(); return new Command( commandType, TaskDependType.TASK_POST, parentProcessInstance.getFailureStrategy(), parentProcessInstance.getExecutorId(), subProcessDefinition.getCode(), processParam, parentProcessInstance.getWarningType(), parentProcessInstance.getWarningGroupId(), parentProcessInstance.getScheduleTime(), task.getWorkerGroup(), task.getEnvironmentCode(), parentProcessInstance.getProcessInstancePriority(), parentProcessInstance.getDryRun(), subProcessInstanceId, subProcessDefinition.getVersion() ); } /** * initialize sub work flow state * child instance state would be initialized when 'recovery from pause/stop/failure' */ private void initSubInstanceState(ProcessInstance childInstance) { if (childInstance != null) { childInstance.setState(ExecutionStatus.RUNNING_EXECUTION); updateProcessInstance(childInstance); } } /** * get sub work flow command type * if the child instance exists: child command = fatherCommand * if the child instance does not exist: child command = fatherCommand[0] */ private CommandType getSubCommandType(ProcessInstance parentProcessInstance, ProcessInstance childInstance) { CommandType commandType = parentProcessInstance.getCommandType(); if (childInstance == null) { String fatherHistoryCommand = parentProcessInstance.getHistoryCmd(); commandType = CommandType.valueOf(fatherHistoryCommand.split(Constants.COMMA)[0]); } return commandType; } /** * update sub process definition * * @param parentProcessInstance parentProcessInstance * @param childDefinitionCode childDefinitionCode */ private void updateSubProcessDefinitionByParent(ProcessInstance parentProcessInstance, long childDefinitionCode) { ProcessDefinition fatherDefinition = this.findProcessDefinition(parentProcessInstance.getProcessDefinitionCode(), parentProcessInstance.getProcessDefinitionVersion()); ProcessDefinition childDefinition = this.findProcessDefinitionByCode(childDefinitionCode); if (childDefinition != null && fatherDefinition != null) { childDefinition.setWarningGroupId(fatherDefinition.getWarningGroupId()); processDefineMapper.updateById(childDefinition); } } /** * submit task to mysql * * @param taskInstance taskInstance * @param processInstance processInstance * @return task instance */ public TaskInstance submitTaskInstanceToDB(TaskInstance taskInstance, ProcessInstance processInstance) { ExecutionStatus processInstanceState = processInstance.getState(); if (taskInstance.getState().typeIsFailure()) { if (taskInstance.isSubProcess()) { taskInstance.setRetryTimes(taskInstance.getRetryTimes() + 1); } else { if (processInstanceState != ExecutionStatus.READY_STOP && processInstanceState != ExecutionStatus.READY_PAUSE) { // failure task set invalid taskInstance.setFlag(Flag.NO); updateTaskInstance(taskInstance); // create new task instance if (taskInstance.getState() != ExecutionStatus.NEED_FAULT_TOLERANCE) { taskInstance.setRetryTimes(taskInstance.getRetryTimes() + 1); } taskInstance.setSubmitTime(null); taskInstance.setLogPath(null); taskInstance.setExecutePath(null); taskInstance.setStartTime(null); taskInstance.setEndTime(null); taskInstance.setFlag(Flag.YES); taskInstance.setHost(null); taskInstance.setId(0); } } } taskInstance.setExecutorId(processInstance.getExecutorId()); taskInstance.setProcessInstancePriority(processInstance.getProcessInstancePriority()); taskInstance.setState(getSubmitTaskState(taskInstance, processInstance)); if (taskInstance.getSubmitTime() == null) { taskInstance.setSubmitTime(new Date()); } if (taskInstance.getFirstSubmitTime() == null) { taskInstance.setFirstSubmitTime(taskInstance.getSubmitTime()); } boolean saveResult = saveTaskInstance(taskInstance); if (!saveResult) { return null; } return taskInstance; } /** * get submit task instance state by the work process state * cannot modify the task state when it is running/kill/submit success, or when this * task instance already exists in the task queue. * return pause if work process state is ready pause * return stop if work process state is ready stop * if none of the above are satisfied, return submit success * * @param taskInstance taskInstance * @param processInstance processInstance * @return process instance state */ public ExecutionStatus getSubmitTaskState(TaskInstance taskInstance, ProcessInstance processInstance) { ExecutionStatus state = taskInstance.getState(); // running, delayed or killed // the task already exists in task queue // return state if ( state == ExecutionStatus.RUNNING_EXECUTION || state == ExecutionStatus.DELAY_EXECUTION || state == ExecutionStatus.KILL ) { return state; } // return pause/stop if the process instance state is ready pause / stop // or return submit success if (processInstance.getState() == ExecutionStatus.READY_PAUSE) { state = ExecutionStatus.PAUSE; } else if (processInstance.getState() == ExecutionStatus.READY_STOP || !checkProcessStrategy(taskInstance, processInstance)) { state = ExecutionStatus.KILL; } else { state = ExecutionStatus.SUBMITTED_SUCCESS; } return state; } /** * check process instance strategy * * @param taskInstance taskInstance * @return check strategy result */ private boolean checkProcessStrategy(TaskInstance taskInstance, ProcessInstance processInstance) { FailureStrategy failureStrategy = processInstance.getFailureStrategy(); if (failureStrategy == FailureStrategy.CONTINUE) { return true; } List<TaskInstance> taskInstances = this.findValidTaskListByProcessId(taskInstance.getProcessInstanceId()); for (TaskInstance task : taskInstances) { if (task.getState() == ExecutionStatus.FAILURE && task.getRetryTimes() >= task.getMaxRetryTimes()) { return false; } } return true; } /** * insert or update work process instance to database * * @param processInstance processInstance */ public void saveProcessInstance(ProcessInstance processInstance) { if (processInstance == null) { logger.error("save error, process instance is null!"); return; } if (processInstance.getId() != 0) { processInstanceMapper.updateById(processInstance); } else { processInstanceMapper.insert(processInstance); } } /** * insert or update command * * @param command command * @return save command result */ public int saveCommand(Command command) { if (command.getId() != 0) { return commandMapper.updateById(command); } else { return commandMapper.insert(command); } } /** * insert or update task instance * * @param taskInstance taskInstance * @return save task instance result */ public boolean saveTaskInstance(TaskInstance taskInstance) { if (taskInstance.getId() != 0) { return updateTaskInstance(taskInstance); } else { return createTaskInstance(taskInstance); } } /** * insert task instance * * @param taskInstance taskInstance * @return create task instance result */ public boolean createTaskInstance(TaskInstance taskInstance) { int count = taskInstanceMapper.insert(taskInstance); return count > 0; } /** * update task instance * * @param taskInstance taskInstance * @return update task instance result */ public boolean updateTaskInstance(TaskInstance taskInstance) { int count = taskInstanceMapper.updateById(taskInstance); return count > 0; } /** * find task instance by id * * @param taskId task id * @return task instance */ public TaskInstance findTaskInstanceById(Integer taskId) { return taskInstanceMapper.selectById(taskId); } /** * package task instance */ public void packageTaskInstance(TaskInstance taskInstance, ProcessInstance processInstance) { taskInstance.setProcessInstance(processInstance); taskInstance.setProcessDefine(processInstance.getProcessDefinition()); TaskDefinition taskDefinition = this.findTaskDefinition( taskInstance.getTaskCode(), taskInstance.getTaskDefinitionVersion()); this.updateTaskDefinitionResources(taskDefinition); taskInstance.setTaskDefine(taskDefinition); } /** * Update {@link ResourceInfo} information in {@link TaskDefinition} * * @param taskDefinition the given {@link TaskDefinition} */ public void updateTaskDefinitionResources(TaskDefinition taskDefinition) { Map<String, Object> taskParameters = JSONUtils.parseObject( taskDefinition.getTaskParams(), new TypeReference<Map<String, Object>>() { }); if (taskParameters != null) { // if contains mainJar field, query resource from database // Flink, Spark, MR if (taskParameters.containsKey("mainJar")) { Object mainJarObj = taskParameters.get("mainJar"); ResourceInfo mainJar = JSONUtils.parseObject( JSONUtils.toJsonString(mainJarObj), ResourceInfo.class); ResourceInfo resourceInfo = updateResourceInfo(mainJar); if (resourceInfo != null) { taskParameters.put("mainJar", resourceInfo); } } // update resourceList information if (taskParameters.containsKey("resourceList")) { String resourceListStr = JSONUtils.toJsonString(taskParameters.get("resourceList")); List<ResourceInfo> resourceInfos = JSONUtils.toList(resourceListStr, ResourceInfo.class); List<ResourceInfo> updatedResourceInfos = resourceInfos .stream() .map(this::updateResourceInfo) .filter(Objects::nonNull) .collect(Collectors.toList()); taskParameters.put("resourceList", updatedResourceInfos); } // set task parameters taskDefinition.setTaskParams(JSONUtils.toJsonString(taskParameters)); } } /** * update {@link ResourceInfo} by given original ResourceInfo * * @param res origin resource info * @return {@link ResourceInfo} */ private ResourceInfo updateResourceInfo(ResourceInfo res) { ResourceInfo resourceInfo = null; // only if mainJar is not null and does not contain the "resourceName" field if (res != null) { int resourceId = res.getId(); if (resourceId <= 0) { logger.error("invalid resourceId, {}", resourceId); return null; } resourceInfo = new ResourceInfo(); // get resource from database, only one resource should be returned Resource resource = getResourceById(resourceId); resourceInfo.setId(resourceId); resourceInfo.setRes(resource.getFileName()); resourceInfo.setResourceName(resource.getFullName()); if (logger.isInfoEnabled()) { logger.info("updated resource info {}", JSONUtils.toJsonString(resourceInfo)); } } return resourceInfo; } /** * get id list by task state * * @param instanceId instanceId * @param state state * @return task instance id list */ public List<Integer> findTaskIdByInstanceState(int instanceId, ExecutionStatus state) { return taskInstanceMapper.queryTaskByProcessIdAndState(instanceId, state.ordinal()); } /** * find valid task list by process instance id * * @param processInstanceId processInstanceId * @return task instance list */ public List<TaskInstance> findValidTaskListByProcessId(Integer processInstanceId) { return taskInstanceMapper.findValidTaskListByProcessId(processInstanceId, Flag.YES); } /** * find previous task list by work process id * * @param processInstanceId processInstanceId *
@return task instance list */ public List<TaskInstance> findPreviousTaskListByWorkProcessId(Integer processInstanceId) { return taskInstanceMapper.findValidTaskListByProcessId(processInstanceId, Flag.NO); } /** * update work process instance map * * @param processInstanceMap processInstanceMap * @return update process instance result */ public int updateWorkProcessInstanceMap(ProcessInstanceMap processInstanceMap) { return processInstanceMapMapper.updateById(processInstanceMap); } /** * create work process instance map * * @param processInstanceMap processInstanceMap * @return create process instance result */ public int createWorkProcessInstanceMap(ProcessInstanceMap processInstanceMap) { int count = 0; if (processInstanceMap != null) { return processInstanceMapMapper.insert(processInstanceMap); } return count; } /** * find work process map by parent process id and parent task id. * * @param parentWorkProcessId parentWorkProcessId * @param parentTaskId parentTaskId * @return process instance map */ public ProcessInstanceMap findWorkProcessMapByParent(Integer parentWorkProcessId, Integer parentTaskId) { return processInstanceMapMapper.queryByParentId(parentWorkProcessId, parentTaskId); } /** * delete work process map by parent process id * * @param parentWorkProcessId parentWorkProcessId * @return delete process map result */ public int deleteWorkProcessMapByParentId(int parentWorkProcessId) { return processInstanceMapMapper.deleteByParentProcessId(parentWorkProcessId); } /** * find sub process instance * * @param parentProcessId parentProcessId * @param parentTaskId parentTaskId * @return process instance */ public ProcessInstance findSubProcessInstance(Integer parentProcessId, Integer parentTaskId) { ProcessInstance processInstance = null; ProcessInstanceMap processInstanceMap = processInstanceMapMapper.queryByParentId(parentProcessId, parentTaskId); if (processInstanceMap == null || processInstanceMap.getProcessInstanceId() == 0) { return processInstance; } processInstance = findProcessInstanceById(processInstanceMap.getProcessInstanceId()); return processInstance; } /** * find parent process instance * * @param subProcessId subProcessId * @return process instance */ public ProcessInstance findParentProcessInstance(Integer subProcessId) { ProcessInstance processInstance = null; ProcessInstanceMap processInstanceMap = processInstanceMapMapper.queryBySubProcessId(subProcessId); if (processInstanceMap == null || processInstanceMap.getProcessInstanceId() == 0) { return processInstance; } processInstance = findProcessInstanceById(processInstanceMap.getParentProcessInstanceId()); return processInstance; } /** * change task state * * @param state state * @param startTime startTime * @param host host * @param executePath executePath * @param logPath logPath */ public void changeTaskState(TaskInstance taskInstance, ExecutionStatus state, Date startTime, String host, String executePath, String logPath) { taskInstance.setState(state); taskInstance.setStartTime(startTime); taskInstance.setHost(host); taskInstance.setExecutePath(executePath); taskInstance.setLogPath(logPath); saveTaskInstance(taskInstance); } /** * update process instance * * @param processInstance processInstance * @return update process instance result */ public int updateProcessInstance(ProcessInstance processInstance) { return processInstanceMapper.updateById(processInstance); } /** * change task state * * @param state state * @param endTime endTime * @param varPool varPool */ public void changeTaskState(TaskInstance 
taskInstance, ExecutionStatus state, Date endTime, int processId, String appIds, String varPool) { taskInstance.setPid(processId); taskInstance.setAppLink(appIds); taskInstance.setState(state); taskInstance.setEndTime(endTime); taskInstance.setVarPool(varPool); changeOutParam(taskInstance); saveTaskInstance(taskInstance); } /** * for display on the taskInstance page */ public void changeOutParam(TaskInstance taskInstance) { if (StringUtils.isEmpty(taskInstance.getVarPool())) { return; } List<Property> properties = JSONUtils.toList(taskInstance.getVarPool(), Property.class); if (CollectionUtils.isEmpty(properties)) { return; } // if the result has more than one line, just take the first one Map<String, Object> taskParams = JSONUtils.parseObject(taskInstance.getTaskParams(), new TypeReference<Map<String, Object>>() { }); Object localParams = taskParams.get(LOCAL_PARAMS); if (localParams == null) { return; } List<Property> allParam = JSONUtils.toList(JSONUtils.toJsonString(localParams), Property.class); Map<String, String> outProperty = new HashMap<>(); for (Property info : properties) { if (info.getDirect() == Direct.OUT) { outProperty.put(info.getProp(), info.getValue()); } } for (Property info : allParam) { if (info.getDirect() == Direct.OUT) { String paramName = info.getProp(); info.setValue(outProperty.get(paramName)); } } taskParams.put(LOCAL_PARAMS, allParam); taskInstance.setTaskParams(JSONUtils.toJsonString(taskParams)); } /** * convert integer list to string list * * @param intList intList * @return string list */ public List<String> convertIntListToString(List<Integer> intList) { if (intList == null) { return new ArrayList<>(); } List<String> result = new ArrayList<>(intList.size()); for (Integer intVar : intList) { result.add(String.valueOf(intVar)); } return result; } /** * query schedule by id * * @param id id * @return schedule */ public Schedule querySchedule(int id) { return scheduleMapper.selectById(id); } /** * query Schedule by processDefinitionCode * * @param processDefinitionCode processDefinitionCode * @see Schedule */ public List<Schedule> queryReleaseSchedulerListByProcessDefinitionCode(long processDefinitionCode) { return scheduleMapper.queryReleaseSchedulerListByProcessDefinitionCode(processDefinitionCode); } /** * query need failover process instance * * @param host host * @return process instance list */ public List<ProcessInstance> queryNeedFailoverProcessInstances(String host) { return processInstanceMapper.queryByHostAndStatus(host, stateArray); } /** * process need failover process instance * * @param processInstance processInstance */ @Transactional(rollbackFor = RuntimeException.class) public void processNeedFailoverProcessInstances(ProcessInstance processInstance) { // 1. update processInstance, set host to null processInstance.setHost(Constants.NULL); processInstanceMapper.updateById(processInstance); ProcessDefinition processDefinition = findProcessDefinition(processInstance.getProcessDefinitionCode(), processInstance.getProcessDefinitionVersion()); // 2. insert a recover command Command cmd = new Command(); cmd.setProcessDefinitionCode(processDefinition.getCode()); cmd.setProcessDefinitionVersion(processDefinition.getVersion()); cmd.setProcessInstanceId(processInstance.getId()); cmd.setCommandParam(String.format("{\"%s\":%d}", Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING, processInstance.getId())); cmd.setExecutorId(processInstance.getExecutorId()); cmd.setCommandType(CommandType.RECOVER_TOLERANCE_FAULT_PROCESS); createCommand(cmd); } /** * query all need failover task instances by host * * @param host host * @return task instance list */ public List<TaskInstance> queryNeedFailoverTaskInstances(String host) { return taskInstanceMapper.queryByHostAndStatus(host, stateArray); } /** * find data source by id * * @param id id * @return datasource */ public DataSource findDataSourceById(int id) { return dataSourceMapper.selectById(id); } /** * update process instance state by id * * @param processInstanceId processInstanceId * @param executionStatus executionStatus * @return update process result */ public int updateProcessInstanceState(Integer processInstanceId, ExecutionStatus executionStatus) { ProcessInstance instance = processInstanceMapper.selectById(processInstanceId); instance.setState(executionStatus); return processInstanceMapper.updateById(instance); } /** * find process instance by the task id * * @param taskId taskId * @return process instance */ public ProcessInstance findProcessInstanceByTaskId(int taskId) { TaskInstance taskInstance = taskInstanceMapper.selectById(taskId); if (taskInstance != null) { return processInstanceMapper.selectById(taskInstance.getProcessInstanceId()); } return null; } /** * find udf function list by id list string * * @param ids ids * @return udf function list */ public List<UdfFunc> queryUdfFunListByIds(int[] ids) { return udfFuncMapper.queryUdfByIdStr(ids, null); } /** * find tenant code by resource name * * @param resName resource name * @param resourceType resource type * @return tenant code */ public String queryTenantCodeByResName(String resName, ResourceType resourceType) { // prepend a leading slash so the tenant code can still be resolved for resource names saved by older versions String fullName = resName.startsWith("/") ? resName : String.format("/%s", resName); List<Resource> resourceList = resourceMapper.queryResource(fullName, resourceType.ordinal()); if (CollectionUtils.isEmpty(resourceList)) { return StringUtils.EMPTY; } int userId = resourceList.get(0).getUserId(); User user = userMapper.selectById(userId); if (Objects.isNull(user)) { return StringUtils.EMPTY; } Tenant tenant = tenantMapper.queryById(user.getTenantId()); if (Objects.isNull(tenant)) { return StringUtils.EMPTY; } return tenant.getTenantCode(); } /** * find schedule list by process define codes.
* * @param codes codes * @return schedule list */ public List<Schedule> selectAllByProcessDefineCode(long[] codes) { return scheduleMapper.selectAllByProcessDefineArray(codes); } /** * find last scheduler process instance in the date interval * * @param definitionCode definitionCode * @param dateInterval dateInterval * @return process instance */ public ProcessInstance findLastSchedulerProcessInterval(Long definitionCode, DateInterval dateInterval) { return processInstanceMapper.queryLastSchedulerProcess(definitionCode, dateInterval.getStartTime(), dateInterval.getEndTime()); } /** * find last manual process instance interval * * @param definitionCode process definition code * @param dateInterval dateInterval * @return process instance */ public ProcessInstance findLastManualProcessInterval(Long definitionCode, DateInterval dateInterval) { return processInstanceMapper.queryLastManualProcess(definitionCode, dateInterval.getStartTime(), dateInterval.getEndTime()); } /** * find last running process instance * * @param definitionCode process definition code * @param startTime start time * @param endTime end time * @return process instance */ public ProcessInstance findLastRunningProcess(Long definitionCode, Date startTime, Date endTime) { return processInstanceMapper.queryLastRunningProcess(definitionCode, startTime, endTime, stateArray); } /** * query user queue by process instance * * @param processInstance processInstance * @return queue */ public String queryUserQueueByProcessInstance(ProcessInstance processInstance) { String queue = ""; if (processInstance == null) { return queue; } User executor = userMapper.selectById(processInstance.getExecutorId()); if (executor != null) { queue = executor.getQueue(); } return queue; } /** * query project name and user name by processInstanceId. 
* * @param processInstanceId processInstanceId * @return projectName and userName */ public ProjectUser queryProjectWithUserByProcessInstanceId(int processInstanceId) { return projectMapper.queryProjectWithUserByProcessInstanceId(processInstanceId); } /** * get task worker group * * @param taskInstance taskInstance * @return workerGroupId */ public String getTaskWorkerGroup(TaskInstance taskInstance) { String workerGroup = taskInstance.getWorkerGroup(); if (StringUtils.isNotBlank(workerGroup)) { return workerGroup; } int processInstanceId = taskInstance.getProcessInstanceId(); ProcessInstance processInstance = findProcessInstanceById(processInstanceId); if (processInstance != null) { return processInstance.getWorkerGroup(); } logger.info("task : {} will use default worker group", taskInstance.getId()); return Constants.DEFAULT_WORKER_GROUP; } /** * get have perm project list * * @param userId userId * @return project list */ public List<Project> getProjectListHavePerm(int userId) { List<Project> createProjects = projectMapper.queryProjectCreatedByUser(userId); List<Project> authedProjects = projectMapper.queryAuthedProjectListByUserId(userId); if (createProjects == null) { createProjects = new ArrayList<>(); } if (authedProjects != null) { createProjects.addAll(authedProjects); } return createProjects; } /** * list unauthorized udf function * * @param userId user id * @param needChecks data source id array * @return unauthorized udf function list */ public <T> List<T> listUnauthorized(int userId, T[] needChecks, AuthorizationType authorizationType) { List<T> resultList = new ArrayList<>(); if (Objects.nonNull(needChecks) && needChecks.length > 0) { Set<T> originResSet = new HashSet<>(Arrays.asList(needChecks)); switch (authorizationType) { case RESOURCE_FILE_ID: case UDF_FILE: List<Resource> ownUdfResources = resourceMapper.listAuthorizedResourceById(userId, needChecks); addAuthorizedResources(ownUdfResources, userId); Set<Integer> authorizedResourceFiles = ownUdfResources.stream().map(Resource::getId).collect(toSet()); originResSet.removeAll(authorizedResourceFiles); break; case RESOURCE_FILE_NAME: List<Resource> ownResources = resourceMapper.listAuthorizedResource(userId, needChecks); addAuthorizedResources(ownResources, userId); Set<String> authorizedResources = ownResources.stream().map(Resource::getFullName).collect(toSet()); originResSet.removeAll(authorizedResources); break; case DATASOURCE: Set<Integer> authorizedDatasources = dataSourceMapper.listAuthorizedDataSource(userId, needChecks).stream().map(DataSource::getId).collect(toSet()); originResSet.removeAll(authorizedDatasources); break; case UDF: Set<Integer> authorizedUdfs = udfFuncMapper.listAuthorizedUdfFunc(userId, needChecks).stream().map(UdfFunc::getId).collect(toSet()); originResSet.removeAll(authorizedUdfs); break; default: break; } resultList.addAll(originResSet); } return resultList; } /** * get user by user id * * @param userId user id * @return User */ public User getUserById(int userId) { return userMapper.selectById(userId); } /** * get resource by resource id * * @param resourceId resource id * @return Resource */ public Resource getResourceById(int resourceId) { return resourceMapper.selectById(resourceId); } /** * list resources by ids * * @param resIds resIds * @return resource list */ public List<Resource> listResourceByIds(Integer[] resIds) { return resourceMapper.listResourceByIds(resIds); } /** * format task app id in task instance */ public String formatTaskAppId(TaskInstance taskInstance) { 
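// the resulting app id has the shape <processDefinitionId>_<processInstanceId>_<taskInstanceId>, built by the format call below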
ProcessInstance processInstance = findProcessInstanceById(taskInstance.getProcessInstanceId()); if (processInstance == null) { return ""; } ProcessDefinition definition = findProcessDefinition(processInstance.getProcessDefinitionCode(), processInstance.getProcessDefinitionVersion()); if (definition == null) { return ""; } return String.format("%s_%s_%s", definition.getId(), processInstance.getId(), taskInstance.getId()); } /** * switch process definition version to process definition log version */ public int switchVersion(ProcessDefinition processDefinition, ProcessDefinitionLog processDefinitionLog) { if (null == processDefinition || null == processDefinitionLog) { return Constants.DEFINITION_FAILURE; } processDefinitionLog.setId(processDefinition.getId()); processDefinitionLog.setReleaseState(ReleaseState.OFFLINE); processDefinitionLog.setFlag(Flag.YES); int result = processDefineMapper.updateById(processDefinitionLog); if (result > 0) { result = switchProcessTaskRelationVersion(processDefinitionLog); if (result <= 0) { return Constants.DEFINITION_FAILURE; } } return result; } public int switchProcessTaskRelationVersion(ProcessDefinition processDefinition) { List<ProcessTaskRelation> processTaskRelationList = processTaskRelationMapper.queryByProcessCode(processDefinition.getProjectCode(), processDefinition.getCode()); if (!processTaskRelationList.isEmpty()) { processTaskRelationMapper.deleteByCode(processDefinition.getProjectCode(), processDefinition.getCode()); } List<ProcessTaskRelationLog> processTaskRelationLogList = processTaskRelationLogMapper.queryByProcessCodeAndVersion(processDefinition.getCode(), processDefinition.getVersion()); return processTaskRelationMapper.batchInsert(processTaskRelationLogList); } /** * get resource ids * * @param taskDefinition taskDefinition * @return resource ids */ public String getResourceIds(TaskDefinition taskDefinition) { Set<Integer> resourceIds = null; AbstractParameters params = TaskParametersUtils.getParameters(taskDefinition.getTaskType(), taskDefinition.getTaskParams()); if (params != null && CollectionUtils.isNotEmpty(params.getResourceFilesList())) { resourceIds = params.getResourceFilesList(). 
stream() .filter(t -> t.getId() != 0) .map(ResourceInfo::getId) .collect(Collectors.toSet()); } if (CollectionUtils.isEmpty(resourceIds)) { return StringUtils.EMPTY; } return StringUtils.join(resourceIds, ","); } public int saveTaskDefine(User operator, long projectCode, List<TaskDefinitionLog> taskDefinitionLogs) { Date now = new Date(); List<TaskDefinitionLog> newTaskDefinitionLogs = new ArrayList<>(); List<TaskDefinitionLog> updateTaskDefinitionLogs = new ArrayList<>(); for (TaskDefinitionLog taskDefinitionLog : taskDefinitionLogs) { taskDefinitionLog.setProjectCode(projectCode); taskDefinitionLog.setUpdateTime(now); taskDefinitionLog.setOperateTime(now); taskDefinitionLog.setOperator(operator.getId()); taskDefinitionLog.setResourceIds(getResourceIds(taskDefinitionLog)); if (taskDefinitionLog.getCode() > 0 && taskDefinitionLog.getVersion() > 0) { TaskDefinitionLog definitionCodeAndVersion = taskDefinitionLogMapper .queryByDefinitionCodeAndVersion(taskDefinitionLog.getCode(), taskDefinitionLog.getVersion()); if (definitionCodeAndVersion != null) { if (!taskDefinitionLog.equals(definitionCodeAndVersion)) { taskDefinitionLog.setUserId(definitionCodeAndVersion.getUserId()); Integer version = taskDefinitionLogMapper.queryMaxVersionForDefinition(taskDefinitionLog.getCode()); taskDefinitionLog.setVersion(version + 1); taskDefinitionLog.setCreateTime(definitionCodeAndVersion.getCreateTime()); updateTaskDefinitionLogs.add(taskDefinitionLog); } continue; } } taskDefinitionLog.setUserId(operator.getId()); taskDefinitionLog.setVersion(Constants.VERSION_FIRST); taskDefinitionLog.setCreateTime(now); if (taskDefinitionLog.getCode() == 0) { try { taskDefinitionLog.setCode(CodeGenerateUtils.getInstance().genCode()); } catch (CodeGenerateException e) { logger.error("Task code get error, ", e); return Constants.DEFINITION_FAILURE; } } newTaskDefinitionLogs.add(taskDefinitionLog); } int insertResult = 0; int updateResult = 0; for (TaskDefinitionLog taskDefinitionToUpdate : updateTaskDefinitionLogs) { TaskDefinition task = taskDefinitionMapper.queryByCode(taskDefinitionToUpdate.getCode()); if (task == null) { newTaskDefinitionLogs.add(taskDefinitionToUpdate); } else { insertResult += taskDefinitionLogMapper.insert(taskDefinitionToUpdate); taskDefinitionToUpdate.setId(task.getId()); updateResult += taskDefinitionMapper.updateById(taskDefinitionToUpdate); } } if (!newTaskDefinitionLogs.isEmpty()) { updateResult += taskDefinitionMapper.batchInsert(newTaskDefinitionLogs); insertResult += taskDefinitionLogMapper.batchInsert(newTaskDefinitionLogs); } return (insertResult & updateResult) > 0 ? 1 : Constants.EXIT_CODE_SUCCESS; } /** * save processDefinition (including create or update processDefinition) */ public int saveProcessDefine(User operator, ProcessDefinition processDefinition, Boolean isFromProcessDefine) { ProcessDefinitionLog processDefinitionLog = new ProcessDefinitionLog(processDefinition); Integer version = processDefineLogMapper.queryMaxVersionForDefinition(processDefinition.getCode()); int insertVersion = version == null || version == 0 ? Constants.VERSION_FIRST : version + 1; processDefinitionLog.setVersion(insertVersion); processDefinitionLog.setReleaseState(isFromProcessDefine ? 
ReleaseState.OFFLINE : ReleaseState.ONLINE); processDefinitionLog.setOperator(operator.getId()); processDefinitionLog.setOperateTime(processDefinition.getUpdateTime()); int insertLog = processDefineLogMapper.insert(processDefinitionLog); int result; if (0 == processDefinition.getId()) { result = processDefineMapper.insert(processDefinitionLog); } else { processDefinitionLog.setId(processDefinition.getId()); result = processDefineMapper.updateById(processDefinitionLog); } return (insertLog & result) > 0 ? insertVersion : 0; } /** * save task relations */ public int saveTaskRelation(User operator, long projectCode, long processDefinitionCode, int processDefinitionVersion, List<ProcessTaskRelationLog> taskRelationList, List<TaskDefinitionLog> taskDefinitionLogs) { if (taskRelationList.isEmpty()) { return Constants.EXIT_CODE_SUCCESS; } Map<Long, TaskDefinitionLog> taskDefinitionLogMap = null; if (CollectionUtils.isNotEmpty(taskDefinitionLogs)) { taskDefinitionLogMap = taskDefinitionLogs.stream() .collect(Collectors.toMap(TaskDefinition::getCode, taskDefinitionLog -> taskDefinitionLog)); } Date now = new Date(); for (ProcessTaskRelationLog processTaskRelationLog : taskRelationList) { processTaskRelationLog.setProjectCode(projectCode); processTaskRelationLog.setProcessDefinitionCode(processDefinitionCode); processTaskRelationLog.setProcessDefinitionVersion(processDefinitionVersion); if (taskDefinitionLogMap != null) { TaskDefinitionLog taskDefinitionLog = taskDefinitionLogMap.get(processTaskRelationLog.getPreTaskCode()); if (taskDefinitionLog != null) { processTaskRelationLog.setPreTaskVersion(taskDefinitionLog.getVersion()); } processTaskRelationLog.setPostTaskVersion(taskDefinitionLogMap.get(processTaskRelationLog.getPostTaskCode()).getVersion()); } processTaskRelationLog.setCreateTime(now); processTaskRelationLog.setUpdateTime(now); processTaskRelationLog.setOperator(operator.getId()); processTaskRelationLog.setOperateTime(now); } List<ProcessTaskRelation> processTaskRelationList = processTaskRelationMapper.queryByProcessCode(projectCode, processDefinitionCode); if (!processTaskRelationList.isEmpty()) { Set<Integer> processTaskRelationSet = processTaskRelationList.stream().map(ProcessTaskRelation::hashCode).collect(toSet()); Set<Integer> taskRelationSet = taskRelationList.stream().map(ProcessTaskRelationLog::hashCode).collect(toSet()); boolean result = CollectionUtils.isEqualCollection(processTaskRelationSet, taskRelationSet); if (result) { return Constants.EXIT_CODE_SUCCESS; } processTaskRelationMapper.deleteByCode(projectCode, processDefinitionCode); } int result = processTaskRelationMapper.batchInsert(taskRelationList); int resultLog = processTaskRelationLogMapper.batchInsert(taskRelationList); return (result & resultLog) > 0 ? 
Constants.EXIT_CODE_SUCCESS : Constants.EXIT_CODE_FAILURE; } public boolean isTaskOnline(long taskCode) { List<ProcessTaskRelation> processTaskRelationList = processTaskRelationMapper.queryByTaskCode(taskCode); if (!processTaskRelationList.isEmpty()) { Set<Long> processDefinitionCodes = processTaskRelationList .stream() .map(ProcessTaskRelation::getProcessDefinitionCode) .collect(Collectors.toSet()); List<ProcessDefinition> processDefinitionList = processDefineMapper.queryByCodes(processDefinitionCodes); // check process definition is already online for (ProcessDefinition processDefinition : processDefinitionList) { if (processDefinition.getReleaseState() == ReleaseState.ONLINE) { return true; } } } return false; } /** * Generate the DAG Graph based on the process definition id * * @param processDefinition process definition * @return dag graph */ public DAG<String, TaskNode, TaskNodeRelation> genDagGraph(ProcessDefinition processDefinition) { List<ProcessTaskRelation> processTaskRelations = processTaskRelationMapper.queryByProcessCode(processDefinition.getProjectCode(), processDefinition.getCode()); List<TaskNode> taskNodeList = transformTask(processTaskRelations, Lists.newArrayList()); ProcessDag processDag = DagHelper.getProcessDag(taskNodeList, new ArrayList<>(processTaskRelations)); // Generate concrete Dag to be executed return DagHelper.buildDagGraph(processDag); } /** * generate DagData */ public DagData genDagData(ProcessDefinition processDefinition) { List<ProcessTaskRelation> processTaskRelations = processTaskRelationMapper.queryByProcessCode(processDefinition.getProjectCode(), processDefinition.getCode()); List<TaskDefinitionLog> taskDefinitionLogList = genTaskDefineList(processTaskRelations); List<TaskDefinition> taskDefinitions = taskDefinitionLogList.stream() .map(taskDefinitionLog -> JSONUtils.parseObject(JSONUtils.toJsonString(taskDefinitionLog), TaskDefinition.class)) .collect(Collectors.toList()); return new DagData(processDefinition, processTaskRelations, taskDefinitions); } public List<TaskDefinitionLog> genTaskDefineList(List<ProcessTaskRelation> processTaskRelations) { Set<TaskDefinition> taskDefinitionSet = new HashSet<>(); for (ProcessTaskRelation processTaskRelation : processTaskRelations) { if (processTaskRelation.getPreTaskCode() > 0) { taskDefinitionSet.add(new TaskDefinition(processTaskRelation.getPreTaskCode(), processTaskRelation.getPreTaskVersion())); } if (processTaskRelation.getPostTaskCode() > 0) { taskDefinitionSet.add(new TaskDefinition(processTaskRelation.getPostTaskCode(), processTaskRelation.getPostTaskVersion())); } } if (taskDefinitionSet.isEmpty()) { return Lists.newArrayList(); } return taskDefinitionLogMapper.queryByTaskDefinitions(taskDefinitionSet); } public List<TaskDefinitionLog> getTaskDefineLogListByRelation(List<ProcessTaskRelation> processTaskRelations) { List<TaskDefinitionLog> taskDefinitionLogs = new ArrayList<>(); Map<Long, Integer> taskCodeVersionMap = new HashMap<>(); for (ProcessTaskRelation processTaskRelation : processTaskRelations) { if (processTaskRelation.getPreTaskCode() > 0) { taskCodeVersionMap.put(processTaskRelation.getPreTaskCode(), processTaskRelation.getPreTaskVersion()); } if (processTaskRelation.getPostTaskCode() > 0) { taskCodeVersionMap.put(processTaskRelation.getPostTaskCode(), processTaskRelation.getPostTaskVersion()); } } taskCodeVersionMap.forEach((code,version) -> { taskDefinitionLogs.add((TaskDefinitionLog) this.findTaskDefinition(code, version)); }); return taskDefinitionLogs; } /** * find task definition 
by code and version */ public TaskDefinition findTaskDefinition(long taskCode, int taskDefinitionVersion) { return taskDefinitionLogMapper.queryByDefinitionCodeAndVersion(taskCode, taskDefinitionVersion); } /** * find process task relation list by projectCode and processDefinitionCode */ public List<ProcessTaskRelation> findRelationByCode(long projectCode, long processDefinitionCode) { return processTaskRelationMapper.queryByProcessCode(projectCode, processDefinitionCode); } /** * add authorized resources * * @param ownResources own resources * @param userId userId */ private void addAuthorizedResources(List<Resource> ownResources, int userId) { List<Integer> relationResourceIds = resourceUserMapper.queryResourcesIdListByUserIdAndPerm(userId, 7); List<Resource> relationResources = CollectionUtils.isNotEmpty(relationResourceIds) ? resourceMapper.queryResourceListById(relationResourceIds) : new ArrayList<>(); ownResources.addAll(relationResources); } /** * Use temporarily before refactoring taskNode */ public List<TaskNode> transformTask(List<ProcessTaskRelation> taskRelationList, List<TaskDefinitionLog> taskDefinitionLogs) { Map<Long, List<Long>> taskCodeMap = new HashMap<>(); for (ProcessTaskRelation processTaskRelation : taskRelationList) { taskCodeMap.compute(processTaskRelation.getPostTaskCode(), (k, v) -> { if (v == null) { v = new ArrayList<>(); } if (processTaskRelation.getPreTaskCode() != 0L) { v.add(processTaskRelation.getPreTaskCode()); } return v; }); } if (CollectionUtils.isEmpty(taskDefinitionLogs)) { taskDefinitionLogs = genTaskDefineList(taskRelationList); } Map<Long, TaskDefinitionLog> taskDefinitionLogMap = taskDefinitionLogs.stream() .collect(Collectors.toMap(TaskDefinitionLog::getCode, taskDefinitionLog -> taskDefinitionLog)); List<TaskNode> taskNodeList = new ArrayList<>(); for (Entry<Long, List<Long>> code : taskCodeMap.entrySet()) { TaskDefinitionLog taskDefinitionLog = taskDefinitionLogMap.get(code.getKey()); if (taskDefinitionLog != null) { TaskNode taskNode = new TaskNode(); taskNode.setCode(taskDefinitionLog.getCode()); taskNode.setVersion(taskDefinitionLog.getVersion()); taskNode.setName(taskDefinitionLog.getName()); taskNode.setDesc(taskDefinitionLog.getDescription()); taskNode.setType(taskDefinitionLog.getTaskType().toUpperCase()); taskNode.setRunFlag(taskDefinitionLog.getFlag() == Flag.YES ? 
Constants.FLOWNODE_RUN_FLAG_NORMAL : Constants.FLOWNODE_RUN_FLAG_FORBIDDEN); taskNode.setMaxRetryTimes(taskDefinitionLog.getFailRetryTimes()); taskNode.setRetryInterval(taskDefinitionLog.getFailRetryInterval()); Map<String, Object> taskParamsMap = taskNode.taskParamsToJsonObj(taskDefinitionLog.getTaskParams()); taskNode.setConditionResult(JSONUtils.toJsonString(taskParamsMap.get(Constants.CONDITION_RESULT))); taskNode.setSwitchResult(JSONUtils.toJsonString(taskParamsMap.get(Constants.SWITCH_RESULT))); taskNode.setDependence(JSONUtils.toJsonString(taskParamsMap.get(Constants.DEPENDENCE))); taskParamsMap.remove(Constants.CONDITION_RESULT); taskParamsMap.remove(Constants.DEPENDENCE); taskNode.setParams(JSONUtils.toJsonString(taskParamsMap)); taskNode.setTaskInstancePriority(taskDefinitionLog.getTaskPriority()); taskNode.setWorkerGroup(taskDefinitionLog.getWorkerGroup()); taskNode.setEnvironmentCode(taskDefinitionLog.getEnvironmentCode()); taskNode.setTimeout(JSONUtils.toJsonString(new TaskTimeoutParameter(taskDefinitionLog.getTimeoutFlag() == TimeoutFlag.OPEN, taskDefinitionLog.getTimeoutNotifyStrategy(), taskDefinitionLog.getTimeout()))); taskNode.setDelayTime(taskDefinitionLog.getDelayTime()); taskNode.setPreTasks(JSONUtils.toJsonString(code.getValue().stream().map(taskDefinitionLogMap::get).map(TaskDefinition::getCode).collect(Collectors.toList()))); taskNode.setTaskGroupId(taskDefinitionLog.getTaskGroupId()); taskNode.setTaskGroupPriority(taskDefinitionLog.getTaskGroupPriority()); taskNodeList.add(taskNode); } } return taskNodeList; } public Map<ProcessInstance, TaskInstance> notifyProcessList(int processId) { HashMap<ProcessInstance, TaskInstance> processTaskMap = new HashMap<>(); //find sub tasks ProcessInstanceMap processInstanceMap = processInstanceMapMapper.queryBySubProcessId(processId); if (processInstanceMap == null) { return processTaskMap; } ProcessInstance fatherProcess = this.findProcessInstanceById(processInstanceMap.getParentProcessInstanceId()); TaskInstance fatherTask = this.findTaskInstanceById(processInstanceMap.getParentTaskInstanceId()); if (fatherProcess != null) { processTaskMap.put(fatherProcess, fatherTask); } return processTaskMap; } /** * the first time (when submit the task ) get the resource of the task group * @param taskId task id * @param taskName * @param groupId * @param processId * @param priority * @return */ public boolean acquireTaskGroup(int taskId, String taskName, int groupId, int processId, int priority) { TaskGroup taskGroup = taskGroupMapper.selectById(groupId); if (taskGroup == null) { return true; } // if task group is not applicable if (taskGroup.getStatus() == Flag.NO.getCode()) { return true; } TaskGroupQueue taskGroupQueue = this.taskGroupQueueMapper.queryByTaskId(taskId); if (taskGroupQueue == null) { taskGroupQueue = insertIntoTaskGroupQueue(taskId, taskName, groupId, processId, priority, TaskGroupQueueStatus.WAIT_QUEUE); } else { if (taskGroupQueue.getStatus() == TaskGroupQueueStatus.ACQUIRE_SUCCESS) { return true; } taskGroupQueue.setInQueue(Flag.NO.getCode()); taskGroupQueue.setStatus(TaskGroupQueueStatus.WAIT_QUEUE); this.taskGroupQueueMapper.updateById(taskGroupQueue); } //check priority List<TaskGroupQueue> highPriorityTasks = taskGroupQueueMapper.queryHighPriorityTasks(groupId, priority, TaskGroupQueueStatus.WAIT_QUEUE.getCode()); if (CollectionUtils.isNotEmpty(highPriorityTasks)) { this.taskGroupQueueMapper.updateInQueue(Flag.NO.getCode(), taskGroupQueue.getId()); return false; } //try to get taskGroup int count = 
taskGroupMapper.selectAvailableCountById(groupId); if (count == 1 && robTaskGroupResouce(taskGroupQueue)) { return true; } this.taskGroupQueueMapper.updateInQueue(Flag.NO.getCode(), taskGroupQueue.getId()); return false; } /** * try to get the task group resource(when other task release the resource) * @param taskGroupQueue * @return */ public boolean robTaskGroupResouce(TaskGroupQueue taskGroupQueue) { TaskGroup taskGroup = taskGroupMapper.selectById(taskGroupQueue.getGroupId()); int affectedCount = taskGroupMapper.updateTaskGroupResource(taskGroup.getId(),taskGroupQueue.getId(), TaskGroupQueueStatus.WAIT_QUEUE.getCode()); if (affectedCount > 0) { taskGroupQueue.setStatus(TaskGroupQueueStatus.ACQUIRE_SUCCESS); this.taskGroupQueueMapper.updateById(taskGroupQueue); this.taskGroupQueueMapper.updateInQueue(Flag.NO.getCode(), taskGroupQueue.getId()); return true; } return false; } public boolean acquireTaskGroupAgain(TaskGroupQueue taskGroupQueue) { return robTaskGroupResouce(taskGroupQueue); } public void releaseAllTaskGroup(int processInstanceId) { List<TaskInstance> taskInstances = this.taskInstanceMapper.loadAllInfosNoRelease(processInstanceId, TaskGroupQueueStatus.ACQUIRE_SUCCESS.getCode()); for (TaskInstance info : taskInstances) { releaseTaskGroup(info); } } /** * release the TGQ resource when the corresponding task is finished. * * @return the result code and msg */ public TaskInstance releaseTaskGroup(TaskInstance taskInstance) { TaskGroup taskGroup = taskGroupMapper.selectById(taskInstance.getTaskGroupId()); if (taskGroup == null) { return null; } TaskGroupQueue thisTaskGroupQueue = this.taskGroupQueueMapper.queryByTaskId(taskInstance.getId()); if (thisTaskGroupQueue.getStatus() == TaskGroupQueueStatus.RELEASE) { return null; } try { while (taskGroupMapper.releaseTaskGroupResource(taskGroup.getId(), taskGroup.getUseSize() , thisTaskGroupQueue.getId(), TaskGroupQueueStatus.ACQUIRE_SUCCESS.getCode()) != 1) { thisTaskGroupQueue = this.taskGroupQueueMapper.queryByTaskId(taskInstance.getId()); if (thisTaskGroupQueue.getStatus() == TaskGroupQueueStatus.RELEASE) { return null; } taskGroup = taskGroupMapper.selectById(taskInstance.getTaskGroupId()); } } catch (Exception e) { logger.error("release the task group error",e); } logger.info("updateTask:{}",taskInstance.getName()); changeTaskGroupQueueStatus(taskInstance.getId(), TaskGroupQueueStatus.RELEASE); TaskGroupQueue taskGroupQueue = this.taskGroupQueueMapper.queryTheHighestPriorityTasks(taskGroup.getId(), TaskGroupQueueStatus.WAIT_QUEUE.getCode(), Flag.NO.getCode(), Flag.NO.getCode()); if (taskGroupQueue == null) { return null; } while (this.taskGroupQueueMapper.updateInQueueCAS(Flag.NO.getCode(), Flag.YES.getCode(), taskGroupQueue.getId()) != 1) { taskGroupQueue = this.taskGroupQueueMapper.queryTheHighestPriorityTasks(taskGroup.getId(), TaskGroupQueueStatus.WAIT_QUEUE.getCode(), Flag.NO.getCode(), Flag.NO.getCode()); if (taskGroupQueue == null) { return null; } } return this.taskInstanceMapper.selectById(taskGroupQueue.getTaskId()); } /** * release the TGQ resource when the corresponding task is finished. 
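* (TGQ is shorthand for the task group queue)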
* * @param taskId task id * @return the result code and msg */ public void changeTaskGroupQueueStatus(int taskId, TaskGroupQueueStatus status) { TaskGroupQueue taskGroupQueue = taskGroupQueueMapper.queryByTaskId(taskId); taskGroupQueue.setStatus(status); taskGroupQueue.setUpdateTime(new Date(System.currentTimeMillis())); taskGroupQueueMapper.updateById(taskGroupQueue); } /** * insert into task group queue * * @param taskId task id * @param taskName task name * @param groupId group id * @param processId process id * @param priority priority * @return result and msg code */ public TaskGroupQueue insertIntoTaskGroupQueue(Integer taskId, String taskName, Integer groupId, Integer processId, Integer priority, TaskGroupQueueStatus status) { TaskGroupQueue taskGroupQueue = new TaskGroupQueue(taskId, taskName, groupId, processId, priority, status); taskGroupQueueMapper.insert(taskGroupQueue); return taskGroupQueue; } public int updateTaskGroupQueueStatus(Integer taskId, int status) { return taskGroupQueueMapper.updateStatusByTaskId(taskId, status); } public int updateTaskGroupQueue(TaskGroupQueue taskGroupQueue) { return taskGroupQueueMapper.updateById(taskGroupQueue); } public TaskGroupQueue loadTaskGroupQueue(int taskId) { return this.taskGroupQueueMapper.queryByTaskId(taskId); } public void sendStartTask2Master(ProcessInstance processInstance,int taskId, org.apache.dolphinscheduler.remote.command.CommandType taskType) { String host = processInstance.getHost(); String address = host.split(":")[0]; int port = Integer.parseInt(host.split(":")[1]); TaskEventChangeCommand taskEventChangeCommand = new TaskEventChangeCommand( processInstance.getId(), taskId ); stateEventCallbackService.sendResult(address, port, taskEventChangeCommand.convert2Command(taskType)); } public ProcessInstance loadNextProcess4Serial(long code, int state) { return this.processInstanceMapper.loadNextProcess4Serial(code, state); } private void deleteCommandWithCheck(int commandId) { int delete = this.commandMapper.deleteById(commandId); if (delete != 1) { throw new ServiceException("delete command fail, id:" + commandId); } } }
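The submitTaskWithRetry method above is a bounded-retry loop: attempt the commit, stop on success, sleep for a fixed interval otherwise, and give up after a configured number of tries. Below is a minimal standalone sketch of that pattern; the names BoundedRetrySketch, Attempt and retry are illustrative only, not DolphinScheduler APIs. A design detail worth noting in the original: submitTask is looked up via SpringApplicationContext.getBean(ProcessService.class) instead of being called directly, so each attempt goes through the Spring @Transactional proxy rather than bypassing it by self-invocation.

import java.util.concurrent.TimeUnit;

public class BoundedRetrySketch {

    // illustrative stand-in for "submit the task instance to the database"
    @FunctionalInterface
    interface Attempt<T> {
        T run() throws Exception;
    }

    // retry up to maxRetries times, sleeping intervalMs between failed attempts;
    // returns null when every attempt fails, mirroring submitTaskWithRetry's contract
    static <T> T retry(Attempt<T> action, int maxRetries, long intervalMs) {
        for (int retryTimes = 1; retryTimes <= maxRetries; retryTimes++) {
            try {
                T result = action.run();
                if (result != null) {
                    return result; // success, stop retrying
                }
                TimeUnit.MILLISECONDS.sleep(intervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // preserve interrupt status
                return null;
            } catch (Exception e) {
                // swallow and retry, as the service method does
            }
        }
        return null;
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // fails twice (returns null), then succeeds on the third attempt
        String out = retry(() -> ++calls[0] < 3 ? null : "ok", 5, 10L);
        System.out.println(out); // prints "ok"
    }
}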
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,508
[Bug] [dolphinscheduler-api] The workflow is moved and copied to other projects in batches. Running the workflow does not generate task instances
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened 1. Batch move and batch copy the workflow of project A to project B 2. Running the workflow in project B does not generate task instances ![image](https://user-images.githubusercontent.com/95271106/146861512-2b219bbd-fb23-4f94-80e1-7abfcbf96272.png) ![image](https://user-images.githubusercontent.com/95271106/146861526-800a5888-be7a-492f-8750-1a21f0739e6d.png) ### What you expected to happen . ### How to reproduce 1. Batch move and batch copy the workflow of project A to project B 2. Running the workflow in project B, which should generate task instances ### Anything else _No response_ ### Version 2.0.0 ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7508
https://github.com/apache/dolphinscheduler/pull/7666
d75f22781be2bc33a399ded7aac5fb2665de8bcf
86137e05ea37dd3346dcf8c2e790b410533ac53c
"2021-12-21T02:40:49Z"
java
"2021-12-28T05:27:52Z"
dolphinscheduler-ui/src/js/conf/home/pages/projects/pages/definition/pages/list/_source/list.vue
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ <template> <div class="list-model" style="position: relative;"> <div class="table-box"> <el-table :data="list" size="mini" style="width: 100%" @selection-change="_arrDelChange" row-class-name="items"> <el-table-column type="selection" width="50" :selectable="selectable" class-name="select-all"></el-table-column> <el-table-column prop="id" :label="$t('#')" width="50"></el-table-column> <el-table-column :label="$t('Process Name')" min-width="200"> <template slot-scope="scope"> <el-popover trigger="hover" placement="top"> <p>{{ scope.row.name }}</p> <div slot="reference" class="name-wrapper"> <router-link :to="{ path: `/projects/${projectCode}/definition/list/${scope.row.code}` }" tag="a" class="links"> <span class="ellipsis name">{{scope.row.name}}</span> </router-link> </div> </el-popover> </template> </el-table-column> <el-table-column :label="$t('State')"> <template slot-scope="scope"> {{_rtPublishStatus(scope.row.releaseState)}} </template> </el-table-column> <el-table-column :label="$t('Create Time')" width="135"> <template slot-scope="scope"> <span>{{scope.row.createTime | formatDate}}</span> </template> </el-table-column> <el-table-column :label="$t('Update Time')" width="135"> <template slot-scope="scope"> <span>{{scope.row.updateTime | formatDate}}</span> </template> </el-table-column> <el-table-column :label="$t('Description')"> <template slot-scope="scope"> <span>{{scope.row.description | filterNull}}</span> </template> </el-table-column> <el-table-column prop="modifyBy" :label="$t('Modify User')"></el-table-column> <el-table-column :label="$t('Timing state')"> <template slot-scope="scope"> <span v-if="scope.row.scheduleReleaseState === 'OFFLINE'" class="time_offline">{{$t('offline')}}</span> <span v-if="scope.row.scheduleReleaseState === 'ONLINE'" class="time_online">{{$t('online')}}</span> <span v-if="!scope.row.scheduleReleaseState">-</span> </template> </el-table-column> <el-table-column :label="$t('Operation')" width="335" fixed="right"> <template slot-scope="scope"> <el-tooltip :content="$t('Edit')" placement="top" :enterable="false"> <span><el-button type="primary" size="mini" icon="el-icon-edit-outline" :disabled="scope.row.releaseState === 'ONLINE'" @click="_edit(scope.row)" circle></el-button></span> </el-tooltip> <el-tooltip :content="$t('Start')" placement="top" :enterable="false"> <span><el-button type="success" size="mini" :disabled="scope.row.releaseState !== 'ONLINE'" icon="el-icon-video-play" @click="_start(scope.row)" circle class="button-run"></el-button></span> </el-tooltip> <el-tooltip :content="$t('Timing')" placement="top" :enterable="false"> <span><el-button type="primary" size="mini" icon="el-icon-time" :disabled="scope.row.releaseState !== 'ONLINE' || scope.row.scheduleReleaseState !== 
null" @click="_timing(scope.row)" circle></el-button></span> </el-tooltip> <el-tooltip :content="$t('online')" placement="top" :enterable="false"> <span><el-button type="warning" size="mini" v-if="scope.row.releaseState === 'OFFLINE'" icon="el-icon-upload2" @click="_poponline(scope.row)" circle class="button-publish"></el-button></span> </el-tooltip> <el-tooltip :content="$t('offline')" placement="top" :enterable="false"> <span><el-button type="danger" size="mini" icon="el-icon-download" v-if="scope.row.releaseState === 'ONLINE'" @click="_downline(scope.row)" circle class="btn-cancel-publish"></el-button></span> </el-tooltip> <el-tooltip :content="$t('Copy Workflow')" placement="top" :enterable="false"> <span><el-button type="primary" size="mini" :disabled="scope.row.releaseState === 'ONLINE'" icon="el-icon-document-copy" @click="_copyProcess(scope.row)" circle></el-button></span> </el-tooltip> <el-tooltip :content="$t('Cron Manage')" placement="top" :enterable="false"> <span><el-button type="primary" size="mini" icon="el-icon-date" :disabled="scope.row.releaseState !== 'ONLINE'" @click="_timingManage(scope.row)" circle></el-button></span> </el-tooltip> <el-tooltip :content="$t('Delete')" placement="top" :enterable="false"> <el-popconfirm :confirmButtonText="$t('Confirm')" :cancelButtonText="$t('Cancel')" icon="el-icon-info" iconColor="red" :title="$t('Delete?')" @onConfirm="_delete(scope.row,scope.row.id)" > <el-button type="danger" size="mini" icon="el-icon-delete" :disabled="scope.row.releaseState === 'ONLINE'" circle slot="reference"></el-button> </el-popconfirm> </el-tooltip> <el-tooltip :content="$t('TreeView')" placement="top" :enterable="false"> <span><el-button type="primary" size="mini" icon="el-icon-s-data" @click="_treeView(scope.row)" circle></el-button></span> </el-tooltip> <el-tooltip :content="$t('Export')" placement="top" :enterable="false"> <span><el-button type="primary" size="mini" icon="el-icon-s-unfold" @click="_export(scope.row)" circle></el-button></span> </el-tooltip> <el-tooltip :content="$t('Version Info')" placement="top" :enterable="false"> <span><el-button type="primary" size="mini" icon="el-icon-info" @click="_version(scope.row)" circle></el-button></span> </el-tooltip> </template> </el-table-column> </el-table> </div> <el-tooltip :content="$t('Delete')" placement="top"> <el-popconfirm :confirmButtonText="$t('Confirm')" :cancelButtonText="$t('Cancel')" :title="$t('Delete?')" @onConfirm="_delete({},-1)" > <el-button style="position: absolute; bottom: -48px; left: 19px;" type="primary" size="mini" :disabled="!strSelectCodes" slot="reference" class="btn-delete-all">{{$t('Delete')}}</el-button> </el-popconfirm> </el-tooltip> <el-button type="primary" size="mini" :disabled="!strSelectCodes" style="position: absolute; bottom: -48px; left: 80px;" @click="_batchExport(item)" >{{$t('Export')}}</el-button> <span><el-button type="primary" size="mini" :disabled="!strSelectCodes" style="position: absolute; bottom: -48px; left: 140px;" @click="_batchCopy(item)" >{{$t('Batch copy')}}</el-button></span> <el-button type="primary" size="mini" :disabled="!strSelectCodes" style="position: absolute; bottom: -48px; left: 225px;" @click="_batchMove(item)" >{{$t('Batch move')}}</el-button> <el-drawer :visible.sync="drawer" size="" :with-header="false"> <m-versions :versionData = versionData @mVersionSwitchProcessDefinitionVersion="mVersionSwitchProcessDefinitionVersion" @mVersionGetProcessDefinitionVersionsPage="mVersionGetProcessDefinitionVersionsPage" 
@mVersionDeleteProcessDefinitionVersion="mVersionDeleteProcessDefinitionVersion" @closeVersion="closeVersion"></m-versions> </el-drawer> <el-dialog :title="$t('Please set the parameters before starting')" v-if="startDialog" :visible.sync="startDialog" width="auto"> <m-start :startData= "startData" @onUpdateStart="onUpdateStart" @closeStart="closeStart"></m-start> </el-dialog> <el-dialog :title="$t('Set parameters before timing')" :visible.sync="timingDialog" width="auto"> <m-timing :timingData="timingData" @onUpdateTiming="onUpdateTiming" @closeTiming="closeTiming"></m-timing> </el-dialog> <el-dialog :title="$t('Info')" :visible.sync="relatedItemsDialog" width="auto"> <m-related-items :tmp="tmp" @onBatchCopy="onBatchCopy" @onBatchMove="onBatchMove" @closeRelatedItems="closeRelatedItems"></m-related-items> </el-dialog> </div> </template> <script> import _ from 'lodash' import mStart from './start' import mTiming from './timing' import mRelatedItems from './relatedItems' import { mapActions, mapState } from 'vuex' import { publishStatus } from '@/conf/home/pages/dag/_source/config' import mVersions from './versions' export default { name: 'definition-list', data () { return { list: [], strSelectCodes: '', checkAll: false, drawer: false, versionData: { processDefinition: {}, processDefinitionVersions: [], total: null, pageNo: null, pageSize: null }, startDialog: false, startData: {}, timingDialog: false, timingData: { item: {}, type: '' }, relatedItemsDialog: false, tmp: false } }, props: { processList: Array, pageNo: Number, pageSize: Number }, methods: { ...mapActions('dag', ['editProcessState', 'getStartCheck', 'deleteDefinition', 'batchDeleteDefinition', 'exportDefinition', 'getProcessDefinitionVersionsPage', 'copyProcess', 'switchProcessDefinitionVersion', 'deleteProcessDefinitionVersion', 'moveProcess']), ...mapActions('security', ['getWorkerGroupsAll']), selectable (row, index) { if (row.releaseState === 'ONLINE') { return false } else { return true } }, _rtPublishStatus (code) { return _.filter(publishStatus, v => v.code === code)[0].desc }, _treeView (item) { this.$router.push({ path: `/projects/${this.projectCode}/definition/tree/${item.code}` }) }, /** * Start */ _start (item) { this.getWorkerGroupsAll() this.getStartCheck({ processDefinitionCode: item.code }).then(res => { this.startData = item this.startDialog = true }).catch(e => { this.$message.error(e.msg || '') }) }, onUpdateStart () { this._onUpdate() this.startDialog = false }, closeStart () { this.startDialog = false }, /** * timing */ _timing (item) { this.timingData.item = item this.timingData.type = 'timing' this.timingDialog = true }, onUpdateTiming () { this._onUpdate() this.timingDialog = false }, closeTiming () { this.timingDialog = false }, /** * Timing manage */ _timingManage (item) { this.$router.push({ path: `/projects/${this.projectCode}/definition/list/timing/${item.code}` }) }, /** * delete */ _delete (item, i) { // remove tow++ if (i < 0) { this._batchDelete() return } // remove one this.deleteDefinition({ code: item.code }).then(res => { this._onUpdate() this.$message.success(res.msg) }).catch(e => { this.$message.error(e.msg || '') }) }, /** * edit */ _edit (item) { this.$router.push({ path: `/projects/${this.projectCode}/definition/list/${item.code}` }) }, /** * Offline */ _downline (item) { this._upProcessState({ ...item, releaseState: 'OFFLINE' }) }, /** * online */ _poponline (item) { this._upProcessState({ ...item, releaseState: 'ONLINE' }) }, /** * copy */ _copyProcess (item) { this.copyProcess({ 
codes: item.code, targetProjectCode: item.projectCode }).then(res => { this.strSelectCodes = '' this.$message.success(res.msg) // $('body').find('.tooltip.fade.top.in').remove() this._onUpdate() }).catch(e => { this.$message.error(e.msg || '') }) }, /** * move */ _moveProcess (item) { this.moveProcess({ codes: item.code, targetProjectCode: item.projectCode }).then(res => { this.strSelectCodes = '' this.$message.success(res.msg) $('body').find('.tooltip.fade.top.in').remove() this._onUpdate() }).catch(e => { this.$message.error(e.msg || '') }) }, _export (item) { this.exportDefinition({ codes: item.code, fileName: item.name }).catch(e => { this.$message.error(e.msg || '') }) }, /** * switch version in process definition version list * * @param version the version user want to change * @param processDefinitionCode the process definition code * @param fromThis fromThis */ mVersionSwitchProcessDefinitionVersion ({ version, processDefinitionCode, fromThis }) { this.switchProcessDefinitionVersion({ version: version, code: processDefinitionCode }).then(res => { this.$message.success($t('Switch Version Successfully')) this.$router.push({ path: `/projects/${this.projectCode}/definition/list/${processDefinitionCode}` }) }).catch(e => { this.$message.error(e.msg || '') }) }, /** * Paging event of process definition versions * * @param pageNo page number * @param pageSize page size * @param processDefinitionCode the process definition Code of page version * @param fromThis fromThis */ mVersionGetProcessDefinitionVersionsPage ({ pageNo, pageSize, processDefinitionCode, fromThis }) { this.getProcessDefinitionVersionsPage({ pageNo: pageNo, pageSize: pageSize, code: processDefinitionCode }).then(res => { this.versionData.processDefinitionVersions = res.data.totalList this.versionData.total = res.data.total this.versionData.pageSize = res.data.pageSize this.versionData.pageNo = res.data.currentPage }).catch(e => { this.$message.error(e.msg || '') }) }, /** * delete one version of process definition * * @param version the version need to delete * @param processDefinitionCode the process definition code user want to delete * @param fromThis fromThis */ mVersionDeleteProcessDefinitionVersion ({ version, processDefinitionCode, fromThis }) { this.deleteProcessDefinitionVersion({ version: version, code: processDefinitionCode }).then(res => { this.$message.success(res.msg || '') this.mVersionGetProcessDefinitionVersionsPage({ pageNo: 1, pageSize: 10, processDefinitionCode: processDefinitionCode, fromThis: fromThis }) }).catch(e => { this.$message.error(e.msg || '') }) }, _version (item) { this.getProcessDefinitionVersionsPage({ pageNo: 1, pageSize: 10, code: item.code }).then(res => { let processDefinitionVersions = res.data.totalList let total = res.data.total let pageSize = res.data.pageSize let pageNo = res.data.currentPage this.versionData.processDefinition = item this.versionData.processDefinitionVersions = processDefinitionVersions this.versionData.total = total this.versionData.pageNo = pageNo this.versionData.pageSize = pageSize this.drawer = true }).catch(e => { this.$message.error(e.msg || '') }) }, closeVersion () { this.drawer = false }, _batchExport () { this.exportDefinition({ codes: this.strSelectCodes, fileName: 'process_' + new Date().getTime() }).then(res => { this._onUpdate() this.checkAll = false this.strSelectCodes = '' }).catch(e => { this.strSelectCodes = '' this.checkAll = false this.$message.error(e.msg) }) }, /** * Batch Copy */ _batchCopy () { this.relatedItemsDialog = true this.tmp = 
false }, onBatchCopy (projectCode) { this._copyProcess({ code: this.strSelectCodes, projectCode: projectCode }) this.relatedItemsDialog = false }, closeRelatedItems () { this.relatedItemsDialog = false }, /** * _batchMove */ _batchMove () { this.tmp = true this.relatedItemsDialog = true }, onBatchMove (projectCode) { this._moveProcess({ code: this.strSelectCodes, projectCode: projectCode }) this.relatedItemsDialog = false }, /** * Edit state */ _upProcessState (o) { this.editProcessState(o).then(res => { this.$message.success(res.msg) $('body').find('.tooltip.fade.top.in').remove() this._onUpdate() }).catch(e => { this.$message.error(e.msg || '') }) }, _onUpdate () { this.$emit('on-update') }, /** * the array that to be delete */ _arrDelChange (v) { let arr = [] arr = _.map(v, 'code') this.strSelectCodes = _.join(arr, ',') }, /** * batch delete */ _batchDelete () { this.batchDeleteDefinition({ codes: this.strSelectCodes }).then(res => { this._onUpdate() this.checkAll = false this.strSelectCodes = '' this.$message.success(res.msg) }).catch(e => { this.strSelectCodes = '' this.checkAll = false this.$message.error(e.msg || '') }) } }, watch: { processList: { handler (a) { this.checkAll = false this.list = [] setTimeout(() => { this.list = _.cloneDeep(a) }) }, immediate: true, deep: true }, pageNo () { this.strSelectCodes = '' } }, created () { }, mounted () { }, computed: { ...mapState('dag', ['projectCode']) }, components: { mVersions, mStart, mTiming, mRelatedItems } } </script> <style lang="scss" rel="stylesheet/scss"> .time_online { background-color: #5cb85c; color: #fff; padding: 3px; } .time_offline { background-color: #ffc107; color: #fff; padding: 3px; } </style>
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,538
[Bug] [server] When there is a forbidden node in the DAG, the execution flow is abnormal
### Search before asking - [x] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened In a DAG, when a node is forbidden from executing, the execution judgment for nodes that depend on that forbidden node is incorrect. ### What you expected to happen When a node depends on a forbidden node, the scheduler should check whether the parent nodes of the forbidden node have finished running. ### How to reproduce ![481639972312_ pic](https://user-images.githubusercontent.com/28946792/147027151-7d8cc9b9-4693-41a6-8908-f9c8a8d01746.jpg) As shown in the figure, test1_stop and test2_stop are forbidden nodes, and the test3 node depends on test1_stop and test2_stop. In my opinion, whether test3 runs should depend on whether test1 and test2 have finished. But in my test, although test1 had finished, test2 was still running; test3 should not have been executed at that point, yet it was. Is this a deliberate design? If this is a bug, I would be happy to contribute my code, thank you. ### Anything else _No response_ ### Version 2.0.1 ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
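The expected behavior described in this issue body can be illustrated with a small dependency check. The sketch below is a minimal, self-contained example under assumed names (`DagNode`, `depsComplete`, `finishedCodes` are all hypothetical and are not the project's actual `DagHelper` / `isTaskDepsComplete` API): instead of treating a forbidden dependency as satisfied, it transparently expands the forbidden node into its own parents, so a downstream task only runs once every real (executable) ancestor has finished.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/**
 * Illustrative sketch only: resolve dependencies across forbidden nodes
 * by substituting each forbidden dependency with its own parents.
 */
public class ForbiddenNodeDependencyCheck {

    /** Minimal stand-in for a DAG node: a code, its dependency codes, and a forbidden flag. */
    static class DagNode {
        final String code;
        final List<String> depCodes;
        final boolean forbidden;

        DagNode(String code, List<String> depCodes, boolean forbidden) {
            this.code = code;
            this.depCodes = depCodes;
            this.forbidden = forbidden;
        }
    }

    /**
     * Returns true only when every effective dependency of taskCode has finished.
     * A forbidden dependency is never satisfied by itself; it is expanded into
     * its own dependencies, recursively, so the decision rests on real ancestors.
     */
    static boolean depsComplete(String taskCode,
                                Map<String, DagNode> dag,
                                Set<String> finishedCodes) {
        Deque<String> toCheck = new ArrayDeque<>(dag.get(taskCode).depCodes);
        Set<String> visited = new HashSet<>();
        while (!toCheck.isEmpty()) {
            String dep = toCheck.pop();
            if (!visited.add(dep)) {
                continue; // already examined via another path
            }
            DagNode depNode = dag.get(dep);
            if (depNode == null) {
                continue; // dependency outside the current DAG
            }
            if (depNode.forbidden) {
                // Skip the forbidden node itself but inherit its parents.
                toCheck.addAll(depNode.depCodes);
                continue;
            }
            if (!finishedCodes.contains(dep)) {
                return false; // a real ancestor is still running or pending
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // The scenario from the issue: test3 depends on forbidden
        // test1_stop/test2_stop, which depend on test1/test2 respectively.
        Map<String, DagNode> dag = Map.of(
            "test1", new DagNode("test1", List.of(), false),
            "test2", new DagNode("test2", List.of(), false),
            "test1_stop", new DagNode("test1_stop", List.of("test1"), true),
            "test2_stop", new DagNode("test2_stop", List.of("test2"), true),
            "test3", new DagNode("test3", List.of("test1_stop", "test2_stop"), false));

        // Only test1 has finished: test3 must wait.
        System.out.println(depsComplete("test3", dag, Set.of("test1")));          // false
        // Both real ancestors have finished: test3 may run.
        System.out.println(depsComplete("test3", dag, Set.of("test1", "test2"))); // true
    }
}
```

For context, the `isTaskDepsComplete` method in the `WorkflowExecuteThread` source below simply skips dependencies found in `forbiddenTaskMap`, which matches the reporter's observation; the sketch above only illustrates the expected semantics, not the actual fix merged in the linked pull request.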
https://github.com/apache/dolphinscheduler/issues/7538
https://github.com/apache/dolphinscheduler/pull/7613
605767e47b29f7dc6f2c1c69e13848a8268fcd1c
3af4d765c254b1ef72d1d3aeefd9b4a427cb281d
"2021-12-22T02:56:03Z"
java
"2021-12-28T10:37:59Z"
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/runner/WorkflowExecuteThread.java
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.dolphinscheduler.server.master.runner; import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_END_DATE; import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_START_DATE; import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_RECOVERY_START_NODE_STRING; import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING; import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_START_NODES; import static org.apache.dolphinscheduler.common.Constants.DEFAULT_WORKER_GROUP; import org.apache.dolphinscheduler.common.Constants; import org.apache.dolphinscheduler.common.enums.CommandType; import org.apache.dolphinscheduler.common.enums.DependResult; import org.apache.dolphinscheduler.common.enums.Direct; import org.apache.dolphinscheduler.common.enums.ExecutionStatus; import org.apache.dolphinscheduler.common.enums.FailureStrategy; import org.apache.dolphinscheduler.common.enums.Flag; import org.apache.dolphinscheduler.common.enums.Priority; import org.apache.dolphinscheduler.common.enums.StateEvent; import org.apache.dolphinscheduler.common.enums.StateEventType; import org.apache.dolphinscheduler.common.enums.TaskDependType; import org.apache.dolphinscheduler.common.enums.TaskGroupQueueStatus; import org.apache.dolphinscheduler.common.enums.TaskTimeoutStrategy; import org.apache.dolphinscheduler.common.enums.TimeoutFlag; import org.apache.dolphinscheduler.common.graph.DAG; import org.apache.dolphinscheduler.common.model.TaskNode; import org.apache.dolphinscheduler.common.model.TaskNodeRelation; import org.apache.dolphinscheduler.common.process.ProcessDag; import org.apache.dolphinscheduler.common.process.Property; import org.apache.dolphinscheduler.common.utils.DateUtils; import org.apache.dolphinscheduler.common.utils.JSONUtils; import org.apache.dolphinscheduler.common.utils.NetUtils; import org.apache.dolphinscheduler.common.utils.ParameterUtils; import org.apache.dolphinscheduler.dao.entity.Command; import org.apache.dolphinscheduler.dao.entity.Environment; import org.apache.dolphinscheduler.dao.entity.ProcessDefinition; import org.apache.dolphinscheduler.dao.entity.ProcessInstance; import org.apache.dolphinscheduler.dao.entity.ProcessTaskRelation; import org.apache.dolphinscheduler.dao.entity.ProjectUser; import org.apache.dolphinscheduler.dao.entity.Schedule; import org.apache.dolphinscheduler.dao.entity.TaskDefinition; import org.apache.dolphinscheduler.dao.entity.TaskDefinitionLog; import org.apache.dolphinscheduler.dao.entity.TaskGroupQueue; import org.apache.dolphinscheduler.dao.entity.TaskInstance; import org.apache.dolphinscheduler.dao.utils.DagHelper; import 
org.apache.dolphinscheduler.remote.command.HostUpdateCommand; import org.apache.dolphinscheduler.remote.utils.Host; import org.apache.dolphinscheduler.server.master.config.MasterConfig; import org.apache.dolphinscheduler.server.master.dispatch.executor.NettyExecutorManager; import org.apache.dolphinscheduler.server.master.runner.task.ITaskProcessor; import org.apache.dolphinscheduler.server.master.runner.task.TaskAction; import org.apache.dolphinscheduler.server.master.runner.task.TaskProcessorFactory; import org.apache.dolphinscheduler.service.alert.ProcessAlertManager; import org.apache.dolphinscheduler.service.process.ProcessService; import org.apache.dolphinscheduler.service.quartz.cron.CronUtils; import org.apache.dolphinscheduler.service.queue.PeerTaskInstancePriorityQueue; import org.apache.commons.collections.CollectionUtils; import org.apache.commons.lang.StringUtils; import java.util.ArrayList; import java.util.Arrays; import java.util.Collection; import java.util.Date; import java.util.HashMap; import java.util.Iterator; import java.util.List; import java.util.Map; import java.util.Objects; import java.util.Set; import java.util.concurrent.ConcurrentHashMap; import java.util.concurrent.ConcurrentLinkedQueue; import java.util.concurrent.atomic.AtomicBoolean; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import com.google.common.collect.Lists; /** * master exec thread,split dag */ public class WorkflowExecuteThread { /** * logger of WorkflowExecuteThread */ private static final Logger logger = LoggerFactory.getLogger(WorkflowExecuteThread.class); /** * master config */ private MasterConfig masterConfig; /** * process service */ private ProcessService processService; /** * alert manager */ private ProcessAlertManager processAlertManager; /** * netty executor manager */ private NettyExecutorManager nettyExecutorManager; /** * process instance */ private ProcessInstance processInstance; /** * process definition */ private ProcessDefinition processDefinition; /** * the object of DAG */ private DAG<String, TaskNode, TaskNodeRelation> dag; /** * key of workflow */ private String key; /** * start flag, true: start nodes submit completely */ private boolean isStart = false; /** * submit failure nodes */ private boolean taskFailedSubmit = false; /** * task instance hash map, taskId as key */ private Map<Integer, TaskInstance> taskInstanceMap = new ConcurrentHashMap<>(); /** * running TaskNode, taskId as key */ private final Map<Integer, ITaskProcessor> activeTaskProcessorMaps = new ConcurrentHashMap<>(); /** * valid task map, taskCode as key, taskId as value */ private Map<String, Integer> validTaskMap = new ConcurrentHashMap<>(); /** * error task map, taskCode as key, taskId as value */ private Map<String, Integer> errorTaskMap = new ConcurrentHashMap<>(); /** * complete task map, taskCode as key, taskId as value */ private Map<String, Integer> completeTaskMap = new ConcurrentHashMap<>(); /** * depend failed task map, taskCode as key, taskId as value */ private Map<String, Integer> dependFailedTaskMap = new ConcurrentHashMap<>(); /** * forbidden task map, code as key */ private Map<String, TaskNode> forbiddenTaskMap = new ConcurrentHashMap<>(); /** * skip task map, code as key */ private Map<String, TaskNode> skipTaskNodeMap = new ConcurrentHashMap<>(); /** * complement date list */ private List<Date> complementListDate = Lists.newLinkedList(); /** * state event queue */ private ConcurrentLinkedQueue<StateEvent> stateEvents = new ConcurrentLinkedQueue<>(); /** * ready to submit 
task queue */ private PeerTaskInstancePriorityQueue readyToSubmitTaskQueue = new PeerTaskInstancePriorityQueue(); /** * state wheel execute thread */ private StateWheelExecuteThread stateWheelExecuteThread; /** * constructor of WorkflowExecuteThread * * @param processInstance processInstance * @param processService processService * @param nettyExecutorManager nettyExecutorManager * @param processAlertManager processAlertManager * @param masterConfig masterConfig * @param stateWheelExecuteThread stateWheelExecuteThread */ public WorkflowExecuteThread(ProcessInstance processInstance , ProcessService processService , NettyExecutorManager nettyExecutorManager , ProcessAlertManager processAlertManager , MasterConfig masterConfig , StateWheelExecuteThread stateWheelExecuteThread) { this.processService = processService; this.processInstance = processInstance; this.masterConfig = masterConfig; this.nettyExecutorManager = nettyExecutorManager; this.processAlertManager = processAlertManager; this.stateWheelExecuteThread = stateWheelExecuteThread; } /** * the process start nodes are submitted completely. */ public boolean isStart() { return this.isStart; } /** * handle event */ public void handleEvents() { if (!isStart) { return; } while (!this.stateEvents.isEmpty()) { try { StateEvent stateEvent = this.stateEvents.peek(); if (stateEventHandler(stateEvent)) { this.stateEvents.remove(stateEvent); } } catch (Exception e) { logger.error("state handle error:", e); } } } public String getKey() { if (StringUtils.isNotEmpty(key) || this.processDefinition == null) { return key; } key = String.format("%d_%d_%d", this.processDefinition.getCode(), this.processDefinition.getVersion(), this.processInstance.getId()); return key; } public boolean addStateEvent(StateEvent stateEvent) { if (processInstance.getId() != stateEvent.getProcessInstanceId()) { logger.info("state event would be abounded :{}", stateEvent.toString()); return false; } this.stateEvents.add(stateEvent); return true; } public int eventSize() { return this.stateEvents.size(); } public ProcessInstance getProcessInstance() { return this.processInstance; } private boolean stateEventHandler(StateEvent stateEvent) { logger.info("process event: {}", stateEvent.toString()); if (!checkProcessInstance(stateEvent)) { return false; } boolean result = false; switch (stateEvent.getType()) { case PROCESS_STATE_CHANGE: result = processStateChangeHandler(stateEvent); break; case TASK_STATE_CHANGE: result = taskStateChangeHandler(stateEvent); break; case PROCESS_TIMEOUT: result = processTimeout(); break; case TASK_TIMEOUT: result = taskTimeout(stateEvent); break; case WAIT_TASK_GROUP: result = checkForceStartAndWakeUp(stateEvent); break; default: break; } if (result) { this.stateEvents.remove(stateEvent); } return result; } private boolean checkForceStartAndWakeUp(StateEvent stateEvent) { TaskGroupQueue taskGroupQueue = this.processService.loadTaskGroupQueue(stateEvent.getTaskInstanceId()); if (taskGroupQueue.getForceStart() == Flag.YES.getCode()) { ITaskProcessor taskProcessor = activeTaskProcessorMaps.get(stateEvent.getTaskInstanceId()); TaskInstance taskInstance = this.processService.findTaskInstanceById(stateEvent.getTaskInstanceId()); ProcessInstance processInstance = this.processService.findProcessInstanceById(taskInstance.getProcessInstanceId()); taskProcessor.dispatch(taskInstance, processInstance); this.processService.updateTaskGroupQueueStatus(taskGroupQueue.getId(), TaskGroupQueueStatus.ACQUIRE_SUCCESS.getCode()); return true; } if 
(taskGroupQueue.getInQueue() == Flag.YES.getCode()) { boolean acquireTaskGroup = processService.acquireTaskGroupAgain(taskGroupQueue); if (acquireTaskGroup) { ITaskProcessor taskProcessor = activeTaskProcessorMaps.get(stateEvent.getTaskInstanceId()); TaskInstance taskInstance = this.processService.findTaskInstanceById(stateEvent.getTaskInstanceId()); ProcessInstance processInstance = this.processService.findProcessInstanceById(taskInstance.getProcessInstanceId()); taskProcessor.dispatch(taskInstance, processInstance); return true; } } return false; } private boolean taskTimeout(StateEvent stateEvent) { if (!checkTaskInstanceByStateEvent(stateEvent)) { return true; } TaskInstance taskInstance = taskInstanceMap.get(stateEvent.getTaskInstanceId()); if (TimeoutFlag.CLOSE == taskInstance.getTaskDefine().getTimeoutFlag()) { return true; } TaskTimeoutStrategy taskTimeoutStrategy = taskInstance.getTaskDefine().getTimeoutNotifyStrategy(); if (TaskTimeoutStrategy.FAILED == taskTimeoutStrategy) { ITaskProcessor taskProcessor = activeTaskProcessorMaps.get(stateEvent.getTaskInstanceId()); taskProcessor.action(TaskAction.TIMEOUT); } else { processAlertManager.sendTaskTimeoutAlert(processInstance, taskInstance, taskInstance.getTaskDefine()); } return true; } private boolean processTimeout() { this.processAlertManager.sendProcessTimeoutAlert(this.processInstance, this.processDefinition); return true; } private boolean taskStateChangeHandler(StateEvent stateEvent) { if (!checkTaskInstanceByStateEvent(stateEvent)) { return true; } TaskInstance task = getTaskInstance(stateEvent.getTaskInstanceId()); if (task.getState() == null) { logger.error("task state is null, state handler error: {}", stateEvent); return true; } if (task.getState().typeIsFinished() && !completeTaskMap.containsKey(Long.toString(task.getTaskCode()))) { taskFinished(task); if (task.getTaskGroupId() > 0) { //release task group TaskInstance nextTaskInstance = this.processService.releaseTaskGroup(task); if (nextTaskInstance != null) { if (nextTaskInstance.getProcessInstanceId() == task.getProcessInstanceId()) { StateEvent nextEvent = new StateEvent(); nextEvent.setProcessInstanceId(this.processInstance.getId()); nextEvent.setTaskInstanceId(nextTaskInstance.getId()); nextEvent.setType(StateEventType.WAIT_TASK_GROUP); this.stateEvents.add(nextEvent); } else { ProcessInstance processInstance = this.processService.findProcessInstanceById(nextTaskInstance.getProcessInstanceId()); this.processService.sendStartTask2Master(processInstance, nextTaskInstance.getId(), org.apache.dolphinscheduler.remote.command.CommandType.TASK_WAKEUP_EVENT_REQUEST); } } } } else if (activeTaskProcessorMaps.containsKey(stateEvent.getTaskInstanceId())) { ITaskProcessor iTaskProcessor = activeTaskProcessorMaps.get(stateEvent.getTaskInstanceId()); iTaskProcessor.run(); if (iTaskProcessor.taskState().typeIsFinished()) { task = processService.findTaskInstanceById(stateEvent.getTaskInstanceId()); taskFinished(task); } } else { logger.error("state handler error: {}", stateEvent); } return true; } private void taskFinished(TaskInstance task) { logger.info("work flow {} task {} state:{} ", processInstance.getId(), task.getId(), task.getState()); if (task.taskCanRetry()) { addTaskToStandByList(task); if (!task.retryTaskIntervalOverTime()) { logger.info("failure task will be submitted: process id: {}, task instance id: {} state:{} retry times:{} / {}, interval:{}", processInstance.getId(), task.getId(), task.getState(), task.getRetryTimes(), task.getMaxRetryTimes(), 
task.getRetryInterval()); stateWheelExecuteThread.addTask4TimeoutCheck(task); stateWheelExecuteThread.addTask4RetryCheck(task); } else { submitStandByTask(); stateWheelExecuteThread.removeTask4TimeoutCheck(task); stateWheelExecuteThread.removeTask4RetryCheck(task); } return; } completeTaskMap.put(Long.toString(task.getTaskCode()), task.getId()); activeTaskProcessorMaps.remove(task.getId()); stateWheelExecuteThread.removeTask4TimeoutCheck(task); stateWheelExecuteThread.removeTask4RetryCheck(task); if (task.getState().typeIsSuccess()) { processInstance.setVarPool(task.getVarPool()); processService.saveProcessInstance(processInstance); submitPostNode(Long.toString(task.getTaskCode())); } else if (task.getState().typeIsFailure()) { if (task.isConditionsTask() || DagHelper.haveConditionsAfterNode(Long.toString(task.getTaskCode()), dag)) { submitPostNode(Long.toString(task.getTaskCode())); } else { errorTaskMap.put(Long.toString(task.getTaskCode()), task.getId()); if (processInstance.getFailureStrategy() == FailureStrategy.END) { killAllTasks(); } } } this.updateProcessInstanceState(); } /** * update process instance */ public void refreshProcessInstance(int processInstanceId) { logger.info("process instance update: {}", processInstanceId); processInstance = processService.findProcessInstanceById(processInstanceId); processDefinition = processService.findProcessDefinition(processInstance.getProcessDefinitionCode(), processInstance.getProcessDefinitionVersion()); processInstance.setProcessDefinition(processDefinition); } /** * update task instance */ public void refreshTaskInstance(int taskInstanceId) { logger.info("task instance update: {} ", taskInstanceId); TaskInstance taskInstance = processService.findTaskInstanceById(taskInstanceId); if (taskInstance == null) { logger.error("can not find task instance, id:{}", taskInstanceId); return; } processService.packageTaskInstance(taskInstance, processInstance); taskInstanceMap.put(taskInstance.getId(), taskInstance); validTaskMap.remove(Long.toString(taskInstance.getTaskCode())); if (Flag.YES == taskInstance.getFlag()) { validTaskMap.put(Long.toString(taskInstance.getTaskCode()), taskInstance.getId()); } } /** * check process instance by state event */ public boolean checkProcessInstance(StateEvent stateEvent) { if (this.processInstance.getId() != stateEvent.getProcessInstanceId()) { logger.error("mismatch process instance id: {}, state event:{}", this.processInstance.getId(), stateEvent); return false; } return true; } /** * check if task instance exist by state event */ public boolean checkTaskInstanceByStateEvent(StateEvent stateEvent) { if (stateEvent.getTaskInstanceId() == 0) { logger.error("task instance id null, state event:{}", stateEvent); return false; } if (!taskInstanceMap.containsKey(stateEvent.getTaskInstanceId())) { logger.error("mismatch task instance id, event:{}", stateEvent); return false; } return true; } /** * check if task instance exist by task code */ public boolean checkTaskInstanceByCode(long taskCode) { if (taskInstanceMap == null || taskInstanceMap.size() == 0) { return false; } for (TaskInstance taskInstance : taskInstanceMap.values()) { if (taskInstance.getTaskCode() == taskCode) { return true; } } return false; } /** * check if task instance exist by id */ public boolean checkTaskInstanceById(int taskInstanceId) { if (taskInstanceMap == null || taskInstanceMap.size() == 0) { return false; } return taskInstanceMap.containsKey(taskInstanceId); } /** * get task instance from memory */ public TaskInstance 
getTaskInstance(int taskInstanceId) { if (taskInstanceMap.containsKey(taskInstanceId)) { return taskInstanceMap.get(taskInstanceId); } return null; } private boolean processStateChangeHandler(StateEvent stateEvent) { try { logger.info("process:{} state {} change to {}", processInstance.getId(), processInstance.getState(), stateEvent.getExecutionStatus()); if (processComplementData()) { return true; } if (stateEvent.getExecutionStatus().typeIsFinished()) { endProcess(); } if (processInstance.getState() == ExecutionStatus.READY_STOP) { killAllTasks(); } return true; } catch (Exception e) { logger.error("process state change error:", e); } return true; } private boolean processComplementData() throws Exception { if (!needComplementProcess()) { return false; } if (processInstance.getState() == ExecutionStatus.READY_STOP) { return false; } Date scheduleDate = processInstance.getScheduleTime(); if (scheduleDate == null) { scheduleDate = complementListDate.get(0); } else if (processInstance.getState().typeIsFinished()) { endProcess(); if (complementListDate.size() <= 0) { logger.info("process complement end. process id:{}", processInstance.getId()); return true; } int index = complementListDate.indexOf(scheduleDate); if (index >= complementListDate.size() - 1 || !processInstance.getState().typeIsSuccess()) { logger.info("process complement end. process id:{}", processInstance.getId()); // complement data ends || no success return true; } logger.info("process complement continue. process id:{}, schedule time:{} complementListDate:{}", processInstance.getId(), processInstance.getScheduleTime(), complementListDate.toString()); scheduleDate = complementListDate.get(index + 1); //the next process complement processInstance.setId(0); } processInstance.setScheduleTime(scheduleDate); Map<String, String> cmdParam = JSONUtils.toMap(processInstance.getCommandParam()); if (cmdParam.containsKey(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING)) { cmdParam.remove(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING); processInstance.setCommandParam(JSONUtils.toJsonString(cmdParam)); } processInstance.setState(ExecutionStatus.RUNNING_EXECUTION); processInstance.setGlobalParams(ParameterUtils.curingGlobalParams( processDefinition.getGlobalParamMap(), processDefinition.getGlobalParamList(), CommandType.COMPLEMENT_DATA, processInstance.getScheduleTime())); processInstance.setStartTime(new Date()); processInstance.setEndTime(null); processService.saveProcessInstance(processInstance); this.taskInstanceMap.clear(); startProcess(); return true; } private boolean needComplementProcess() { if (processInstance.isComplementData() && Flag.NO == processInstance.getIsSubProcess()) { return true; } return false; } /** * process start handle */ public void startProcess() { if (this.taskInstanceMap.size() > 0) { return; } try { isStart = false; buildFlowDag(); initTaskQueue(); submitPostNode(null); isStart = true; } catch (Exception e) { logger.error("start process error, process instance id:{}", processInstance.getId(), e); } } /** * process end handle */ private void endProcess() { this.stateEvents.clear(); if (processDefinition.getExecutionType().typeIsSerialWait()) { checkSerialProcess(processDefinition); } if (processInstance.getState().typeIsWaitingThread()) { processService.createRecoveryWaitingThreadCommand(null, processInstance); } if (processAlertManager.isNeedToSendWarning(processInstance)) { ProjectUser projectUser = processService.queryProjectWithUserByProcessInstanceId(processInstance.getId()); 
processAlertManager.sendAlertProcessInstance(processInstance, getValidTaskList(), projectUser); } if (checkTaskQueue()) { //release task group processService.releaseAllTaskGroup(processInstance.getId()); } } public void checkSerialProcess(ProcessDefinition processDefinition) { int nextInstanceId = processInstance.getNextProcessInstanceId(); if (nextInstanceId == 0) { ProcessInstance nextProcessInstance = this.processService.loadNextProcess4Serial(processInstance.getProcessDefinition().getCode(), ExecutionStatus.SERIAL_WAIT.getCode()); if (nextProcessInstance == null) { return; } nextInstanceId = nextProcessInstance.getId(); } ProcessInstance nextProcessInstance = this.processService.findProcessInstanceById(nextInstanceId); if (nextProcessInstance.getState().typeIsFinished() || nextProcessInstance.getState().typeIsRunning()) { return; } Map<String, Object> cmdParam = new HashMap<>(); cmdParam.put(CMD_PARAM_RECOVER_PROCESS_ID_STRING, nextInstanceId); Command command = new Command(); command.setCommandType(CommandType.RECOVER_SERIAL_WAIT); command.setProcessDefinitionCode(processDefinition.getCode()); command.setCommandParam(JSONUtils.toJsonString(cmdParam)); processService.createCommand(command); } /** * generate process dag * * @throws Exception exception */ private void buildFlowDag() throws Exception { if (this.dag != null) { return; } processDefinition = processService.findProcessDefinition(processInstance.getProcessDefinitionCode(), processInstance.getProcessDefinitionVersion()); processInstance.setProcessDefinition(processDefinition); List<TaskInstance> recoverNodeList = getStartTaskInstanceList(processInstance.getCommandParam()); List<ProcessTaskRelation> processTaskRelations = processService.findRelationByCode(processDefinition.getProjectCode(), processDefinition.getCode()); List<TaskDefinitionLog> taskDefinitionLogs = processService.getTaskDefineLogListByRelation(processTaskRelations); List<TaskNode> taskNodeList = processService.transformTask(processTaskRelations, taskDefinitionLogs); forbiddenTaskMap.clear(); taskNodeList.forEach(taskNode -> { if (taskNode.isForbidden()) { forbiddenTaskMap.put(Long.toString(taskNode.getCode()), taskNode); } }); // generate process to get DAG info List<String> recoveryNodeCodeList = getRecoveryNodeCodeList(recoverNodeList); List<String> startNodeNameList = parseStartNodeName(processInstance.getCommandParam()); ProcessDag processDag = generateFlowDag(taskNodeList, startNodeNameList, recoveryNodeCodeList, processInstance.getTaskDependType()); if (processDag == null) { logger.error("processDag is null"); return; } // generate process dag dag = DagHelper.buildDagGraph(processDag); } /** * init task queue */ private void initTaskQueue() { taskFailedSubmit = false; activeTaskProcessorMaps.clear(); dependFailedTaskMap.clear(); completeTaskMap.clear(); errorTaskMap.clear(); if (!isNewProcessInstance()) { List<TaskInstance> validTaskInstanceList = processService.findValidTaskListByProcessId(processInstance.getId()); for (TaskInstance task : validTaskInstanceList) { validTaskMap.put(Long.toString(task.getTaskCode()), task.getId()); taskInstanceMap.put(task.getId(), task); if (task.isTaskComplete()) { completeTaskMap.put(Long.toString(task.getTaskCode()), task.getId()); } if (task.isConditionsTask() || DagHelper.haveConditionsAfterNode(Long.toString(task.getTaskCode()), dag)) { continue; } if (task.getState().typeIsFailure() && !task.taskCanRetry()) { errorTaskMap.put(Long.toString(task.getTaskCode()), task.getId()); } } } if 
(processInstance.isComplementData() && complementListDate.size() == 0) { Map<String, String> cmdParam = JSONUtils.toMap(processInstance.getCommandParam()); if (cmdParam != null && cmdParam.containsKey(CMDPARAM_COMPLEMENT_DATA_START_DATE)) { Date start = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_START_DATE)); Date end = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_END_DATE)); List<Schedule> schedules = processService.queryReleaseSchedulerListByProcessDefinitionCode(processInstance.getProcessDefinitionCode()); if (complementListDate.size() == 0 && needComplementProcess()) { complementListDate = CronUtils.getSelfFireDateList(start, end, schedules); logger.info(" process definition code:{} complement data: {}", processInstance.getProcessDefinitionCode(), complementListDate.toString()); if (complementListDate.size() > 0 && Flag.NO == processInstance.getIsSubProcess()) { processInstance.setScheduleTime(complementListDate.get(0)); processInstance.setGlobalParams(ParameterUtils.curingGlobalParams( processDefinition.getGlobalParamMap(), processDefinition.getGlobalParamList(), CommandType.COMPLEMENT_DATA, processInstance.getScheduleTime())); processService.updateProcessInstance(processInstance); } } } } } /** * submit task to execute * * @param taskInstance task instance * @return TaskInstance */ private TaskInstance submitTaskExec(TaskInstance taskInstance) { try { ITaskProcessor taskProcessor = TaskProcessorFactory.getTaskProcessor(taskInstance.getTaskType()); if (taskInstance.getState() == ExecutionStatus.RUNNING_EXECUTION && taskProcessor.getType().equalsIgnoreCase(Constants.COMMON_TASK_TYPE)) { notifyProcessHostUpdate(taskInstance); } // package task instance before submit processService.packageTaskInstance(taskInstance, processInstance); boolean submit = taskProcessor.submit(taskInstance, processInstance, masterConfig.getTaskCommitRetryTimes(), masterConfig.getTaskCommitInterval(), masterConfig.isTaskLogger()); if (!submit) { logger.error("process id:{} name:{} submit standby task id:{} name:{} failed!", processInstance.getId(), processInstance.getName(), taskInstance.getId(), taskInstance.getName()); return null; } validTaskMap.put(Long.toString(taskInstance.getTaskCode()), taskInstance.getId()); taskInstanceMap.put(taskInstance.getId(), taskInstance); activeTaskProcessorMaps.put(taskInstance.getId(), taskProcessor); taskProcessor.run(); stateWheelExecuteThread.addTask4TimeoutCheck(taskInstance); stateWheelExecuteThread.addTask4RetryCheck(taskInstance); if (taskProcessor.taskState().typeIsFinished()) { StateEvent stateEvent = new StateEvent(); stateEvent.setProcessInstanceId(this.processInstance.getId()); stateEvent.setTaskInstanceId(taskInstance.getId()); stateEvent.setExecutionStatus(taskProcessor.taskState()); stateEvent.setType(StateEventType.TASK_STATE_CHANGE); this.stateEvents.add(stateEvent); } return taskInstance; } catch (Exception e) { logger.error("submit standby task error", e); return null; } } private void notifyProcessHostUpdate(TaskInstance taskInstance) { if (StringUtils.isEmpty(taskInstance.getHost())) { return; } try { HostUpdateCommand hostUpdateCommand = new HostUpdateCommand(); hostUpdateCommand.setProcessHost(NetUtils.getAddr(masterConfig.getListenPort())); hostUpdateCommand.setTaskInstanceId(taskInstance.getId()); Host host = new Host(taskInstance.getHost()); nettyExecutorManager.doExecute(host, hostUpdateCommand.convert2Command()); } catch (Exception e) { logger.error("notify process host update", e); } } /** * find task instance in 
db. * in case submit more than one same name task in the same time. * * @param taskCode task code * @param taskVersion task version * @return TaskInstance */ private TaskInstance findTaskIfExists(Long taskCode, int taskVersion) { List<TaskInstance> validTaskInstanceList = getValidTaskList(); for (TaskInstance taskInstance : validTaskInstanceList) { if (taskInstance.getTaskCode() == taskCode && taskInstance.getTaskDefinitionVersion() == taskVersion) { return taskInstance; } } return null; } /** * encapsulation task * * @param processInstance process instance * @param taskNode taskNode * @return TaskInstance */ private TaskInstance createTaskInstance(ProcessInstance processInstance, TaskNode taskNode) { TaskInstance taskInstance = findTaskIfExists(taskNode.getCode(), taskNode.getVersion()); if (taskInstance == null) { taskInstance = new TaskInstance(); taskInstance.setTaskCode(taskNode.getCode()); taskInstance.setTaskDefinitionVersion(taskNode.getVersion()); // task name taskInstance.setName(taskNode.getName()); // task instance state taskInstance.setState(ExecutionStatus.SUBMITTED_SUCCESS); // process instance id taskInstance.setProcessInstanceId(processInstance.getId()); // task instance type taskInstance.setTaskType(taskNode.getType().toUpperCase()); // task instance whether alert taskInstance.setAlertFlag(Flag.NO); // task instance start time taskInstance.setStartTime(null); // task instance flag taskInstance.setFlag(Flag.YES); // task dry run flag taskInstance.setDryRun(processInstance.getDryRun()); // task instance retry times taskInstance.setRetryTimes(0); // max task instance retry times taskInstance.setMaxRetryTimes(taskNode.getMaxRetryTimes()); // retry task instance interval taskInstance.setRetryInterval(taskNode.getRetryInterval()); //set task param taskInstance.setTaskParams(taskNode.getTaskParams()); //set task group and priority taskInstance.setTaskGroupId(taskNode.getTaskGroupId()); taskInstance.setTaskGroupPriority(taskNode.getTaskGroupPriority()); // task instance priority if (taskNode.getTaskInstancePriority() == null) { taskInstance.setTaskInstancePriority(Priority.MEDIUM); } else { taskInstance.setTaskInstancePriority(taskNode.getTaskInstancePriority()); } String processWorkerGroup = processInstance.getWorkerGroup(); processWorkerGroup = StringUtils.isBlank(processWorkerGroup) ? DEFAULT_WORKER_GROUP : processWorkerGroup; String taskWorkerGroup = StringUtils.isBlank(taskNode.getWorkerGroup()) ? processWorkerGroup : taskNode.getWorkerGroup(); Long processEnvironmentCode = Objects.isNull(processInstance.getEnvironmentCode()) ? -1 : processInstance.getEnvironmentCode(); Long taskEnvironmentCode = Objects.isNull(taskNode.getEnvironmentCode()) ? 
processEnvironmentCode : taskNode.getEnvironmentCode(); if (!processWorkerGroup.equals(DEFAULT_WORKER_GROUP) && taskWorkerGroup.equals(DEFAULT_WORKER_GROUP)) { taskInstance.setWorkerGroup(processWorkerGroup); taskInstance.setEnvironmentCode(processEnvironmentCode); } else { taskInstance.setWorkerGroup(taskWorkerGroup); taskInstance.setEnvironmentCode(taskEnvironmentCode); } if (!taskInstance.getEnvironmentCode().equals(-1L)) { Environment environment = processService.findEnvironmentByCode(taskInstance.getEnvironmentCode()); if (Objects.nonNull(environment) && StringUtils.isNotEmpty(environment.getConfig())) { taskInstance.setEnvironmentConfig(environment.getConfig()); } } // delay execution time taskInstance.setDelayTime(taskNode.getDelayTime()); } return taskInstance; } public void getPreVarPool(TaskInstance taskInstance, Set<String> preTask) { Map<String, Property> allProperty = new HashMap<>(); Map<String, TaskInstance> allTaskInstance = new HashMap<>(); if (CollectionUtils.isNotEmpty(preTask)) { for (String preTaskCode : preTask) { Integer taskId = completeTaskMap.get(preTaskCode); if (taskId == null) { continue; } TaskInstance preTaskInstance = taskInstanceMap.get(taskId); if (preTaskInstance == null) { continue; } String preVarPool = preTaskInstance.getVarPool(); if (StringUtils.isNotEmpty(preVarPool)) { List<Property> properties = JSONUtils.toList(preVarPool, Property.class); for (Property info : properties) { setVarPoolValue(allProperty, allTaskInstance, preTaskInstance, info); } } } if (allProperty.size() > 0) { taskInstance.setVarPool(JSONUtils.toJsonString(allProperty.values())); } } } private void setVarPoolValue(Map<String, Property> allProperty, Map<String, TaskInstance> allTaskInstance, TaskInstance preTaskInstance, Property thisProperty) { //for this taskInstance all the param in this part is IN. 
thisProperty.setDirect(Direct.IN); //get the pre taskInstance Property's name String proName = thisProperty.getProp(); //if the Previous nodes have the Property of same name if (allProperty.containsKey(proName)) { //comparison the value of two Property Property otherPro = allProperty.get(proName); //if this property'value of loop is empty,use the other,whether the other's value is empty or not if (StringUtils.isEmpty(thisProperty.getValue())) { allProperty.put(proName, otherPro); //if property'value of loop is not empty,and the other's value is not empty too, use the earlier value } else if (StringUtils.isNotEmpty(otherPro.getValue())) { TaskInstance otherTask = allTaskInstance.get(proName); if (otherTask.getEndTime().getTime() > preTaskInstance.getEndTime().getTime()) { allProperty.put(proName, thisProperty); allTaskInstance.put(proName, preTaskInstance); } else { allProperty.put(proName, otherPro); } } else { allProperty.put(proName, thisProperty); allTaskInstance.put(proName, preTaskInstance); } } else { allProperty.put(proName, thisProperty); allTaskInstance.put(proName, preTaskInstance); } } /** * get complete task instance map, taskCode as key */ private Map<String, TaskInstance> getCompleteTaskInstanceMap() { Map<String, TaskInstance> completeTaskInstanceMap = new HashMap<>(); for (Integer taskInstanceId : completeTaskMap.values()) { TaskInstance taskInstance = taskInstanceMap.get(taskInstanceId); completeTaskInstanceMap.put(Long.toString(taskInstance.getTaskCode()), taskInstance); } return completeTaskInstanceMap; } /** * get valid task list */ private List<TaskInstance> getValidTaskList() { List<TaskInstance> validTaskInstanceList = new ArrayList<>(); for (Integer taskInstanceId : validTaskMap.values()) { validTaskInstanceList.add(taskInstanceMap.get(taskInstanceId)); } return validTaskInstanceList; } private void submitPostNode(String parentNodeCode) { Set<String> submitTaskNodeList = DagHelper.parsePostNodes(parentNodeCode, skipTaskNodeMap, dag, getCompleteTaskInstanceMap()); List<TaskInstance> taskInstances = new ArrayList<>(); for (String taskNode : submitTaskNodeList) { TaskNode taskNodeObject = dag.getNode(taskNode); if (checkTaskInstanceByCode(taskNodeObject.getCode())) { continue; } TaskInstance task = createTaskInstance(processInstance, taskNodeObject); taskInstances.add(task); } // if previous node success , post node submit for (TaskInstance task : taskInstances) { if (readyToSubmitTaskQueue.contains(task)) { continue; } if (completeTaskMap.containsKey(Long.toString(task.getTaskCode()))) { logger.info("task {} has already run success", task.getName()); continue; } if (task.getState().typeIsPause() || task.getState().typeIsCancel()) { logger.info("task {} stopped, the state is {}", task.getName(), task.getState()); continue; } addTaskToStandByList(task); } submitStandByTask(); updateProcessInstanceState(); } /** * determine whether the dependencies of the task node are complete * * @return DependResult */ private DependResult isTaskDepsComplete(String taskCode) { Collection<String> startNodes = dag.getBeginNode(); // if vertex,returns true directly if (startNodes.contains(taskCode)) { return DependResult.SUCCESS; } TaskNode taskNode = dag.getNode(taskCode); List<String> depCodeList = taskNode.getDepList(); for (String depsNode : depCodeList) { if (!dag.containsNode(depsNode) || forbiddenTaskMap.containsKey(depsNode) || skipTaskNodeMap.containsKey(depsNode)) { continue; } // dependencies must be fully completed if (!completeTaskMap.containsKey(depsNode)) { return 
DependResult.WAITING; } Integer depsTaskId = completeTaskMap.get(depsNode); ExecutionStatus depTaskState = taskInstanceMap.get(depsTaskId).getState(); if (depTaskState.typeIsPause() || depTaskState.typeIsCancel()) { return DependResult.NON_EXEC; } // ignore task state if current task is condition if (taskNode.isConditionsTask()) { continue; } if (!dependTaskSuccess(depsNode, taskCode)) { return DependResult.FAILED; } } logger.info("taskCode: {} completeDependTaskList: {}", taskCode, Arrays.toString(completeTaskMap.keySet().toArray())); return DependResult.SUCCESS; } /** * depend node is completed, but here need check the condition task branch is the next node */ private boolean dependTaskSuccess(String dependNodeName, String nextNodeName) { if (dag.getNode(dependNodeName).isConditionsTask()) { //condition task need check the branch to run List<String> nextTaskList = DagHelper.parseConditionTask(dependNodeName, skipTaskNodeMap, dag, getCompleteTaskInstanceMap()); if (!nextTaskList.contains(nextNodeName)) { return false; } } else { Integer taskInstanceId = completeTaskMap.get(dependNodeName); ExecutionStatus depTaskState = taskInstanceMap.get(taskInstanceId).getState(); if (depTaskState.typeIsFailure()) { return false; } } return true; } /** * query task instance by complete state * * @param state state * @return task instance list */ private List<TaskInstance> getCompleteTaskByState(ExecutionStatus state) { List<TaskInstance> resultList = new ArrayList<>(); for (Integer taskInstanceId : completeTaskMap.values()) { TaskInstance taskInstance = taskInstanceMap.get(taskInstanceId); if (taskInstance != null && taskInstance.getState() == state) { resultList.add(taskInstance); } } return resultList; } /** * where there are ongoing tasks * * @param state state * @return ExecutionStatus */ private ExecutionStatus runningState(ExecutionStatus state) { if (state == ExecutionStatus.READY_STOP || state == ExecutionStatus.READY_PAUSE || state == ExecutionStatus.WAITING_THREAD || state == ExecutionStatus.DELAY_EXECUTION) { // if the running task is not completed, the state remains unchanged return state; } else { return ExecutionStatus.RUNNING_EXECUTION; } } /** * exists failure task,contains submit failure、dependency failure,execute failure(retry after) * * @return Boolean whether has failed task */ private boolean hasFailedTask() { if (this.taskFailedSubmit) { return true; } if (this.errorTaskMap.size() > 0) { return true; } return this.dependFailedTaskMap.size() > 0; } /** * process instance failure * * @return Boolean whether process instance failed */ private boolean processFailed() { if (hasFailedTask()) { if (processInstance.getFailureStrategy() == FailureStrategy.END) { return true; } if (processInstance.getFailureStrategy() == FailureStrategy.CONTINUE) { return readyToSubmitTaskQueue.size() == 0 && activeTaskProcessorMaps.size() == 0; } } return false; } /** * whether task for waiting thread * * @return Boolean whether has waiting thread task */ private boolean hasWaitingThreadTask() { List<TaskInstance> waitingList = getCompleteTaskByState(ExecutionStatus.WAITING_THREAD); return CollectionUtils.isNotEmpty(waitingList); } /** * prepare for pause * 1,failed retry task in the preparation queue , returns to failure directly * 2,exists pause task,complement not completed, pending submission of tasks, return to suspension * 3,success * * @return ExecutionStatus */ private ExecutionStatus processReadyPause() { if (hasRetryTaskInStandBy()) { return ExecutionStatus.FAILURE; } List<TaskInstance> pauseList 
= getCompleteTaskByState(ExecutionStatus.PAUSE); if (CollectionUtils.isNotEmpty(pauseList) || !isComplementEnd() || readyToSubmitTaskQueue.size() > 0) { return ExecutionStatus.PAUSE; } else { return ExecutionStatus.SUCCESS; } } /** * generate the latest process instance status by the tasks state * * @return process instance execution status */ private ExecutionStatus getProcessInstanceState(ProcessInstance instance) { ExecutionStatus state = instance.getState(); if (activeTaskProcessorMaps.size() > 0 || hasRetryTaskInStandBy()) { // active task and retry task exists return runningState(state); } // process failure if (processFailed()) { return ExecutionStatus.FAILURE; } // waiting thread if (hasWaitingThreadTask()) { return ExecutionStatus.WAITING_THREAD; } // pause if (state == ExecutionStatus.READY_PAUSE) { return processReadyPause(); } // stop if (state == ExecutionStatus.READY_STOP) { List<TaskInstance> stopList = getCompleteTaskByState(ExecutionStatus.STOP); List<TaskInstance> killList = getCompleteTaskByState(ExecutionStatus.KILL); if (CollectionUtils.isNotEmpty(stopList) || CollectionUtils.isNotEmpty(killList) || !isComplementEnd()) { return ExecutionStatus.STOP; } else { return ExecutionStatus.SUCCESS; } } // success if (state == ExecutionStatus.RUNNING_EXECUTION) { List<TaskInstance> killTasks = getCompleteTaskByState(ExecutionStatus.KILL); if (readyToSubmitTaskQueue.size() > 0) { //tasks currently pending submission, no retries, indicating that depend is waiting to complete return ExecutionStatus.RUNNING_EXECUTION; } else if (CollectionUtils.isNotEmpty(killTasks)) { // tasks maybe killed manually return ExecutionStatus.FAILURE; } else { // if the waiting queue is empty and the status is in progress, then success return ExecutionStatus.SUCCESS; } } return state; } /** * whether complement end * * @return Boolean whether is complement end */ private boolean isComplementEnd() { if (!processInstance.isComplementData()) { return true; } try { Map<String, String> cmdParam = JSONUtils.toMap(processInstance.getCommandParam()); Date endTime = DateUtils.getScheduleDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_END_DATE)); return processInstance.getScheduleTime().equals(endTime); } catch (Exception e) { logger.error("complement end failed ", e); return false; } } /** * updateProcessInstance process instance state * after each batch of tasks is executed, the status of the process instance is updated */ private void updateProcessInstanceState() { ExecutionStatus state = getProcessInstanceState(processInstance); if (processInstance.getState() != state) { logger.info( "work flow process instance [id: {}, name:{}], state change from {} to {}, cmd type: {}", processInstance.getId(), processInstance.getName(), processInstance.getState(), state, processInstance.getCommandType()); processInstance.setState(state); if (state.typeIsFinished()) { processInstance.setEndTime(new Date()); } processService.updateProcessInstance(processInstance); StateEvent stateEvent = new StateEvent(); stateEvent.setExecutionStatus(processInstance.getState()); stateEvent.setProcessInstanceId(this.processInstance.getId()); stateEvent.setType(StateEventType.PROCESS_STATE_CHANGE); this.processStateChangeHandler(stateEvent); } } /** * get task dependency result * * @param taskInstance task instance * @return DependResult */ private DependResult getDependResultForTask(TaskInstance taskInstance) { return isTaskDepsComplete(Long.toString(taskInstance.getTaskCode())); } /** * add task to standby list * * @param taskInstance task 
instance */ private void addTaskToStandByList(TaskInstance taskInstance) { logger.info("add task to stand by list: {}", taskInstance.getName()); try { if (!readyToSubmitTaskQueue.contains(taskInstance)) { readyToSubmitTaskQueue.put(taskInstance); } } catch (Exception e) { logger.error("add task instance to readyToSubmitTaskQueue error, taskName: {}", taskInstance.getName(), e); } } /** * remove task from stand by list * * @param taskInstance task instance */ private void removeTaskFromStandbyList(TaskInstance taskInstance) { logger.info("remove task from stand by list, id: {} name:{}", taskInstance.getId(), taskInstance.getName()); try { readyToSubmitTaskQueue.remove(taskInstance); } catch (Exception e) { logger.error("remove task instance from readyToSubmitTaskQueue error, task id:{}, Name: {}", taskInstance.getId(), taskInstance.getName(), e); } } /** * has retry task in standby * * @return Boolean whether has retry task in standby */ private boolean hasRetryTaskInStandBy() { for (Iterator<TaskInstance> iter = readyToSubmitTaskQueue.iterator(); iter.hasNext(); ) { if (iter.next().getState().typeIsFailure()) { return true; } } return false; } /** * close the on going tasks */ private void killAllTasks() { logger.info("kill called on process instance id: {}, num: {}", processInstance.getId(), activeTaskProcessorMaps.size()); for (int taskId : activeTaskProcessorMaps.keySet()) { TaskInstance taskInstance = processService.findTaskInstanceById(taskId); if (taskInstance == null || taskInstance.getState().typeIsFinished()) { continue; } ITaskProcessor taskProcessor = activeTaskProcessorMaps.get(taskId); taskProcessor.action(TaskAction.STOP); if (taskProcessor.taskState().typeIsFinished()) { StateEvent stateEvent = new StateEvent(); stateEvent.setType(StateEventType.TASK_STATE_CHANGE); stateEvent.setProcessInstanceId(this.processInstance.getId()); stateEvent.setTaskInstanceId(taskInstance.getId()); stateEvent.setExecutionStatus(taskProcessor.taskState()); this.addStateEvent(stateEvent); } } } public boolean workFlowFinish() { return this.processInstance.getState().typeIsFinished(); } /** * handling the list of tasks to be submitted */ private void submitStandByTask() { try { int length = readyToSubmitTaskQueue.size(); for (int i = 0; i < length; i++) { TaskInstance task = readyToSubmitTaskQueue.peek(); if (task == null) { continue; } // stop tasks which is retrying if forced success happens if (task.taskCanRetry()) { TaskInstance retryTask = processService.findTaskInstanceById(task.getId()); if (retryTask != null && retryTask.getState().equals(ExecutionStatus.FORCED_SUCCESS)) { task.setState(retryTask.getState()); logger.info("task: {} has been forced success, put it into complete task list and stop retrying", task.getName()); removeTaskFromStandbyList(task); completeTaskMap.put(Long.toString(task.getTaskCode()), task.getId()); taskInstanceMap.put(task.getId(), task); submitPostNode(Long.toString(task.getTaskCode())); continue; } } //init varPool only this task is the first time running if (task.isFirstRun()) { //get pre task ,get all the task varPool to this task Set<String> preTask = dag.getPreviousNodes(Long.toString(task.getTaskCode())); getPreVarPool(task, preTask); } DependResult dependResult = getDependResultForTask(task); if (DependResult.SUCCESS == dependResult) { if (task.retryTaskIntervalOverTime()) { int originalId = task.getId(); TaskInstance taskInstance = submitTaskExec(task); if (taskInstance == null) { this.taskFailedSubmit = true; } else { removeTaskFromStandbyList(task); if 
(taskInstance.getId() != originalId) { activeTaskProcessorMaps.remove(originalId); } } } } else if (DependResult.FAILED == dependResult) { // if the dependency fails, the current node is not submitted and the state changes to failure. dependFailedTaskMap.put(Long.toString(task.getTaskCode()), task.getId()); removeTaskFromStandbyList(task); logger.info("task {},id:{} depend result : {}", task.getName(), task.getId(), dependResult); } else if (DependResult.NON_EXEC == dependResult) { // for some reasons(depend task pause/stop) this task would not be submit removeTaskFromStandbyList(task); logger.info("remove task {},id:{} , because depend result : {}", task.getName(), task.getId(), dependResult); } } } catch (Exception e) { logger.error("submit standby task error", e); } } /** * get recovery task instance * * @param taskId task id * @return recovery task instance */ private TaskInstance getRecoveryTaskInstance(String taskId) { if (!StringUtils.isNotEmpty(taskId)) { return null; } try { Integer intId = Integer.valueOf(taskId); TaskInstance task = processService.findTaskInstanceById(intId); if (task == null) { logger.error("start node id cannot be found: {}", taskId); } else { return task; } } catch (Exception e) { logger.error("get recovery task instance failed ", e); } return null; } /** * get start task instance list * * @param cmdParam command param * @return task instance list */ private List<TaskInstance> getStartTaskInstanceList(String cmdParam) { List<TaskInstance> instanceList = new ArrayList<>(); Map<String, String> paramMap = JSONUtils.toMap(cmdParam); if (paramMap != null && paramMap.containsKey(CMD_PARAM_RECOVERY_START_NODE_STRING)) { String[] idList = paramMap.get(CMD_PARAM_RECOVERY_START_NODE_STRING).split(Constants.COMMA); for (String nodeId : idList) { TaskInstance task = getRecoveryTaskInstance(nodeId); if (task != null) { instanceList.add(task); } } } return instanceList; } /** * parse "StartNodeNameList" from cmd param * * @param cmdParam command param * @return start node name list */ private List<String> parseStartNodeName(String cmdParam) { List<String> startNodeNameList = new ArrayList<>(); Map<String, String> paramMap = JSONUtils.toMap(cmdParam); if (paramMap == null) { return startNodeNameList; } if (paramMap.containsKey(CMD_PARAM_START_NODES)) { startNodeNameList = Arrays.asList(paramMap.get(CMD_PARAM_START_NODES).split(Constants.COMMA)); } return startNodeNameList; } /** * generate start node code list from parsing command param; * if "StartNodeIdList" exists in command param, return StartNodeIdList * * @return recovery node code list */ private List<String> getRecoveryNodeCodeList(List<TaskInstance> recoverNodeList) { List<String> recoveryNodeCodeList = new ArrayList<>(); if (CollectionUtils.isNotEmpty(recoverNodeList)) { for (TaskInstance task : recoverNodeList) { recoveryNodeCodeList.add(Long.toString(task.getTaskCode())); } } return recoveryNodeCodeList; } /** * generate flow dag * * @param totalTaskNodeList total task node list * @param startNodeNameList start node name list * @param recoveryNodeCodeList recovery node code list * @param depNodeType depend node type * @return ProcessDag process dag * @throws Exception exception */ public ProcessDag generateFlowDag(List<TaskNode> totalTaskNodeList, List<String> startNodeNameList, List<String> recoveryNodeCodeList, TaskDependType depNodeType) throws Exception { return DagHelper.generateFlowDag(totalTaskNodeList, startNodeNameList, recoveryNodeCodeList, depNodeType); } /** * check task queue */ private boolean 
checkTaskQueue() { AtomicBoolean result = new AtomicBoolean(false); taskInstanceMap.forEach((id, taskInstance) -> { if (taskInstance != null && taskInstance.getTaskGroupId() > 0) { result.set(true); } }); return result.get(); } /** * whether this is a new process instance */ private boolean isNewProcessInstance() { return ExecutionStatus.RUNNING_EXECUTION == processInstance.getState() && processInstance.getRunTimes() == 1; } }
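The dependency logic above boils each upstream task's completion state down to a DependResult: WAITING until the upstream finishes, NON_EXEC when it was paused or killed, FAILED on failure, SUCCESS otherwise. A minimal, self-contained sketch of that reduction follows; `CompletedState` and the method shape are simplified stand-ins for the scheduler's real ExecutionStatus handling, not its actual API:

```java
import java.util.Map;

// Simplified stand-ins for ExecutionStatus / DependResult, for illustration only.
enum CompletedState { SUCCESS, FAILURE, PAUSE, KILL }
enum DependResultSketch { WAITING, NON_EXEC, FAILED, SUCCESS }

class DependCheckSketch {
    // Mirrors the WAITING / NON_EXEC / FAILED / SUCCESS branches above:
    // a task may run only once every upstream task has completed successfully.
    static DependResultSketch check(Iterable<String> upstreamCodes, Map<String, CompletedState> completed) {
        for (String code : upstreamCodes) {
            CompletedState state = completed.get(code);
            if (state == null) {
                return DependResultSketch.WAITING;   // upstream not finished yet
            }
            if (state == CompletedState.PAUSE || state == CompletedState.KILL) {
                return DependResultSketch.NON_EXEC;  // paused/stopped upstream: never submit
            }
            if (state == CompletedState.FAILURE) {
                return DependResultSketch.FAILED;
            }
        }
        return DependResultSketch.SUCCESS;
    }
}
```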
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,607
[Bug] [dolphinscheduler-api] Failed to execute PROCEDURE node
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened 1. An error is reported when a stored procedure parameter in IN mode references a user-defined parameter ![image](https://user-images.githubusercontent.com/95271106/147314112-2de413d6-8a26-4273-9498-2a2d3b267b2c.png) ![image](https://user-images.githubusercontent.com/95271106/147314340-4f353213-0f1c-41e5-ad74-301c7edee50c.png) 2. An error is reported when a stored procedure parameter in OUT mode references a user-defined parameter ![image](https://user-images.githubusercontent.com/95271106/147314276-6184d141-c402-4f52-84f0-3bcd110f798c.png) ![image](https://user-images.githubusercontent.com/95271106/147314303-d7794de0-ed75-45a6-94ff-35d76fb87728.png) ### What you expected to happen . ### How to reproduce . ### Anything else _No response_ ### Version 2.0.1 ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7607
https://github.com/apache/dolphinscheduler/pull/7680
2b05ebf47e3f6c3d02810a8c0dd18c32be4ffde0
cc77963522536526618e19216d38dcd4fd8da472
"2021-12-24T03:58:29Z"
java
"2021-12-28T11:19:30Z"
dolphinscheduler-task-plugin/dolphinscheduler-task-api/src/main/java/org/apache/dolphinscheduler/plugin/task/api/AbstractTaskExecutor.java
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.dolphinscheduler.plugin.task.api; import org.apache.dolphinscheduler.spi.task.AbstractTask; import org.apache.dolphinscheduler.spi.task.request.TaskRequest; import java.util.StringJoiner; import java.util.concurrent.LinkedBlockingQueue; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.slf4j.Marker; import org.slf4j.MarkerFactory; public abstract class AbstractTaskExecutor extends AbstractTask { public static final Marker FINALIZE_SESSION_MARKER = MarkerFactory.getMarker("FINALIZE_SESSION"); protected Logger logger; /** * constructor * * @param taskRequest taskRequest */ protected AbstractTaskExecutor(TaskRequest taskRequest) { super(taskRequest); logger = LoggerFactory.getLogger(taskRequest.getTaskLogName()); } /** * log handle * * @param logs log list */ public void logHandle(LinkedBlockingQueue<String> logs) { // note that the "new line" is added here to facilitate log parsing if (logs.contains(FINALIZE_SESSION_MARKER.toString())) { logger.info(FINALIZE_SESSION_MARKER, FINALIZE_SESSION_MARKER.toString()); } else { StringJoiner joiner = new StringJoiner("\n\t"); while (!logs.isEmpty()) { joiner.add(logs.poll()); } logger.info(" -> {}", joiner); } } }
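logHandle above drains the whole queue and emits a single joined log record with tab-indented continuation lines, unless the finalize-session marker is present. Below is a hedged, self-contained sketch that inlines that draining logic, since calling logHandle directly would require constructing a TaskRequest; the producer lines are made up for illustration:

```java
import java.util.StringJoiner;
import java.util.concurrent.LinkedBlockingQueue;

// Hedged sketch: the queue-draining behaviour of logHandle, inlined so it
// runs standalone. Not the worker's real log-forwarding loop.
public class LogHandleSketch {
    public static void main(String[] args) throws InterruptedException {
        LinkedBlockingQueue<String> logs = new LinkedBlockingQueue<>();
        logs.put("task started");
        logs.put("step 1 done");
        logs.put("step 2 done");
        StringJoiner joiner = new StringJoiner("\n\t"); // continuation lines are tab-indented
        while (!logs.isEmpty()) {
            joiner.add(logs.poll());
        }
        System.out.println(" -> " + joiner); // one log record for the whole batch
    }
}
```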
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,607
[Bug] [dolphinscheduler-api] Failed to execute PROCEDURE node
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened 1. An error is reported when a stored procedure parameter in IN mode references a user-defined parameter ![image](https://user-images.githubusercontent.com/95271106/147314112-2de413d6-8a26-4273-9498-2a2d3b267b2c.png) ![image](https://user-images.githubusercontent.com/95271106/147314340-4f353213-0f1c-41e5-ad74-301c7edee50c.png) 2. An error is reported when a stored procedure parameter in OUT mode references a user-defined parameter ![image](https://user-images.githubusercontent.com/95271106/147314276-6184d141-c402-4f52-84f0-3bcd110f798c.png) ![image](https://user-images.githubusercontent.com/95271106/147314303-d7794de0-ed75-45a6-94ff-35d76fb87728.png) ### What you expected to happen . ### How to reproduce . ### Anything else _No response_ ### Version 2.0.1 ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7607
https://github.com/apache/dolphinscheduler/pull/7680
2b05ebf47e3f6c3d02810a8c0dd18c32be4ffde0
cc77963522536526618e19216d38dcd4fd8da472
"2021-12-24T03:58:29Z"
java
"2021-12-28T11:19:30Z"
dolphinscheduler-task-plugin/dolphinscheduler-task-procedure/src/main/java/org/apache/dolphinscheduler/plugin/task/procedure/ProcedureParameters.java
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.dolphinscheduler.plugin.task.procedure; import org.apache.dolphinscheduler.spi.task.AbstractParameters; import org.apache.dolphinscheduler.spi.task.ResourceInfo; import org.apache.dolphinscheduler.spi.utils.StringUtils; import java.util.ArrayList; import java.util.List; /** * procedure parameter */ public class ProcedureParameters extends AbstractParameters { /** * data source type, e.g. MYSQL, POSTGRES, HIVE ... */ private String type; /** * data source id */ private int datasource; /** * procedure name */ private String method; public String getType() { return type; } public void setType(String type) { this.type = type; } public int getDatasource() { return datasource; } public void setDatasource(int datasource) { this.datasource = datasource; } public String getMethod() { return method; } public void setMethod(String method) { this.method = method; } @Override public boolean checkParameters() { return datasource != 0 && StringUtils.isNotEmpty(type) && StringUtils.isNotEmpty(method); } @Override public List<ResourceInfo> getResourceFilesList() { return new ArrayList<>(); } @Override public String toString() { return "ProcedureParam{" + "type='" + type + '\'' + ", datasource=" + datasource + ", method='" + method + '\'' + '}'; } }
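ProcedureParameters is populated by deserializing the task's JSON params string (ProcedureTask does this through JSONUtils in its constructor). A hedged round-trip sketch follows, using a plain Jackson ObjectMapper instead of the project's JSONUtils wrapper; the JSON payload and procedure name are made up for illustration:

```java
import com.fasterxml.jackson.databind.ObjectMapper;

// Hedged sketch: deserializing procedure task params. The project itself
// routes this through JSONUtils rather than a bare ObjectMapper.
public class ProcedureParamsSketch {
    public static void main(String[] args) throws Exception {
        String taskParams = "{\"type\":\"MYSQL\",\"datasource\":1,\"method\":\"{call test_proc(?, ?)}\"}";
        ProcedureParameters params = new ObjectMapper().readValue(taskParams, ProcedureParameters.class);
        System.out.println(params.checkParameters()); // true: type, datasource and method are all set
    }
}
```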
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,607
[Bug] [dolphinscheduler-api] Failed to execute PROCEDURE node
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened 1. An error is reported when a stored procedure parameter in IN mode references a user-defined parameter ![image](https://user-images.githubusercontent.com/95271106/147314112-2de413d6-8a26-4273-9498-2a2d3b267b2c.png) ![image](https://user-images.githubusercontent.com/95271106/147314340-4f353213-0f1c-41e5-ad74-301c7edee50c.png) 2. An error is reported when a stored procedure parameter in OUT mode references a user-defined parameter ![image](https://user-images.githubusercontent.com/95271106/147314276-6184d141-c402-4f52-84f0-3bcd110f798c.png) ![image](https://user-images.githubusercontent.com/95271106/147314303-d7794de0-ed75-45a6-94ff-35d76fb87728.png) ### What you expected to happen . ### How to reproduce . ### Anything else _No response_ ### Version 2.0.1 ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7607
https://github.com/apache/dolphinscheduler/pull/7680
2b05ebf47e3f6c3d02810a8c0dd18c32be4ffde0
cc77963522536526618e19216d38dcd4fd8da472
"2021-12-24T03:58:29Z"
java
"2021-12-28T11:19:30Z"
dolphinscheduler-task-plugin/dolphinscheduler-task-procedure/src/main/java/org/apache/dolphinscheduler/plugin/task/procedure/ProcedureTask.java
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.dolphinscheduler.plugin.task.procedure; import static org.apache.dolphinscheduler.spi.task.TaskConstants.EXIT_CODE_FAILURE; import static org.apache.dolphinscheduler.spi.task.TaskConstants.EXIT_CODE_SUCCESS; import static org.apache.dolphinscheduler.spi.task.TaskConstants.TASK_LOG_INFO_FORMAT; import org.apache.dolphinscheduler.plugin.datasource.api.plugin.DataSourceClientProvider; import org.apache.dolphinscheduler.plugin.datasource.api.utils.DataSourceUtils; import org.apache.dolphinscheduler.plugin.task.api.AbstractTaskExecutor; import org.apache.dolphinscheduler.spi.datasource.ConnectionParam; import org.apache.dolphinscheduler.spi.enums.DataType; import org.apache.dolphinscheduler.spi.enums.DbType; import org.apache.dolphinscheduler.spi.enums.TaskTimeoutStrategy; import org.apache.dolphinscheduler.spi.task.AbstractParameters; import org.apache.dolphinscheduler.spi.task.Direct; import org.apache.dolphinscheduler.spi.task.Property; import org.apache.dolphinscheduler.spi.task.paramparser.ParamUtils; import org.apache.dolphinscheduler.spi.task.paramparser.ParameterUtils; import org.apache.dolphinscheduler.spi.task.request.TaskRequest; import org.apache.dolphinscheduler.spi.utils.JSONUtils; import org.apache.dolphinscheduler.spi.utils.StringUtils; import org.apache.commons.collections4.CollectionUtils; import java.sql.CallableStatement; import java.sql.Connection; import java.sql.PreparedStatement; import java.sql.SQLException; import java.sql.Types; import java.util.Collection; import java.util.HashMap; import java.util.Map; /** * procedure task */ public class ProcedureTask extends AbstractTaskExecutor { /** * procedure parameters */ private ProcedureParameters procedureParameters; /** * taskExecutionContext */ private TaskRequest taskExecutionContext; /** * constructor * * @param taskExecutionContext taskExecutionContext */ public ProcedureTask(TaskRequest taskExecutionContext) { super(taskExecutionContext); this.taskExecutionContext = taskExecutionContext; logger.info("procedure task params {}", taskExecutionContext.getTaskParams()); this.procedureParameters = JSONUtils.parseObject(taskExecutionContext.getTaskParams(), ProcedureParameters.class); // check parameters if (!procedureParameters.checkParameters()) { throw new RuntimeException("procedure task params is not valid"); } } @Override public void handle() throws Exception { // set the name of the current thread String threadLoggerInfoName = String.format(TASK_LOG_INFO_FORMAT, taskExecutionContext.getTaskAppId()); Thread.currentThread().setName(threadLoggerInfoName); logger.info("procedure type : {}, datasource : {}, method : {} , localParams : {}", procedureParameters.getType(), procedureParameters.getDatasource(), 
procedureParameters.getMethod(), procedureParameters.getLocalParams()); Connection connection = null; CallableStatement stmt = null; try { // load class DbType dbType = DbType.valueOf(procedureParameters.getType()); // get datasource ConnectionParam connectionParam = DataSourceUtils.buildConnectionParams(DbType.valueOf(procedureParameters.getType()), taskExecutionContext.getProcedureTaskExecutionContext().getConnectionParams()); // get jdbc connection connection = DataSourceClientProvider.getInstance().getConnection(dbType, connectionParam); // combining local and global parameters Map<String, Property> paramsMap = ParamUtils.convert(taskExecutionContext, getParameters()); // call method stmt = connection.prepareCall(procedureParameters.getMethod()); // set timeout setTimeout(stmt); // outParameterMap Map<Integer, Property> outParameterMap = getOutParameterMap(stmt, paramsMap); stmt.executeUpdate(); // print the output parameters to the log printOutParameter(stmt, outParameterMap); setExitStatusCode(EXIT_CODE_SUCCESS); } catch (Exception e) { setExitStatusCode(EXIT_CODE_FAILURE); logger.error("procedure task error", e); throw e; } finally { close(stmt, connection); } } /** * print outParameter * * @param stmt CallableStatement * @param outParameterMap outParameterMap * @throws SQLException SQLException */ private void printOutParameter(CallableStatement stmt, Map<Integer, Property> outParameterMap) throws SQLException { for (Map.Entry<Integer, Property> en : outParameterMap.entrySet()) { int index = en.getKey(); Property property = en.getValue(); String prop = property.getProp(); DataType dataType = property.getType(); // get output parameter getOutputParameter(stmt, index, prop, dataType); } } /** * build the out parameter map * * @param stmt CallableStatement * @param paramsMap paramsMap * @return outParameterMap * @throws Exception Exception */ private Map<Integer, Property> getOutParameterMap(CallableStatement stmt, Map<String, Property> paramsMap) throws Exception { Map<Integer, Property> outParameterMap = new HashMap<>(); if (procedureParameters.getLocalParametersMap() == null) { return outParameterMap; } Collection<Property> userDefParamsList = procedureParameters.getLocalParametersMap().values(); if (CollectionUtils.isEmpty(userDefParamsList)) { return outParameterMap; } int index = 1; for (Property property : userDefParamsList) { logger.info("localParams : prop : {} , direct : {} , type : {} , value : {}" , property.getProp(), property.getDirect(), property.getType(), property.getValue()); // set parameters if (property.getDirect().equals(Direct.IN)) { ParameterUtils.setInParameter(index, stmt, property.getType(), paramsMap.get(property.getProp()).getValue()); } else if (property.getDirect().equals(Direct.OUT)) { setOutParameter(index, stmt, property.getType(), paramsMap.get(property.getProp()).getValue()); property.setValue(paramsMap.get(property.getProp()).getValue()); outParameterMap.put(index, property); } index++; } return outParameterMap; } /** * set timeout * * @param stmt CallableStatement */ private void setTimeout(CallableStatement stmt) throws SQLException { boolean failed = taskExecutionContext.getTaskTimeoutStrategy() == TaskTimeoutStrategy.FAILED; boolean warnFailed = taskExecutionContext.getTaskTimeoutStrategy() == TaskTimeoutStrategy.WARNFAILED; if (failed || warnFailed) { stmt.setQueryTimeout(taskExecutionContext.getTaskTimeout()); } } /** * close jdbc resource * * @param stmt stmt * @param connection connection */ private void close(PreparedStatement stmt, Connection connection) { if (stmt != null) { try { stmt.close(); } catch (SQLException e) { logger.error("close prepared statement error : {}", e.getMessage(), e); } } if (connection != null) { try { connection.close(); } catch (SQLException e) { logger.error("close connection error : {}", e.getMessage(), e); } } } /** * get output parameter * * @param stmt stmt * @param index index * @param prop prop * @param dataType dataType * @throws SQLException SQLException */ private void getOutputParameter(CallableStatement stmt, int index, String prop, DataType dataType) throws SQLException { switch (dataType) { case VARCHAR: logger.info("out parameter varchar key : {} , value : {}", prop, stmt.getString(index)); break; case INTEGER: logger.info("out parameter integer key : {} , value : {}", prop, stmt.getInt(index)); break; case LONG: logger.info("out parameter long key : {} , value : {}", prop, stmt.getLong(index)); break; case FLOAT: logger.info("out parameter float key : {} , value : {}", prop, stmt.getFloat(index)); break; case DOUBLE: logger.info("out parameter double key : {} , value : {}", prop, stmt.getDouble(index)); break; case DATE: logger.info("out parameter date key : {} , value : {}", prop, stmt.getDate(index)); break; case TIME: logger.info("out parameter time key : {} , value : {}", prop, stmt.getTime(index)); break; case TIMESTAMP: logger.info("out parameter timestamp key : {} , value : {}", prop, stmt.getTimestamp(index)); break; case BOOLEAN: logger.info("out parameter boolean key : {} , value : {}", prop, stmt.getBoolean(index)); break; default: break; } } @Override public AbstractParameters getParameters() { return procedureParameters; } /** * set out parameter * * @param index index * @param stmt stmt * @param dataType dataType * @param value value * @throws Exception exception */ private void setOutParameter(int index, CallableStatement stmt, DataType dataType, String value) throws Exception { int sqlType; switch (dataType) { case VARCHAR: sqlType = Types.VARCHAR; break; case INTEGER: case LONG: sqlType = Types.INTEGER; break; case FLOAT: sqlType = Types.FLOAT; break; case DOUBLE: sqlType = Types.DOUBLE; break; case DATE: sqlType = Types.DATE; break; case TIME: sqlType = Types.TIME; break; case TIMESTAMP: sqlType = Types.TIMESTAMP; break; case BOOLEAN: sqlType = Types.BOOLEAN; break; default: throw new IllegalStateException("Unexpected value: " + dataType); } if (StringUtils.isEmpty(value)) { stmt.registerOutParameter(index, sqlType); } else { stmt.registerOutParameter(index, sqlType, value); } } }
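For readers tracing getOutParameterMap and setOutParameter above: the plain-JDBC pattern they wrap is shown in the hedged, self-contained sketch below. The JDBC URL, credentials and procedure name are illustrative assumptions, not values from the project:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

// Hedged sketch of the CallableStatement pattern ProcedureTask drives:
// bind IN parameters by index, register OUT parameters, execute, read back.
public class CallableStatementSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "root", "secret");       // assumed URL/credentials
             CallableStatement stmt = conn.prepareCall("{call test_proc(?, ?)}")) { // assumed procedure
            stmt.setInt(1, 42);                          // IN parameter at index 1
            stmt.registerOutParameter(2, Types.VARCHAR); // OUT parameter at index 2
            stmt.executeUpdate();
            System.out.println("out value: " + stmt.getString(2));
        }
    }
}
```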
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,607
[Bug] [dolphinscheduler-api] Failed to execute PROCEDURE node
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened 1. An error is reported when a stored procedure parameter in IN mode references a user-defined parameter ![image](https://user-images.githubusercontent.com/95271106/147314112-2de413d6-8a26-4273-9498-2a2d3b267b2c.png) ![image](https://user-images.githubusercontent.com/95271106/147314340-4f353213-0f1c-41e5-ad74-301c7edee50c.png) 2. An error is reported when a stored procedure parameter in OUT mode references a user-defined parameter ![image](https://user-images.githubusercontent.com/95271106/147314276-6184d141-c402-4f52-84f0-3bcd110f798c.png) ![image](https://user-images.githubusercontent.com/95271106/147314303-d7794de0-ed75-45a6-94ff-35d76fb87728.png) ### What you expected to happen . ### How to reproduce . ### Anything else _No response_ ### Version 2.0.1 ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7607
https://github.com/apache/dolphinscheduler/pull/7680
2b05ebf47e3f6c3d02810a8c0dd18c32be4ffde0
cc77963522536526618e19216d38dcd4fd8da472
"2021-12-24T03:58:29Z"
java
"2021-12-28T11:19:30Z"
dolphinscheduler-task-plugin/dolphinscheduler-task-sql/src/main/java/org/apache/dolphinscheduler/plugin/task/sql/SqlTask.java
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.dolphinscheduler.plugin.task.sql; import org.apache.dolphinscheduler.plugin.datasource.api.plugin.DataSourceClientProvider; import org.apache.dolphinscheduler.plugin.datasource.api.utils.CommonUtils; import org.apache.dolphinscheduler.plugin.datasource.api.utils.DataSourceUtils; import org.apache.dolphinscheduler.plugin.task.api.AbstractTaskExecutor; import org.apache.dolphinscheduler.plugin.task.api.TaskException; import org.apache.dolphinscheduler.plugin.task.util.MapUtils; import org.apache.dolphinscheduler.spi.datasource.BaseConnectionParam; import org.apache.dolphinscheduler.spi.enums.DbType; import org.apache.dolphinscheduler.spi.enums.TaskTimeoutStrategy; import org.apache.dolphinscheduler.spi.task.AbstractParameters; import org.apache.dolphinscheduler.spi.task.Direct; import org.apache.dolphinscheduler.spi.task.Property; import org.apache.dolphinscheduler.spi.task.TaskAlertInfo; import org.apache.dolphinscheduler.spi.task.TaskConstants; import org.apache.dolphinscheduler.spi.task.paramparser.ParamUtils; import org.apache.dolphinscheduler.spi.task.paramparser.ParameterUtils; import org.apache.dolphinscheduler.spi.task.request.SQLTaskExecutionContext; import org.apache.dolphinscheduler.spi.task.request.TaskRequest; import org.apache.dolphinscheduler.spi.task.request.UdfFuncRequest; import org.apache.dolphinscheduler.spi.utils.JSONUtils; import org.apache.dolphinscheduler.spi.utils.StringUtils; import org.apache.commons.collections.CollectionUtils; import java.sql.Connection; import java.sql.PreparedStatement; import java.sql.ResultSet; import java.sql.ResultSetMetaData; import java.sql.SQLException; import java.sql.Statement; import java.text.MessageFormat; import java.util.ArrayList; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.Map.Entry; import java.util.Optional; import java.util.Set; import java.util.regex.Matcher; import java.util.regex.Pattern; import java.util.stream.Collectors; import org.slf4j.Logger; import com.fasterxml.jackson.databind.node.ArrayNode; import com.fasterxml.jackson.databind.node.ObjectNode; public class SqlTask extends AbstractTaskExecutor { /** * taskExecutionContext */ private TaskRequest taskExecutionContext; /** * sql parameters */ private SqlParameters sqlParameters; /** * base datasource */ private BaseConnectionParam baseConnectionParam; /** * create function format */ private static final String CREATE_FUNCTION_FORMAT = "create temporary function {0} as ''{1}''"; /** * default query sql limit */ private static final int QUERY_LIMIT = 10000; /** * Abstract Yarn Task * * @param taskRequest taskRequest */ public SqlTask(TaskRequest taskRequest) { super(taskRequest); this.taskExecutionContext = taskRequest; this.sqlParameters 
= JSONUtils.parseObject(taskExecutionContext.getTaskParams(), SqlParameters.class); assert sqlParameters != null; if (!sqlParameters.checkParameters()) { throw new RuntimeException("sql task params is not valid"); } } @Override public AbstractParameters getParameters() { return sqlParameters; } @Override public void handle() throws Exception { // set the name of the current thread String threadLoggerInfoName = String.format(TaskConstants.TASK_LOG_INFO_FORMAT, taskExecutionContext.getTaskAppId()); Thread.currentThread().setName(threadLoggerInfoName); logger.info("Full sql parameters: {}", sqlParameters); logger.info("sql type : {}, datasource : {}, sql : {} , localParams : {},udfs : {},showType : {},connParams : {},varPool : {} ,query max result limit {}", sqlParameters.getType(), sqlParameters.getDatasource(), sqlParameters.getSql(), sqlParameters.getLocalParams(), sqlParameters.getUdfs(), sqlParameters.getShowType(), sqlParameters.getConnParams(), sqlParameters.getVarPool(), sqlParameters.getLimit()); try { SQLTaskExecutionContext sqlTaskExecutionContext = taskExecutionContext.getSqlTaskExecutionContext(); // get datasource baseConnectionParam = (BaseConnectionParam) DataSourceUtils.buildConnectionParams( DbType.valueOf(sqlParameters.getType()), sqlTaskExecutionContext.getConnectionParams()); // ready to execute SQL and parameter entity Map SqlBinds mainSqlBinds = getSqlAndSqlParamsMap(sqlParameters.getSql()); List<SqlBinds> preStatementSqlBinds = Optional.ofNullable(sqlParameters.getPreStatements()) .orElse(new ArrayList<>()) .stream() .map(this::getSqlAndSqlParamsMap) .collect(Collectors.toList()); List<SqlBinds> postStatementSqlBinds = Optional.ofNullable(sqlParameters.getPostStatements()) .orElse(new ArrayList<>()) .stream() .map(this::getSqlAndSqlParamsMap) .collect(Collectors.toList()); List<String> createFuncs = createFuncs(sqlTaskExecutionContext.getUdfFuncTenantCodeMap(), sqlTaskExecutionContext.getDefaultFS(), logger); // execute sql task executeFuncAndSql(mainSqlBinds, preStatementSqlBinds, postStatementSqlBinds, createFuncs); setExitStatusCode(TaskConstants.EXIT_CODE_SUCCESS); } catch (Exception e) { setExitStatusCode(TaskConstants.EXIT_CODE_FAILURE); logger.error("sql task error: {}", e.toString()); throw e; } } /** * execute function and sql * * @param mainSqlBinds main sql binds * @param preStatementsBinds pre statements binds * @param postStatementsBinds post statements binds * @param createFuncs create functions */ public void executeFuncAndSql(SqlBinds mainSqlBinds, List<SqlBinds> preStatementsBinds, List<SqlBinds> postStatementsBinds, List<String> createFuncs) throws Exception { Connection connection = null; PreparedStatement stmt = null; ResultSet resultSet = null; try { // create connection connection = DataSourceClientProvider.getInstance().getConnection(DbType.valueOf(sqlParameters.getType()), baseConnectionParam); // create temp function if (CollectionUtils.isNotEmpty(createFuncs)) { createTempFunction(connection, createFuncs); } // pre sql preSql(connection, preStatementsBinds); stmt = prepareStatementAndBind(connection, mainSqlBinds); String result = null; // decide whether to executeQuery or executeUpdate based on sqlType if (sqlParameters.getSqlType() == SqlType.QUERY.ordinal()) { // query statements need to be convert to JsonArray and inserted into Alert to send resultSet = stmt.executeQuery(); result = resultProcess(resultSet); } else if (sqlParameters.getSqlType() == SqlType.NON_QUERY.ordinal()) { // non query statement String updateResult = 
String.valueOf(stmt.executeUpdate()); result = setNonQuerySqlReturn(updateResult, sqlParameters.getLocalParams()); } //deal out params sqlParameters.dealOutParam(result); postSql(connection, postStatementsBinds); } catch (Exception e) { logger.error("execute sql error: {}", e.getMessage()); throw e; } finally { close(resultSet, stmt, connection); } } private String setNonQuerySqlReturn(String updateResult, List<Property> properties) { String result = null; for (Property info : properties) { if (Direct.OUT == info.getDirect()) { List<Map<String, String>> updateRL = new ArrayList<>(); Map<String, String> updateRM = new HashMap<>(); updateRM.put(info.getProp(), updateResult); updateRL.add(updateRM); result = JSONUtils.toJsonString(updateRL); break; } } return result; } /** * result process * * @param resultSet resultSet * @throws Exception Exception */ private String resultProcess(ResultSet resultSet) throws Exception { ArrayNode resultJSONArray = JSONUtils.createArrayNode(); if (resultSet != null) { ResultSetMetaData md = resultSet.getMetaData(); int num = md.getColumnCount(); int rowCount = 0; int limit = sqlParameters.getLimit() == 0 ? QUERY_LIMIT : sqlParameters.getLimit(); while (rowCount < limit && resultSet.next()) { ObjectNode mapOfColValues = JSONUtils.createObjectNode(); for (int i = 1; i <= num; i++) { mapOfColValues.set(md.getColumnLabel(i), JSONUtils.toJsonNode(resultSet.getObject(i))); } resultJSONArray.add(mapOfColValues); rowCount++; } int displayRows = sqlParameters.getDisplayRows() > 0 ? sqlParameters.getDisplayRows() : TaskConstants.DEFAULT_DISPLAY_ROWS; displayRows = Math.min(displayRows, resultJSONArray.size()); logger.info("display sql result {} rows as follows:", displayRows); for (int i = 0; i < displayRows; i++) { String row = JSONUtils.toJsonString(resultJSONArray.get(i)); logger.info("row {} : {}", i + 1, row); } if (resultSet.next()) { logger.info("sql result limit : {} exceeding results are filtered", limit); String log = String.format("sql result limit : %d exceeding results are filtered", limit); resultJSONArray.add(JSONUtils.toJsonNode(log)); } } String result = JSONUtils.toJsonString(resultJSONArray); if (sqlParameters.getSendEmail() == null || sqlParameters.getSendEmail()) { sendAttachment(sqlParameters.getGroupId(), StringUtils.isNotEmpty(sqlParameters.getTitle()) ? 
sqlParameters.getTitle() : taskExecutionContext.getTaskName() + " query result sets", result); } logger.debug("execute sql result : {}", result); return result; } /** * send alert as an attachment * * @param title title * @param content content */ private void sendAttachment(int groupId, String title, String content) { setNeedAlert(Boolean.TRUE); TaskAlertInfo taskAlertInfo = new TaskAlertInfo(); taskAlertInfo.setAlertGroupId(groupId); taskAlertInfo.setContent(content); taskAlertInfo.setTitle(title); setTaskAlertInfo(taskAlertInfo); } /** * pre sql * * @param connection connection * @param preStatementsBinds preStatementsBinds */ private void preSql(Connection connection, List<SqlBinds> preStatementsBinds) throws Exception { for (SqlBinds sqlBind : preStatementsBinds) { try (PreparedStatement pstmt = prepareStatementAndBind(connection, sqlBind)) { int result = pstmt.executeUpdate(); logger.info("pre statement execute result: {}, for sql: {}", result, sqlBind.getSql()); } } } /** * post sql * * @param connection connection * @param postStatementsBinds postStatementsBinds */ private void postSql(Connection connection, List<SqlBinds> postStatementsBinds) throws Exception { for (SqlBinds sqlBind : postStatementsBinds) { try (PreparedStatement pstmt = prepareStatementAndBind(connection, sqlBind)) { int result = pstmt.executeUpdate(); logger.info("post statement execute result: {},for sql: {}", result, sqlBind.getSql()); } } } /** * create temp function * * @param connection connection * @param createFuncs createFuncs */ private void createTempFunction(Connection connection, List<String> createFuncs) throws Exception { try (Statement funcStmt = connection.createStatement()) { for (String createFunc : createFuncs) { logger.info("hive create function sql: {}", createFunc); funcStmt.execute(createFunc); } } } /** * close jdbc resource * * @param resultSet resultSet * @param pstmt pstmt * @param connection connection */ private void close(ResultSet resultSet, PreparedStatement pstmt, Connection connection) { if (resultSet != null) { try { resultSet.close(); } catch (SQLException e) { logger.error("close result set error : {}", e.getMessage(), e); } } if (pstmt != null) { try { pstmt.close(); } catch (SQLException e) { logger.error("close prepared statement error : {}", e.getMessage(), e); } } if (connection != null) { try { connection.close(); } catch (SQLException e) { logger.error("close connection error : {}", e.getMessage(), e); } } } /** * preparedStatement bind * * @param connection connection * @param sqlBinds sqlBinds * @return PreparedStatement * @throws Exception Exception */ private PreparedStatement prepareStatementAndBind(Connection connection, SqlBinds sqlBinds) { // is the timeout set boolean timeoutFlag = taskExecutionContext.getTaskTimeoutStrategy() == TaskTimeoutStrategy.FAILED || taskExecutionContext.getTaskTimeoutStrategy() == TaskTimeoutStrategy.WARNFAILED; try { PreparedStatement stmt = connection.prepareStatement(sqlBinds.getSql()); if (timeoutFlag) { stmt.setQueryTimeout(taskExecutionContext.getTaskTimeout()); } Map<Integer, Property> params = sqlBinds.getParamsMap(); if (params != null) { for (Map.Entry<Integer, Property> entry : params.entrySet()) { Property prop = entry.getValue(); ParameterUtils.setInParameter(entry.getKey(), stmt, prop.getType(), prop.getValue()); } } logger.info("prepare statement replace sql : {} ", stmt); return stmt; } catch (Exception exception) { throw new TaskException("SQL task prepareStatementAndBind error", exception); } } /** * regular 
expressions match the contents between two specified strings * * @param content content * @param rgex rgex * @param sqlParamsMap sql params map * @param paramsPropsMap params props map */ public void setSqlParamsMap(String content, String rgex, Map<Integer, Property> sqlParamsMap, Map<String, Property> paramsPropsMap) { Pattern pattern = Pattern.compile(rgex); Matcher m = pattern.matcher(content); int index = 1; while (m.find()) { String paramName = m.group(1); Property prop = paramsPropsMap.get(paramName); if (prop == null) { logger.error("setSqlParamsMap: No Property with paramName: {} is found in paramsPropsMap of task instance" + " with id: {}. So couldn't put Property in sqlParamsMap.", paramName, taskExecutionContext.getTaskInstanceId()); } else { sqlParamsMap.put(index, prop); index++; logger.info("setSqlParamsMap: Property with paramName: {} put in sqlParamsMap of content {} successfully.", paramName, content); } } } /** * print replace sql * * @param content content * @param formatSql format sql * @param rgex rgex * @param sqlParamsMap sql params map */ private void printReplacedSql(String content, String formatSql, String rgex, Map<Integer, Property> sqlParamsMap) { //parameter print style logger.info("after replace sql , preparing : {}", formatSql); StringBuilder logPrint = new StringBuilder("replaced sql , parameters:"); if (sqlParamsMap == null) { logger.info("printReplacedSql: sqlParamsMap is null."); } else { for (int i = 1; i <= sqlParamsMap.size(); i++) { logPrint.append(sqlParamsMap.get(i).getValue()).append("(").append(sqlParamsMap.get(i).getType()).append(")"); } } logger.info("Sql Params are {}", logPrint); } /** * ready to execute SQL and parameter entity Map * * @return SqlBinds */ private SqlBinds getSqlAndSqlParamsMap(String sql) { Map<Integer, Property> sqlParamsMap = new HashMap<>(); StringBuilder sqlBuilder = new StringBuilder(); // combining local and global parameters Map<String, Property> paramsMap = ParamUtils.convert(taskExecutionContext, getParameters()); // spell SQL according to the final user-defined variable if (paramsMap == null) { sqlBuilder.append(sql); return new SqlBinds(sqlBuilder.toString(), sqlParamsMap); } if (StringUtils.isNotEmpty(sqlParameters.getTitle())) { String title = ParameterUtils.convertParameterPlaceholders(sqlParameters.getTitle(), ParamUtils.convert(paramsMap)); logger.info("SQL title : {}", title); sqlParameters.setTitle(title); } //new //replace variable TIME with $[YYYYmmddd...] 
in sql when running history jobs and batch complement jobs sql = ParameterUtils.replaceScheduleTime(sql, taskExecutionContext.getScheduleTime()); // special characters need to be escaped, ${} needs to be escaped String rgex = "['\"]*\\$\\{(.*?)\\}['\"]*"; setSqlParamsMap(sql, rgex, sqlParamsMap, paramsMap); // replace the original value in sql !{...}, which does not participate in precompilation String rgexo = "['\"]*\\!\\{(.*?)\\}['\"]*"; sql = replaceOriginalValue(sql, rgexo, paramsMap); // replace the ${} of the SQL statement with the Placeholder String formatSql = sql.replaceAll(rgex, "?"); sqlBuilder.append(formatSql); // print replaced sql printReplacedSql(sql, formatSql, rgex, sqlParamsMap); return new SqlBinds(sqlBuilder.toString(), sqlParamsMap); } private String replaceOriginalValue(String content, String rgex, Map<String, Property> sqlParamsMap) { Pattern pattern = Pattern.compile(rgex); while (true) { Matcher m = pattern.matcher(content); if (!m.find()) { break; } String paramName = m.group(1); String paramValue = sqlParamsMap.get(paramName).getValue(); content = m.replaceFirst(paramValue); } return content; } /** * create function list * * @param udfFuncTenantCodeMap key is udf function, value is tenant code * @param logger logger * @return create function list */ public static List<String> createFuncs(Map<UdfFuncRequest, String> udfFuncTenantCodeMap, String defaultFS, Logger logger) { if (MapUtils.isEmpty(udfFuncTenantCodeMap)) { logger.info("can't find udf function resource"); return null; } List<String> funcList = new ArrayList<>(); // build jar sql buildJarSql(funcList, udfFuncTenantCodeMap, defaultFS); // build temp function sql buildTempFuncSql(funcList, new ArrayList<>(udfFuncTenantCodeMap.keySet())); return funcList; } /** * build temp function sql * * @param sqls sql list * @param udfFuncRequests udf function list */ private static void buildTempFuncSql(List<String> sqls, List<UdfFuncRequest> udfFuncRequests) { if (CollectionUtils.isNotEmpty(udfFuncRequests)) { for (UdfFuncRequest udfFuncRequest : udfFuncRequests) { sqls.add(MessageFormat .format(CREATE_FUNCTION_FORMAT, udfFuncRequest.getFuncName(), udfFuncRequest.getClassName())); } } } /** * build jar sql * @param sqls sql list * @param udfFuncTenantCodeMap key is udf function, value is tenant code */ private static void buildJarSql(List<String> sqls, Map<UdfFuncRequest, String> udfFuncTenantCodeMap, String defaultFS) { String resourceFullName; Set<Entry<UdfFuncRequest, String>> entries = udfFuncTenantCodeMap.entrySet(); for (Map.Entry<UdfFuncRequest, String> entry : entries) { String prefixPath = defaultFS.startsWith("file://") ? "file://" : defaultFS; String uploadPath = CommonUtils.getHdfsUdfDir(entry.getValue()); resourceFullName = entry.getKey().getResourceName(); resourceFullName = resourceFullName.startsWith("/") ? resourceFullName : String.format("/%s", resourceFullName); sqls.add(String.format("add jar %s%s%s", prefixPath, uploadPath, resourceFullName)); } } }
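The heart of getSqlAndSqlParamsMap above is the two-pass treatment of `${param}` placeholders: record the bind order with a regex scan, then rewrite every placeholder to a JDBC `?`. A minimal standalone sketch of that pass, using a plain String map in place of the scheduler's Property type (an assumption for brevity):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal sketch of the ${param} -> ? substitution performed above.
public class PlaceholderSketch {
    public static void main(String[] args) {
        String sql = "select * from t_user where id = ${uid} and name = ${uname}";
        Pattern p = Pattern.compile("\\$\\{(.*?)\\}");
        Matcher m = p.matcher(sql);
        Map<Integer, String> binds = new LinkedHashMap<>();
        int index = 1;
        while (m.find()) {
            binds.put(index++, m.group(1)); // record bind order: 1 -> uid, 2 -> uname
        }
        String prepared = sql.replaceAll("\\$\\{(.*?)\\}", "?");
        System.out.println(prepared); // select * from t_user where id = ? and name = ?
        System.out.println(binds);    // {1=uid, 2=uname}
    }
}
```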
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,682
[Bug] [API] After creating a task group the values of both fields 'create_time' and 'update_time' are empty.
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened After creating a task group, the values of both fields 'create_time' and 'update_time' are empty. ![image](https://user-images.githubusercontent.com/4928204/147560349-48515489-3cac-453e-8ad9-17d54537dd29.png) ### What you expected to happen I expect the values of both fields 'create_time' and 'update_time' to be assigned the current time. ### How to reproduce You can reproduce this issue when creating a task group in the page or through the API. ![image](https://user-images.githubusercontent.com/4928204/147560389-afe6d839-da2d-43e5-8998-1cc80ebf5f8b.png) ### Anything else _No response_ ### Version dev ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7682
https://github.com/apache/dolphinscheduler/pull/7683
8808c0a700d367a1d408c1ec07c2a6fbeb675d33
6b5db0ac5b1d19aa30667b7edab9573c262ecb60
"2021-12-28T11:11:30Z"
java
"2021-12-28T12:01:46Z"
dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/TaskGroupServiceImpl.java
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.dolphinscheduler.api.service.impl; import com.baomidou.mybatisplus.core.metadata.IPage; import com.baomidou.mybatisplus.extension.plugins.pagination.Page; import org.apache.dolphinscheduler.api.enums.Status; import org.apache.dolphinscheduler.api.service.TaskGroupQueueService; import org.apache.dolphinscheduler.api.service.TaskGroupService; import org.apache.dolphinscheduler.api.utils.PageInfo; import org.apache.dolphinscheduler.common.Constants; import org.apache.dolphinscheduler.common.enums.Flag; import org.apache.dolphinscheduler.dao.entity.TaskGroup; import org.apache.dolphinscheduler.dao.entity.User; import org.apache.dolphinscheduler.dao.mapper.TaskGroupMapper; import org.apache.dolphinscheduler.service.process.ProcessService; import org.apache.dolphinscheduler.spi.utils.StringUtils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; import java.util.ArrayList; import java.util.Date; import java.util.HashMap; import java.util.List; import java.util.Map; /** * task Group Service */ @Service public class TaskGroupServiceImpl extends BaseServiceImpl implements TaskGroupService { @Autowired private TaskGroupMapper taskGroupMapper; @Autowired private TaskGroupQueueService taskGroupQueueService; @Autowired private ProcessService processService; private static final Logger logger = LoggerFactory.getLogger(TaskGroupServiceImpl.class); /** * create a Task group * * @param loginUser login user * @param name task group name * @param description task group description * @param groupSize task group total size * @return the result code and msg */ @Override public Map<String, Object> createTaskGroup(User loginUser, long projectcode, String name, String description, int groupSize) { Map<String, Object> result = new HashMap<>(); if (isNotAdmin(loginUser, result)) { return result; } if (name == null) { putMsg(result, Status.NAME_NULL); return result; } if (groupSize <= 0) { putMsg(result, Status.TASK_GROUP_SIZE_ERROR); return result; } TaskGroup taskGroup1 = taskGroupMapper.queryByName(loginUser.getId(), name); if (taskGroup1 != null) { putMsg(result, Status.TASK_GROUP_NAME_EXSIT); return result; } TaskGroup taskGroup = new TaskGroup(name, projectcode, description, groupSize, loginUser.getId(), Flag.YES.getCode()); if (taskGroupMapper.insert(taskGroup) > 0) { putMsg(result, Status.SUCCESS); } else { putMsg(result, Status.CREATE_TASK_GROUP_ERROR); return result; } return result; } /** * update the task group * * @param loginUser login user * @param name task group name * @param description task group description * @param groupSize task group total size * @return the result code and msg */ @Override 
public Map<String, Object> updateTaskGroup(User loginUser, int id, String name, String description, int groupSize) { Map<String, Object> result = new HashMap<>(); if (isNotAdmin(loginUser, result)) { return result; } TaskGroup taskGroup = taskGroupMapper.selectById(id); if (taskGroup.getStatus() != Flag.YES.getCode()) { putMsg(result, Status.TASK_GROUP_STATUS_ERROR); return result; } taskGroup.setGroupSize(groupSize); taskGroup.setDescription(description); if (StringUtils.isNotEmpty(name)) { taskGroup.setName(name); } int i = taskGroupMapper.updateById(taskGroup); logger.info("update result:{}", i); putMsg(result, Status.SUCCESS); return result; } /** * get task group status * * @param id task group id * @return is the task group available */ @Override public boolean isTheTaskGroupAvailable(int id) { return taskGroupMapper.selectCountByIdStatus(id, Flag.YES.getCode()) == 1; } /** * query all task group by user id * * @param loginUser login user * @param pageNo page no * @param pageSize page size * @return the result code and msg */ @Override public Map<String, Object> queryAllTaskGroup(User loginUser, String name, Integer status, int pageNo, int pageSize) { return this.doQuery(loginUser, pageNo, pageSize, loginUser.getId(), name, status); } /** * query all task group by status * * @param loginUser login user * @param pageNo page no * @param pageSize page size * @param status status * @return the result code and msg */ @Override public Map<String, Object> queryTaskGroupByStatus(User loginUser, int pageNo, int pageSize, int status) { return this.doQuery(loginUser, pageNo, pageSize, loginUser.getId(), null, status); } /** * query all task group by name * * @param loginUser login user * @param pageNo page no * @param pageSize page size * @param name name * @return the result code and msg */ @Override public Map<String, Object> queryTaskGroupByProjectCode(User loginUser, int pageNo, int pageSize, Long projectCode) { Map<String, Object> result = new HashMap<>(); if (isNotAdmin(loginUser, result)) { return result; } Page<TaskGroup> page = new Page<>(pageNo, pageSize); IPage<TaskGroup> taskGroupPaging = taskGroupMapper.queryTaskGroupPagingByProjectCode(page, projectCode); return getStringObjectMap(pageNo, pageSize, result, taskGroupPaging); } private Map<String, Object> getStringObjectMap(int pageNo, int pageSize, Map<String, Object> result, IPage<TaskGroup> taskGroupPaging) { PageInfo<TaskGroup> pageInfo = new PageInfo<>(pageNo, pageSize); int total = taskGroupPaging == null ? 0 : (int) taskGroupPaging.getTotal(); List<TaskGroup> list = taskGroupPaging == null ? 
new ArrayList<TaskGroup>() : taskGroupPaging.getRecords(); pageInfo.setTotal(total); pageInfo.setTotalList(list); result.put(Constants.DATA_LIST, pageInfo); logger.info("select result:{}", taskGroupPaging); putMsg(result, Status.SUCCESS); return result; } /** * query all task group by id * * @param loginUser login user * @param id id * @return the result code and msg */ @Override public Map<String, Object> queryTaskGroupById(User loginUser, int id) { Map<String, Object> result = new HashMap<>(); if (isNotAdmin(loginUser, result)) { return result; } TaskGroup taskGroup = taskGroupMapper.selectById(id); result.put(Constants.DATA_LIST, taskGroup); putMsg(result, Status.SUCCESS); return result; } /** * query * * @param pageNo page no * @param pageSize page size * @param userId user id * @param name name * @param status status * @return the result code and msg */ @Override public Map<String, Object> doQuery(User loginUser, int pageNo, int pageSize, int userId, String name, Integer status) { Map<String, Object> result = new HashMap<>(); if (isNotAdmin(loginUser, result)) { return result; } Page<TaskGroup> page = new Page<>(pageNo, pageSize); IPage<TaskGroup> taskGroupPaging = taskGroupMapper.queryTaskGroupPaging(page, userId, name, status); return getStringObjectMap(pageNo, pageSize, result, taskGroupPaging); } /** * close a task group * * @param loginUser login user * @param id task group id * @return the result code and msg */ @Override public Map<String, Object> closeTaskGroup(User loginUser, int id) { Map<String, Object> result = new HashMap<>(); if (isNotAdmin(loginUser, result)) { return result; } TaskGroup taskGroup = taskGroupMapper.selectById(id); taskGroup.setStatus(Flag.NO.getCode()); taskGroupMapper.updateById(taskGroup); putMsg(result, Status.SUCCESS); return result; } /** * start a task group * * @param loginUser login user * @param id task group id * @return the result code and msg */ @Override public Map<String, Object> startTaskGroup(User loginUser, int id) { Map<String, Object> result = new HashMap<>(); if (isNotAdmin(loginUser, result)) { return result; } TaskGroup taskGroup = taskGroupMapper.selectById(id); if (taskGroup.getStatus() == 1) { putMsg(result, Status.TASK_GROUP_STATUS_ERROR); return result; } taskGroup.setStatus(1); taskGroup.setUpdateTime(new Date(System.currentTimeMillis())); int update = taskGroupMapper.updateById(taskGroup); putMsg(result, Status.SUCCESS); return result; } /** * wake a task manually * * @param loginUser * @param queueId task group queue id * @return result */ @Override public Map<String, Object> forceStartTask(User loginUser, int queueId) { Map<String, Object> result = new HashMap<>(); if (isNotAdmin(loginUser, result)) { return result; } taskGroupQueueService.forceStartTask(queueId, Flag.YES.getCode()); putMsg(result, Status.SUCCESS); return result; } @Override public Map<String, Object> modifyPriority(User loginUser, Integer queueId, Integer priority) { Map<String, Object> result = new HashMap<>(); if (isNotAdmin(loginUser, result)) { return result; } taskGroupQueueService.modifyPriority(queueId, priority); putMsg(result, Status.SUCCESS); return result; } }
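Issue #7682 traces to createTaskGroup above, which builds the TaskGroup entity without stamping create_time/update_time before the insert (note that startTaskGroup does call setUpdateTime). A hedged sketch of the general pattern the fix calls for follows; the entity here is a simplified stand-in, not the real TaskGroup DAO class:

```java
import java.util.Date;

// Hedged sketch: stamp both audit fields before persisting. AuditedEntity
// is a stand-in for TaskGroup, not the project's actual entity.
class AuditedEntity {
    Date createTime;
    Date updateTime;
}

public class TimestampSketch {
    public static void main(String[] args) {
        AuditedEntity entity = new AuditedEntity();
        Date now = new Date();
        entity.createTime = now; // set exactly once, at creation
        entity.updateTime = now; // refreshed again on every later update
        System.out.println("created=" + entity.createTime + ", updated=" + entity.updateTime);
    }
}
```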
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,682
[Bug] [API] After creating a task group, the values of both fields 'create_time' and 'update_time' are empty.
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened After creating a task group, the values of both fields 'create_time' and 'update_time' are empty. ![image](https://user-images.githubusercontent.com/4928204/147560349-48515489-3cac-453e-8ad9-17d54537dd29.png) ### What you expected to happen I expect both fields 'create_time' and 'update_time' to be assigned the current time. ### How to reproduce You can reproduce this issue by creating a task group on the page or through the API. ![image](https://user-images.githubusercontent.com/4928204/147560389-afe6d839-da2d-43e5-8998-1cc80ebf5f8b.png) ### Anything else _No response_ ### Version dev ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
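The report above pins down the symptom precisely: the task-group creation path inserts the row without ever setting either timestamp. Below is a minimal sketch of the likely shape of the fix, assuming the real `TaskGroup` entity exposes a `setCreateTime` setter alongside the `setUpdateTime` call already visible in `TaskGroupServiceImpl`; the stub class and `newTaskGroup` helper are hypothetical stand-ins for illustration, not the actual PR #7683 change.

```java
// Hedged sketch of the timestamp fix. TaskGroup here is a minimal stand-in;
// setUpdateTime mirrors the setter used in TaskGroupServiceImpl, while
// setCreateTime is assumed to exist on the real entity.
import java.util.Date;

public class TaskGroupTimestampSketch {

    static class TaskGroup {
        private Date createTime;
        private Date updateTime;

        void setCreateTime(Date createTime) { this.createTime = createTime; }
        void setUpdateTime(Date updateTime) { this.updateTime = updateTime; }
        Date getCreateTime() { return createTime; }
        Date getUpdateTime() { return updateTime; }
    }

    // Before the fix the entity was inserted with both timestamps unset,
    // which is exactly what the NULL columns in the screenshots show.
    static TaskGroup newTaskGroup() {
        TaskGroup taskGroup = new TaskGroup();
        Date now = new Date();
        taskGroup.setCreateTime(now); // assumed setter, see lead-in
        taskGroup.setUpdateTime(now);
        return taskGroup;
    }

    public static void main(String[] args) {
        TaskGroup group = newTaskGroup();
        System.out.println("create_time=" + group.getCreateTime()
                + ", update_time=" + group.getUpdateTime());
    }
}
```

Populating both fields from a single `new Date()` also keeps create_time and update_time identical on insert, which matches the expectation stated under "What you expected to happen".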
https://github.com/apache/dolphinscheduler/issues/7682
https://github.com/apache/dolphinscheduler/pull/7683
8808c0a700d367a1d408c1ec07c2a6fbeb675d33
6b5db0ac5b1d19aa30667b7edab9573c262ecb60
"2021-12-28T11:11:30Z"
java
"2021-12-28T12:01:46Z"
dolphinscheduler-service/src/main/java/org/apache/dolphinscheduler/service/process/ProcessService.java
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.dolphinscheduler.service.process; import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_END_DATE; import static org.apache.dolphinscheduler.common.Constants.CMDPARAM_COMPLEMENT_DATA_START_DATE; import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_EMPTY_SUB_PROCESS; import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_FATHER_PARAMS; import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING; import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_SUB_PROCESS; import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_SUB_PROCESS_DEFINE_CODE; import static org.apache.dolphinscheduler.common.Constants.CMD_PARAM_SUB_PROCESS_PARENT_INSTANCE_ID; import static org.apache.dolphinscheduler.common.Constants.LOCAL_PARAMS; import static java.util.stream.Collectors.toSet; import org.apache.dolphinscheduler.common.Constants; import org.apache.dolphinscheduler.common.enums.AuthorizationType; import org.apache.dolphinscheduler.common.enums.CommandType; import org.apache.dolphinscheduler.common.enums.Direct; import org.apache.dolphinscheduler.common.enums.ExecutionStatus; import org.apache.dolphinscheduler.common.enums.FailureStrategy; import org.apache.dolphinscheduler.common.enums.Flag; import org.apache.dolphinscheduler.common.enums.ReleaseState; import org.apache.dolphinscheduler.common.enums.TaskDependType; import org.apache.dolphinscheduler.common.enums.TaskGroupQueueStatus; import org.apache.dolphinscheduler.common.enums.TimeoutFlag; import org.apache.dolphinscheduler.common.enums.WarningType; import org.apache.dolphinscheduler.common.graph.DAG; import org.apache.dolphinscheduler.common.model.DateInterval; import org.apache.dolphinscheduler.common.model.TaskNode; import org.apache.dolphinscheduler.common.model.TaskNodeRelation; import org.apache.dolphinscheduler.common.process.ProcessDag; import org.apache.dolphinscheduler.common.process.Property; import org.apache.dolphinscheduler.common.process.ResourceInfo; import org.apache.dolphinscheduler.common.task.AbstractParameters; import org.apache.dolphinscheduler.common.task.TaskTimeoutParameter; import org.apache.dolphinscheduler.common.task.subprocess.SubProcessParameters; import org.apache.dolphinscheduler.common.utils.CodeGenerateUtils; import org.apache.dolphinscheduler.common.utils.CodeGenerateUtils.CodeGenerateException; import org.apache.dolphinscheduler.common.utils.DateUtils; import org.apache.dolphinscheduler.common.utils.JSONUtils; import org.apache.dolphinscheduler.common.utils.ParameterUtils; import org.apache.dolphinscheduler.common.utils.PropertyUtils; import org.apache.dolphinscheduler.common.utils.TaskParametersUtils; import 
org.apache.dolphinscheduler.dao.entity.Command; import org.apache.dolphinscheduler.dao.entity.DagData; import org.apache.dolphinscheduler.dao.entity.DataSource; import org.apache.dolphinscheduler.dao.entity.Environment; import org.apache.dolphinscheduler.dao.entity.ErrorCommand; import org.apache.dolphinscheduler.dao.entity.ProcessDefinition; import org.apache.dolphinscheduler.dao.entity.ProcessDefinitionLog; import org.apache.dolphinscheduler.dao.entity.ProcessInstance; import org.apache.dolphinscheduler.dao.entity.ProcessInstanceMap; import org.apache.dolphinscheduler.dao.entity.ProcessTaskRelation; import org.apache.dolphinscheduler.dao.entity.ProcessTaskRelationLog; import org.apache.dolphinscheduler.dao.entity.Project; import org.apache.dolphinscheduler.dao.entity.ProjectUser; import org.apache.dolphinscheduler.dao.entity.Resource; import org.apache.dolphinscheduler.dao.entity.Schedule; import org.apache.dolphinscheduler.dao.entity.TaskDefinition; import org.apache.dolphinscheduler.dao.entity.TaskDefinitionLog; import org.apache.dolphinscheduler.dao.entity.TaskGroup; import org.apache.dolphinscheduler.dao.entity.TaskGroupQueue; import org.apache.dolphinscheduler.dao.entity.TaskInstance; import org.apache.dolphinscheduler.dao.entity.Tenant; import org.apache.dolphinscheduler.dao.entity.UdfFunc; import org.apache.dolphinscheduler.dao.entity.User; import org.apache.dolphinscheduler.dao.mapper.CommandMapper; import org.apache.dolphinscheduler.dao.mapper.DataSourceMapper; import org.apache.dolphinscheduler.dao.mapper.EnvironmentMapper; import org.apache.dolphinscheduler.dao.mapper.ErrorCommandMapper; import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionLogMapper; import org.apache.dolphinscheduler.dao.mapper.ProcessDefinitionMapper; import org.apache.dolphinscheduler.dao.mapper.ProcessInstanceMapMapper; import org.apache.dolphinscheduler.dao.mapper.ProcessInstanceMapper; import org.apache.dolphinscheduler.dao.mapper.ProcessTaskRelationLogMapper; import org.apache.dolphinscheduler.dao.mapper.ProcessTaskRelationMapper; import org.apache.dolphinscheduler.dao.mapper.ProjectMapper; import org.apache.dolphinscheduler.dao.mapper.ResourceMapper; import org.apache.dolphinscheduler.dao.mapper.ResourceUserMapper; import org.apache.dolphinscheduler.dao.mapper.ScheduleMapper; import org.apache.dolphinscheduler.dao.mapper.TaskDefinitionLogMapper; import org.apache.dolphinscheduler.dao.mapper.TaskDefinitionMapper; import org.apache.dolphinscheduler.dao.mapper.TaskGroupMapper; import org.apache.dolphinscheduler.dao.mapper.TaskGroupQueueMapper; import org.apache.dolphinscheduler.dao.mapper.TaskInstanceMapper; import org.apache.dolphinscheduler.dao.mapper.TenantMapper; import org.apache.dolphinscheduler.dao.mapper.UdfFuncMapper; import org.apache.dolphinscheduler.dao.mapper.UserMapper; import org.apache.dolphinscheduler.dao.utils.DagHelper; import org.apache.dolphinscheduler.remote.command.StateEventChangeCommand; import org.apache.dolphinscheduler.remote.command.TaskEventChangeCommand; import org.apache.dolphinscheduler.remote.processor.StateEventCallbackService; import org.apache.dolphinscheduler.remote.utils.Host; import org.apache.dolphinscheduler.service.bean.SpringApplicationContext; import org.apache.dolphinscheduler.service.exceptions.ServiceException; import org.apache.dolphinscheduler.service.log.LogClientService; import org.apache.dolphinscheduler.service.quartz.cron.CronUtils; import org.apache.dolphinscheduler.spi.enums.ResourceType; import 
org.apache.commons.collections.CollectionUtils; import org.apache.commons.lang.StringUtils; import java.util.ArrayList; import java.util.Arrays; import java.util.Date; import java.util.EnumMap; import java.util.HashMap; import java.util.HashSet; import java.util.List; import java.util.Map; import java.util.Map.Entry; import java.util.Objects; import java.util.Set; import java.util.stream.Collectors; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Component; import org.springframework.transaction.annotation.Transactional; import com.fasterxml.jackson.core.type.TypeReference; import com.fasterxml.jackson.databind.node.ObjectNode; import com.google.common.collect.Lists; /** * process relative dao that some mappers in this. */ @Component public class ProcessService { private final Logger logger = LoggerFactory.getLogger(getClass()); private final int[] stateArray = new int[]{ExecutionStatus.SUBMITTED_SUCCESS.ordinal(), ExecutionStatus.RUNNING_EXECUTION.ordinal(), ExecutionStatus.DELAY_EXECUTION.ordinal(), ExecutionStatus.READY_PAUSE.ordinal(), ExecutionStatus.READY_STOP.ordinal()}; @Autowired private UserMapper userMapper; @Autowired private ProcessDefinitionMapper processDefineMapper; @Autowired private ProcessDefinitionLogMapper processDefineLogMapper; @Autowired private ProcessInstanceMapper processInstanceMapper; @Autowired private DataSourceMapper dataSourceMapper; @Autowired private ProcessInstanceMapMapper processInstanceMapMapper; @Autowired private TaskInstanceMapper taskInstanceMapper; @Autowired private CommandMapper commandMapper; @Autowired private ScheduleMapper scheduleMapper; @Autowired private UdfFuncMapper udfFuncMapper; @Autowired private ResourceMapper resourceMapper; @Autowired private ResourceUserMapper resourceUserMapper; @Autowired private ErrorCommandMapper errorCommandMapper; @Autowired private TenantMapper tenantMapper; @Autowired private ProjectMapper projectMapper; @Autowired private TaskDefinitionMapper taskDefinitionMapper; @Autowired private TaskDefinitionLogMapper taskDefinitionLogMapper; @Autowired private ProcessTaskRelationMapper processTaskRelationMapper; @Autowired private ProcessTaskRelationLogMapper processTaskRelationLogMapper; @Autowired StateEventCallbackService stateEventCallbackService; @Autowired private EnvironmentMapper environmentMapper; @Autowired private TaskGroupQueueMapper taskGroupQueueMapper; @Autowired private TaskGroupMapper taskGroupMapper; /** * handle Command (construct ProcessInstance from Command) , wrapped in transaction * * @param logger logger * @param host host * @param command found command * @return process instance */ @Transactional public ProcessInstance handleCommand(Logger logger, String host, Command command) { ProcessInstance processInstance = constructProcessInstance(command, host); // cannot construct process instance, return null if (processInstance == null) { logger.error("scan command, command parameter is error: {}", command); moveToErrorCommand(command, "process instance is null"); return null; } processInstance.setCommandType(command.getCommandType()); processInstance.addHistoryCmd(command.getCommandType()); //if the processDefination is serial ProcessDefinition processDefinition = this.findProcessDefinition(processInstance.getProcessDefinitionCode(), processInstance.getProcessDefinitionVersion()); if (processDefinition.getExecutionType().typeIsSerial()) { saveSerialProcess(processInstance, 
processDefinition); if (processInstance.getState() != ExecutionStatus.SUBMITTED_SUCCESS) { setSubProcessParam(processInstance); deleteCommandWithCheck(command.getId()); return null; } } else { saveProcessInstance(processInstance); } setSubProcessParam(processInstance); deleteCommandWithCheck(command.getId()); return processInstance; } private void saveSerialProcess(ProcessInstance processInstance, ProcessDefinition processDefinition) { processInstance.setState(ExecutionStatus.SERIAL_WAIT); saveProcessInstance(processInstance); //serial wait //when we get the running instance(or waiting instance) only get the priority instance(by id) if (processDefinition.getExecutionType().typeIsSerialWait()) { while (true) { List<ProcessInstance> runningProcessInstances = this.processInstanceMapper.queryByProcessDefineCodeAndStatusAndNextId(processInstance.getProcessDefinitionCode(), Constants.RUNNING_PROCESS_STATE, processInstance.getId()); if (CollectionUtils.isEmpty(runningProcessInstances)) { processInstance.setState(ExecutionStatus.SUBMITTED_SUCCESS); saveProcessInstance(processInstance); return; } ProcessInstance runningProcess = runningProcessInstances.get(0); if (this.processInstanceMapper.updateNextProcessIdById(processInstance.getId(), runningProcess.getId())) { return; } } } else if (processDefinition.getExecutionType().typeIsSerialDiscard()) { List<ProcessInstance> runningProcessInstances = this.processInstanceMapper.queryByProcessDefineCodeAndStatusAndNextId(processInstance.getProcessDefinitionCode(), Constants.RUNNING_PROCESS_STATE, processInstance.getId()); if (CollectionUtils.isEmpty(runningProcessInstances)) { processInstance.setState(ExecutionStatus.STOP); saveProcessInstance(processInstance); } } else if (processDefinition.getExecutionType().typeIsSerialPriority()) { List<ProcessInstance> runningProcessInstances = this.processInstanceMapper.queryByProcessDefineCodeAndStatusAndNextId(processInstance.getProcessDefinitionCode(), Constants.RUNNING_PROCESS_STATE, processInstance.getId()); if (CollectionUtils.isNotEmpty(runningProcessInstances)) { for (ProcessInstance info : runningProcessInstances) { info.setCommandType(CommandType.STOP); info.addHistoryCmd(CommandType.STOP); info.setState(ExecutionStatus.READY_STOP); int update = updateProcessInstance(info); // determine whether the process is normal if (update > 0) { String host = info.getHost(); String address = host.split(":")[0]; int port = Integer.parseInt(host.split(":")[1]); StateEventChangeCommand stateEventChangeCommand = new StateEventChangeCommand( info.getId(), 0, info.getState(), info.getId(), 0 ); try { stateEventCallbackService.sendResult(address, port, stateEventChangeCommand.convert2Command()); } catch (Exception e) { logger.error("sendResultError"); } } } } } } /** * save error command, and delete original command * * @param command command * @param message message */ public void moveToErrorCommand(Command command, String message) { ErrorCommand errorCommand = new ErrorCommand(command, message); this.errorCommandMapper.insert(errorCommand); this.commandMapper.deleteById(command.getId()); } /** * set process waiting thread * * @param command command * @param processInstance processInstance * @return process instance */ private ProcessInstance setWaitingThreadProcess(Command command, ProcessInstance processInstance) { processInstance.setState(ExecutionStatus.WAITING_THREAD); if (command.getCommandType() != CommandType.RECOVER_WAITING_THREAD) { processInstance.addHistoryCmd(command.getCommandType()); } 
saveProcessInstance(processInstance); this.setSubProcessParam(processInstance); createRecoveryWaitingThreadCommand(command, processInstance); return null; } /** * insert one command * * @param command command * @return create result */ public int createCommand(Command command) { int result = 0; if (command != null) { result = commandMapper.insert(command); } return result; } /** * get command page */ public List<Command> findCommandPage(int pageSize, int pageNumber) { return commandMapper.queryCommandPage(pageSize, pageNumber * pageSize); } /** * check the input command exists in queue list * * @param command command * @return create command result */ public boolean verifyIsNeedCreateCommand(Command command) { boolean isNeedCreate = true; EnumMap<CommandType, Integer> cmdTypeMap = new EnumMap<>(CommandType.class); cmdTypeMap.put(CommandType.REPEAT_RUNNING, 1); cmdTypeMap.put(CommandType.RECOVER_SUSPENDED_PROCESS, 1); cmdTypeMap.put(CommandType.START_FAILURE_TASK_PROCESS, 1); CommandType commandType = command.getCommandType(); if (cmdTypeMap.containsKey(commandType)) { ObjectNode cmdParamObj = JSONUtils.parseObject(command.getCommandParam()); int processInstanceId = cmdParamObj.path(CMD_PARAM_RECOVER_PROCESS_ID_STRING).asInt(); List<Command> commands = commandMapper.selectList(null); // for all commands for (Command tmpCommand : commands) { if (cmdTypeMap.containsKey(tmpCommand.getCommandType())) { ObjectNode tempObj = JSONUtils.parseObject(tmpCommand.getCommandParam()); if (tempObj != null && processInstanceId == tempObj.path(CMD_PARAM_RECOVER_PROCESS_ID_STRING).asInt()) { isNeedCreate = false; break; } } } } return isNeedCreate; } /** * find process instance detail by id * * @param processId processId * @return process instance */ public ProcessInstance findProcessInstanceDetailById(int processId) { return processInstanceMapper.queryDetailById(processId); } /** * get task node list by definitionId */ public List<TaskDefinition> getTaskNodeListByDefinition(long defineCode) { ProcessDefinition processDefinition = processDefineMapper.queryByCode(defineCode); if (processDefinition == null) { logger.error("process define not exists"); return Lists.newArrayList(); } List<ProcessTaskRelationLog> processTaskRelations = processTaskRelationLogMapper.queryByProcessCodeAndVersion(processDefinition.getCode(), processDefinition.getVersion()); Set<TaskDefinition> taskDefinitionSet = new HashSet<>(); for (ProcessTaskRelationLog processTaskRelation : processTaskRelations) { if (processTaskRelation.getPostTaskCode() > 0) { taskDefinitionSet.add(new TaskDefinition(processTaskRelation.getPostTaskCode(), processTaskRelation.getPostTaskVersion())); } } if (taskDefinitionSet.isEmpty()) { return Lists.newArrayList(); } List<TaskDefinitionLog> taskDefinitionLogs = taskDefinitionLogMapper.queryByTaskDefinitions(taskDefinitionSet); return Lists.newArrayList(taskDefinitionLogs); } /** * find process instance by id * * @param processId processId * @return process instance */ public ProcessInstance findProcessInstanceById(int processId) { return processInstanceMapper.selectById(processId); } /** * find process define by id. * * @param processDefinitionId processDefinitionId * @return process definition */ public ProcessDefinition findProcessDefineById(int processDefinitionId) { return processDefineMapper.selectById(processDefinitionId); } /** * find process define by code and version. 
* * @param processDefinitionCode processDefinitionCode * @return process definition */ public ProcessDefinition findProcessDefinition(Long processDefinitionCode, int version) { ProcessDefinition processDefinition = processDefineMapper.queryByCode(processDefinitionCode); if (processDefinition == null || processDefinition.getVersion() != version) { processDefinition = processDefineLogMapper.queryByDefinitionCodeAndVersion(processDefinitionCode, version); if (processDefinition != null) { processDefinition.setId(0); } } return processDefinition; } /** * find process define by code. * * @param processDefinitionCode processDefinitionCode * @return process definition */ public ProcessDefinition findProcessDefinitionByCode(Long processDefinitionCode) { return processDefineMapper.queryByCode(processDefinitionCode); } /** * delete work process instance by id * * @param processInstanceId processInstanceId * @return delete process instance result */ public int deleteWorkProcessInstanceById(int processInstanceId) { return processInstanceMapper.deleteById(processInstanceId); } /** * delete all sub process by parent instance id * * @param processInstanceId processInstanceId * @return delete all sub process instance result */ public int deleteAllSubWorkProcessByParentId(int processInstanceId) { List<Integer> subProcessIdList = processInstanceMapMapper.querySubIdListByParentId(processInstanceId); for (Integer subId : subProcessIdList) { deleteAllSubWorkProcessByParentId(subId); deleteWorkProcessMapByParentId(subId); removeTaskLogFile(subId); deleteWorkProcessInstanceById(subId); } return 1; } /** * remove task log file * * @param processInstanceId processInstanceId */ public void removeTaskLogFile(Integer processInstanceId) { List<TaskInstance> taskInstanceList = findValidTaskListByProcessId(processInstanceId); if (CollectionUtils.isEmpty(taskInstanceList)) { return; } try (LogClientService logClient = new LogClientService()) { for (TaskInstance taskInstance : taskInstanceList) { String taskLogPath = taskInstance.getLogPath(); if (StringUtils.isEmpty(taskInstance.getHost())) { continue; } int port = PropertyUtils.getInt(Constants.RPC_PORT, 50051); String ip = ""; try { ip = Host.of(taskInstance.getHost()).getIp(); } catch (Exception e) { // compatible old version ip = taskInstance.getHost(); } // remove task log from loggerserver logClient.removeTaskLog(ip, port, taskLogPath); } } } /** * recursive query sub process definition id by parent id. * * @param parentCode parentCode * @param ids ids */ public void recurseFindSubProcess(long parentCode, List<Long> ids) { List<TaskDefinition> taskNodeList = this.getTaskNodeListByDefinition(parentCode); if (taskNodeList != null && !taskNodeList.isEmpty()) { for (TaskDefinition taskNode : taskNodeList) { String parameter = taskNode.getTaskParams(); ObjectNode parameterJson = JSONUtils.parseObject(parameter); if (parameterJson.get(CMD_PARAM_SUB_PROCESS_DEFINE_CODE) != null) { SubProcessParameters subProcessParam = JSONUtils.parseObject(parameter, SubProcessParameters.class); ids.add(subProcessParam.getProcessDefinitionCode()); recurseFindSubProcess(subProcessParam.getProcessDefinitionCode(), ids); } } } } /** * create recovery waiting thread command when thread pool is not enough for the process instance. * sub work process instance need not to create recovery command. * create recovery waiting thread command and delete origin command at the same time. 
* if the recovery command is exists, only update the field update_time * * @param originCommand originCommand * @param processInstance processInstance */ public void createRecoveryWaitingThreadCommand(Command originCommand, ProcessInstance processInstance) { // sub process doesnot need to create wait command if (processInstance.getIsSubProcess() == Flag.YES) { if (originCommand != null) { commandMapper.deleteById(originCommand.getId()); } return; } Map<String, String> cmdParam = new HashMap<>(); cmdParam.put(Constants.CMD_PARAM_RECOVERY_WAITING_THREAD, String.valueOf(processInstance.getId())); // process instance quit by "waiting thread" state if (originCommand == null) { Command command = new Command( CommandType.RECOVER_WAITING_THREAD, processInstance.getTaskDependType(), processInstance.getFailureStrategy(), processInstance.getExecutorId(), processInstance.getProcessDefinition().getCode(), JSONUtils.toJsonString(cmdParam), processInstance.getWarningType(), processInstance.getWarningGroupId(), processInstance.getScheduleTime(), processInstance.getWorkerGroup(), processInstance.getEnvironmentCode(), processInstance.getProcessInstancePriority(), processInstance.getDryRun(), processInstance.getId(), processInstance.getProcessDefinitionVersion() ); saveCommand(command); return; } // update the command time if current command if recover from waiting if (originCommand.getCommandType() == CommandType.RECOVER_WAITING_THREAD) { originCommand.setUpdateTime(new Date()); saveCommand(originCommand); } else { // delete old command and create new waiting thread command commandMapper.deleteById(originCommand.getId()); originCommand.setId(0); originCommand.setCommandType(CommandType.RECOVER_WAITING_THREAD); originCommand.setUpdateTime(new Date()); originCommand.setCommandParam(JSONUtils.toJsonString(cmdParam)); originCommand.setProcessInstancePriority(processInstance.getProcessInstancePriority()); saveCommand(originCommand); } } /** * get schedule time from command * * @param command command * @param cmdParam cmdParam map * @return date */ private Date getScheduleTime(Command command, Map<String, String> cmdParam) { Date scheduleTime = command.getScheduleTime(); if (scheduleTime == null && cmdParam != null && cmdParam.containsKey(CMDPARAM_COMPLEMENT_DATA_START_DATE)) { Date start = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_START_DATE)); Date end = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_END_DATE)); List<Schedule> schedules = queryReleaseSchedulerListByProcessDefinitionCode(command.getProcessDefinitionCode()); List<Date> complementDateList = CronUtils.getSelfFireDateList(start, end, schedules); if (complementDateList.size() > 0) { scheduleTime = complementDateList.get(0); } else { logger.error("set scheduler time error: complement date list is empty, command: {}", command.toString()); } } return scheduleTime; } /** * generate a new work process instance from command. 
* * @param processDefinition processDefinition * @param command command * @param cmdParam cmdParam map * @return process instance */ private ProcessInstance generateNewProcessInstance(ProcessDefinition processDefinition, Command command, Map<String, String> cmdParam) { ProcessInstance processInstance = new ProcessInstance(processDefinition); processInstance.setProcessDefinitionCode(processDefinition.getCode()); processInstance.setProcessDefinitionVersion(processDefinition.getVersion()); processInstance.setState(ExecutionStatus.RUNNING_EXECUTION); processInstance.setRecovery(Flag.NO); processInstance.setStartTime(new Date()); processInstance.setRestartTime(processInstance.getStartTime()); processInstance.setRunTimes(1); processInstance.setMaxTryTimes(0); processInstance.setCommandParam(command.getCommandParam()); processInstance.setCommandType(command.getCommandType()); processInstance.setIsSubProcess(Flag.NO); processInstance.setTaskDependType(command.getTaskDependType()); processInstance.setFailureStrategy(command.getFailureStrategy()); processInstance.setExecutorId(command.getExecutorId()); WarningType warningType = command.getWarningType() == null ? WarningType.NONE : command.getWarningType(); processInstance.setWarningType(warningType); Integer warningGroupId = command.getWarningGroupId() == null ? 0 : command.getWarningGroupId(); processInstance.setWarningGroupId(warningGroupId); processInstance.setDryRun(command.getDryRun()); if (command.getScheduleTime() != null) { processInstance.setScheduleTime(command.getScheduleTime()); } processInstance.setCommandStartTime(command.getStartTime()); processInstance.setLocations(processDefinition.getLocations()); // reset global params while there are start parameters setGlobalParamIfCommanded(processDefinition, cmdParam); // curing global params processInstance.setGlobalParams(ParameterUtils.curingGlobalParams( processDefinition.getGlobalParamMap(), processDefinition.getGlobalParamList(), getCommandTypeIfComplement(processInstance, command), processInstance.getScheduleTime())); // set process instance priority processInstance.setProcessInstancePriority(command.getProcessInstancePriority()); String workerGroup = StringUtils.isBlank(command.getWorkerGroup()) ? Constants.DEFAULT_WORKER_GROUP : command.getWorkerGroup(); processInstance.setWorkerGroup(workerGroup); processInstance.setEnvironmentCode(Objects.isNull(command.getEnvironmentCode()) ? 
-1 : command.getEnvironmentCode()); processInstance.setTimeout(processDefinition.getTimeout()); processInstance.setTenantId(processDefinition.getTenantId()); return processInstance; } private void setGlobalParamIfCommanded(ProcessDefinition processDefinition, Map<String, String> cmdParam) { // get start params from command param Map<String, String> startParamMap = new HashMap<>(); if (cmdParam != null && cmdParam.containsKey(Constants.CMD_PARAM_START_PARAMS)) { String startParamJson = cmdParam.get(Constants.CMD_PARAM_START_PARAMS); startParamMap = JSONUtils.toMap(startParamJson); } Map<String, String> fatherParamMap = new HashMap<>(); if (cmdParam != null && cmdParam.containsKey(Constants.CMD_PARAM_FATHER_PARAMS)) { String fatherParamJson = cmdParam.get(Constants.CMD_PARAM_FATHER_PARAMS); fatherParamMap = JSONUtils.toMap(fatherParamJson); } startParamMap.putAll(fatherParamMap); // set start param into global params if (startParamMap.size() > 0 && processDefinition.getGlobalParamMap() != null) { for (Map.Entry<String, String> param : processDefinition.getGlobalParamMap().entrySet()) { String val = startParamMap.get(param.getKey()); if (val != null) { param.setValue(val); } } } } /** * get process tenant * there is tenant id in definition, use the tenant of the definition. * if there is not tenant id in the definiton or the tenant not exist * use definition creator's tenant. * * @param tenantId tenantId * @param userId userId * @return tenant */ public Tenant getTenantForProcess(int tenantId, int userId) { Tenant tenant = null; if (tenantId >= 0) { tenant = tenantMapper.queryById(tenantId); } if (userId == 0) { return null; } if (tenant == null) { User user = userMapper.selectById(userId); tenant = tenantMapper.queryById(user.getTenantId()); } return tenant; } /** * get an environment * use the code of the environment to find a environment. * * @param environmentCode environmentCode * @return Environment */ public Environment findEnvironmentByCode(Long environmentCode) { Environment environment = null; if (environmentCode >= 0) { environment = environmentMapper.queryByEnvironmentCode(environmentCode); } return environment; } /** * check command parameters is valid * * @param command command * @param cmdParam cmdParam map * @return whether command param is valid */ private Boolean checkCmdParam(Command command, Map<String, String> cmdParam) { if (command.getTaskDependType() == TaskDependType.TASK_ONLY || command.getTaskDependType() == TaskDependType.TASK_PRE) { if (cmdParam == null || !cmdParam.containsKey(Constants.CMD_PARAM_START_NODES) || cmdParam.get(Constants.CMD_PARAM_START_NODES).isEmpty()) { logger.error("command node depend type is {}, but start nodes is null ", command.getTaskDependType()); return false; } } return true; } /** * construct process instance according to one command. * * @param command command * @param host host * @return process instance */ private ProcessInstance constructProcessInstance(Command command, String host) { ProcessInstance processInstance; ProcessDefinition processDefinition; CommandType commandType = command.getCommandType(); processDefinition = this.findProcessDefinition(command.getProcessDefinitionCode(), command.getProcessDefinitionVersion()); if (processDefinition == null) { logger.error("cannot find the work process define! 
define code : {}", command.getProcessDefinitionCode()); return null; } Map<String, String> cmdParam = JSONUtils.toMap(command.getCommandParam()); int processInstanceId = command.getProcessInstanceId(); if (processInstanceId == 0) { processInstance = generateNewProcessInstance(processDefinition, command, cmdParam); } else { processInstance = this.findProcessInstanceDetailById(processInstanceId); if (processInstance == null) { return processInstance; } } if (cmdParam != null) { CommandType commandTypeIfComplement = getCommandTypeIfComplement(processInstance, command); // reset global params while repeat running is needed by cmdParam if (commandTypeIfComplement == CommandType.REPEAT_RUNNING) { setGlobalParamIfCommanded(processDefinition, cmdParam); } // Recalculate global parameters after rerun. processInstance.setGlobalParams(ParameterUtils.curingGlobalParams( processDefinition.getGlobalParamMap(), processDefinition.getGlobalParamList(), commandTypeIfComplement, processInstance.getScheduleTime())); processInstance.setProcessDefinition(processDefinition); } //reset command parameter if (processInstance.getCommandParam() != null) { Map<String, String> processCmdParam = JSONUtils.toMap(processInstance.getCommandParam()); for (Map.Entry<String, String> entry : processCmdParam.entrySet()) { if (!cmdParam.containsKey(entry.getKey())) { cmdParam.put(entry.getKey(), entry.getValue()); } } } // reset command parameter if sub process if (cmdParam != null && cmdParam.containsKey(Constants.CMD_PARAM_SUB_PROCESS)) { processInstance.setCommandParam(command.getCommandParam()); } if (Boolean.FALSE.equals(checkCmdParam(command, cmdParam))) { logger.error("command parameter check failed!"); return null; } if (command.getScheduleTime() != null) { processInstance.setScheduleTime(command.getScheduleTime()); } processInstance.setHost(host); processInstance.setRestartTime(new Date()); ExecutionStatus runStatus = ExecutionStatus.RUNNING_EXECUTION; int runTime = processInstance.getRunTimes(); switch (commandType) { case START_PROCESS: break; case START_FAILURE_TASK_PROCESS: // find failed tasks and init these tasks List<Integer> failedList = this.findTaskIdByInstanceState(processInstance.getId(), ExecutionStatus.FAILURE); List<Integer> toleranceList = this.findTaskIdByInstanceState(processInstance.getId(), ExecutionStatus.NEED_FAULT_TOLERANCE); List<Integer> killedList = this.findTaskIdByInstanceState(processInstance.getId(), ExecutionStatus.KILL); cmdParam.remove(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING); failedList.addAll(killedList); failedList.addAll(toleranceList); for (Integer taskId : failedList) { initTaskInstance(this.findTaskInstanceById(taskId)); } cmdParam.put(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING, String.join(Constants.COMMA, convertIntListToString(failedList))); processInstance.setCommandParam(JSONUtils.toJsonString(cmdParam)); processInstance.setRunTimes(runTime + 1); break; case START_CURRENT_TASK_PROCESS: break; case RECOVER_WAITING_THREAD: break; case RECOVER_SUSPENDED_PROCESS: // find pause tasks and init task's state cmdParam.remove(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING); List<Integer> suspendedNodeList = this.findTaskIdByInstanceState(processInstance.getId(), ExecutionStatus.PAUSE); List<Integer> stopNodeList = findTaskIdByInstanceState(processInstance.getId(), ExecutionStatus.KILL); suspendedNodeList.addAll(stopNodeList); for (Integer taskId : suspendedNodeList) { // initialize the pause state initTaskInstance(this.findTaskInstanceById(taskId)); } 
cmdParam.put(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING, String.join(",", convertIntListToString(suspendedNodeList))); processInstance.setCommandParam(JSONUtils.toJsonString(cmdParam)); processInstance.setRunTimes(runTime + 1); break; case RECOVER_TOLERANCE_FAULT_PROCESS: // recover tolerance fault process processInstance.setRecovery(Flag.YES); runStatus = processInstance.getState(); break; case COMPLEMENT_DATA: // delete all the valid tasks when complement data if id is not null if (processInstance.getId() != 0) { List<TaskInstance> taskInstanceList = this.findValidTaskListByProcessId(processInstance.getId()); for (TaskInstance taskInstance : taskInstanceList) { taskInstance.setFlag(Flag.NO); this.updateTaskInstance(taskInstance); } } break; case REPEAT_RUNNING: // delete the recover task names from command parameter if (cmdParam.containsKey(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING)) { cmdParam.remove(Constants.CMD_PARAM_RECOVERY_START_NODE_STRING); processInstance.setCommandParam(JSONUtils.toJsonString(cmdParam)); } // delete all the valid tasks when repeat running List<TaskInstance> validTaskList = findValidTaskListByProcessId(processInstance.getId()); for (TaskInstance taskInstance : validTaskList) { taskInstance.setFlag(Flag.NO); updateTaskInstance(taskInstance); } processInstance.setStartTime(new Date()); processInstance.setRestartTime(processInstance.getStartTime()); processInstance.setEndTime(null); processInstance.setRunTimes(runTime + 1); initComplementDataParam(processDefinition, processInstance, cmdParam); break; case SCHEDULER: break; default: break; } processInstance.setState(runStatus); return processInstance; } /** * get process definition by command * If it is a fault-tolerant command, get the specified version of ProcessDefinition through ProcessInstance * Otherwise, get the latest version of ProcessDefinition * * @return ProcessDefinition */ private ProcessDefinition getProcessDefinitionByCommand(long processDefinitionCode, Map<String, String> cmdParam) { if (cmdParam != null) { int processInstanceId = 0; if (cmdParam.containsKey(Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING)) { processInstanceId = Integer.parseInt(cmdParam.get(Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING)); } else if (cmdParam.containsKey(Constants.CMD_PARAM_SUB_PROCESS)) { processInstanceId = Integer.parseInt(cmdParam.get(Constants.CMD_PARAM_SUB_PROCESS)); } else if (cmdParam.containsKey(Constants.CMD_PARAM_RECOVERY_WAITING_THREAD)) { processInstanceId = Integer.parseInt(cmdParam.get(Constants.CMD_PARAM_RECOVERY_WAITING_THREAD)); } if (processInstanceId != 0) { ProcessInstance processInstance = this.findProcessInstanceDetailById(processInstanceId); if (processInstance == null) { return null; } return processDefineLogMapper.queryByDefinitionCodeAndVersion( processInstance.getProcessDefinitionCode(), processInstance.getProcessDefinitionVersion()); } } return processDefineMapper.queryByCode(processDefinitionCode); } /** * return complement data if the process start with complement data * * @param processInstance processInstance * @param command command * @return command type */ private CommandType getCommandTypeIfComplement(ProcessInstance processInstance, Command command) { if (CommandType.COMPLEMENT_DATA == processInstance.getCmdTypeIfComplement()) { return CommandType.COMPLEMENT_DATA; } else { return command.getCommandType(); } } /** * initialize complement data parameters * * @param processDefinition processDefinition * @param processInstance processInstance * @param cmdParam cmdParam */ 
private void initComplementDataParam(ProcessDefinition processDefinition, ProcessInstance processInstance, Map<String, String> cmdParam) { if (!processInstance.isComplementData()) { return; } Date start = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_START_DATE)); Date end = DateUtils.stringToDate(cmdParam.get(CMDPARAM_COMPLEMENT_DATA_END_DATE)); List<Schedule> listSchedules = queryReleaseSchedulerListByProcessDefinitionCode(processInstance.getProcessDefinitionCode()); List<Date> complementDate = CronUtils.getSelfFireDateList(start, end, listSchedules); if (complementDate.size() > 0 && Flag.NO == processInstance.getIsSubProcess()) { processInstance.setScheduleTime(complementDate.get(0)); } processInstance.setGlobalParams(ParameterUtils.curingGlobalParams( processDefinition.getGlobalParamMap(), processDefinition.getGlobalParamList(), CommandType.COMPLEMENT_DATA, processInstance.getScheduleTime())); } /** * set sub work process parameters. * handle sub work process instance, update relation table and command parameters * set sub work process flag, extends parent work process command parameters * * @param subProcessInstance subProcessInstance */ public void setSubProcessParam(ProcessInstance subProcessInstance) { String cmdParam = subProcessInstance.getCommandParam(); if (StringUtils.isEmpty(cmdParam)) { return; } Map<String, String> paramMap = JSONUtils.toMap(cmdParam); // write sub process id into cmd param. if (paramMap.containsKey(CMD_PARAM_SUB_PROCESS) && CMD_PARAM_EMPTY_SUB_PROCESS.equals(paramMap.get(CMD_PARAM_SUB_PROCESS))) { paramMap.remove(CMD_PARAM_SUB_PROCESS); paramMap.put(CMD_PARAM_SUB_PROCESS, String.valueOf(subProcessInstance.getId())); subProcessInstance.setCommandParam(JSONUtils.toJsonString(paramMap)); subProcessInstance.setIsSubProcess(Flag.YES); this.saveProcessInstance(subProcessInstance); } // copy parent instance user def params to sub process.. String parentInstanceId = paramMap.get(CMD_PARAM_SUB_PROCESS_PARENT_INSTANCE_ID); if (StringUtils.isNotEmpty(parentInstanceId)) { ProcessInstance parentInstance = findProcessInstanceDetailById(Integer.parseInt(parentInstanceId)); if (parentInstance != null) { subProcessInstance.setGlobalParams( joinGlobalParams(parentInstance.getGlobalParams(), subProcessInstance.getGlobalParams())); this.saveProcessInstance(subProcessInstance); } else { logger.error("sub process command params error, cannot find parent instance: {} ", cmdParam); } } ProcessInstanceMap processInstanceMap = JSONUtils.parseObject(cmdParam, ProcessInstanceMap.class); if (processInstanceMap == null || processInstanceMap.getParentProcessInstanceId() == 0) { return; } // update sub process id to process map table processInstanceMap.setProcessInstanceId(subProcessInstance.getId()); this.updateWorkProcessInstanceMap(processInstanceMap); } /** * join parent global params into sub process. * only the keys doesn't in sub process global would be joined. 
* * @param parentGlobalParams parentGlobalParams * @param subGlobalParams subGlobalParams * @return global params join */ private String joinGlobalParams(String parentGlobalParams, String subGlobalParams) { List<Property> parentPropertyList = JSONUtils.toList(parentGlobalParams, Property.class); List<Property> subPropertyList = JSONUtils.toList(subGlobalParams, Property.class); Map<String, String> subMap = subPropertyList.stream().collect(Collectors.toMap(Property::getProp, Property::getValue)); for (Property parent : parentPropertyList) { if (!subMap.containsKey(parent.getProp())) { subPropertyList.add(parent); } } return JSONUtils.toJsonString(subPropertyList); } /** * initialize task instance * * @param taskInstance taskInstance */ private void initTaskInstance(TaskInstance taskInstance) { if (!taskInstance.isSubProcess() && (taskInstance.getState().typeIsCancel() || taskInstance.getState().typeIsFailure())) { taskInstance.setFlag(Flag.NO); updateTaskInstance(taskInstance); return; } taskInstance.setState(ExecutionStatus.SUBMITTED_SUCCESS); updateTaskInstance(taskInstance); } /** * retry submit task to db */ public TaskInstance submitTaskWithRetry(ProcessInstance processInstance, TaskInstance taskInstance, int commitRetryTimes, int commitInterval) { int retryTimes = 1; TaskInstance task = null; while (retryTimes <= commitRetryTimes) { try { // submit task to db task = SpringApplicationContext.getBean(ProcessService.class).submitTask(processInstance, taskInstance); if (task != null && task.getId() != 0) { break; } logger.error("task commit to db failed , taskId {} has already retry {} times, please check the database", taskInstance.getId(), retryTimes); Thread.sleep(commitInterval); } catch (Exception e) { logger.error("task commit to mysql failed", e); } retryTimes += 1; } return task; } /** * submit task to db * submit sub process to command * * @param processInstance processInstance * @param taskInstance taskInstance * @return task instance */ @Transactional(rollbackFor = Exception.class) public TaskInstance submitTask(ProcessInstance processInstance, TaskInstance taskInstance) { logger.info("start submit task : {}, instance id:{}, state: {}", taskInstance.getName(), taskInstance.getProcessInstanceId(), processInstance.getState()); //submit to db TaskInstance task = submitTaskInstanceToDB(taskInstance, processInstance); if (task == null) { logger.error("end submit task to db error, task name:{}, process id:{} state: {} ", taskInstance.getName(), taskInstance.getProcessInstance(), processInstance.getState()); return null; } if (!task.getState().typeIsFinished()) { createSubWorkProcess(processInstance, task); } logger.info("end submit task to db successfully:{} {} state:{} complete, instance id:{} state: {} ", taskInstance.getId(), taskInstance.getName(), task.getState(), processInstance.getId(), processInstance.getState()); return task; } /** * set work process instance map * consider o * repeat running does not generate new sub process instance * set map {parent instance id, task instance id, 0(child instance id)} * * @param parentInstance parentInstance * @param parentTask parentTask * @return process instance map */ private ProcessInstanceMap setProcessInstanceMap(ProcessInstance parentInstance, TaskInstance parentTask) { ProcessInstanceMap processMap = findWorkProcessMapByParent(parentInstance.getId(), parentTask.getId()); if (processMap != null) { return processMap; } if (parentInstance.getCommandType() == CommandType.REPEAT_RUNNING) { // update current task id to map processMap = 
findPreviousTaskProcessMap(parentInstance, parentTask); if (processMap != null) { processMap.setParentTaskInstanceId(parentTask.getId()); updateWorkProcessInstanceMap(processMap); return processMap; } } // new task processMap = new ProcessInstanceMap(); processMap.setParentProcessInstanceId(parentInstance.getId()); processMap.setParentTaskInstanceId(parentTask.getId()); createWorkProcessInstanceMap(processMap); return processMap; } /** * find previous task work process map. * * @param parentProcessInstance parentProcessInstance * @param parentTask parentTask * @return process instance map */ private ProcessInstanceMap findPreviousTaskProcessMap(ProcessInstance parentProcessInstance, TaskInstance parentTask) { Integer preTaskId = 0; List<TaskInstance> preTaskList = this.findPreviousTaskListByWorkProcessId(parentProcessInstance.getId()); for (TaskInstance task : preTaskList) { if (task.getName().equals(parentTask.getName())) { preTaskId = task.getId(); ProcessInstanceMap map = findWorkProcessMapByParent(parentProcessInstance.getId(), preTaskId); if (map != null) { return map; } } } logger.info("sub process instance is not found,parent task:{},parent instance:{}", parentTask.getId(), parentProcessInstance.getId()); return null; } /** * create sub work process command * * @param parentProcessInstance parentProcessInstance * @param task task */ public void createSubWorkProcess(ProcessInstance parentProcessInstance, TaskInstance task) { if (!task.isSubProcess()) { return; } //check create sub work flow firstly ProcessInstanceMap instanceMap = findWorkProcessMapByParent(parentProcessInstance.getId(), task.getId()); if (null != instanceMap && CommandType.RECOVER_TOLERANCE_FAULT_PROCESS == parentProcessInstance.getCommandType()) { // recover failover tolerance would not create a new command when the sub command already have been created return; } instanceMap = setProcessInstanceMap(parentProcessInstance, task); ProcessInstance childInstance = null; if (instanceMap.getProcessInstanceId() != 0) { childInstance = findProcessInstanceById(instanceMap.getProcessInstanceId()); } Command subProcessCommand = createSubProcessCommand(parentProcessInstance, childInstance, instanceMap, task); updateSubProcessDefinitionByParent(parentProcessInstance, subProcessCommand.getProcessDefinitionCode()); initSubInstanceState(childInstance); createCommand(subProcessCommand); logger.info("sub process command created: {} ", subProcessCommand); } /** * complement data needs transform parent parameter to child. 
*/ private String getSubWorkFlowParam(ProcessInstanceMap instanceMap, ProcessInstance parentProcessInstance, Map<String, String> fatherParams) { // set sub work process command String processMapStr = JSONUtils.toJsonString(instanceMap); Map<String, String> cmdParam = JSONUtils.toMap(processMapStr); if (parentProcessInstance.isComplementData()) { Map<String, String> parentParam = JSONUtils.toMap(parentProcessInstance.getCommandParam()); String endTime = parentParam.get(CMDPARAM_COMPLEMENT_DATA_END_DATE); String startTime = parentParam.get(CMDPARAM_COMPLEMENT_DATA_START_DATE); cmdParam.put(CMDPARAM_COMPLEMENT_DATA_END_DATE, endTime); cmdParam.put(CMDPARAM_COMPLEMENT_DATA_START_DATE, startTime); processMapStr = JSONUtils.toJsonString(cmdParam); } if (fatherParams.size() != 0) { cmdParam.put(CMD_PARAM_FATHER_PARAMS, JSONUtils.toJsonString(fatherParams)); processMapStr = JSONUtils.toJsonString(cmdParam); } return processMapStr; } public Map<String, String> getGlobalParamMap(String globalParams) { List<Property> propList; Map<String, String> globalParamMap = new HashMap<>(); if (StringUtils.isNotEmpty(globalParams)) { propList = JSONUtils.toList(globalParams, Property.class); globalParamMap = propList.stream().collect(Collectors.toMap(Property::getProp, Property::getValue)); } return globalParamMap; } /** * create sub work process command */ public Command createSubProcessCommand(ProcessInstance parentProcessInstance, ProcessInstance childInstance, ProcessInstanceMap instanceMap, TaskInstance task) { CommandType commandType = getSubCommandType(parentProcessInstance, childInstance); Map<String, String> subProcessParam = JSONUtils.toMap(task.getTaskParams()); long childDefineCode = 0L; if (subProcessParam.containsKey(Constants.CMD_PARAM_SUB_PROCESS_DEFINE_CODE)) { childDefineCode = Long.parseLong(subProcessParam.get(Constants.CMD_PARAM_SUB_PROCESS_DEFINE_CODE)); } ProcessDefinition subProcessDefinition = processDefineMapper.queryByCode(childDefineCode); Object localParams = subProcessParam.get(Constants.LOCAL_PARAMS); List<Property> allParam = JSONUtils.toList(JSONUtils.toJsonString(localParams), Property.class); Map<String, String> globalMap = this.getGlobalParamMap(parentProcessInstance.getGlobalParams()); Map<String, String> fatherParams = new HashMap<>(); if (CollectionUtils.isNotEmpty(allParam)) { for (Property info : allParam) { fatherParams.put(info.getProp(), globalMap.get(info.getProp())); } } String processParam = getSubWorkFlowParam(instanceMap, parentProcessInstance, fatherParams); int subProcessInstanceId = childInstance == null ? 
0 : childInstance.getId(); return new Command( commandType, TaskDependType.TASK_POST, parentProcessInstance.getFailureStrategy(), parentProcessInstance.getExecutorId(), subProcessDefinition.getCode(), processParam, parentProcessInstance.getWarningType(), parentProcessInstance.getWarningGroupId(), parentProcessInstance.getScheduleTime(), task.getWorkerGroup(), task.getEnvironmentCode(), parentProcessInstance.getProcessInstancePriority(), parentProcessInstance.getDryRun(), subProcessInstanceId, subProcessDefinition.getVersion() ); } /** * initialize sub work flow state * child instance state would be initialized when 'recovery from pause/stop/failure' */ private void initSubInstanceState(ProcessInstance childInstance) { if (childInstance != null) { childInstance.setState(ExecutionStatus.RUNNING_EXECUTION); updateProcessInstance(childInstance); } } /** * get sub work flow command type * child instance exist: child command = fatherCommand * child instance not exists: child command = fatherCommand[0] */ private CommandType getSubCommandType(ProcessInstance parentProcessInstance, ProcessInstance childInstance) { CommandType commandType = parentProcessInstance.getCommandType(); if (childInstance == null) { String fatherHistoryCommand = parentProcessInstance.getHistoryCmd(); commandType = CommandType.valueOf(fatherHistoryCommand.split(Constants.COMMA)[0]); } return commandType; } /** * update sub process definition * * @param parentProcessInstance parentProcessInstance * @param childDefinitionCode childDefinitionId */ private void updateSubProcessDefinitionByParent(ProcessInstance parentProcessInstance, long childDefinitionCode) { ProcessDefinition fatherDefinition = this.findProcessDefinition(parentProcessInstance.getProcessDefinitionCode(), parentProcessInstance.getProcessDefinitionVersion()); ProcessDefinition childDefinition = this.findProcessDefinitionByCode(childDefinitionCode); if (childDefinition != null && fatherDefinition != null) { childDefinition.setWarningGroupId(fatherDefinition.getWarningGroupId()); processDefineMapper.updateById(childDefinition); } } /** * submit task to mysql * * @param taskInstance taskInstance * @param processInstance processInstance * @return task instance */ public TaskInstance submitTaskInstanceToDB(TaskInstance taskInstance, ProcessInstance processInstance) { ExecutionStatus processInstanceState = processInstance.getState(); if (taskInstance.getState().typeIsFailure()) { if (taskInstance.isSubProcess()) { taskInstance.setRetryTimes(taskInstance.getRetryTimes() + 1); } else { if (processInstanceState != ExecutionStatus.READY_STOP && processInstanceState != ExecutionStatus.READY_PAUSE) { // failure task set invalid taskInstance.setFlag(Flag.NO); updateTaskInstance(taskInstance); // crate new task instance if (taskInstance.getState() != ExecutionStatus.NEED_FAULT_TOLERANCE) { taskInstance.setRetryTimes(taskInstance.getRetryTimes() + 1); } taskInstance.setSubmitTime(null); taskInstance.setLogPath(null); taskInstance.setExecutePath(null); taskInstance.setStartTime(null); taskInstance.setEndTime(null); taskInstance.setFlag(Flag.YES); taskInstance.setHost(null); taskInstance.setId(0); } } } taskInstance.setExecutorId(processInstance.getExecutorId()); taskInstance.setProcessInstancePriority(processInstance.getProcessInstancePriority()); taskInstance.setState(getSubmitTaskState(taskInstance, processInstance)); if (taskInstance.getSubmitTime() == null) { taskInstance.setSubmitTime(new Date()); } if (taskInstance.getFirstSubmitTime() == null) { 
taskInstance.setFirstSubmitTime(taskInstance.getSubmitTime()); } boolean saveResult = saveTaskInstance(taskInstance); if (!saveResult) { return null; } return taskInstance; } /** * get submit task instance state by the work process state * cannot modify the task state when running/kill/submit success, or this * task instance is already exists in task queue . * return pause if work process state is ready pause * return stop if work process state is ready stop * if all of above are not satisfied, return submit success * * @param taskInstance taskInstance * @param processInstance processInstance * @return process instance state */ public ExecutionStatus getSubmitTaskState(TaskInstance taskInstance, ProcessInstance processInstance) { ExecutionStatus state = taskInstance.getState(); // running, delayed or killed // the task already exists in task queue // return state if ( state == ExecutionStatus.RUNNING_EXECUTION || state == ExecutionStatus.DELAY_EXECUTION || state == ExecutionStatus.KILL ) { return state; } //return pasue /stop if process instance state is ready pause / stop // or return submit success if (processInstance.getState() == ExecutionStatus.READY_PAUSE) { state = ExecutionStatus.PAUSE; } else if (processInstance.getState() == ExecutionStatus.READY_STOP || !checkProcessStrategy(taskInstance, processInstance)) { state = ExecutionStatus.KILL; } else { state = ExecutionStatus.SUBMITTED_SUCCESS; } return state; } /** * check process instance strategy * * @param taskInstance taskInstance * @return check strategy result */ private boolean checkProcessStrategy(TaskInstance taskInstance, ProcessInstance processInstance) { FailureStrategy failureStrategy = processInstance.getFailureStrategy(); if (failureStrategy == FailureStrategy.CONTINUE) { return true; } List<TaskInstance> taskInstances = this.findValidTaskListByProcessId(taskInstance.getProcessInstanceId()); for (TaskInstance task : taskInstances) { if (task.getState() == ExecutionStatus.FAILURE && task.getRetryTimes() >= task.getMaxRetryTimes()) { return false; } } return true; } /** * insert or update work process instance to data base * * @param processInstance processInstance */ public void saveProcessInstance(ProcessInstance processInstance) { if (processInstance == null) { logger.error("save error, process instance is null!"); return; } if (processInstance.getId() != 0) { processInstanceMapper.updateById(processInstance); } else { processInstanceMapper.insert(processInstance); } } /** * insert or update command * * @param command command * @return save command result */ public int saveCommand(Command command) { if (command.getId() != 0) { return commandMapper.updateById(command); } else { return commandMapper.insert(command); } } /** * insert or update task instance * * @param taskInstance taskInstance * @return save task instance result */ public boolean saveTaskInstance(TaskInstance taskInstance) { if (taskInstance.getId() != 0) { return updateTaskInstance(taskInstance); } else { return createTaskInstance(taskInstance); } } /** * insert task instance * * @param taskInstance taskInstance * @return create task instance result */ public boolean createTaskInstance(TaskInstance taskInstance) { int count = taskInstanceMapper.insert(taskInstance); return count > 0; } /** * update task instance * * @param taskInstance taskInstance * @return update task instance result */ public boolean updateTaskInstance(TaskInstance taskInstance) { int count = taskInstanceMapper.updateById(taskInstance); return count > 0; } /** * find task instance by 
id * * @param taskId task id * @return task instance */ public TaskInstance findTaskInstanceById(Integer taskId) { return taskInstanceMapper.selectById(taskId); } /** * package task instance */ public void packageTaskInstance(TaskInstance taskInstance, ProcessInstance processInstance) { taskInstance.setProcessInstance(processInstance); taskInstance.setProcessDefine(processInstance.getProcessDefinition()); TaskDefinition taskDefinition = this.findTaskDefinition( taskInstance.getTaskCode(), taskInstance.getTaskDefinitionVersion()); this.updateTaskDefinitionResources(taskDefinition); taskInstance.setTaskDefine(taskDefinition); } /** * Update {@link ResourceInfo} information in {@link TaskDefinition} * * @param taskDefinition the given {@link TaskDefinition} */ public void updateTaskDefinitionResources(TaskDefinition taskDefinition) { Map<String, Object> taskParameters = JSONUtils.parseObject( taskDefinition.getTaskParams(), new TypeReference<Map<String, Object>>() { }); if (taskParameters != null) { // if it contains a mainJar field, query the resource from the database // Flink, Spark, MR if (taskParameters.containsKey("mainJar")) { Object mainJarObj = taskParameters.get("mainJar"); ResourceInfo mainJar = JSONUtils.parseObject( JSONUtils.toJsonString(mainJarObj), ResourceInfo.class); ResourceInfo resourceInfo = updateResourceInfo(mainJar); if (resourceInfo != null) { taskParameters.put("mainJar", resourceInfo); } } // update resourceList information if (taskParameters.containsKey("resourceList")) { String resourceListStr = JSONUtils.toJsonString(taskParameters.get("resourceList")); List<ResourceInfo> resourceInfos = JSONUtils.toList(resourceListStr, ResourceInfo.class); List<ResourceInfo> updatedResourceInfos = resourceInfos .stream() .map(this::updateResourceInfo) .filter(Objects::nonNull) .collect(Collectors.toList()); taskParameters.put("resourceList", updatedResourceInfos); } // set task parameters taskDefinition.setTaskParams(JSONUtils.toJsonString(taskParameters)); } } /** * update {@link ResourceInfo} by the given original ResourceInfo * * @param res origin resource info * @return {@link ResourceInfo} */ private ResourceInfo updateResourceInfo(ResourceInfo res) { ResourceInfo resourceInfo = null; // only if mainJar is not null and does not contain a "resourceName" field if (res != null) { int resourceId = res.getId(); if (resourceId <= 0) { logger.error("invalid resourceId, {}", resourceId); return null; } resourceInfo = new ResourceInfo(); // get resource from database, only one resource should be returned Resource resource = getResourceById(resourceId); resourceInfo.setId(resourceId); resourceInfo.setRes(resource.getFileName()); resourceInfo.setResourceName(resource.getFullName()); if (logger.isInfoEnabled()) { logger.info("updated resource info {}", JSONUtils.toJsonString(resourceInfo)); } } return resourceInfo; } /** * get id list by task state * * @param instanceId instanceId * @param state state * @return task instance id list */ public List<Integer> findTaskIdByInstanceState(int instanceId, ExecutionStatus state) { return taskInstanceMapper.queryTaskByProcessIdAndState(instanceId, state.ordinal()); } /** * find valid task list by process instance id * * @param processInstanceId processInstanceId * @return task instance list */ public List<TaskInstance> findValidTaskListByProcessId(Integer processInstanceId) { return taskInstanceMapper.findValidTaskListByProcessId(processInstanceId, Flag.YES); } /** * find previous task list by work process id * * @param processInstanceId processInstanceId *
@return task instance list */ public List<TaskInstance> findPreviousTaskListByWorkProcessId(Integer processInstanceId) { return taskInstanceMapper.findValidTaskListByProcessId(processInstanceId, Flag.NO); } /** * update work process instance map * * @param processInstanceMap processInstanceMap * @return update process instance result */ public int updateWorkProcessInstanceMap(ProcessInstanceMap processInstanceMap) { return processInstanceMapMapper.updateById(processInstanceMap); } /** * create work process instance map * * @param processInstanceMap processInstanceMap * @return create process instance result */ public int createWorkProcessInstanceMap(ProcessInstanceMap processInstanceMap) { int count = 0; if (processInstanceMap != null) { return processInstanceMapMapper.insert(processInstanceMap); } return count; } /** * find work process map by parent process id and parent task id. * * @param parentWorkProcessId parentWorkProcessId * @param parentTaskId parentTaskId * @return process instance map */ public ProcessInstanceMap findWorkProcessMapByParent(Integer parentWorkProcessId, Integer parentTaskId) { return processInstanceMapMapper.queryByParentId(parentWorkProcessId, parentTaskId); } /** * delete work process map by parent process id * * @param parentWorkProcessId parentWorkProcessId * @return delete process map result */ public int deleteWorkProcessMapByParentId(int parentWorkProcessId) { return processInstanceMapMapper.deleteByParentProcessId(parentWorkProcessId); } /** * find sub process instance * * @param parentProcessId parentProcessId * @param parentTaskId parentTaskId * @return process instance */ public ProcessInstance findSubProcessInstance(Integer parentProcessId, Integer parentTaskId) { ProcessInstance processInstance = null; ProcessInstanceMap processInstanceMap = processInstanceMapMapper.queryByParentId(parentProcessId, parentTaskId); if (processInstanceMap == null || processInstanceMap.getProcessInstanceId() == 0) { return processInstance; } processInstance = findProcessInstanceById(processInstanceMap.getProcessInstanceId()); return processInstance; } /** * find parent process instance * * @param subProcessId subProcessId * @return process instance */ public ProcessInstance findParentProcessInstance(Integer subProcessId) { ProcessInstance processInstance = null; ProcessInstanceMap processInstanceMap = processInstanceMapMapper.queryBySubProcessId(subProcessId); if (processInstanceMap == null || processInstanceMap.getProcessInstanceId() == 0) { return processInstance; } processInstance = findProcessInstanceById(processInstanceMap.getParentProcessInstanceId()); return processInstance; } /** * change task state * * @param state state * @param startTime startTime * @param host host * @param executePath executePath * @param logPath logPath */ public void changeTaskState(TaskInstance taskInstance, ExecutionStatus state, Date startTime, String host, String executePath, String logPath) { taskInstance.setState(state); taskInstance.setStartTime(startTime); taskInstance.setHost(host); taskInstance.setExecutePath(executePath); taskInstance.setLogPath(logPath); saveTaskInstance(taskInstance); } /** * update process instance * * @param processInstance processInstance * @return update process instance result */ public int updateProcessInstance(ProcessInstance processInstance) { return processInstanceMapper.updateById(processInstance); } /** * change task state * * @param state state * @param endTime endTime * @param varPool varPool */ public void changeTaskState(TaskInstance 
taskInstance, ExecutionStatus state, Date endTime, int processId, String appIds, String varPool) { taskInstance.setPid(processId); taskInstance.setAppLink(appIds); taskInstance.setState(state); taskInstance.setEndTime(endTime); taskInstance.setVarPool(varPool); changeOutParam(taskInstance); saveTaskInstance(taskInstance); } /** * for show in page of taskInstance */ public void changeOutParam(TaskInstance taskInstance) { if (StringUtils.isEmpty(taskInstance.getVarPool())) { return; } List<Property> properties = JSONUtils.toList(taskInstance.getVarPool(), Property.class); if (CollectionUtils.isEmpty(properties)) { return; } //if the result more than one line,just get the first . Map<String, Object> taskParams = JSONUtils.parseObject(taskInstance.getTaskParams(), new TypeReference<Map<String, Object>>() { }); Object localParams = taskParams.get(LOCAL_PARAMS); if (localParams == null) { return; } List<Property> allParam = JSONUtils.toList(JSONUtils.toJsonString(localParams), Property.class); Map<String, String> outProperty = new HashMap<>(); for (Property info : properties) { if (info.getDirect() == Direct.OUT) { outProperty.put(info.getProp(), info.getValue()); } } for (Property info : allParam) { if (info.getDirect() == Direct.OUT) { String paramName = info.getProp(); info.setValue(outProperty.get(paramName)); } } taskParams.put(LOCAL_PARAMS, allParam); taskInstance.setTaskParams(JSONUtils.toJsonString(taskParams)); } /** * convert integer list to string list * * @param intList intList * @return string list */ public List<String> convertIntListToString(List<Integer> intList) { if (intList == null) { return new ArrayList<>(); } List<String> result = new ArrayList<>(intList.size()); for (Integer intVar : intList) { result.add(String.valueOf(intVar)); } return result; } /** * query schedule by id * * @param id id * @return schedule */ public Schedule querySchedule(int id) { return scheduleMapper.selectById(id); } /** * query Schedule by processDefinitionCode * * @param processDefinitionCode processDefinitionCode * @see Schedule */ public List<Schedule> queryReleaseSchedulerListByProcessDefinitionCode(long processDefinitionCode) { return scheduleMapper.queryReleaseSchedulerListByProcessDefinitionCode(processDefinitionCode); } /** * query need failover process instance * * @param host host * @return process instance list */ public List<ProcessInstance> queryNeedFailoverProcessInstances(String host) { return processInstanceMapper.queryByHostAndStatus(host, stateArray); } public List<String> queryNeedFailoverProcessInstanceHost() { return processInstanceMapper.queryNeedFailoverProcessInstanceHost(stateArray); } /** * process need failover process instance * * @param processInstance processInstance */ @Transactional(rollbackFor = RuntimeException.class) public void processNeedFailoverProcessInstances(ProcessInstance processInstance) { //1 update processInstance host is null processInstance.setHost(Constants.NULL); processInstanceMapper.updateById(processInstance); ProcessDefinition processDefinition = findProcessDefinition(processInstance.getProcessDefinitionCode(), processInstance.getProcessDefinitionVersion()); //2 insert into recover command Command cmd = new Command(); cmd.setProcessDefinitionCode(processDefinition.getCode()); cmd.setProcessDefinitionVersion(processDefinition.getVersion()); cmd.setProcessInstanceId(processInstance.getId()); cmd.setCommandParam(String.format("{\"%s\":%d}", Constants.CMD_PARAM_RECOVER_PROCESS_ID_STRING, processInstance.getId())); 
cmd.setExecutorId(processInstance.getExecutorId()); cmd.setCommandType(CommandType.RECOVER_TOLERANCE_FAULT_PROCESS); createCommand(cmd); } /** * query all need failover task instances by host * * @param host host * @return task instance list */ public List<TaskInstance> queryNeedFailoverTaskInstances(String host) { return taskInstanceMapper.queryByHostAndStatus(host, stateArray); } /** * find data source by id * * @param id id * @return datasource */ public DataSource findDataSourceById(int id) { return dataSourceMapper.selectById(id); } /** * update process instance state by id * * @param processInstanceId processInstanceId * @param executionStatus executionStatus * @return update process result */ public int updateProcessInstanceState(Integer processInstanceId, ExecutionStatus executionStatus) { ProcessInstance instance = processInstanceMapper.selectById(processInstanceId); instance.setState(executionStatus); return processInstanceMapper.updateById(instance); } /** * find process instance by the task id * * @param taskId taskId * @return process instance */ public ProcessInstance findProcessInstanceByTaskId(int taskId) { TaskInstance taskInstance = taskInstanceMapper.selectById(taskId); if (taskInstance != null) { return processInstanceMapper.selectById(taskInstance.getProcessInstanceId()); } return null; } /** * find udf function list by id list string * * @param ids ids * @return udf function list */ public List<UdfFunc> queryUdfFunListByIds(int[] ids) { return udfFuncMapper.queryUdfByIdStr(ids, null); } /** * find tenant code by resource name * * @param resName resource name * @param resourceType resource type * @return tenant code */ public String queryTenantCodeByResName(String resName, ResourceType resourceType) { // in order to query tenant code successful although the version is older String fullName = resName.startsWith("/") ? resName : String.format("/%s", resName); List<Resource> resourceList = resourceMapper.queryResource(fullName, resourceType.ordinal()); if (CollectionUtils.isEmpty(resourceList)) { return StringUtils.EMPTY; } int userId = resourceList.get(0).getUserId(); User user = userMapper.selectById(userId); if (Objects.isNull(user)) { return StringUtils.EMPTY; } Tenant tenant = tenantMapper.queryById(user.getTenantId()); if (Objects.isNull(tenant)) { return StringUtils.EMPTY; } return tenant.getTenantCode(); } /** * find schedule list by process define codes. 
* * @param codes codes * @return schedule list */ public List<Schedule> selectAllByProcessDefineCode(long[] codes) { return scheduleMapper.selectAllByProcessDefineArray(codes); } /** * find last scheduler process instance in the date interval * * @param definitionCode definitionCode * @param dateInterval dateInterval * @return process instance */ public ProcessInstance findLastSchedulerProcessInterval(Long definitionCode, DateInterval dateInterval) { return processInstanceMapper.queryLastSchedulerProcess(definitionCode, dateInterval.getStartTime(), dateInterval.getEndTime()); } /** * find last manual process instance interval * * @param definitionCode process definition code * @param dateInterval dateInterval * @return process instance */ public ProcessInstance findLastManualProcessInterval(Long definitionCode, DateInterval dateInterval) { return processInstanceMapper.queryLastManualProcess(definitionCode, dateInterval.getStartTime(), dateInterval.getEndTime()); } /** * find last running process instance * * @param definitionCode process definition code * @param startTime start time * @param endTime end time * @return process instance */ public ProcessInstance findLastRunningProcess(Long definitionCode, Date startTime, Date endTime) { return processInstanceMapper.queryLastRunningProcess(definitionCode, startTime, endTime, stateArray); } /** * query user queue by process instance * * @param processInstance processInstance * @return queue */ public String queryUserQueueByProcessInstance(ProcessInstance processInstance) { String queue = ""; if (processInstance == null) { return queue; } User executor = userMapper.selectById(processInstance.getExecutorId()); if (executor != null) { queue = executor.getQueue(); } return queue; } /** * query project name and user name by processInstanceId. 
* * @param processInstanceId processInstanceId * @return projectName and userName */ public ProjectUser queryProjectWithUserByProcessInstanceId(int processInstanceId) { return projectMapper.queryProjectWithUserByProcessInstanceId(processInstanceId); } /** * get task worker group * * @param taskInstance taskInstance * @return workerGroupId */ public String getTaskWorkerGroup(TaskInstance taskInstance) { String workerGroup = taskInstance.getWorkerGroup(); if (StringUtils.isNotBlank(workerGroup)) { return workerGroup; } int processInstanceId = taskInstance.getProcessInstanceId(); ProcessInstance processInstance = findProcessInstanceById(processInstanceId); if (processInstance != null) { return processInstance.getWorkerGroup(); } logger.info("task : {} will use default worker group", taskInstance.getId()); return Constants.DEFAULT_WORKER_GROUP; } /** * get have perm project list * * @param userId userId * @return project list */ public List<Project> getProjectListHavePerm(int userId) { List<Project> createProjects = projectMapper.queryProjectCreatedByUser(userId); List<Project> authedProjects = projectMapper.queryAuthedProjectListByUserId(userId); if (createProjects == null) { createProjects = new ArrayList<>(); } if (authedProjects != null) { createProjects.addAll(authedProjects); } return createProjects; } /** * list unauthorized udf function * * @param userId user id * @param needChecks data source id array * @return unauthorized udf function list */ public <T> List<T> listUnauthorized(int userId, T[] needChecks, AuthorizationType authorizationType) { List<T> resultList = new ArrayList<>(); if (Objects.nonNull(needChecks) && needChecks.length > 0) { Set<T> originResSet = new HashSet<>(Arrays.asList(needChecks)); switch (authorizationType) { case RESOURCE_FILE_ID: case UDF_FILE: List<Resource> ownUdfResources = resourceMapper.listAuthorizedResourceById(userId, needChecks); addAuthorizedResources(ownUdfResources, userId); Set<Integer> authorizedResourceFiles = ownUdfResources.stream().map(Resource::getId).collect(toSet()); originResSet.removeAll(authorizedResourceFiles); break; case RESOURCE_FILE_NAME: List<Resource> ownResources = resourceMapper.listAuthorizedResource(userId, needChecks); addAuthorizedResources(ownResources, userId); Set<String> authorizedResources = ownResources.stream().map(Resource::getFullName).collect(toSet()); originResSet.removeAll(authorizedResources); break; case DATASOURCE: Set<Integer> authorizedDatasources = dataSourceMapper.listAuthorizedDataSource(userId, needChecks).stream().map(DataSource::getId).collect(toSet()); originResSet.removeAll(authorizedDatasources); break; case UDF: Set<Integer> authorizedUdfs = udfFuncMapper.listAuthorizedUdfFunc(userId, needChecks).stream().map(UdfFunc::getId).collect(toSet()); originResSet.removeAll(authorizedUdfs); break; default: break; } resultList.addAll(originResSet); } return resultList; } /** * get user by user id * * @param userId user id * @return User */ public User getUserById(int userId) { return userMapper.selectById(userId); } /** * get resource by resource id * * @param resourceId resource id * @return Resource */ public Resource getResourceById(int resourceId) { return resourceMapper.selectById(resourceId); } /** * list resources by ids * * @param resIds resIds * @return resource list */ public List<Resource> listResourceByIds(Integer[] resIds) { return resourceMapper.listResourceByIds(resIds); } /** * format task app id in task instance */ public String formatTaskAppId(TaskInstance taskInstance) { 
ProcessInstance processInstance = findProcessInstanceById(taskInstance.getProcessInstanceId()); if (processInstance == null) { return ""; } ProcessDefinition definition = findProcessDefinition(processInstance.getProcessDefinitionCode(), processInstance.getProcessDefinitionVersion()); if (definition == null) { return ""; } return String.format("%s_%s_%s", definition.getId(), processInstance.getId(), taskInstance.getId()); } /** * switch process definition version to process definition log version */ public int switchVersion(ProcessDefinition processDefinition, ProcessDefinitionLog processDefinitionLog) { if (null == processDefinition || null == processDefinitionLog) { return Constants.DEFINITION_FAILURE; } processDefinitionLog.setId(processDefinition.getId()); processDefinitionLog.setReleaseState(ReleaseState.OFFLINE); processDefinitionLog.setFlag(Flag.YES); int result = processDefineMapper.updateById(processDefinitionLog); if (result > 0) { result = switchProcessTaskRelationVersion(processDefinitionLog); if (result <= 0) { return Constants.DEFINITION_FAILURE; } } return result; } public int switchProcessTaskRelationVersion(ProcessDefinition processDefinition) { List<ProcessTaskRelation> processTaskRelationList = processTaskRelationMapper.queryByProcessCode(processDefinition.getProjectCode(), processDefinition.getCode()); if (!processTaskRelationList.isEmpty()) { processTaskRelationMapper.deleteByCode(processDefinition.getProjectCode(), processDefinition.getCode()); } List<ProcessTaskRelationLog> processTaskRelationLogList = processTaskRelationLogMapper.queryByProcessCodeAndVersion(processDefinition.getCode(), processDefinition.getVersion()); return processTaskRelationMapper.batchInsert(processTaskRelationLogList); } /** * get resource ids * * @param taskDefinition taskDefinition * @return resource ids */ public String getResourceIds(TaskDefinition taskDefinition) { Set<Integer> resourceIds = null; AbstractParameters params = TaskParametersUtils.getParameters(taskDefinition.getTaskType(), taskDefinition.getTaskParams()); if (params != null && CollectionUtils.isNotEmpty(params.getResourceFilesList())) { resourceIds = params.getResourceFilesList(). 
stream() .filter(t -> t.getId() != 0) .map(ResourceInfo::getId) .collect(Collectors.toSet()); } if (CollectionUtils.isEmpty(resourceIds)) { return StringUtils.EMPTY; } return StringUtils.join(resourceIds, ","); } public int saveTaskDefine(User operator, long projectCode, List<TaskDefinitionLog> taskDefinitionLogs) { Date now = new Date(); List<TaskDefinitionLog> newTaskDefinitionLogs = new ArrayList<>(); List<TaskDefinitionLog> updateTaskDefinitionLogs = new ArrayList<>(); for (TaskDefinitionLog taskDefinitionLog : taskDefinitionLogs) { taskDefinitionLog.setProjectCode(projectCode); taskDefinitionLog.setUpdateTime(now); taskDefinitionLog.setOperateTime(now); taskDefinitionLog.setOperator(operator.getId()); taskDefinitionLog.setResourceIds(getResourceIds(taskDefinitionLog)); if (taskDefinitionLog.getCode() > 0 && taskDefinitionLog.getVersion() > 0) { TaskDefinitionLog definitionCodeAndVersion = taskDefinitionLogMapper .queryByDefinitionCodeAndVersion(taskDefinitionLog.getCode(), taskDefinitionLog.getVersion()); if (definitionCodeAndVersion != null) { if (!taskDefinitionLog.equals(definitionCodeAndVersion)) { taskDefinitionLog.setUserId(definitionCodeAndVersion.getUserId()); Integer version = taskDefinitionLogMapper.queryMaxVersionForDefinition(taskDefinitionLog.getCode()); taskDefinitionLog.setVersion(version + 1); taskDefinitionLog.setCreateTime(definitionCodeAndVersion.getCreateTime()); updateTaskDefinitionLogs.add(taskDefinitionLog); } continue; } } taskDefinitionLog.setUserId(operator.getId()); taskDefinitionLog.setVersion(Constants.VERSION_FIRST); taskDefinitionLog.setCreateTime(now); if (taskDefinitionLog.getCode() == 0) { try { taskDefinitionLog.setCode(CodeGenerateUtils.getInstance().genCode()); } catch (CodeGenerateException e) { logger.error("Task code get error, ", e); return Constants.DEFINITION_FAILURE; } } newTaskDefinitionLogs.add(taskDefinitionLog); } int insertResult = 0; int updateResult = 0; for (TaskDefinitionLog taskDefinitionToUpdate : updateTaskDefinitionLogs) { TaskDefinition task = taskDefinitionMapper.queryByCode(taskDefinitionToUpdate.getCode()); if (task == null) { newTaskDefinitionLogs.add(taskDefinitionToUpdate); } else { insertResult += taskDefinitionLogMapper.insert(taskDefinitionToUpdate); taskDefinitionToUpdate.setId(task.getId()); updateResult += taskDefinitionMapper.updateById(taskDefinitionToUpdate); } } if (!newTaskDefinitionLogs.isEmpty()) { updateResult += taskDefinitionMapper.batchInsert(newTaskDefinitionLogs); insertResult += taskDefinitionLogMapper.batchInsert(newTaskDefinitionLogs); } return (insertResult & updateResult) > 0 ? 1 : Constants.EXIT_CODE_SUCCESS; } /** * save processDefinition (including create or update processDefinition) */ public int saveProcessDefine(User operator, ProcessDefinition processDefinition, Boolean isFromProcessDefine) { ProcessDefinitionLog processDefinitionLog = new ProcessDefinitionLog(processDefinition); Integer version = processDefineLogMapper.queryMaxVersionForDefinition(processDefinition.getCode()); int insertVersion = version == null || version == 0 ? Constants.VERSION_FIRST : version + 1; processDefinitionLog.setVersion(insertVersion); processDefinitionLog.setReleaseState(isFromProcessDefine ? 
ReleaseState.OFFLINE : ReleaseState.ONLINE); processDefinitionLog.setOperator(operator.getId()); processDefinitionLog.setOperateTime(processDefinition.getUpdateTime()); int insertLog = processDefineLogMapper.insert(processDefinitionLog); int result; if (0 == processDefinition.getId()) { result = processDefineMapper.insert(processDefinitionLog); } else { processDefinitionLog.setId(processDefinition.getId()); result = processDefineMapper.updateById(processDefinitionLog); } return (insertLog & result) > 0 ? insertVersion : 0; } /** * save task relations */ public int saveTaskRelation(User operator, long projectCode, long processDefinitionCode, int processDefinitionVersion, List<ProcessTaskRelationLog> taskRelationList, List<TaskDefinitionLog> taskDefinitionLogs) { if (taskRelationList.isEmpty()) { return Constants.EXIT_CODE_SUCCESS; } Map<Long, TaskDefinitionLog> taskDefinitionLogMap = null; if (CollectionUtils.isNotEmpty(taskDefinitionLogs)) { taskDefinitionLogMap = taskDefinitionLogs.stream() .collect(Collectors.toMap(TaskDefinition::getCode, taskDefinitionLog -> taskDefinitionLog)); } Date now = new Date(); for (ProcessTaskRelationLog processTaskRelationLog : taskRelationList) { processTaskRelationLog.setProjectCode(projectCode); processTaskRelationLog.setProcessDefinitionCode(processDefinitionCode); processTaskRelationLog.setProcessDefinitionVersion(processDefinitionVersion); if (taskDefinitionLogMap != null) { TaskDefinitionLog preTaskDefinitionLog = taskDefinitionLogMap.get(processTaskRelationLog.getPreTaskCode()); if (preTaskDefinitionLog != null) { processTaskRelationLog.setPreTaskVersion(preTaskDefinitionLog.getVersion()); } TaskDefinitionLog postTaskDefinitionLog = taskDefinitionLogMap.get(processTaskRelationLog.getPostTaskCode()); if (postTaskDefinitionLog != null) { processTaskRelationLog.setPostTaskVersion(postTaskDefinitionLog.getVersion()); } } processTaskRelationLog.setCreateTime(now); processTaskRelationLog.setUpdateTime(now); processTaskRelationLog.setOperator(operator.getId()); processTaskRelationLog.setOperateTime(now); } List<ProcessTaskRelation> processTaskRelationList = processTaskRelationMapper.queryByProcessCode(projectCode, processDefinitionCode); if (!processTaskRelationList.isEmpty()) { Set<Integer> processTaskRelationSet = processTaskRelationList.stream().map(ProcessTaskRelation::hashCode).collect(toSet()); Set<Integer> taskRelationSet = taskRelationList.stream().map(ProcessTaskRelationLog::hashCode).collect(toSet()); boolean result = CollectionUtils.isEqualCollection(processTaskRelationSet, taskRelationSet); if (result) { return Constants.EXIT_CODE_SUCCESS; } processTaskRelationMapper.deleteByCode(projectCode, processDefinitionCode); } int result = processTaskRelationMapper.batchInsert(taskRelationList); int resultLog = processTaskRelationLogMapper.batchInsert(taskRelationList); return (result & resultLog) > 0 ? 
Constants.EXIT_CODE_SUCCESS : Constants.EXIT_CODE_FAILURE; } public boolean isTaskOnline(long taskCode) { List<ProcessTaskRelation> processTaskRelationList = processTaskRelationMapper.queryByTaskCode(taskCode); if (!processTaskRelationList.isEmpty()) { Set<Long> processDefinitionCodes = processTaskRelationList .stream() .map(ProcessTaskRelation::getProcessDefinitionCode) .collect(Collectors.toSet()); List<ProcessDefinition> processDefinitionList = processDefineMapper.queryByCodes(processDefinitionCodes); // check process definition is already online for (ProcessDefinition processDefinition : processDefinitionList) { if (processDefinition.getReleaseState() == ReleaseState.ONLINE) { return true; } } } return false; } /** * Generate the DAG Graph based on the process definition id * * @param processDefinition process definition * @return dag graph */ public DAG<String, TaskNode, TaskNodeRelation> genDagGraph(ProcessDefinition processDefinition) { List<ProcessTaskRelation> processTaskRelations = processTaskRelationMapper.queryByProcessCode(processDefinition.getProjectCode(), processDefinition.getCode()); List<TaskNode> taskNodeList = transformTask(processTaskRelations, Lists.newArrayList()); ProcessDag processDag = DagHelper.getProcessDag(taskNodeList, new ArrayList<>(processTaskRelations)); // Generate concrete Dag to be executed return DagHelper.buildDagGraph(processDag); } /** * generate DagData */ public DagData genDagData(ProcessDefinition processDefinition) { List<ProcessTaskRelation> processTaskRelations = processTaskRelationMapper.queryByProcessCode(processDefinition.getProjectCode(), processDefinition.getCode()); List<TaskDefinitionLog> taskDefinitionLogList = genTaskDefineList(processTaskRelations); List<TaskDefinition> taskDefinitions = taskDefinitionLogList.stream() .map(taskDefinitionLog -> JSONUtils.parseObject(JSONUtils.toJsonString(taskDefinitionLog), TaskDefinition.class)) .collect(Collectors.toList()); return new DagData(processDefinition, processTaskRelations, taskDefinitions); } public List<TaskDefinitionLog> genTaskDefineList(List<ProcessTaskRelation> processTaskRelations) { Set<TaskDefinition> taskDefinitionSet = new HashSet<>(); for (ProcessTaskRelation processTaskRelation : processTaskRelations) { if (processTaskRelation.getPreTaskCode() > 0) { taskDefinitionSet.add(new TaskDefinition(processTaskRelation.getPreTaskCode(), processTaskRelation.getPreTaskVersion())); } if (processTaskRelation.getPostTaskCode() > 0) { taskDefinitionSet.add(new TaskDefinition(processTaskRelation.getPostTaskCode(), processTaskRelation.getPostTaskVersion())); } } if (taskDefinitionSet.isEmpty()) { return Lists.newArrayList(); } return taskDefinitionLogMapper.queryByTaskDefinitions(taskDefinitionSet); } public List<TaskDefinitionLog> getTaskDefineLogListByRelation(List<ProcessTaskRelation> processTaskRelations) { List<TaskDefinitionLog> taskDefinitionLogs = new ArrayList<>(); Map<Long, Integer> taskCodeVersionMap = new HashMap<>(); for (ProcessTaskRelation processTaskRelation : processTaskRelations) { if (processTaskRelation.getPreTaskCode() > 0) { taskCodeVersionMap.put(processTaskRelation.getPreTaskCode(), processTaskRelation.getPreTaskVersion()); } if (processTaskRelation.getPostTaskCode() > 0) { taskCodeVersionMap.put(processTaskRelation.getPostTaskCode(), processTaskRelation.getPostTaskVersion()); } } taskCodeVersionMap.forEach((code,version) -> { taskDefinitionLogs.add((TaskDefinitionLog) this.findTaskDefinition(code, version)); }); return taskDefinitionLogs; } /** * find task definition 
by code and version */ public TaskDefinition findTaskDefinition(long taskCode, int taskDefinitionVersion) { return taskDefinitionLogMapper.queryByDefinitionCodeAndVersion(taskCode, taskDefinitionVersion); } /** * find process task relation list by projectCode and processDefinitionCode */ public List<ProcessTaskRelation> findRelationByCode(long projectCode, long processDefinitionCode) { return processTaskRelationMapper.queryByProcessCode(projectCode, processDefinitionCode); } /** * add authorized resources * * @param ownResources own resources * @param userId userId */ private void addAuthorizedResources(List<Resource> ownResources, int userId) { List<Integer> relationResourceIds = resourceUserMapper.queryResourcesIdListByUserIdAndPerm(userId, 7); List<Resource> relationResources = CollectionUtils.isNotEmpty(relationResourceIds) ? resourceMapper.queryResourceListById(relationResourceIds) : new ArrayList<>(); ownResources.addAll(relationResources); } /** * Use temporarily before refactoring taskNode */ public List<TaskNode> transformTask(List<ProcessTaskRelation> taskRelationList, List<TaskDefinitionLog> taskDefinitionLogs) { Map<Long, List<Long>> taskCodeMap = new HashMap<>(); for (ProcessTaskRelation processTaskRelation : taskRelationList) { taskCodeMap.compute(processTaskRelation.getPostTaskCode(), (k, v) -> { if (v == null) { v = new ArrayList<>(); } if (processTaskRelation.getPreTaskCode() != 0L) { v.add(processTaskRelation.getPreTaskCode()); } return v; }); } if (CollectionUtils.isEmpty(taskDefinitionLogs)) { taskDefinitionLogs = genTaskDefineList(taskRelationList); } Map<Long, TaskDefinitionLog> taskDefinitionLogMap = taskDefinitionLogs.stream() .collect(Collectors.toMap(TaskDefinitionLog::getCode, taskDefinitionLog -> taskDefinitionLog)); List<TaskNode> taskNodeList = new ArrayList<>(); for (Entry<Long, List<Long>> code : taskCodeMap.entrySet()) { TaskDefinitionLog taskDefinitionLog = taskDefinitionLogMap.get(code.getKey()); if (taskDefinitionLog != null) { TaskNode taskNode = new TaskNode(); taskNode.setCode(taskDefinitionLog.getCode()); taskNode.setVersion(taskDefinitionLog.getVersion()); taskNode.setName(taskDefinitionLog.getName()); taskNode.setDesc(taskDefinitionLog.getDescription()); taskNode.setType(taskDefinitionLog.getTaskType().toUpperCase()); taskNode.setRunFlag(taskDefinitionLog.getFlag() == Flag.YES ? 
Constants.FLOWNODE_RUN_FLAG_NORMAL : Constants.FLOWNODE_RUN_FLAG_FORBIDDEN); taskNode.setMaxRetryTimes(taskDefinitionLog.getFailRetryTimes()); taskNode.setRetryInterval(taskDefinitionLog.getFailRetryInterval()); Map<String, Object> taskParamsMap = taskNode.taskParamsToJsonObj(taskDefinitionLog.getTaskParams()); taskNode.setConditionResult(JSONUtils.toJsonString(taskParamsMap.get(Constants.CONDITION_RESULT))); taskNode.setSwitchResult(JSONUtils.toJsonString(taskParamsMap.get(Constants.SWITCH_RESULT))); taskNode.setDependence(JSONUtils.toJsonString(taskParamsMap.get(Constants.DEPENDENCE))); taskParamsMap.remove(Constants.CONDITION_RESULT); taskParamsMap.remove(Constants.DEPENDENCE); taskNode.setParams(JSONUtils.toJsonString(taskParamsMap)); taskNode.setTaskInstancePriority(taskDefinitionLog.getTaskPriority()); taskNode.setWorkerGroup(taskDefinitionLog.getWorkerGroup()); taskNode.setEnvironmentCode(taskDefinitionLog.getEnvironmentCode()); taskNode.setTimeout(JSONUtils.toJsonString(new TaskTimeoutParameter(taskDefinitionLog.getTimeoutFlag() == TimeoutFlag.OPEN, taskDefinitionLog.getTimeoutNotifyStrategy(), taskDefinitionLog.getTimeout()))); taskNode.setDelayTime(taskDefinitionLog.getDelayTime()); taskNode.setPreTasks(JSONUtils.toJsonString(code.getValue().stream().map(taskDefinitionLogMap::get).map(TaskDefinition::getCode).collect(Collectors.toList()))); taskNode.setTaskGroupId(taskDefinitionLog.getTaskGroupId()); taskNode.setTaskGroupPriority(taskDefinitionLog.getTaskGroupPriority()); taskNodeList.add(taskNode); } } return taskNodeList; } public Map<ProcessInstance, TaskInstance> notifyProcessList(int processId) { HashMap<ProcessInstance, TaskInstance> processTaskMap = new HashMap<>(); //find sub tasks ProcessInstanceMap processInstanceMap = processInstanceMapMapper.queryBySubProcessId(processId); if (processInstanceMap == null) { return processTaskMap; } ProcessInstance fatherProcess = this.findProcessInstanceById(processInstanceMap.getParentProcessInstanceId()); TaskInstance fatherTask = this.findTaskInstanceById(processInstanceMap.getParentTaskInstanceId()); if (fatherProcess != null) { processTaskMap.put(fatherProcess, fatherTask); } return processTaskMap; } /** * the first time (when submit the task ) get the resource of the task group * @param taskId task id * @param taskName * @param groupId * @param processId * @param priority * @return */ public boolean acquireTaskGroup(int taskId, String taskName, int groupId, int processId, int priority) { TaskGroup taskGroup = taskGroupMapper.selectById(groupId); if (taskGroup == null) { return true; } // if task group is not applicable if (taskGroup.getStatus() == Flag.NO.getCode()) { return true; } TaskGroupQueue taskGroupQueue = this.taskGroupQueueMapper.queryByTaskId(taskId); if (taskGroupQueue == null) { taskGroupQueue = insertIntoTaskGroupQueue(taskId, taskName, groupId, processId, priority, TaskGroupQueueStatus.WAIT_QUEUE); } else { if (taskGroupQueue.getStatus() == TaskGroupQueueStatus.ACQUIRE_SUCCESS) { return true; } taskGroupQueue.setInQueue(Flag.NO.getCode()); taskGroupQueue.setStatus(TaskGroupQueueStatus.WAIT_QUEUE); this.taskGroupQueueMapper.updateById(taskGroupQueue); } //check priority List<TaskGroupQueue> highPriorityTasks = taskGroupQueueMapper.queryHighPriorityTasks(groupId, priority, TaskGroupQueueStatus.WAIT_QUEUE.getCode()); if (CollectionUtils.isNotEmpty(highPriorityTasks)) { this.taskGroupQueueMapper.updateInQueue(Flag.NO.getCode(), taskGroupQueue.getId()); return false; } //try to get taskGroup int count = 
taskGroupMapper.selectAvailableCountById(groupId); if (count == 1 && robTaskGroupResouce(taskGroupQueue)) { return true; } this.taskGroupQueueMapper.updateInQueue(Flag.NO.getCode(), taskGroupQueue.getId()); return false; } /** * try to get the task group resource(when other task release the resource) * @param taskGroupQueue * @return */ public boolean robTaskGroupResouce(TaskGroupQueue taskGroupQueue) { TaskGroup taskGroup = taskGroupMapper.selectById(taskGroupQueue.getGroupId()); int affectedCount = taskGroupMapper.updateTaskGroupResource(taskGroup.getId(),taskGroupQueue.getId(), TaskGroupQueueStatus.WAIT_QUEUE.getCode()); if (affectedCount > 0) { taskGroupQueue.setStatus(TaskGroupQueueStatus.ACQUIRE_SUCCESS); this.taskGroupQueueMapper.updateById(taskGroupQueue); this.taskGroupQueueMapper.updateInQueue(Flag.NO.getCode(), taskGroupQueue.getId()); return true; } return false; } public boolean acquireTaskGroupAgain(TaskGroupQueue taskGroupQueue) { return robTaskGroupResouce(taskGroupQueue); } public void releaseAllTaskGroup(int processInstanceId) { List<TaskInstance> taskInstances = this.taskInstanceMapper.loadAllInfosNoRelease(processInstanceId, TaskGroupQueueStatus.ACQUIRE_SUCCESS.getCode()); for (TaskInstance info : taskInstances) { releaseTaskGroup(info); } } /** * release the TGQ resource when the corresponding task is finished. * * @return the result code and msg */ public TaskInstance releaseTaskGroup(TaskInstance taskInstance) { TaskGroup taskGroup = taskGroupMapper.selectById(taskInstance.getTaskGroupId()); if (taskGroup == null) { return null; } TaskGroupQueue thisTaskGroupQueue = this.taskGroupQueueMapper.queryByTaskId(taskInstance.getId()); if (thisTaskGroupQueue.getStatus() == TaskGroupQueueStatus.RELEASE) { return null; } try { while (taskGroupMapper.releaseTaskGroupResource(taskGroup.getId(), taskGroup.getUseSize() , thisTaskGroupQueue.getId(), TaskGroupQueueStatus.ACQUIRE_SUCCESS.getCode()) != 1) { thisTaskGroupQueue = this.taskGroupQueueMapper.queryByTaskId(taskInstance.getId()); if (thisTaskGroupQueue.getStatus() == TaskGroupQueueStatus.RELEASE) { return null; } taskGroup = taskGroupMapper.selectById(taskInstance.getTaskGroupId()); } } catch (Exception e) { logger.error("release the task group error",e); } logger.info("updateTask:{}",taskInstance.getName()); changeTaskGroupQueueStatus(taskInstance.getId(), TaskGroupQueueStatus.RELEASE); TaskGroupQueue taskGroupQueue = this.taskGroupQueueMapper.queryTheHighestPriorityTasks(taskGroup.getId(), TaskGroupQueueStatus.WAIT_QUEUE.getCode(), Flag.NO.getCode(), Flag.NO.getCode()); if (taskGroupQueue == null) { return null; } while (this.taskGroupQueueMapper.updateInQueueCAS(Flag.NO.getCode(), Flag.YES.getCode(), taskGroupQueue.getId()) != 1) { taskGroupQueue = this.taskGroupQueueMapper.queryTheHighestPriorityTasks(taskGroup.getId(), TaskGroupQueueStatus.WAIT_QUEUE.getCode(), Flag.NO.getCode(), Flag.NO.getCode()); if (taskGroupQueue == null) { return null; } } return this.taskInstanceMapper.selectById(taskGroupQueue.getTaskId()); } /** * release the TGQ resource when the corresponding task is finished. 
* * @param taskId task id * @return the result code and msg */ public void changeTaskGroupQueueStatus(int taskId, TaskGroupQueueStatus status) { TaskGroupQueue taskGroupQueue = taskGroupQueueMapper.queryByTaskId(taskId); taskGroupQueue.setStatus(status); taskGroupQueue.setUpdateTime(new Date(System.currentTimeMillis())); taskGroupQueueMapper.updateById(taskGroupQueue); } /** * insert into task group queue * * @param taskId task id * @param taskName task name * @param groupId group id * @param processId process id * @param priority priority * @return result and msg code */ public TaskGroupQueue insertIntoTaskGroupQueue(Integer taskId, String taskName, Integer groupId, Integer processId, Integer priority, TaskGroupQueueStatus status) { TaskGroupQueue taskGroupQueue = new TaskGroupQueue(taskId, taskName, groupId, processId, priority, status); taskGroupQueueMapper.insert(taskGroupQueue); return taskGroupQueue; } public int updateTaskGroupQueueStatus(Integer taskId, int status) { return taskGroupQueueMapper.updateStatusByTaskId(taskId, status); } public int updateTaskGroupQueue(TaskGroupQueue taskGroupQueue) { return taskGroupQueueMapper.updateById(taskGroupQueue); } public TaskGroupQueue loadTaskGroupQueue(int taskId) { return this.taskGroupQueueMapper.queryByTaskId(taskId); } public void sendStartTask2Master(ProcessInstance processInstance,int taskId, org.apache.dolphinscheduler.remote.command.CommandType taskType) { String host = processInstance.getHost(); String address = host.split(":")[0]; int port = Integer.parseInt(host.split(":")[1]); TaskEventChangeCommand taskEventChangeCommand = new TaskEventChangeCommand( processInstance.getId(), taskId ); stateEventCallbackService.sendResult(address, port, taskEventChangeCommand.convert2Command(taskType)); } public ProcessInstance loadNextProcess4Serial(long code, int state) { return this.processInstanceMapper.loadNextProcess4Serial(code, state); } private void deleteCommandWithCheck(int commandId) { int delete = this.commandMapper.deleteById(commandId); if (delete != 1) { throw new ServiceException("delete command fail, id:" + commandId); } } }
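The task group methods above boil down to bounded slot accounting: acquireTaskGroup admits a task only while the group has free capacity, otherwise the task waits in the queue table until releaseTaskGroup wakes the highest-priority waiter. What follows is a minimal, hypothetical in-memory analogue of that contract, for illustration only — the real code races through conditional UPDATE statements against t_ds_task_group (updateTaskGroupResource / releaseTaskGroupResource), not through an AtomicInteger, and the class and method names below are invented for the sketch:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical in-memory analogue of the task group slot accounting above.
public class TaskGroupSlotsSketch {
    private final int groupSize;                                // like t_ds_task_group.group_size
    private final AtomicInteger useSize = new AtomicInteger();  // like use_size

    public TaskGroupSlotsSketch(int groupSize) {
        this.groupSize = groupSize;
    }

    /** Acquire succeeds only while useSize < groupSize, mirroring the
     *  conditional-update style of acquireTaskGroup. */
    public boolean tryAcquire() {
        int current;
        do {
            current = useSize.get();
            if (current >= groupSize) {
                return false; // group full: caller stays in WAIT_QUEUE
            }
        } while (!useSize.compareAndSet(current, current + 1));
        return true;
    }

    /** Frees one slot; the real releaseTaskGroup then wakes the
     *  highest-priority queue row still in WAIT_QUEUE. */
    public void release() {
        useSize.updateAndGet(n -> Math.max(0, n - 1));
    }

    public static void main(String[] args) {
        TaskGroupSlotsSketch slots = new TaskGroupSlotsSketch(2);
        System.out.println(slots.tryAcquire()); // true
        System.out.println(slots.tryAcquire()); // true
        System.out.println(slots.tryAcquire()); // false, both slots taken
        slots.release();
        System.out.println(slots.tryAcquire()); // true again
    }
}
```

The conditional UPDATE in the database plays the same role as the compareAndSet loop here: both make acquisition atomic when several masters compete for the last slot.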
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,701
[Bug] [DAO] There's a redundant field named 'project_id' in the table of 't_ds_task_group'.
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened In the 't_ds_task_group' table there are two fields, 'project_code' and 'project_id'. Either of them can identify the project, and the 'TaskGroup' class doesn't use 'project_id', so it needs to be removed from the table, along with fixing a few wrong annotations. ![image](https://user-images.githubusercontent.com/4928204/147622409-d9407e03-a2eb-49fc-abd7-e67111027472.png) ### What you expected to happen I expect the redundant field 'project_id' to be removed from the 't_ds_task_group' table. ### How to reproduce See it in the project. ### Anything else _No response_ ### Version dev ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
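For illustration, a hedged sketch of the one-off migration this fix implies. The table and column names come from the issue text above; the JDBC URL, credentials, and MySQL dialect are placeholder assumptions, and the class name is invented:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Hypothetical one-off migration runner; URL and credentials are placeholders.
public class DropProjectIdColumnSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/dolphinscheduler";
        try (Connection conn = DriverManager.getConnection(url, "root", "root");
             Statement stmt = conn.createStatement()) {
            // drops the column the issue identifies as redundant;
            // back up t_ds_task_group before running anything like this
            stmt.executeUpdate("ALTER TABLE t_ds_task_group DROP COLUMN project_id");
        }
    }
}
```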
https://github.com/apache/dolphinscheduler/issues/7701
https://github.com/apache/dolphinscheduler/pull/7703
5ab0135d1831c9156388d8b593417e775e2b2b84
71ccb42697e197b2668a08a83eb9bb19261b4f5a
"2021-12-29T02:52:14Z"
java
"2021-12-29T04:57:36Z"
dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/controller/TaskGroupController.java
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.dolphinscheduler.api.controller; import io.swagger.annotations.Api; import io.swagger.annotations.ApiImplicitParam; import io.swagger.annotations.ApiImplicitParams; import io.swagger.annotations.ApiOperation; import org.apache.dolphinscheduler.api.aspect.AccessLogAnnotation; import org.apache.dolphinscheduler.api.exceptions.ApiException; import org.apache.dolphinscheduler.api.service.TaskGroupQueueService; import org.apache.dolphinscheduler.api.service.TaskGroupService; import org.apache.dolphinscheduler.api.utils.Result; import org.apache.dolphinscheduler.common.Constants; import org.apache.dolphinscheduler.dao.entity.User; import java.util.Map; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.http.HttpStatus; import org.springframework.web.bind.annotation.GetMapping; import org.springframework.web.bind.annotation.PostMapping; import org.springframework.web.bind.annotation.RequestAttribute; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.RequestParam; import org.springframework.web.bind.annotation.ResponseStatus; import org.springframework.web.bind.annotation.RestController; import springfox.documentation.annotations.ApiIgnore; import static org.apache.dolphinscheduler.api.enums.Status.CLOSE_TASK_GROUP_ERROR; import static org.apache.dolphinscheduler.api.enums.Status.CREATE_TASK_GROUP_ERROR; import static org.apache.dolphinscheduler.api.enums.Status.QUERY_TASK_GROUP_LIST_ERROR; import static org.apache.dolphinscheduler.api.enums.Status.QUERY_TASK_GROUP_QUEUE_LIST_ERROR; import static org.apache.dolphinscheduler.api.enums.Status.START_TASK_GROUP_ERROR; import static org.apache.dolphinscheduler.api.enums.Status.UPDATE_TASK_GROUP_ERROR; /** * task group controller */ @Api(tags = "task group") @RestController @RequestMapping("/task-group") public class TaskGroupController extends BaseController { @Autowired private TaskGroupService taskGroupService; /** * create a task group * * @param loginUser login user * @param name task group name * @param description task group description * @param groupSize group size * @param projectcode project code * @return result and msg code */ @ApiOperation(value = "create", notes = "CREATE_TAKS_GROUP_NOTE") @ApiImplicitParams({ @ApiImplicitParam(name = "name", value = "NAME", dataType = "String"), @ApiImplicitParam(name = "projectCode", value = "PROJECT_CODE", type = "Long"), @ApiImplicitParam(name = "description", value = "DESCRIPTION", dataType = "String"), @ApiImplicitParam(name = "groupSize", value = "GROUPSIZE", dataType = "Int"), }) @PostMapping(value = "/create") @ResponseStatus(HttpStatus.CREATED) @ApiException(CREATE_TASK_GROUP_ERROR) @AccessLogAnnotation(ignoreRequestArgs
= "loginUser") public Result createTaskGroup(@ApiIgnore @RequestAttribute(value = Constants.SESSION_USER) User loginUser, @RequestParam("name") String name, @RequestParam(value = "projectCode", required = false, defaultValue = "0") Long projectcode, @RequestParam("description") String description, @RequestParam("groupSize") Integer groupSize) { Map<String, Object> result = taskGroupService.createTaskGroup(loginUser, projectcode, name, description, groupSize); return returnDataList(result); } /** * update task group list * * @param loginUser login user * @param name name * @param description description * @param groupSize group size * @param name project id * @return result and msg code */ @ApiOperation(value = "update", notes = "UPDATE_TAKS_GROUP_NOTE") @ApiImplicitParams({ @ApiImplicitParam(name = "id", value = "id", dataType = "Int"), @ApiImplicitParam(name = "name", value = "NAME", dataType = "String"), @ApiImplicitParam(name = "description", value = "DESCRIPTION", dataType = "String"), @ApiImplicitParam(name = "groupSize", value = "GROUPSIZE", dataType = "Int"), }) @PostMapping(value = "/update") @ResponseStatus(HttpStatus.CREATED) @ApiException(UPDATE_TASK_GROUP_ERROR) @AccessLogAnnotation(ignoreRequestArgs = "loginUser") public Result updateTaskGroup(@ApiIgnore @RequestAttribute(value = Constants.SESSION_USER) User loginUser, @RequestParam("id") Integer id, @RequestParam("name") String name, @RequestParam("description") String description, @RequestParam("groupSize") Integer groupSize) { Map<String, Object> result = taskGroupService.updateTaskGroup(loginUser, id, name, description, groupSize); return returnDataList(result); } /** * query task group list paging * * @param loginUser login user * @param pageNo page number * @param pageSize page size * @return queue list */ @ApiOperation(value = "list-paging", notes = "QUERY_ALL_TASK_GROUP_NOTES") @ApiImplicitParams({ @ApiImplicitParam(name = "pageNo", value = "PAGE_NO", required = true, dataType = "Int", example = "1"), @ApiImplicitParam(name = "name", value = "NAME", required = false, dataType = "String"), @ApiImplicitParam(name = "pageSize", value = "PAGE_SIZE", required = true, dataType = "Int", example = "20") }) @GetMapping(value = "/list-paging") @ResponseStatus(HttpStatus.OK) @ApiException(QUERY_TASK_GROUP_LIST_ERROR) @AccessLogAnnotation(ignoreRequestArgs = "loginUser") public Result queryAllTaskGroup(@ApiIgnore @RequestAttribute(value = Constants.SESSION_USER) User loginUser, @RequestParam(value = "name", required = false) String name, @RequestParam(value = "status", required = false) Integer status, @RequestParam("pageNo") Integer pageNo, @RequestParam("pageSize") Integer pageSize) { Map<String, Object> result = taskGroupService.queryAllTaskGroup(loginUser, name, status, pageNo, pageSize); return returnDataList(result); } /** * query task group list paging * * @param loginUser login user * @param pageNo page number * @param status status * @param pageSize page size * @return queue list */ @ApiOperation(value = "queryTaskGroupByStatus", notes = "QUERY_TASK_GROUP_LIST_BY_STSATUS_NOTES") @ApiImplicitParams({ @ApiImplicitParam(name = "pageNo", value = "PAGE_NO", required = true, dataType = "Int", example = "1"), @ApiImplicitParam(name = "pageSize", value = "PAGE_SIZE", required = true, dataType = "Int", example = "20"), @ApiImplicitParam(name = "status", value = "status", required = true, dataType = "Int") }) @GetMapping(value = "/query-list-by-status") @ResponseStatus(HttpStatus.OK) @ApiException(QUERY_TASK_GROUP_LIST_ERROR) 
@AccessLogAnnotation(ignoreRequestArgs = "loginUser") public Result queryTaskGroupByStatus(@ApiIgnore @RequestAttribute(value = Constants.SESSION_USER) User loginUser, @RequestParam("pageNo") Integer pageNo, @RequestParam(value = "status", required = false) Integer status, @RequestParam("pageSize") Integer pageSize) { Map<String, Object> result = taskGroupService.queryTaskGroupByStatus(loginUser, pageNo, pageSize, status); return returnDataList(result); } /** * query task group list paging by project id * * @param loginUser login user * @param pageNo page number * @param projectCode project id * @param pageSize page size * @return queue list */ @ApiOperation(value = "queryTaskGroupByName", notes = "QUERY_TASK_GROUP_LIST_BY_PROJECT_ID_NOTES") @ApiImplicitParams({ @ApiImplicitParam(name = "pageNo", value = "PAGE_NO", required = true, dataType = "Int", example = "1"), @ApiImplicitParam(name = "pageSize", value = "PAGE_SIZE", required = true, dataType = "Int", example = "20"), @ApiImplicitParam(name = "projectCode", value = "PROJECT_CODE", required = true, dataType = "String") }) @GetMapping(value = "/query-list-by-projectCode") @ResponseStatus(HttpStatus.OK) @ApiException(QUERY_TASK_GROUP_LIST_ERROR) @AccessLogAnnotation(ignoreRequestArgs = "loginUser") public Result queryTaskGroupByCode(@ApiIgnore @RequestAttribute(value = Constants.SESSION_USER) User loginUser, @RequestParam("pageNo") Integer pageNo, @RequestParam(value = "projectCode", required = false) Long projectCode, @RequestParam("pageSize") Integer pageSize) { Map<String, Object> result = taskGroupService.queryTaskGroupByProjectCode(loginUser, pageNo, pageSize, projectCode); return returnDataList(result); } /** * close a task group * * @param loginUser login user * @param id id * @return result */ @ApiOperation(value = "closeTaskGroup", notes = "CLOSE_TASK_GROUP_NOTES") @ApiImplicitParams({ @ApiImplicitParam(name = "id", value = "ID", required = true, dataType = "Int") }) @PostMapping(value = "/close-task-group") @ResponseStatus(HttpStatus.CREATED) @ApiException(CLOSE_TASK_GROUP_ERROR) @AccessLogAnnotation(ignoreRequestArgs = "loginUser") public Result closeTaskGroup(@ApiIgnore @RequestAttribute(value = Constants.SESSION_USER) User loginUser, @RequestParam(value = "id", required = false) Integer id) { Map<String, Object> result = taskGroupService.closeTaskGroup(loginUser, id); return returnDataList(result); } /** * start a task group * * @param loginUser login user * @param id id * @return result */ @ApiOperation(value = "startTaskGroup", notes = "START_TASK_GROUP_NOTES") @ApiImplicitParams({ @ApiImplicitParam(name = "id", value = "ID", required = true, dataType = "Int") }) @PostMapping(value = "/start-task-group") @ResponseStatus(HttpStatus.CREATED) @ApiException(START_TASK_GROUP_ERROR) @AccessLogAnnotation(ignoreRequestArgs = "loginUser") public Result startTaskGroup(@ApiIgnore @RequestAttribute(value = Constants.SESSION_USER) User loginUser, @RequestParam(value = "id", required = false) Integer id) { Map<String, Object> result = taskGroupService.startTaskGroup(loginUser, id); return returnDataList(result); } /** * force start task without task group * * @param loginUser login user * @param queueId task group queue id * @return result */ @ApiOperation(value = "forceStart", notes = "WAKE_TASK_COMPULSIVELY_NOTES") @ApiImplicitParams({ @ApiImplicitParam(name = "queueId", value = "TASK_GROUP_QUEUEID", required = true, dataType = "Int") }) @PostMapping(value = "/forceStart") @ResponseStatus(HttpStatus.CREATED) 
@ApiException(START_TASK_GROUP_ERROR) @AccessLogAnnotation(ignoreRequestArgs = "loginUser") public Result forceStart(@ApiIgnore @RequestAttribute(value = Constants.SESSION_USER) User loginUser, @RequestParam(value = "queueId") Integer queueId) { Map<String, Object> result = taskGroupService.forceStartTask(loginUser, queueId); return returnDataList(result); } /** * modify the priority of a task group queue entry * * @param loginUser login user * @param queueId task group queue id * @param priority new priority * @return result */ @ApiOperation(value = "modifyPriority", notes = "WAKE_TASK_COMPULSIVELY_NOTES") @ApiImplicitParams({ @ApiImplicitParam(name = "queueId", value = "TASK_GROUP_QUEUEID", required = true, dataType = "Int"), @ApiImplicitParam(name = "priority", value = "TASK_GROUP_QUEUE_PRIORITY", required = true, dataType = "Int") }) @PostMapping(value = "/modifyPriority") @ResponseStatus(HttpStatus.CREATED) @ApiException(START_TASK_GROUP_ERROR) @AccessLogAnnotation(ignoreRequestArgs = "loginUser") public Result modifyPriority(@ApiIgnore @RequestAttribute(value = Constants.SESSION_USER) User loginUser, @RequestParam(value = "queueId") Integer queueId, @RequestParam(value = "priority") Integer priority) { Map<String, Object> result = taskGroupService.modifyPriority(loginUser, queueId, priority); return returnDataList(result); } @Autowired private TaskGroupQueueService taskGroupQueueService; /** * query task group queue list paging * * @param loginUser login user * @param pageNo page number * @param pageSize page size * @return queue list */ @ApiOperation(value = "queryTasksByGroupId", notes = "QUERY_ALL_TASKS_NOTES") @ApiImplicitParams({ @ApiImplicitParam(name = "groupId", value = "GROUP_ID", required = true, dataType = "Int", example = "1"), @ApiImplicitParam(name = "pageNo", value = "PAGE_NO", required = true, dataType = "Int", example = "1"), @ApiImplicitParam(name = "pageSize", value = "PAGE_SIZE", required = true, dataType = "Int", example = "20") }) @GetMapping(value = "/query-list-by-group-id") @ResponseStatus(HttpStatus.OK) @ApiException(QUERY_TASK_GROUP_QUEUE_LIST_ERROR) @AccessLogAnnotation(ignoreRequestArgs = "loginUser") public Result queryTasksByGroupId(@ApiIgnore @RequestAttribute(value = Constants.SESSION_USER) User loginUser, @RequestParam("groupId") Integer groupId, @RequestParam(value = "taskInstanceName", required = false) String taskName, @RequestParam(value = "processInstanceName", required = false) String processName, @RequestParam(value = "status", required = false) Integer status, @RequestParam("pageNo") Integer pageNo, @RequestParam("pageSize") Integer pageSize) { Map<String, Object> result = taskGroupQueueService.queryTasksByGroupId(loginUser, taskName, processName, status, groupId, pageNo, pageSize); return returnDataList(result); } }
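To make the endpoint shapes above concrete, here is a minimal client-side sketch of the create call. The form field names (name, projectCode, description, groupSize) come from the controller; the host, port, context path, and the sessionId header are placeholder assumptions, not values from the source:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical caller for POST /task-group/create as declared above (Java 11+).
public class CreateTaskGroupClientSketch {
    public static void main(String[] args) throws Exception {
        String form = "name=demo-group&projectCode=0&description=demo&groupSize=10";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:12345/dolphinscheduler/task-group/create"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .header("sessionId", "<session-id>") // auth mechanism assumed
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

The same pattern applies to the other POST endpoints above; only the path and form fields change.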
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,701
[Bug] [DAO] There's a redundant field named 'project_id' in the table of 't_ds_task_group'.
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened In the 't_ds_task_group' table there are two fields, 'project_code' and 'project_id'. Either of them can identify the project, and the 'TaskGroup' class doesn't use 'project_id', so it needs to be removed from the table, along with fixing a few wrong annotations. ![image](https://user-images.githubusercontent.com/4928204/147622409-d9407e03-a2eb-49fc-abd7-e67111027472.png) ### What you expected to happen I expect the redundant field 'project_id' to be removed from the 't_ds_task_group' table. ### How to reproduce See it in the project. ### Anything else _No response_ ### Version dev ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7701
https://github.com/apache/dolphinscheduler/pull/7703
5ab0135d1831c9156388d8b593417e775e2b2b84
71ccb42697e197b2668a08a83eb9bb19261b4f5a
"2021-12-29T02:52:14Z"
java
"2021-12-29T04:57:36Z"
dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/TaskGroupService.java
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.dolphinscheduler.api.service; import org.apache.dolphinscheduler.dao.entity.User; import java.util.Map; /** * task group service */ public interface TaskGroupService { /** * create a task group * * @param loginUser login user * @param projectCode project code * @param name task group name * @param description task group description * @param groupSize task group total size * @return the result code and msg */ Map<String, Object> createTaskGroup(User loginUser, Long projectCode, String name, String description, int groupSize); /** * update the task group * * @param loginUser login user * @param id task group id * @param name task group name * @param description task group description * @param groupSize task group total size * @return the result code and msg */ Map<String, Object> updateTaskGroup(User loginUser, int id, String name, String description, int groupSize); /** * get task group status * * @param id task group id * @return whether the task group is available */ boolean isTheTaskGroupAvailable(int id); /** * query all task group by user id * * @param loginUser login user * @param name task group name * @param status task group status * @param pageNo page no * @param pageSize page size * @return the result code and msg */ Map<String, Object> queryAllTaskGroup(User loginUser, String name, Integer status, int pageNo, int pageSize); /** * query all task group by status * * @param loginUser login user * @param pageNo page no * @param pageSize page size * @param status status * @return the result code and msg */ Map<String, Object> queryTaskGroupByStatus(User loginUser, int pageNo, int pageSize, int status); /** * query all task group by project code * * @param loginUser login user * @param pageNo page no * @param pageSize page size * @param projectCode project code * @return the result code and msg */ Map<String, Object> queryTaskGroupByProjectCode(User loginUser, int pageNo, int pageSize, Long projectCode); /** * query all task group by id * * @param loginUser login user * @param id id * @return the result code and msg */ Map<String, Object> queryTaskGroupById(User loginUser, int id); /** * query * * @param loginUser login user * @param pageNo page no * @param pageSize page size * @param userId user id * @param name name * @param status status * @return the result code and msg */ Map<String, Object> doQuery(User loginUser, int pageNo, int pageSize, int userId, String name, Integer status); /** * close a task group * * @param loginUser login user * @param id task group id * @return the result code and msg */ Map<String, Object> closeTaskGroup(User loginUser, int id); /** * start a task group * * @param loginUser login user * @param id task group id * @return the result code and msg */ Map<String, Object> startTaskGroup(User loginUser, int id); /** * force start a task in the task group queue manually * * @param loginUser login user * @param taskId task group queue id * @return result */ Map<String, Object> forceStartTask(User loginUser, int taskId); /** * modify the priority of a task group queue * * @param loginUser login user * @param queueId task group queue id * @param priority new priority * @return result */ Map<String, Object> modifyPriority(User loginUser, Integer queueId, Integer priority); }
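Every method in TaskGroupService above returns a bare Map<String, Object> rather than a typed result. A minimal caller-side sketch of unpacking that convention follows; the key names Constants.STATUS, Constants.MSG, and Constants.DATA_LIST match the putMsg/PageInfo usage visible in the implementation later in this record, but treat them as assumptions if your version differs.

```java
import java.util.Map;

import org.apache.dolphinscheduler.api.enums.Status;
import org.apache.dolphinscheduler.api.service.TaskGroupService;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.dao.entity.TaskGroup;
import org.apache.dolphinscheduler.dao.entity.User;

// Hedged caller-side sketch: unpacks the Map-based result convention of TaskGroupService.
public class TaskGroupResultSketch {

    public static TaskGroup fetchTaskGroup(TaskGroupService taskGroupService, User loginUser, int taskGroupId) {
        Map<String, Object> result = taskGroupService.queryTaskGroupById(loginUser, taskGroupId);
        // putMsg(...) stores the Status enum under Constants.STATUS and a message under Constants.MSG.
        if (result.get(Constants.STATUS) != Status.SUCCESS) {
            throw new IllegalStateException("query failed: " + result.get(Constants.MSG));
        }
        // queryTaskGroupById stores the entity under Constants.DATA_LIST (see the implementation below).
        return (TaskGroup) result.get(Constants.DATA_LIST);
    }
}
```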
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,701
[Bug] [DAO] There's a redundant field named 'project_id' in the table 't_ds_task_group'.
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened In the table 't_ds_task_group' there are two fields, 'project_code' and 'project_id'. Either of them can identify the project, and the 'TaskGroup' class doesn't use the 'project_id' field at all. So we need to remove it from the table and fix a few wrong annotations. ![image](https://user-images.githubusercontent.com/4928204/147622409-d9407e03-a2eb-49fc-abd7-e67111027472.png) ### What you expected to happen I expect the redundant field 'project_id' to be removed from the table 't_ds_task_group'. ### How to reproduce See it in the project. ### Anything else _No response_ ### Version dev ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7701
https://github.com/apache/dolphinscheduler/pull/7703
5ab0135d1831c9156388d8b593417e775e2b2b84
71ccb42697e197b2668a08a83eb9bb19261b4f5a
"2021-12-29T02:52:14Z"
java
"2021-12-29T04:57:36Z"
dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/service/impl/TaskGroupServiceImpl.java
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ package org.apache.dolphinscheduler.api.service.impl; import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper; import com.baomidou.mybatisplus.core.metadata.IPage; import com.baomidou.mybatisplus.extension.plugins.pagination.Page; import org.apache.dolphinscheduler.api.dto.gantt.Task; import org.apache.dolphinscheduler.api.enums.Status; import org.apache.dolphinscheduler.api.service.TaskGroupQueueService; import org.apache.dolphinscheduler.api.service.TaskGroupService; import org.apache.dolphinscheduler.api.utils.PageInfo; import org.apache.dolphinscheduler.common.Constants; import org.apache.dolphinscheduler.common.enums.Flag; import org.apache.dolphinscheduler.dao.entity.TaskGroup; import org.apache.dolphinscheduler.dao.entity.User; import org.apache.dolphinscheduler.dao.mapper.TaskGroupMapper; import org.apache.dolphinscheduler.service.process.ProcessService; import org.apache.dolphinscheduler.spi.utils.StringUtils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; import java.util.ArrayList; import java.util.Date; import java.util.HashMap; import java.util.List; import java.util.Map; /** * task Group Service */ @Service public class TaskGroupServiceImpl extends BaseServiceImpl implements TaskGroupService { @Autowired private TaskGroupMapper taskGroupMapper; @Autowired private TaskGroupQueueService taskGroupQueueService; @Autowired private ProcessService processService; private static final Logger logger = LoggerFactory.getLogger(TaskGroupServiceImpl.class); /** * create a Task group * * @param loginUser login user * @param name task group name * @param description task group description * @param groupSize task group total size * @return the result code and msg */ @Override public Map<String, Object> createTaskGroup(User loginUser, Long projectCode, String name, String description, int groupSize) { Map<String, Object> result = new HashMap<>(); if (isNotAdmin(loginUser, result)) { return result; } if (name == null) { putMsg(result, Status.NAME_NULL); return result; } if (groupSize <= 0) { putMsg(result, Status.TASK_GROUP_SIZE_ERROR); return result; } TaskGroup taskGroup1 = taskGroupMapper.queryByName(loginUser.getId(), name); if (taskGroup1 != null) { putMsg(result, Status.TASK_GROUP_NAME_EXSIT); return result; } TaskGroup taskGroup = new TaskGroup(name, projectCode, description, groupSize, loginUser.getId(), Flag.YES.getCode()); taskGroup.setCreateTime(new Date()); taskGroup.setUpdateTime(new Date()); if (taskGroupMapper.insert(taskGroup) > 0) { putMsg(result, Status.SUCCESS); } else { putMsg(result, Status.CREATE_TASK_GROUP_ERROR); return result; } return result; } /** * update the task group * 
* @param loginUser login user * @param name task group name * @param description task group description * @param groupSize task group total size * @return the result code and msg */ @Override public Map<String, Object> updateTaskGroup(User loginUser, int id, String name, String description, int groupSize) { Map<String, Object> result = new HashMap<>(); if (isNotAdmin(loginUser, result)) { return result; } if (name == null) { putMsg(result, Status.NAME_NULL); return result; } if (groupSize <= 0) { putMsg(result, Status.TASK_GROUP_SIZE_ERROR); return result; } Integer exists = taskGroupMapper.selectCount(new QueryWrapper<TaskGroup>().lambda().eq(TaskGroup::getName, name).ne(TaskGroup::getId, id)); if (exists > 0) { putMsg(result, Status.TASK_GROUP_NAME_EXSIT); return result; } TaskGroup taskGroup = taskGroupMapper.selectById(id); if (taskGroup.getStatus() != Flag.YES.getCode()) { putMsg(result, Status.TASK_GROUP_STATUS_ERROR); return result; } taskGroup.setGroupSize(groupSize); taskGroup.setDescription(description); taskGroup.setUpdateTime(new Date()); if (StringUtils.isNotEmpty(name)) { taskGroup.setName(name); } int i = taskGroupMapper.updateById(taskGroup); logger.info("update result:{}", i); putMsg(result, Status.SUCCESS); return result; } /** * get task group status * * @param id task group id * @return is the task group available */ @Override public boolean isTheTaskGroupAvailable(int id) { return taskGroupMapper.selectCountByIdStatus(id, Flag.YES.getCode()) == 1; } /** * query all task group by user id * * @param loginUser login user * @param pageNo page no * @param pageSize page size * @return the result code and msg */ @Override public Map<String, Object> queryAllTaskGroup(User loginUser, String name, Integer status, int pageNo, int pageSize) { return this.doQuery(loginUser, pageNo, pageSize, loginUser.getId(), name, status); } /** * query all task group by status * * @param loginUser login user * @param pageNo page no * @param pageSize page size * @param status status * @return the result code and msg */ @Override public Map<String, Object> queryTaskGroupByStatus(User loginUser, int pageNo, int pageSize, int status) { return this.doQuery(loginUser, pageNo, pageSize, loginUser.getId(), null, status); } /** * query all task group by name * * @param loginUser login user * @param pageNo page no * @param pageSize page size * @param projectCode project code * @return the result code and msg */ @Override public Map<String, Object> queryTaskGroupByProjectCode(User loginUser, int pageNo, int pageSize, Long projectCode) { Map<String, Object> result = new HashMap<>(); if (isNotAdmin(loginUser, result)) { return result; } Page<TaskGroup> page = new Page<>(pageNo, pageSize); IPage<TaskGroup> taskGroupPaging = taskGroupMapper.queryTaskGroupPagingByProjectCode(page, projectCode); return getStringObjectMap(pageNo, pageSize, result, taskGroupPaging); } private Map<String, Object> getStringObjectMap(int pageNo, int pageSize, Map<String, Object> result, IPage<TaskGroup> taskGroupPaging) { PageInfo<TaskGroup> pageInfo = new PageInfo<>(pageNo, pageSize); int total = taskGroupPaging == null ? 0 : (int) taskGroupPaging.getTotal(); List<TaskGroup> list = taskGroupPaging == null ? 
new ArrayList<TaskGroup>() : taskGroupPaging.getRecords(); pageInfo.setTotal(total); pageInfo.setTotalList(list); result.put(Constants.DATA_LIST, pageInfo); logger.info("select result:{}", taskGroupPaging); putMsg(result, Status.SUCCESS); return result; } /** * query all task group by id * * @param loginUser login user * @param id id * @return the result code and msg */ @Override public Map<String, Object> queryTaskGroupById(User loginUser, int id) { Map<String, Object> result = new HashMap<>(); if (isNotAdmin(loginUser, result)) { return result; } TaskGroup taskGroup = taskGroupMapper.selectById(id); result.put(Constants.DATA_LIST, taskGroup); putMsg(result, Status.SUCCESS); return result; } /** * query * * @param pageNo page no * @param pageSize page size * @param userId user id * @param name name * @param status status * @return the result code and msg */ @Override public Map<String, Object> doQuery(User loginUser, int pageNo, int pageSize, int userId, String name, Integer status) { Map<String, Object> result = new HashMap<>(); if (isNotAdmin(loginUser, result)) { return result; } Page<TaskGroup> page = new Page<>(pageNo, pageSize); IPage<TaskGroup> taskGroupPaging = taskGroupMapper.queryTaskGroupPaging(page, userId, name, status); return getStringObjectMap(pageNo, pageSize, result, taskGroupPaging); } /** * close a task group * * @param loginUser login user * @param id task group id * @return the result code and msg */ @Override public Map<String, Object> closeTaskGroup(User loginUser, int id) { Map<String, Object> result = new HashMap<>(); if (isNotAdmin(loginUser, result)) { return result; } TaskGroup taskGroup = taskGroupMapper.selectById(id); taskGroup.setStatus(Flag.NO.getCode()); taskGroupMapper.updateById(taskGroup); putMsg(result, Status.SUCCESS); return result; } /** * start a task group * * @param loginUser login user * @param id task group id * @return the result code and msg */ @Override public Map<String, Object> startTaskGroup(User loginUser, int id) { Map<String, Object> result = new HashMap<>(); if (isNotAdmin(loginUser, result)) { return result; } TaskGroup taskGroup = taskGroupMapper.selectById(id); if (taskGroup.getStatus() == 1) { putMsg(result, Status.TASK_GROUP_STATUS_ERROR); return result; } taskGroup.setStatus(1); taskGroup.setUpdateTime(new Date(System.currentTimeMillis())); int update = taskGroupMapper.updateById(taskGroup); putMsg(result, Status.SUCCESS); return result; } /** * wake a task manually * * @param loginUser * @param queueId task group queue id * @return result */ @Override public Map<String, Object> forceStartTask(User loginUser, int queueId) { Map<String, Object> result = new HashMap<>(); if (isNotAdmin(loginUser, result)) { return result; } taskGroupQueueService.forceStartTask(queueId, Flag.YES.getCode()); putMsg(result, Status.SUCCESS); return result; } @Override public Map<String, Object> modifyPriority(User loginUser, Integer queueId, Integer priority) { Map<String, Object> result = new HashMap<>(); if (isNotAdmin(loginUser, result)) { return result; } taskGroupQueueService.modifyPriority(queueId, priority); putMsg(result, Status.SUCCESS); return result; } }
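A minimal unit-test sketch for the modifyPriority path in the implementation above, using JUnit 4 and Mockito — both assumptions, since the test harness is not part of this record. It checks the admin gate and the delegation to TaskGroupQueueService.

```java
import static org.mockito.Mockito.verify;

import java.util.Map;

import org.apache.dolphinscheduler.api.enums.Status;
import org.apache.dolphinscheduler.api.service.TaskGroupQueueService;
import org.apache.dolphinscheduler.api.service.impl.TaskGroupServiceImpl;
import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.enums.UserType;
import org.apache.dolphinscheduler.dao.entity.User;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.MockitoJUnitRunner;

// Hedged test sketch: verifies modifyPriority() delegates to the queue service for an admin user.
@RunWith(MockitoJUnitRunner.class)
public class TaskGroupServiceImplSketchTest {

    @InjectMocks
    private TaskGroupServiceImpl taskGroupService;

    @Mock
    private TaskGroupQueueService taskGroupQueueService;

    @Test
    public void modifyPriorityDelegatesForAdmin() {
        User admin = new User();
        admin.setUserType(UserType.ADMIN_USER); // isNotAdmin(...) should pass for this user

        Map<String, Object> result = taskGroupService.modifyPriority(admin, 1, 5);

        Assert.assertEquals(Status.SUCCESS, result.get(Constants.STATUS));
        verify(taskGroupQueueService).modifyPriority(1, 5);
    }
}
```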
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,701
[Bug] [DAO] There's a redundant field named 'project_id' in the table 't_ds_task_group'.
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened In the table 't_ds_task_group' there are two fields, 'project_code' and 'project_id'. Either of them can identify the project, and the 'TaskGroup' class doesn't use the 'project_id' field at all. So we need to remove it from the table and fix a few wrong annotations. ![image](https://user-images.githubusercontent.com/4928204/147622409-d9407e03-a2eb-49fc-abd7-e67111027472.png) ### What you expected to happen I expect the redundant field 'project_id' to be removed from the table 't_ds_task_group'. ### How to reproduce See it in the project. ### Anything else _No response_ ### Version dev ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7701
https://github.com/apache/dolphinscheduler/pull/7703
5ab0135d1831c9156388d8b593417e775e2b2b84
71ccb42697e197b2668a08a83eb9bb19261b4f5a
"2021-12-29T02:52:14Z"
java
"2021-12-29T04:57:36Z"
dolphinscheduler-dao/src/main/resources/sql/dolphinscheduler_h2.sql
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ SET FOREIGN_KEY_CHECKS=0; SET REFERENTIAL_INTEGRITY FALSE; -- ---------------------------- -- Table structure for QRTZ_JOB_DETAILS -- ---------------------------- DROP TABLE IF EXISTS QRTZ_JOB_DETAILS CASCADE; CREATE TABLE QRTZ_JOB_DETAILS ( SCHED_NAME varchar(120) NOT NULL, JOB_NAME varchar(200) NOT NULL, JOB_GROUP varchar(200) NOT NULL, DESCRIPTION varchar(250) DEFAULT NULL, JOB_CLASS_NAME varchar(250) NOT NULL, IS_DURABLE boolean NOT NULL, IS_NONCONCURRENT boolean NOT NULL, IS_UPDATE_DATA boolean NOT NULL, REQUESTS_RECOVERY boolean NOT NULL, JOB_DATA blob, PRIMARY KEY (SCHED_NAME, JOB_NAME, JOB_GROUP) ); -- ---------------------------- -- Table structure for QRTZ_TRIGGERS -- ---------------------------- DROP TABLE IF EXISTS QRTZ_TRIGGERS CASCADE; CREATE TABLE QRTZ_TRIGGERS ( SCHED_NAME varchar(120) NOT NULL, TRIGGER_NAME varchar(200) NOT NULL, TRIGGER_GROUP varchar(200) NOT NULL, JOB_NAME varchar(200) NOT NULL, JOB_GROUP varchar(200) NOT NULL, DESCRIPTION varchar(250) DEFAULT NULL, NEXT_FIRE_TIME bigint(13) DEFAULT NULL, PREV_FIRE_TIME bigint(13) DEFAULT NULL, PRIORITY int(11) DEFAULT NULL, TRIGGER_STATE varchar(16) NOT NULL, TRIGGER_TYPE varchar(8) NOT NULL, START_TIME bigint(13) NOT NULL, END_TIME bigint(13) DEFAULT NULL, CALENDAR_NAME varchar(200) DEFAULT NULL, MISFIRE_INSTR smallint(2) DEFAULT NULL, JOB_DATA blob, PRIMARY KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP), CONSTRAINT QRTZ_TRIGGERS_ibfk_1 FOREIGN KEY (SCHED_NAME, JOB_NAME, JOB_GROUP) REFERENCES QRTZ_JOB_DETAILS (SCHED_NAME, JOB_NAME, JOB_GROUP) ); -- ---------------------------- -- Table structure for QRTZ_BLOB_TRIGGERS -- ---------------------------- DROP TABLE IF EXISTS QRTZ_BLOB_TRIGGERS CASCADE; CREATE TABLE QRTZ_BLOB_TRIGGERS ( SCHED_NAME varchar(120) NOT NULL, TRIGGER_NAME varchar(200) NOT NULL, TRIGGER_GROUP varchar(200) NOT NULL, BLOB_DATA blob, PRIMARY KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP), FOREIGN KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP) REFERENCES QRTZ_TRIGGERS (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP) ); -- ---------------------------- -- Records of QRTZ_BLOB_TRIGGERS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_CALENDARS -- ---------------------------- DROP TABLE IF EXISTS QRTZ_CALENDARS CASCADE; CREATE TABLE QRTZ_CALENDARS ( SCHED_NAME varchar(120) NOT NULL, CALENDAR_NAME varchar(200) NOT NULL, CALENDAR blob NOT NULL, PRIMARY KEY (SCHED_NAME, CALENDAR_NAME) ); -- ---------------------------- -- Records of QRTZ_CALENDARS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_CRON_TRIGGERS -- ---------------------------- DROP TABLE IF EXISTS QRTZ_CRON_TRIGGERS CASCADE; CREATE TABLE QRTZ_CRON_TRIGGERS ( SCHED_NAME varchar(120) NOT NULL, TRIGGER_NAME 
varchar(200) NOT NULL, TRIGGER_GROUP varchar(200) NOT NULL, CRON_EXPRESSION varchar(120) NOT NULL, TIME_ZONE_ID varchar(80) DEFAULT NULL, PRIMARY KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP), CONSTRAINT QRTZ_CRON_TRIGGERS_ibfk_1 FOREIGN KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP) REFERENCES QRTZ_TRIGGERS (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP) ); -- ---------------------------- -- Records of QRTZ_CRON_TRIGGERS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_FIRED_TRIGGERS -- ---------------------------- DROP TABLE IF EXISTS QRTZ_FIRED_TRIGGERS CASCADE; CREATE TABLE QRTZ_FIRED_TRIGGERS ( SCHED_NAME varchar(120) NOT NULL, ENTRY_ID varchar(200) NOT NULL, TRIGGER_NAME varchar(200) NOT NULL, TRIGGER_GROUP varchar(200) NOT NULL, INSTANCE_NAME varchar(200) NOT NULL, FIRED_TIME bigint(13) NOT NULL, SCHED_TIME bigint(13) NOT NULL, PRIORITY int(11) NOT NULL, STATE varchar(16) NOT NULL, JOB_NAME varchar(200) DEFAULT NULL, JOB_GROUP varchar(200) DEFAULT NULL, IS_NONCONCURRENT boolean DEFAULT NULL, REQUESTS_RECOVERY boolean DEFAULT NULL, PRIMARY KEY (SCHED_NAME, ENTRY_ID) ); -- ---------------------------- -- Records of QRTZ_FIRED_TRIGGERS -- ---------------------------- -- ---------------------------- -- Records of QRTZ_JOB_DETAILS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_LOCKS -- ---------------------------- DROP TABLE IF EXISTS QRTZ_LOCKS CASCADE; CREATE TABLE QRTZ_LOCKS ( SCHED_NAME varchar(120) NOT NULL, LOCK_NAME varchar(40) NOT NULL, PRIMARY KEY (SCHED_NAME, LOCK_NAME) ); -- ---------------------------- -- Records of QRTZ_LOCKS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_PAUSED_TRIGGER_GRPS -- ---------------------------- DROP TABLE IF EXISTS QRTZ_PAUSED_TRIGGER_GRPS CASCADE; CREATE TABLE QRTZ_PAUSED_TRIGGER_GRPS ( SCHED_NAME varchar(120) NOT NULL, TRIGGER_GROUP varchar(200) NOT NULL, PRIMARY KEY (SCHED_NAME, TRIGGER_GROUP) ); -- ---------------------------- -- Records of QRTZ_PAUSED_TRIGGER_GRPS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_SCHEDULER_STATE -- ---------------------------- DROP TABLE IF EXISTS QRTZ_SCHEDULER_STATE CASCADE; CREATE TABLE QRTZ_SCHEDULER_STATE ( SCHED_NAME varchar(120) NOT NULL, INSTANCE_NAME varchar(200) NOT NULL, LAST_CHECKIN_TIME bigint(13) NOT NULL, CHECKIN_INTERVAL bigint(13) NOT NULL, PRIMARY KEY (SCHED_NAME, INSTANCE_NAME) ); -- ---------------------------- -- Records of QRTZ_SCHEDULER_STATE -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_SIMPLE_TRIGGERS -- ---------------------------- DROP TABLE IF EXISTS QRTZ_SIMPLE_TRIGGERS CASCADE; CREATE TABLE QRTZ_SIMPLE_TRIGGERS ( SCHED_NAME varchar(120) NOT NULL, TRIGGER_NAME varchar(200) NOT NULL, TRIGGER_GROUP varchar(200) NOT NULL, REPEAT_COUNT bigint(7) NOT NULL, REPEAT_INTERVAL bigint(12) NOT NULL, TIMES_TRIGGERED bigint(10) NOT NULL, PRIMARY KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP), CONSTRAINT QRTZ_SIMPLE_TRIGGERS_ibfk_1 FOREIGN KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP) REFERENCES QRTZ_TRIGGERS (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP) ); -- ---------------------------- -- Records of QRTZ_SIMPLE_TRIGGERS -- ---------------------------- -- ---------------------------- -- Table structure for QRTZ_SIMPROP_TRIGGERS -- ---------------------------- DROP TABLE IF EXISTS QRTZ_SIMPROP_TRIGGERS CASCADE; CREATE TABLE QRTZ_SIMPROP_TRIGGERS ( SCHED_NAME varchar(120) NOT NULL, 
TRIGGER_NAME varchar(200) NOT NULL, TRIGGER_GROUP varchar(200) NOT NULL, STR_PROP_1 varchar(512) DEFAULT NULL, STR_PROP_2 varchar(512) DEFAULT NULL, STR_PROP_3 varchar(512) DEFAULT NULL, INT_PROP_1 int(11) DEFAULT NULL, INT_PROP_2 int(11) DEFAULT NULL, LONG_PROP_1 bigint(20) DEFAULT NULL, LONG_PROP_2 bigint(20) DEFAULT NULL, DEC_PROP_1 decimal(13, 4) DEFAULT NULL, DEC_PROP_2 decimal(13, 4) DEFAULT NULL, BOOL_PROP_1 boolean DEFAULT NULL, BOOL_PROP_2 boolean DEFAULT NULL, PRIMARY KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP), CONSTRAINT QRTZ_SIMPROP_TRIGGERS_ibfk_1 FOREIGN KEY (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP) REFERENCES QRTZ_TRIGGERS (SCHED_NAME, TRIGGER_NAME, TRIGGER_GROUP) ); -- ---------------------------- -- Records of QRTZ_SIMPROP_TRIGGERS -- ---------------------------- -- ---------------------------- -- Records of QRTZ_TRIGGERS -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_access_token -- ---------------------------- DROP TABLE IF EXISTS t_ds_access_token CASCADE; CREATE TABLE t_ds_access_token ( id int(11) NOT NULL AUTO_INCREMENT, user_id int(11) DEFAULT NULL, token varchar(64) DEFAULT NULL, expire_time datetime DEFAULT NULL, create_time datetime DEFAULT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_access_token -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_alert -- ---------------------------- DROP TABLE IF EXISTS t_ds_alert CASCADE; CREATE TABLE t_ds_alert ( id int(11) NOT NULL AUTO_INCREMENT, title varchar(64) DEFAULT NULL, content text, alert_status tinyint(4) DEFAULT '0', log text, alertgroup_id int(11) DEFAULT NULL, create_time datetime DEFAULT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_alert -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_alertgroup -- ---------------------------- DROP TABLE IF EXISTS t_ds_alertgroup CASCADE; CREATE TABLE t_ds_alertgroup ( id int(11) NOT NULL AUTO_INCREMENT, alert_instance_ids varchar(255) DEFAULT NULL, create_user_id int(11) DEFAULT NULL, group_name varchar(255) DEFAULT NULL, description varchar(255) DEFAULT NULL, create_time datetime DEFAULT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id), UNIQUE KEY t_ds_alertgroup_name_un (group_name) ); -- ---------------------------- -- Records of t_ds_alertgroup -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_command -- ---------------------------- DROP TABLE IF EXISTS t_ds_command CASCADE; CREATE TABLE t_ds_command ( id int(11) NOT NULL AUTO_INCREMENT, command_type tinyint(4) DEFAULT NULL, process_definition_code bigint(20) DEFAULT NULL, command_param text, task_depend_type tinyint(4) DEFAULT NULL, failure_strategy tinyint(4) DEFAULT '0', warning_type tinyint(4) DEFAULT '0', warning_group_id int(11) DEFAULT NULL, schedule_time datetime DEFAULT NULL, start_time datetime DEFAULT NULL, executor_id int(11) DEFAULT NULL, update_time datetime DEFAULT NULL, process_instance_priority int(11) DEFAULT NULL, worker_group varchar(64), environment_code bigint(20) DEFAULT '-1', dry_run int NULL DEFAULT 0, process_instance_id int(11) DEFAULT 0, process_definition_version int(11) DEFAULT 0, PRIMARY KEY (id), KEY priority_id_index (process_instance_priority, id) ); -- ---------------------------- -- Records of t_ds_command -- ---------------------------- -- ---------------------------- -- 
Table structure for t_ds_datasource -- ---------------------------- DROP TABLE IF EXISTS t_ds_datasource CASCADE; CREATE TABLE t_ds_datasource ( id int(11) NOT NULL AUTO_INCREMENT, name varchar(64) NOT NULL, note varchar(255) DEFAULT NULL, type tinyint(4) NOT NULL, user_id int(11) NOT NULL, connection_params text NOT NULL, create_time datetime NOT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id), UNIQUE KEY t_ds_datasource_name_un (name, type) ); -- ---------------------------- -- Records of t_ds_datasource -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_error_command -- ---------------------------- DROP TABLE IF EXISTS t_ds_error_command CASCADE; CREATE TABLE t_ds_error_command ( id int(11) NOT NULL, command_type tinyint(4) DEFAULT NULL, executor_id int(11) DEFAULT NULL, process_definition_code bigint(20) DEFAULT NULL, command_param text, task_depend_type tinyint(4) DEFAULT NULL, failure_strategy tinyint(4) DEFAULT '0', warning_type tinyint(4) DEFAULT '0', warning_group_id int(11) DEFAULT NULL, schedule_time datetime DEFAULT NULL, start_time datetime DEFAULT NULL, update_time datetime DEFAULT NULL, process_instance_priority int(11) DEFAULT NULL, worker_group varchar(64), environment_code bigint(20) DEFAULT '-1', message text, dry_run int NULL DEFAULT 0, process_instance_id int(11) DEFAULT 0, process_definition_version int(11) DEFAULT 0, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_error_command -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_process_definition -- ---------------------------- DROP TABLE IF EXISTS t_ds_process_definition CASCADE; CREATE TABLE t_ds_process_definition ( id int(11) NOT NULL AUTO_INCREMENT, code bigint(20) NOT NULL, name varchar(255) DEFAULT NULL, version int(11) DEFAULT NULL, description text, project_code bigint(20) NOT NULL, release_state tinyint(4) DEFAULT NULL, user_id int(11) DEFAULT NULL, global_params text, flag tinyint(4) DEFAULT NULL, locations text, warning_group_id int(11) DEFAULT NULL, timeout int(11) DEFAULT '0', tenant_id int(11) NOT NULL DEFAULT '-1', execution_type tinyint(4) DEFAULT '0', create_time datetime NOT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id), UNIQUE KEY process_unique (name,project_code) USING BTREE, UNIQUE KEY code_unique (code) ); -- ---------------------------- -- Records of t_ds_process_definition -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_process_definition_log -- ---------------------------- DROP TABLE IF EXISTS t_ds_process_definition_log CASCADE; CREATE TABLE t_ds_process_definition_log ( id int(11) NOT NULL AUTO_INCREMENT, code bigint(20) NOT NULL, name varchar(200) DEFAULT NULL, version int(11) DEFAULT NULL, description text, project_code bigint(20) NOT NULL, release_state tinyint(4) DEFAULT NULL, user_id int(11) DEFAULT NULL, global_params text, flag tinyint(4) DEFAULT NULL, locations text, warning_group_id int(11) DEFAULT NULL, timeout int(11) DEFAULT '0', tenant_id int(11) NOT NULL DEFAULT '-1', execution_type tinyint(4) DEFAULT '0', operator int(11) DEFAULT NULL, operate_time datetime DEFAULT NULL, create_time datetime NOT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Table structure for t_ds_task_definition -- ---------------------------- DROP TABLE IF EXISTS t_ds_task_definition CASCADE; CREATE TABLE t_ds_task_definition ( id int(11) NOT NULL AUTO_INCREMENT, code bigint(20) NOT 
NULL, name varchar(200) DEFAULT NULL, version int(11) DEFAULT NULL, description text, project_code bigint(20) NOT NULL, user_id int(11) DEFAULT NULL, task_type varchar(50) NOT NULL, task_params longtext, flag tinyint(2) DEFAULT NULL, task_priority tinyint(4) DEFAULT NULL, worker_group varchar(200) DEFAULT NULL, environment_code bigint(20) DEFAULT '-1', fail_retry_times int(11) DEFAULT NULL, fail_retry_interval int(11) DEFAULT NULL, timeout_flag tinyint(2) DEFAULT '0', timeout_notify_strategy tinyint(4) DEFAULT NULL, timeout int(11) DEFAULT '0', delay_time int(11) DEFAULT '0', task_group_id int(11) DEFAULT NULL, task_group_priority tinyint(4) DEFAULT '0', resource_ids text, create_time datetime NOT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id, code) ); -- ---------------------------- -- Table structure for t_ds_task_definition_log -- ---------------------------- DROP TABLE IF EXISTS t_ds_task_definition_log CASCADE; CREATE TABLE t_ds_task_definition_log ( id int(11) NOT NULL AUTO_INCREMENT, code bigint(20) NOT NULL, name varchar(200) DEFAULT NULL, version int(11) DEFAULT NULL, description text, project_code bigint(20) NOT NULL, user_id int(11) DEFAULT NULL, task_type varchar(50) NOT NULL, task_params text, flag tinyint(2) DEFAULT NULL, task_priority tinyint(4) DEFAULT NULL, worker_group varchar(200) DEFAULT NULL, environment_code bigint(20) DEFAULT '-1', fail_retry_times int(11) DEFAULT NULL, fail_retry_interval int(11) DEFAULT NULL, timeout_flag tinyint(2) DEFAULT '0', timeout_notify_strategy tinyint(4) DEFAULT NULL, timeout int(11) DEFAULT '0', delay_time int(11) DEFAULT '0', resource_ids text, operator int(11) DEFAULT NULL, task_group_id int(11) DEFAULT NULL, task_group_priority tinyint(4) DEFAULT '0', operate_time datetime DEFAULT NULL, create_time datetime NOT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Table structure for t_ds_process_task_relation -- ---------------------------- DROP TABLE IF EXISTS t_ds_process_task_relation CASCADE; CREATE TABLE t_ds_process_task_relation ( id int(11) NOT NULL AUTO_INCREMENT, name varchar(200) DEFAULT NULL, process_definition_version int(11) DEFAULT NULL, project_code bigint(20) NOT NULL, process_definition_code bigint(20) NOT NULL, pre_task_code bigint(20) NOT NULL, pre_task_version int(11) NOT NULL, post_task_code bigint(20) NOT NULL, post_task_version int(11) NOT NULL, condition_type tinyint(2) DEFAULT NULL, condition_params text, create_time datetime NOT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Table structure for t_ds_process_task_relation_log -- ---------------------------- DROP TABLE IF EXISTS t_ds_process_task_relation_log CASCADE; CREATE TABLE t_ds_process_task_relation_log ( id int(11) NOT NULL AUTO_INCREMENT, name varchar(200) DEFAULT NULL, process_definition_version int(11) DEFAULT NULL, project_code bigint(20) NOT NULL, process_definition_code bigint(20) NOT NULL, pre_task_code bigint(20) NOT NULL, pre_task_version int(11) NOT NULL, post_task_code bigint(20) NOT NULL, post_task_version int(11) NOT NULL, condition_type tinyint(2) DEFAULT NULL, condition_params text, operator int(11) DEFAULT NULL, operate_time datetime DEFAULT NULL, create_time datetime NOT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Table structure for t_ds_process_instance -- ---------------------------- DROP TABLE IF EXISTS t_ds_process_instance CASCADE; CREATE TABLE t_ds_process_instance ( id 
int(11) NOT NULL AUTO_INCREMENT, name varchar(255) DEFAULT NULL, process_definition_version int(11) DEFAULT NULL, process_definition_code bigint(20) not NULL, state tinyint(4) DEFAULT NULL, recovery tinyint(4) DEFAULT NULL, start_time datetime DEFAULT NULL, end_time datetime DEFAULT NULL, run_times int(11) DEFAULT NULL, host varchar(135) DEFAULT NULL, command_type tinyint(4) DEFAULT NULL, command_param text, task_depend_type tinyint(4) DEFAULT NULL, max_try_times tinyint(4) DEFAULT '0', failure_strategy tinyint(4) DEFAULT '0', warning_type tinyint(4) DEFAULT '0', warning_group_id int(11) DEFAULT NULL, schedule_time datetime DEFAULT NULL, command_start_time datetime DEFAULT NULL, global_params text, flag tinyint(4) DEFAULT '1', update_time timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, is_sub_process int(11) DEFAULT '0', executor_id int(11) NOT NULL, history_cmd text, process_instance_priority int(11) DEFAULT NULL, worker_group varchar(64) DEFAULT NULL, environment_code bigint(20) DEFAULT '-1', timeout int(11) DEFAULT '0', next_process_instance_id int(11) DEFAULT '0', tenant_id int(11) NOT NULL DEFAULT '-1', var_pool longtext, dry_run int NULL DEFAULT 0, restart_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_process_instance -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_project -- ---------------------------- DROP TABLE IF EXISTS t_ds_project CASCADE; CREATE TABLE t_ds_project ( id int(11) NOT NULL AUTO_INCREMENT, name varchar(100) DEFAULT NULL, code bigint(20) NOT NULL, description varchar(200) DEFAULT NULL, user_id int(11) DEFAULT NULL, flag tinyint(4) DEFAULT '1', create_time datetime NOT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_project -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_queue -- ---------------------------- DROP TABLE IF EXISTS t_ds_queue CASCADE; CREATE TABLE t_ds_queue ( id int(11) NOT NULL AUTO_INCREMENT, queue_name varchar(64) DEFAULT NULL, queue varchar(64) DEFAULT NULL, create_time datetime DEFAULT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_queue -- ---------------------------- INSERT INTO t_ds_queue VALUES ('1', 'default', 'default', null, null); -- ---------------------------- -- Table structure for t_ds_relation_datasource_user -- ---------------------------- DROP TABLE IF EXISTS t_ds_relation_datasource_user CASCADE; CREATE TABLE t_ds_relation_datasource_user ( id int(11) NOT NULL AUTO_INCREMENT, user_id int(11) NOT NULL, datasource_id int(11) DEFAULT NULL, perm int(11) DEFAULT '1', create_time datetime DEFAULT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_relation_datasource_user -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_relation_process_instance -- ---------------------------- DROP TABLE IF EXISTS t_ds_relation_process_instance CASCADE; CREATE TABLE t_ds_relation_process_instance ( id int(11) NOT NULL AUTO_INCREMENT, parent_process_instance_id int(11) DEFAULT NULL, parent_task_instance_id int(11) DEFAULT NULL, process_instance_id int(11) DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_relation_process_instance -- ---------------------------- -- ---------------------------- -- Table structure for 
t_ds_relation_project_user -- ---------------------------- DROP TABLE IF EXISTS t_ds_relation_project_user CASCADE; CREATE TABLE t_ds_relation_project_user ( id int(11) NOT NULL AUTO_INCREMENT, user_id int(11) NOT NULL, project_id int(11) DEFAULT NULL, perm int(11) DEFAULT '1', create_time datetime DEFAULT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_relation_project_user -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_relation_resources_user -- ---------------------------- DROP TABLE IF EXISTS t_ds_relation_resources_user CASCADE; CREATE TABLE t_ds_relation_resources_user ( id int(11) NOT NULL AUTO_INCREMENT, user_id int(11) NOT NULL, resources_id int(11) DEFAULT NULL, perm int(11) DEFAULT '1', create_time datetime DEFAULT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_relation_resources_user -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_relation_udfs_user -- ---------------------------- DROP TABLE IF EXISTS t_ds_relation_udfs_user CASCADE; CREATE TABLE t_ds_relation_udfs_user ( id int(11) NOT NULL AUTO_INCREMENT, user_id int(11) NOT NULL, udf_id int(11) DEFAULT NULL, perm int(11) DEFAULT '1', create_time datetime DEFAULT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Table structure for t_ds_resources -- ---------------------------- DROP TABLE IF EXISTS t_ds_resources CASCADE; CREATE TABLE t_ds_resources ( id int(11) NOT NULL AUTO_INCREMENT, alias varchar(64) DEFAULT NULL, file_name varchar(64) DEFAULT NULL, description varchar(255) DEFAULT NULL, user_id int(11) DEFAULT NULL, type tinyint(4) DEFAULT NULL, size bigint(20) DEFAULT NULL, create_time datetime DEFAULT NULL, update_time datetime DEFAULT NULL, pid int(11) DEFAULT NULL, full_name varchar(64) DEFAULT NULL, is_directory tinyint(4) DEFAULT NULL, PRIMARY KEY (id), UNIQUE KEY t_ds_resources_un (full_name, type) ); -- ---------------------------- -- Records of t_ds_resources -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_schedules -- ---------------------------- DROP TABLE IF EXISTS t_ds_schedules CASCADE; CREATE TABLE t_ds_schedules ( id int(11) NOT NULL AUTO_INCREMENT, process_definition_code bigint(20) NOT NULL, start_time datetime NOT NULL, end_time datetime NOT NULL, timezone_id varchar(40) DEFAULT NULL, crontab varchar(255) NOT NULL, failure_strategy tinyint(4) NOT NULL, user_id int(11) NOT NULL, release_state tinyint(4) NOT NULL, warning_type tinyint(4) NOT NULL, warning_group_id int(11) DEFAULT NULL, process_instance_priority int(11) DEFAULT NULL, worker_group varchar(64) DEFAULT '', environment_code bigint(20) DEFAULT '-1', create_time datetime NOT NULL, update_time datetime NOT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_schedules -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_session -- ---------------------------- DROP TABLE IF EXISTS t_ds_session CASCADE; CREATE TABLE t_ds_session ( id varchar(64) NOT NULL, user_id int(11) DEFAULT NULL, ip varchar(45) DEFAULT NULL, last_login_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_session -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_task_instance -- ---------------------------- DROP TABLE IF 
EXISTS t_ds_task_instance CASCADE; CREATE TABLE t_ds_task_instance ( id int(11) NOT NULL AUTO_INCREMENT, name varchar(255) DEFAULT NULL, task_type varchar(50) NOT NULL, task_code bigint(20) NOT NULL, task_definition_version int(11) DEFAULT NULL, process_instance_id int(11) DEFAULT NULL, state tinyint(4) DEFAULT NULL, submit_time datetime DEFAULT NULL, start_time datetime DEFAULT NULL, end_time datetime DEFAULT NULL, host varchar(135) DEFAULT NULL, execute_path varchar(200) DEFAULT NULL, log_path varchar(200) DEFAULT NULL, alert_flag tinyint(4) DEFAULT NULL, retry_times int(4) DEFAULT '0', pid int(4) DEFAULT NULL, app_link text, task_params longtext, flag tinyint(4) DEFAULT '1', retry_interval int(4) DEFAULT NULL, max_retry_times int(2) DEFAULT NULL, task_instance_priority int(11) DEFAULT NULL, worker_group varchar(64) DEFAULT NULL, environment_code bigint(20) DEFAULT '-1', environment_config text DEFAULT '', executor_id int(11) DEFAULT NULL, first_submit_time datetime DEFAULT NULL, delay_time int(4) DEFAULT '0', task_group_id int(11) DEFAULT NULL, var_pool longtext, dry_run int NULL DEFAULT 0, PRIMARY KEY (id), FOREIGN KEY (process_instance_id) REFERENCES t_ds_process_instance (id) ON DELETE CASCADE ); -- ---------------------------- -- Records of t_ds_task_instance -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_tenant -- ---------------------------- DROP TABLE IF EXISTS t_ds_tenant CASCADE; CREATE TABLE t_ds_tenant ( id int(11) NOT NULL AUTO_INCREMENT, tenant_code varchar(64) DEFAULT NULL, description varchar(255) DEFAULT NULL, queue_id int(11) DEFAULT NULL, create_time datetime DEFAULT NULL, update_time datetime DEFAULT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_tenant -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_udfs -- ---------------------------- DROP TABLE IF EXISTS t_ds_udfs CASCADE; CREATE TABLE t_ds_udfs ( id int(11) NOT NULL AUTO_INCREMENT, user_id int(11) NOT NULL, func_name varchar(100) NOT NULL, class_name varchar(255) NOT NULL, type tinyint(4) NOT NULL, arg_types varchar(255) DEFAULT NULL, database varchar(255) DEFAULT NULL, description varchar(255) DEFAULT NULL, resource_id int(11) NOT NULL, resource_name varchar(255) NOT NULL, create_time datetime NOT NULL, update_time datetime NOT NULL, PRIMARY KEY (id) ); -- ---------------------------- -- Records of t_ds_udfs -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_user -- ---------------------------- DROP TABLE IF EXISTS t_ds_user CASCADE; CREATE TABLE t_ds_user ( id int(11) NOT NULL AUTO_INCREMENT, user_name varchar(64) DEFAULT NULL, user_password varchar(64) DEFAULT NULL, user_type tinyint(4) DEFAULT NULL, email varchar(64) DEFAULT NULL, phone varchar(11) DEFAULT NULL, tenant_id int(11) DEFAULT NULL, create_time datetime DEFAULT NULL, update_time datetime DEFAULT NULL, queue varchar(64) DEFAULT NULL, state int(1) DEFAULT 1, PRIMARY KEY (id), UNIQUE KEY user_name_unique (user_name) ); -- ---------------------------- -- Records of t_ds_user -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_worker_group -- ---------------------------- DROP TABLE IF EXISTS t_ds_worker_group CASCADE; CREATE TABLE t_ds_worker_group ( id bigint(11) NOT NULL AUTO_INCREMENT, name varchar(255) NOT NULL, addr_list text NULL DEFAULT NULL, create_time datetime NULL DEFAULT NULL, update_time datetime NULL DEFAULT NULL, PRIMARY KEY (id), UNIQUE KEY 
name_unique (name) ); -- ---------------------------- -- Records of t_ds_worker_group -- ---------------------------- -- ---------------------------- -- Table structure for t_ds_version -- ---------------------------- DROP TABLE IF EXISTS t_ds_version CASCADE; CREATE TABLE t_ds_version ( id int(11) NOT NULL AUTO_INCREMENT, version varchar(200) NOT NULL, PRIMARY KEY (id), UNIQUE KEY version_UNIQUE (version) ); -- ---------------------------- -- Records of t_ds_version -- ---------------------------- INSERT INTO t_ds_version VALUES ('1', '1.4.0'); -- ---------------------------- -- Records of t_ds_alertgroup -- ---------------------------- INSERT INTO t_ds_alertgroup(alert_instance_ids, create_user_id, group_name, description, create_time, update_time) VALUES ('1,2', 1, 'default admin warning group', 'default admin warning group', '2018-11-29 10:20:39', '2018-11-29 10:20:39'); -- ---------------------------- -- Records of t_ds_user -- ---------------------------- INSERT INTO t_ds_user VALUES ('1', 'admin', '7ad2410b2f4c074479a8937a28a22b8f', '0', 'xxx@qq.com', '', '0', '2018-03-27 15:48:50', '2018-10-24 17:40:22', null, 1); -- ---------------------------- -- Table structure for t_ds_plugin_define -- ---------------------------- DROP TABLE IF EXISTS t_ds_plugin_define CASCADE; CREATE TABLE t_ds_plugin_define ( id int NOT NULL AUTO_INCREMENT, plugin_name varchar(100) NOT NULL, plugin_type varchar(100) NOT NULL, plugin_params text, create_time timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, update_time timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, PRIMARY KEY (id), UNIQUE KEY t_ds_plugin_define_UN (plugin_name,plugin_type) ); -- ---------------------------- -- Table structure for t_ds_alert_plugin_instance -- ---------------------------- DROP TABLE IF EXISTS t_ds_alert_plugin_instance CASCADE; CREATE TABLE t_ds_alert_plugin_instance ( id int NOT NULL AUTO_INCREMENT, plugin_define_id int NOT NULL, plugin_instance_params text, create_time timestamp NULL DEFAULT CURRENT_TIMESTAMP, update_time timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, instance_name varchar(200) DEFAULT NULL, PRIMARY KEY (id) ); -- -- Table structure for table t_ds_environment -- DROP TABLE IF EXISTS t_ds_environment CASCADE; CREATE TABLE t_ds_environment ( id int NOT NULL AUTO_INCREMENT, code bigint(20) NOT NULL, name varchar(100) DEFAULT NULL, config text DEFAULT NULL, description text, operator int DEFAULT NULL, create_time timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, update_time timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, PRIMARY KEY (id), UNIQUE KEY environment_name_unique (name), UNIQUE KEY environment_code_unique (code) ); -- -- Table structure for table t_ds_environment_worker_group_relation -- DROP TABLE IF EXISTS t_ds_environment_worker_group_relation CASCADE; CREATE TABLE t_ds_environment_worker_group_relation ( id int NOT NULL AUTO_INCREMENT, environment_code bigint(20) NOT NULL, worker_group varchar(255) NOT NULL, operator int DEFAULT NULL, create_time timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, update_time timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, PRIMARY KEY (id), UNIQUE KEY environment_worker_group_unique (environment_code,worker_group) ); DROP TABLE IF EXISTS t_ds_task_group_queue; CREATE TABLE t_ds_task_group_queue ( id int(11) NOT NULL AUTO_INCREMENT , task_id int(11) DEFAULT NULL , task_name VARCHAR(100) DEFAULT NULL , group_id int(11) DEFAULT NULL , process_id int(11) DEFAULT NULL , priority 
int(8) DEFAULT '0' , status int(4) DEFAULT '-1' , force_start int(4) DEFAULT '0' , in_queue int(4) DEFAULT '0' , create_time datetime DEFAULT NULL , update_time datetime DEFAULT NULL , PRIMARY KEY (id) ); DROP TABLE IF EXISTS t_ds_task_group; CREATE TABLE t_ds_task_group ( id int(11) NOT NULL AUTO_INCREMENT , name varchar(100) DEFAULT NULL , description varchar(200) DEFAULT NULL , group_size int(11) NOT NULL , project_code bigint(20) DEFAULT '0', use_size int(11) DEFAULT '0' , user_id int(11) DEFAULT NULL , project_id int(11) DEFAULT NULL , status int(4) DEFAULT '1' , create_time datetime DEFAULT NULL , update_time datetime DEFAULT NULL , PRIMARY KEY(id) );
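The t_ds_task_group definition above shows exactly the redundancy this issue reports: both project_code and the unused project_id. The fix amounts to dropping that one column. Below is a hedged one-off JDBC sketch of that migration; the JDBC URL and credentials are placeholders, and the official fix ships as an upgrade SQL script that may differ.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Hypothetical one-off migration: drops the redundant project_id column from t_ds_task_group.
// URL/credentials are placeholders; run against a copy of the schema, not production.
public class DropProjectIdColumnSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:dolphinscheduler", "sa", "");
             Statement stmt = conn.createStatement()) {
            // project_code already identifies the project, so project_id can go.
            stmt.executeUpdate("ALTER TABLE t_ds_task_group DROP COLUMN project_id");
        }
    }
}
```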
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,701
[Bug] [DAO] There's a redundant field named 'project_id' in the table 't_ds_task_group'.
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened In the table 't_ds_task_group' there are two fields, 'project_code' and 'project_id'. Either of them can identify the project, and the 'TaskGroup' class doesn't use the 'project_id' field at all. So we need to remove it from the table and fix a few wrong annotations. ![image](https://user-images.githubusercontent.com/4928204/147622409-d9407e03-a2eb-49fc-abd7-e67111027472.png) ### What you expected to happen I expect the redundant field 'project_id' to be removed from the table 't_ds_task_group'. ### How to reproduce See it in the project. ### Anything else _No response_ ### Version dev ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7701
https://github.com/apache/dolphinscheduler/pull/7703
5ab0135d1831c9156388d8b593417e775e2b2b84
71ccb42697e197b2668a08a83eb9bb19261b4f5a
"2021-12-29T02:52:14Z"
java
"2021-12-29T04:57:36Z"
dolphinscheduler-dao/src/main/resources/sql/dolphinscheduler_postgresql.sql
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

DROP TABLE IF EXISTS QRTZ_FIRED_TRIGGERS;
DROP TABLE IF EXISTS QRTZ_PAUSED_TRIGGER_GRPS;
DROP TABLE IF EXISTS QRTZ_SCHEDULER_STATE;
DROP TABLE IF EXISTS QRTZ_LOCKS;
DROP TABLE IF EXISTS QRTZ_SIMPLE_TRIGGERS;
DROP TABLE IF EXISTS QRTZ_SIMPROP_TRIGGERS;
DROP TABLE IF EXISTS QRTZ_CRON_TRIGGERS;
DROP TABLE IF EXISTS QRTZ_BLOB_TRIGGERS;
DROP TABLE IF EXISTS QRTZ_TRIGGERS;
DROP TABLE IF EXISTS QRTZ_JOB_DETAILS;
DROP TABLE IF EXISTS QRTZ_CALENDARS;

CREATE TABLE QRTZ_JOB_DETAILS (
  SCHED_NAME character varying(120) NOT NULL,
  JOB_NAME character varying(200) NOT NULL,
  JOB_GROUP character varying(200) NOT NULL,
  DESCRIPTION character varying(250) NULL,
  JOB_CLASS_NAME character varying(250) NOT NULL,
  IS_DURABLE boolean NOT NULL,
  IS_NONCONCURRENT boolean NOT NULL,
  IS_UPDATE_DATA boolean NOT NULL,
  REQUESTS_RECOVERY boolean NOT NULL,
  JOB_DATA bytea NULL
);
alter table QRTZ_JOB_DETAILS add primary key(SCHED_NAME,JOB_NAME,JOB_GROUP);

CREATE TABLE QRTZ_TRIGGERS (
  SCHED_NAME character varying(120) NOT NULL,
  TRIGGER_NAME character varying(200) NOT NULL,
  TRIGGER_GROUP character varying(200) NOT NULL,
  JOB_NAME character varying(200) NOT NULL,
  JOB_GROUP character varying(200) NOT NULL,
  DESCRIPTION character varying(250) NULL,
  NEXT_FIRE_TIME BIGINT NULL,
  PREV_FIRE_TIME BIGINT NULL,
  PRIORITY INTEGER NULL,
  TRIGGER_STATE character varying(16) NOT NULL,
  TRIGGER_TYPE character varying(8) NOT NULL,
  START_TIME BIGINT NOT NULL,
  END_TIME BIGINT NULL,
  CALENDAR_NAME character varying(200) NULL,
  MISFIRE_INSTR SMALLINT NULL,
  JOB_DATA bytea NULL
);
alter table QRTZ_TRIGGERS add primary key(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP);

CREATE TABLE QRTZ_SIMPLE_TRIGGERS (
  SCHED_NAME character varying(120) NOT NULL,
  TRIGGER_NAME character varying(200) NOT NULL,
  TRIGGER_GROUP character varying(200) NOT NULL,
  REPEAT_COUNT BIGINT NOT NULL,
  REPEAT_INTERVAL BIGINT NOT NULL,
  TIMES_TRIGGERED BIGINT NOT NULL
);
alter table QRTZ_SIMPLE_TRIGGERS add primary key(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP);

CREATE TABLE QRTZ_CRON_TRIGGERS (
  SCHED_NAME character varying(120) NOT NULL,
  TRIGGER_NAME character varying(200) NOT NULL,
  TRIGGER_GROUP character varying(200) NOT NULL,
  CRON_EXPRESSION character varying(120) NOT NULL,
  TIME_ZONE_ID character varying(80)
);
alter table QRTZ_CRON_TRIGGERS add primary key(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP);

CREATE TABLE QRTZ_SIMPROP_TRIGGERS (
  SCHED_NAME character varying(120) NOT NULL,
  TRIGGER_NAME character varying(200) NOT NULL,
  TRIGGER_GROUP character varying(200) NOT NULL,
  STR_PROP_1 character varying(512) NULL,
  STR_PROP_2 character varying(512) NULL,
  STR_PROP_3 character varying(512) NULL,
  INT_PROP_1 INT NULL,
  INT_PROP_2 INT NULL,
  LONG_PROP_1 BIGINT NULL,
  LONG_PROP_2 BIGINT NULL,
  DEC_PROP_1 NUMERIC(13,4) NULL,
  DEC_PROP_2 NUMERIC(13,4) NULL,
  BOOL_PROP_1 boolean NULL,
  BOOL_PROP_2 boolean NULL
);
alter table QRTZ_SIMPROP_TRIGGERS add primary key(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP);

CREATE TABLE QRTZ_BLOB_TRIGGERS (
  SCHED_NAME character varying(120) NOT NULL,
  TRIGGER_NAME character varying(200) NOT NULL,
  TRIGGER_GROUP character varying(200) NOT NULL,
  BLOB_DATA bytea NULL
);
alter table QRTZ_BLOB_TRIGGERS add primary key(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP);

CREATE TABLE QRTZ_CALENDARS (
  SCHED_NAME character varying(120) NOT NULL,
  CALENDAR_NAME character varying(200) NOT NULL,
  CALENDAR bytea NOT NULL
);
alter table QRTZ_CALENDARS add primary key(SCHED_NAME,CALENDAR_NAME);

CREATE TABLE QRTZ_PAUSED_TRIGGER_GRPS (
  SCHED_NAME character varying(120) NOT NULL,
  TRIGGER_GROUP character varying(200) NOT NULL
);
alter table QRTZ_PAUSED_TRIGGER_GRPS add primary key(SCHED_NAME,TRIGGER_GROUP);

CREATE TABLE QRTZ_FIRED_TRIGGERS (
  SCHED_NAME character varying(120) NOT NULL,
  ENTRY_ID character varying(200) NOT NULL,
  TRIGGER_NAME character varying(200) NOT NULL,
  TRIGGER_GROUP character varying(200) NOT NULL,
  INSTANCE_NAME character varying(200) NOT NULL,
  FIRED_TIME BIGINT NOT NULL,
  SCHED_TIME BIGINT NOT NULL,
  PRIORITY INTEGER NOT NULL,
  STATE character varying(16) NOT NULL,
  JOB_NAME character varying(200) NULL,
  JOB_GROUP character varying(200) NULL,
  IS_NONCONCURRENT boolean NULL,
  REQUESTS_RECOVERY boolean NULL
);
alter table QRTZ_FIRED_TRIGGERS add primary key(SCHED_NAME,ENTRY_ID);

CREATE TABLE QRTZ_SCHEDULER_STATE (
  SCHED_NAME character varying(120) NOT NULL,
  INSTANCE_NAME character varying(200) NOT NULL,
  LAST_CHECKIN_TIME BIGINT NOT NULL,
  CHECKIN_INTERVAL BIGINT NOT NULL
);
alter table QRTZ_SCHEDULER_STATE add primary key(SCHED_NAME,INSTANCE_NAME);

CREATE TABLE QRTZ_LOCKS (
  SCHED_NAME character varying(120) NOT NULL,
  LOCK_NAME character varying(40) NOT NULL
);
alter table QRTZ_LOCKS add primary key(SCHED_NAME,LOCK_NAME);

CREATE INDEX IDX_QRTZ_J_REQ_RECOVERY ON QRTZ_JOB_DETAILS(SCHED_NAME,REQUESTS_RECOVERY);
CREATE INDEX IDX_QRTZ_J_GRP ON QRTZ_JOB_DETAILS(SCHED_NAME,JOB_GROUP);
CREATE INDEX IDX_QRTZ_T_J ON QRTZ_TRIGGERS(SCHED_NAME,JOB_NAME,JOB_GROUP);
CREATE INDEX IDX_QRTZ_T_JG ON QRTZ_TRIGGERS(SCHED_NAME,JOB_GROUP);
CREATE INDEX IDX_QRTZ_T_C ON QRTZ_TRIGGERS(SCHED_NAME,CALENDAR_NAME);
CREATE INDEX IDX_QRTZ_T_G ON QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_GROUP);
CREATE INDEX IDX_QRTZ_T_STATE ON QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_STATE);
CREATE INDEX IDX_QRTZ_T_N_STATE ON QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP,TRIGGER_STATE);
CREATE INDEX IDX_QRTZ_T_N_G_STATE ON QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_GROUP,TRIGGER_STATE);
CREATE INDEX IDX_QRTZ_T_NEXT_FIRE_TIME ON QRTZ_TRIGGERS(SCHED_NAME,NEXT_FIRE_TIME);
CREATE INDEX IDX_QRTZ_T_NFT_ST ON QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_STATE,NEXT_FIRE_TIME);
CREATE INDEX IDX_QRTZ_T_NFT_MISFIRE ON QRTZ_TRIGGERS(SCHED_NAME,MISFIRE_INSTR,NEXT_FIRE_TIME);
CREATE INDEX IDX_QRTZ_T_NFT_ST_MISFIRE ON QRTZ_TRIGGERS(SCHED_NAME,MISFIRE_INSTR,NEXT_FIRE_TIME,TRIGGER_STATE);
CREATE INDEX IDX_QRTZ_T_NFT_ST_MISFIRE_GRP ON QRTZ_TRIGGERS(SCHED_NAME,MISFIRE_INSTR,NEXT_FIRE_TIME,TRIGGER_GROUP,TRIGGER_STATE);
CREATE INDEX IDX_QRTZ_FT_TRIG_INST_NAME ON QRTZ_FIRED_TRIGGERS(SCHED_NAME,INSTANCE_NAME);
CREATE INDEX IDX_QRTZ_FT_INST_JOB_REQ_RCVRY ON QRTZ_FIRED_TRIGGERS(SCHED_NAME,INSTANCE_NAME,REQUESTS_RECOVERY);
CREATE INDEX IDX_QRTZ_FT_J_G ON QRTZ_FIRED_TRIGGERS(SCHED_NAME,JOB_NAME,JOB_GROUP);
CREATE INDEX IDX_QRTZ_FT_JG ON QRTZ_FIRED_TRIGGERS(SCHED_NAME,JOB_GROUP);
CREATE INDEX IDX_QRTZ_FT_T_G ON QRTZ_FIRED_TRIGGERS(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP);
CREATE INDEX IDX_QRTZ_FT_TG ON QRTZ_FIRED_TRIGGERS(SCHED_NAME,TRIGGER_GROUP);

--
-- Table structure for table t_ds_access_token
--
DROP TABLE IF EXISTS t_ds_access_token;
CREATE TABLE t_ds_access_token (
  id int NOT NULL,
  user_id int DEFAULT NULL,
  token varchar(64) DEFAULT NULL,
  expire_time timestamp DEFAULT NULL,
  create_time timestamp DEFAULT NULL,
  update_time timestamp DEFAULT NULL,
  PRIMARY KEY (id)
);

--
-- Table structure for table t_ds_alert
--
DROP TABLE IF EXISTS t_ds_alert;
CREATE TABLE t_ds_alert (
  id int NOT NULL,
  title varchar(64) DEFAULT NULL,
  content text,
  alert_status int DEFAULT '0',
  log text,
  alertgroup_id int DEFAULT NULL,
  create_time timestamp DEFAULT NULL,
  update_time timestamp DEFAULT NULL,
  PRIMARY KEY (id)
);

--
-- Table structure for table t_ds_alertgroup
--
DROP TABLE IF EXISTS t_ds_alertgroup;
CREATE TABLE t_ds_alertgroup (
  id int NOT NULL,
  alert_instance_ids varchar(255) DEFAULT NULL,
  create_user_id int4 DEFAULT NULL,
  group_name varchar(255) DEFAULT NULL,
  description varchar(255) DEFAULT NULL,
  create_time timestamp DEFAULT NULL,
  update_time timestamp DEFAULT NULL,
  PRIMARY KEY (id),
  CONSTRAINT t_ds_alertgroup_name_un UNIQUE (group_name)
);

--
-- Table structure for table t_ds_command
--
DROP TABLE IF EXISTS t_ds_command;
CREATE TABLE t_ds_command (
  id int NOT NULL,
  command_type int DEFAULT NULL,
  process_definition_code bigint NOT NULL,
  command_param text,
  task_depend_type int DEFAULT NULL,
  failure_strategy int DEFAULT '0',
  warning_type int DEFAULT '0',
  warning_group_id int DEFAULT NULL,
  schedule_time timestamp DEFAULT NULL,
  start_time timestamp DEFAULT NULL,
  executor_id int DEFAULT NULL,
  update_time timestamp DEFAULT NULL,
  process_instance_priority int DEFAULT NULL,
  worker_group varchar(64),
  environment_code bigint DEFAULT '-1',
  dry_run int DEFAULT '0',
  process_instance_id int DEFAULT 0,
  process_definition_version int DEFAULT 0,
  PRIMARY KEY (id)
);
create index priority_id_index on t_ds_command (process_instance_priority,id);

--
-- Table structure for table t_ds_datasource
--
DROP TABLE IF EXISTS t_ds_datasource;
CREATE TABLE t_ds_datasource (
  id int NOT NULL,
  name varchar(64) NOT NULL,
  note varchar(255) DEFAULT NULL,
  type int NOT NULL,
  user_id int NOT NULL,
  connection_params text NOT NULL,
  create_time timestamp NOT NULL,
  update_time timestamp DEFAULT NULL,
  PRIMARY KEY (id),
  CONSTRAINT t_ds_datasource_name_un UNIQUE (name, type)
);

--
-- Table structure for table t_ds_error_command
--
DROP TABLE IF EXISTS t_ds_error_command;
CREATE TABLE t_ds_error_command (
  id int NOT NULL,
  command_type int DEFAULT NULL,
  process_definition_code bigint NOT NULL,
  command_param text,
  task_depend_type int DEFAULT NULL,
  failure_strategy int DEFAULT '0',
  warning_type int DEFAULT '0',
  warning_group_id int DEFAULT NULL,
  schedule_time timestamp DEFAULT NULL,
  start_time timestamp DEFAULT NULL,
  executor_id int DEFAULT NULL,
  update_time timestamp DEFAULT NULL,
  process_instance_priority int DEFAULT NULL,
  worker_group varchar(64),
  environment_code bigint DEFAULT '-1',
  dry_run int DEFAULT '0',
  message text,
  process_instance_id int DEFAULT 0,
  process_definition_version int DEFAULT 0,
  PRIMARY KEY (id)
);

--
-- Table structure for table t_ds_process_definition
--
DROP TABLE IF EXISTS t_ds_process_definition;
CREATE TABLE t_ds_process_definition (
  id int NOT NULL,
  code bigint NOT NULL,
  name varchar(255) DEFAULT NULL,
  version int NOT NULL,
  description text,
  project_code bigint DEFAULT NULL,
  release_state int DEFAULT NULL,
  user_id int DEFAULT NULL,
  global_params text,
  locations text,
  warning_group_id int DEFAULT NULL,
  flag int DEFAULT NULL,
  timeout int DEFAULT '0',
  tenant_id int DEFAULT '-1',
  execution_type int DEFAULT '0',
  create_time timestamp DEFAULT NULL,
  update_time timestamp DEFAULT NULL,
  PRIMARY KEY (id),
  CONSTRAINT process_definition_unique UNIQUE (name, project_code)
);
create index process_definition_index on t_ds_process_definition (code,id);

--
-- Table structure for table t_ds_process_definition_log
--
DROP TABLE IF EXISTS t_ds_process_definition_log;
CREATE TABLE t_ds_process_definition_log (
  id int NOT NULL,
  code bigint NOT NULL,
  name varchar(255) DEFAULT NULL,
  version int NOT NULL,
  description text,
  project_code bigint DEFAULT NULL,
  release_state int DEFAULT NULL,
  user_id int DEFAULT NULL,
  global_params text,
  locations text,
  warning_group_id int DEFAULT NULL,
  flag int DEFAULT NULL,
  timeout int DEFAULT '0',
  tenant_id int DEFAULT '-1',
  execution_type int DEFAULT '0',
  operator int DEFAULT NULL,
  operate_time timestamp DEFAULT NULL,
  create_time timestamp DEFAULT NULL,
  update_time timestamp DEFAULT NULL,
  PRIMARY KEY (id)
);

--
-- Table structure for table t_ds_task_definition
--
DROP TABLE IF EXISTS t_ds_task_definition;
CREATE TABLE t_ds_task_definition (
  id int NOT NULL,
  code bigint NOT NULL,
  name varchar(255) DEFAULT NULL,
  version int NOT NULL,
  description text,
  project_code bigint DEFAULT NULL,
  user_id int DEFAULT NULL,
  task_type varchar(50) DEFAULT NULL,
  task_params text,
  flag int DEFAULT NULL,
  task_priority int DEFAULT NULL,
  worker_group varchar(255) DEFAULT NULL,
  environment_code bigint DEFAULT '-1',
  fail_retry_times int DEFAULT NULL,
  fail_retry_interval int DEFAULT NULL,
  timeout_flag int DEFAULT NULL,
  timeout_notify_strategy int DEFAULT NULL,
  timeout int DEFAULT '0',
  delay_time int DEFAULT '0',
  task_group_id int DEFAULT NULL,
  task_group_priority int DEFAULT '0',
  resource_ids text,
  create_time timestamp DEFAULT NULL,
  update_time timestamp DEFAULT NULL,
  PRIMARY KEY (id)
);
create index task_definition_index on t_ds_task_definition (project_code,id);

--
-- Table structure for table t_ds_task_definition_log
--
DROP TABLE IF EXISTS t_ds_task_definition_log;
CREATE TABLE t_ds_task_definition_log (
  id int NOT NULL,
  code bigint NOT NULL,
  name varchar(255) DEFAULT NULL,
  version int NOT NULL,
  description text,
  project_code bigint DEFAULT NULL,
  user_id int DEFAULT NULL,
  task_type varchar(50) DEFAULT NULL,
  task_params text,
  flag int DEFAULT NULL,
  task_priority int DEFAULT NULL,
  worker_group varchar(255) DEFAULT NULL,
  environment_code bigint DEFAULT '-1',
  fail_retry_times int DEFAULT NULL,
  fail_retry_interval int DEFAULT NULL,
  timeout_flag int DEFAULT NULL,
  timeout_notify_strategy int DEFAULT NULL,
  timeout int DEFAULT '0',
  delay_time int DEFAULT '0',
  resource_ids text,
  operator int DEFAULT NULL,
  task_group_id int DEFAULT NULL,
  task_group_priority int DEFAULT '0',
  operate_time timestamp DEFAULT NULL,
  create_time timestamp DEFAULT NULL,
  update_time timestamp DEFAULT NULL,
  PRIMARY KEY (id)
);
create index idx_task_definition_log_code_version on t_ds_task_definition_log (code,version);

--
-- Table structure for table t_ds_process_task_relation
--
DROP TABLE IF EXISTS t_ds_process_task_relation;
CREATE TABLE t_ds_process_task_relation (
  id int NOT NULL,
  name varchar(255) DEFAULT NULL,
  project_code bigint DEFAULT NULL,
  process_definition_code bigint DEFAULT NULL,
  process_definition_version int DEFAULT NULL,
  pre_task_code bigint DEFAULT NULL,
  pre_task_version int DEFAULT '0',
  post_task_code bigint DEFAULT NULL,
  post_task_version int DEFAULT '0',
  condition_type int DEFAULT NULL,
  condition_params text,
  create_time timestamp DEFAULT NULL,
  update_time timestamp DEFAULT NULL,
  PRIMARY KEY (id)
);
create index process_task_relation_idx_project_code_process_definition_code on t_ds_process_task_relation (project_code,process_definition_code);

--
-- Table structure for table t_ds_process_task_relation_log
--
DROP TABLE IF EXISTS t_ds_process_task_relation_log;
CREATE TABLE t_ds_process_task_relation_log (
  id int NOT NULL,
  name varchar(255) DEFAULT NULL,
  project_code bigint DEFAULT NULL,
  process_definition_code bigint DEFAULT NULL,
  process_definition_version int DEFAULT NULL,
  pre_task_code bigint DEFAULT NULL,
  pre_task_version int DEFAULT '0',
  post_task_code bigint DEFAULT NULL,
  post_task_version int DEFAULT '0',
  condition_type int DEFAULT NULL,
  condition_params text,
  operator int DEFAULT NULL,
  operate_time timestamp DEFAULT NULL,
  create_time timestamp DEFAULT NULL,
  update_time timestamp DEFAULT NULL,
  PRIMARY KEY (id)
);
create index process_task_relation_log_idx_project_code_process_definition_code on t_ds_process_task_relation_log (project_code,process_definition_code);

--
-- Table structure for table t_ds_process_instance
--
DROP TABLE IF EXISTS t_ds_process_instance;
CREATE TABLE t_ds_process_instance (
  id int NOT NULL,
  name varchar(255) DEFAULT NULL,
  process_definition_code bigint DEFAULT NULL,
  process_definition_version int DEFAULT NULL,
  state int DEFAULT NULL,
  recovery int DEFAULT NULL,
  start_time timestamp DEFAULT NULL,
  end_time timestamp DEFAULT NULL,
  run_times int DEFAULT NULL,
  host varchar(135) DEFAULT NULL,
  command_type int DEFAULT NULL,
  command_param text,
  task_depend_type int DEFAULT NULL,
  max_try_times int DEFAULT '0',
  failure_strategy int DEFAULT '0',
  warning_type int DEFAULT '0',
  warning_group_id int DEFAULT NULL,
  schedule_time timestamp DEFAULT NULL,
  command_start_time timestamp DEFAULT NULL,
  global_params text,
  process_instance_json text,
  flag int DEFAULT '1',
  update_time timestamp NULL,
  is_sub_process int DEFAULT '0',
  executor_id int NOT NULL,
  history_cmd text,
  dependence_schedule_times text,
  process_instance_priority int DEFAULT NULL,
  worker_group varchar(64),
  environment_code bigint DEFAULT '-1',
  timeout int DEFAULT '0',
  tenant_id int NOT NULL DEFAULT '-1',
  var_pool text,
  dry_run int DEFAULT '0',
  next_process_instance_id int DEFAULT '0',
  restart_time timestamp DEFAULT NULL,
  PRIMARY KEY (id)
);
create index process_instance_index on t_ds_process_instance (process_definition_code,id);
create index start_time_index on t_ds_process_instance (start_time,end_time);

--
-- Table structure for table t_ds_project
--
DROP TABLE IF EXISTS t_ds_project;
CREATE TABLE t_ds_project (
  id int NOT NULL,
  name varchar(100) DEFAULT NULL,
  code bigint NOT NULL,
  description varchar(200) DEFAULT NULL,
  user_id int DEFAULT NULL,
  flag int DEFAULT '1',
  create_time timestamp DEFAULT CURRENT_TIMESTAMP,
  update_time timestamp DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (id)
);
create index user_id_index on t_ds_project (user_id);

--
-- Table structure for table t_ds_queue
--
DROP TABLE IF EXISTS t_ds_queue;
CREATE TABLE t_ds_queue (
  id int NOT NULL,
  queue_name varchar(64) DEFAULT NULL,
  queue varchar(64) DEFAULT NULL,
  create_time timestamp DEFAULT NULL,
  update_time timestamp DEFAULT NULL,
  PRIMARY KEY (id)
);

--
-- Table structure for table t_ds_relation_datasource_user
--
DROP TABLE IF EXISTS t_ds_relation_datasource_user;
CREATE TABLE t_ds_relation_datasource_user (
  id int NOT NULL,
  user_id int NOT NULL,
  datasource_id int DEFAULT NULL,
  perm int DEFAULT '1',
  create_time timestamp DEFAULT NULL,
  update_time timestamp DEFAULT NULL,
  PRIMARY KEY (id)
);

--
-- Table structure for table t_ds_relation_process_instance
--
DROP TABLE IF EXISTS t_ds_relation_process_instance;
CREATE TABLE t_ds_relation_process_instance (
  id int NOT NULL,
  parent_process_instance_id int DEFAULT NULL,
  parent_task_instance_id int DEFAULT NULL,
  process_instance_id int DEFAULT NULL,
  PRIMARY KEY (id)
);

--
-- Table structure for table t_ds_relation_project_user
--
DROP TABLE IF EXISTS t_ds_relation_project_user;
CREATE TABLE t_ds_relation_project_user (
  id int NOT NULL,
  user_id int NOT NULL,
  project_id int DEFAULT NULL,
  perm int DEFAULT '1',
  create_time timestamp DEFAULT NULL,
  update_time timestamp DEFAULT NULL,
  PRIMARY KEY (id)
);
create index relation_project_user_id_index on t_ds_relation_project_user (user_id);

--
-- Table structure for table t_ds_relation_resources_user
--
DROP TABLE IF EXISTS t_ds_relation_resources_user;
CREATE TABLE t_ds_relation_resources_user (
  id int NOT NULL,
  user_id int NOT NULL,
  resources_id int DEFAULT NULL,
  perm int DEFAULT '1',
  create_time timestamp DEFAULT NULL,
  update_time timestamp DEFAULT NULL,
  PRIMARY KEY (id)
);

--
-- Table structure for table t_ds_relation_udfs_user
--
DROP TABLE IF EXISTS t_ds_relation_udfs_user;
CREATE TABLE t_ds_relation_udfs_user (
  id int NOT NULL,
  user_id int NOT NULL,
  udf_id int DEFAULT NULL,
  perm int DEFAULT '1',
  create_time timestamp DEFAULT NULL,
  update_time timestamp DEFAULT NULL,
  PRIMARY KEY (id)
);

--
-- Table structure for table t_ds_resources
--
DROP TABLE IF EXISTS t_ds_resources;
CREATE TABLE t_ds_resources (
  id int NOT NULL,
  alias varchar(64) DEFAULT NULL,
  file_name varchar(64) DEFAULT NULL,
  description varchar(255) DEFAULT NULL,
  user_id int DEFAULT NULL,
  type int DEFAULT NULL,
  size bigint DEFAULT NULL,
  create_time timestamp DEFAULT NULL,
  update_time timestamp DEFAULT NULL,
  pid int,
  full_name varchar(64),
  is_directory int,
  PRIMARY KEY (id),
  CONSTRAINT t_ds_resources_un UNIQUE (full_name, type)
);

--
-- Table structure for table t_ds_schedules
--
DROP TABLE IF EXISTS t_ds_schedules;
CREATE TABLE t_ds_schedules (
  id int NOT NULL,
  process_definition_code bigint NOT NULL,
  start_time timestamp NOT NULL,
  end_time timestamp NOT NULL,
  timezone_id varchar(40) default NULL,
  crontab varchar(255) NOT NULL,
  failure_strategy int NOT NULL,
  user_id int NOT NULL,
  release_state int NOT NULL,
  warning_type int NOT NULL,
  warning_group_id int DEFAULT NULL,
  process_instance_priority int DEFAULT NULL,
  worker_group varchar(64),
  environment_code bigint DEFAULT '-1',
  create_time timestamp NOT NULL,
  update_time timestamp NOT NULL,
  PRIMARY KEY (id)
);

--
-- Table structure for table t_ds_session
--
DROP TABLE IF EXISTS t_ds_session;
CREATE TABLE t_ds_session (
  id varchar(64) NOT NULL,
  user_id int DEFAULT NULL,
  ip varchar(45) DEFAULT NULL,
  last_login_time timestamp DEFAULT NULL,
  PRIMARY KEY (id)
);

--
-- Table structure for table t_ds_task_instance
--
DROP TABLE IF EXISTS t_ds_task_instance;
CREATE TABLE t_ds_task_instance (
  id int NOT NULL,
  name varchar(255) DEFAULT NULL,
  task_type varchar(50) DEFAULT NULL,
  task_code bigint NOT NULL,
  task_definition_version int DEFAULT NULL,
  process_instance_id int DEFAULT NULL,
  state int DEFAULT NULL,
  submit_time timestamp DEFAULT NULL,
  start_time timestamp DEFAULT NULL,
  end_time timestamp DEFAULT NULL,
  host varchar(135) DEFAULT NULL,
  execute_path varchar(200) DEFAULT NULL,
  log_path varchar(200) DEFAULT NULL,
  alert_flag int DEFAULT NULL,
  retry_times int DEFAULT '0',
  pid int DEFAULT NULL,
  app_link text,
  task_params text,
  flag int DEFAULT '1',
  retry_interval int DEFAULT NULL,
  max_retry_times int DEFAULT NULL,
  task_instance_priority int DEFAULT NULL,
  worker_group varchar(64),
  environment_code bigint DEFAULT '-1',
  environment_config text,
  executor_id int DEFAULT NULL,
  first_submit_time timestamp DEFAULT NULL,
  delay_time int DEFAULT '0',
  task_group_id int DEFAULT NULL,
  var_pool text,
  dry_run int DEFAULT '0',
  PRIMARY KEY (id),
  CONSTRAINT foreign_key_instance_id FOREIGN KEY(process_instance_id) REFERENCES t_ds_process_instance(id) ON DELETE CASCADE
);

--
-- Table structure for table t_ds_tenant
--
DROP TABLE IF EXISTS t_ds_tenant;
CREATE TABLE t_ds_tenant (
  id int NOT NULL,
  tenant_code varchar(64) DEFAULT NULL,
  description varchar(255) DEFAULT NULL,
  queue_id int DEFAULT NULL,
  create_time timestamp DEFAULT NULL,
  update_time timestamp DEFAULT NULL,
  PRIMARY KEY (id)
);

--
-- Table structure for table t_ds_udfs
--
DROP TABLE IF EXISTS t_ds_udfs;
CREATE TABLE t_ds_udfs (
  id int NOT NULL,
  user_id int NOT NULL,
  func_name varchar(100) NOT NULL,
  class_name varchar(255) NOT NULL,
  type int NOT NULL,
  arg_types varchar(255) DEFAULT NULL,
  database varchar(255) DEFAULT NULL,
  description varchar(255) DEFAULT NULL,
  resource_id int NOT NULL,
  resource_name varchar(255) NOT NULL,
  create_time timestamp NOT NULL,
  update_time timestamp NOT NULL,
  PRIMARY KEY (id)
);

--
-- Table structure for table t_ds_user
--
DROP TABLE IF EXISTS t_ds_user;
CREATE TABLE t_ds_user (
  id int NOT NULL,
  user_name varchar(64) DEFAULT NULL,
  user_password varchar(64) DEFAULT NULL,
  user_type int DEFAULT NULL,
  email varchar(64) DEFAULT NULL,
  phone varchar(11) DEFAULT NULL,
  tenant_id int DEFAULT NULL,
  create_time timestamp DEFAULT NULL,
  update_time timestamp DEFAULT NULL,
  queue varchar(64) DEFAULT NULL,
  state int DEFAULT 1,
  PRIMARY KEY (id)
);
comment on column t_ds_user.state is 'state 0:disable 1:enable';

--
-- Table structure for table t_ds_version
--
DROP TABLE IF EXISTS t_ds_version;
CREATE TABLE t_ds_version (
  id int NOT NULL,
  version varchar(200) NOT NULL,
  PRIMARY KEY (id)
);
create index version_index on t_ds_version(version);

--
-- Table structure for table t_ds_worker_group
--
DROP TABLE IF EXISTS t_ds_worker_group;
CREATE TABLE t_ds_worker_group (
  id bigint NOT NULL,
  name varchar(255) NOT NULL,
  addr_list text DEFAULT NULL,
  create_time timestamp DEFAULT NULL,
  update_time timestamp DEFAULT NULL,
  PRIMARY KEY (id),
  CONSTRAINT name_unique UNIQUE (name)
);

--
-- Table structure for table t_ds_worker_server
--
DROP TABLE IF EXISTS t_ds_worker_server;
CREATE TABLE t_ds_worker_server (
  id int NOT NULL,
  host varchar(45) DEFAULT NULL,
  port int DEFAULT NULL,
  zk_directory varchar(64) DEFAULT NULL,
  res_info varchar(255) DEFAULT NULL,
  create_time timestamp DEFAULT NULL,
  last_heartbeat_time timestamp DEFAULT NULL,
  PRIMARY KEY (id)
);

DROP SEQUENCE IF EXISTS t_ds_access_token_id_sequence;
CREATE SEQUENCE t_ds_access_token_id_sequence;
ALTER TABLE t_ds_access_token ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_access_token_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_alert_id_sequence;
CREATE SEQUENCE t_ds_alert_id_sequence;
ALTER TABLE t_ds_alert ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_alert_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_alertgroup_id_sequence;
CREATE SEQUENCE t_ds_alertgroup_id_sequence;
ALTER TABLE t_ds_alertgroup ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_alertgroup_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_command_id_sequence;
CREATE SEQUENCE t_ds_command_id_sequence;
ALTER TABLE t_ds_command ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_command_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_datasource_id_sequence;
CREATE SEQUENCE t_ds_datasource_id_sequence;
ALTER TABLE t_ds_datasource ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_datasource_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_process_definition_id_sequence;
CREATE SEQUENCE t_ds_process_definition_id_sequence;
ALTER TABLE t_ds_process_definition ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_process_definition_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_process_definition_log_id_sequence;
CREATE SEQUENCE t_ds_process_definition_log_id_sequence;
ALTER TABLE t_ds_process_definition_log ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_process_definition_log_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_task_definition_id_sequence;
CREATE SEQUENCE t_ds_task_definition_id_sequence;
ALTER TABLE t_ds_task_definition ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_task_definition_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_task_definition_log_id_sequence;
CREATE SEQUENCE t_ds_task_definition_log_id_sequence;
ALTER TABLE t_ds_task_definition_log ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_task_definition_log_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_process_task_relation_id_sequence;
CREATE SEQUENCE t_ds_process_task_relation_id_sequence;
ALTER TABLE t_ds_process_task_relation ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_process_task_relation_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_process_task_relation_log_id_sequence;
CREATE SEQUENCE t_ds_process_task_relation_log_id_sequence;
ALTER TABLE t_ds_process_task_relation_log ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_process_task_relation_log_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_process_instance_id_sequence;
CREATE SEQUENCE t_ds_process_instance_id_sequence;
ALTER TABLE t_ds_process_instance ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_process_instance_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_project_id_sequence;
CREATE SEQUENCE t_ds_project_id_sequence;
ALTER TABLE t_ds_project ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_project_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_queue_id_sequence;
CREATE SEQUENCE t_ds_queue_id_sequence;
ALTER TABLE t_ds_queue ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_queue_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_relation_datasource_user_id_sequence;
CREATE SEQUENCE t_ds_relation_datasource_user_id_sequence;
ALTER TABLE t_ds_relation_datasource_user ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_relation_datasource_user_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_relation_process_instance_id_sequence;
CREATE SEQUENCE t_ds_relation_process_instance_id_sequence;
ALTER TABLE t_ds_relation_process_instance ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_relation_process_instance_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_relation_project_user_id_sequence;
CREATE SEQUENCE t_ds_relation_project_user_id_sequence;
ALTER TABLE t_ds_relation_project_user ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_relation_project_user_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_relation_resources_user_id_sequence;
CREATE SEQUENCE t_ds_relation_resources_user_id_sequence;
ALTER TABLE t_ds_relation_resources_user ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_relation_resources_user_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_relation_udfs_user_id_sequence;
CREATE SEQUENCE t_ds_relation_udfs_user_id_sequence;
ALTER TABLE t_ds_relation_udfs_user ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_relation_udfs_user_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_resources_id_sequence;
CREATE SEQUENCE t_ds_resources_id_sequence;
ALTER TABLE t_ds_resources ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_resources_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_schedules_id_sequence;
CREATE SEQUENCE t_ds_schedules_id_sequence;
ALTER TABLE t_ds_schedules ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_schedules_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_task_instance_id_sequence;
CREATE SEQUENCE t_ds_task_instance_id_sequence;
ALTER TABLE t_ds_task_instance ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_task_instance_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_tenant_id_sequence;
CREATE SEQUENCE t_ds_tenant_id_sequence;
ALTER TABLE t_ds_tenant ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_tenant_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_udfs_id_sequence;
CREATE SEQUENCE t_ds_udfs_id_sequence;
ALTER TABLE t_ds_udfs ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_udfs_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_user_id_sequence;
CREATE SEQUENCE t_ds_user_id_sequence;
ALTER TABLE t_ds_user ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_user_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_version_id_sequence;
CREATE SEQUENCE t_ds_version_id_sequence;
ALTER TABLE t_ds_version ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_version_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_worker_group_id_sequence;
CREATE SEQUENCE t_ds_worker_group_id_sequence;
ALTER TABLE t_ds_worker_group ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_worker_group_id_sequence');
DROP SEQUENCE IF EXISTS t_ds_worker_server_id_sequence;
CREATE SEQUENCE t_ds_worker_server_id_sequence;
ALTER TABLE t_ds_worker_server ALTER COLUMN id SET DEFAULT NEXTVAL('t_ds_worker_server_id_sequence');

-- Records of t_ds_user, user : admin, password : dolphinscheduler123
INSERT INTO t_ds_user(user_name, user_password, user_type, email, phone, tenant_id, state, create_time, update_time)
VALUES ('admin', '7ad2410b2f4c074479a8937a28a22b8f', '0', 'xxx@qq.com', '', '0', 1, '2018-03-27 15:48:50', '2018-10-24 17:40:22');

-- Records of t_ds_alertgroup, default admin warning group
INSERT INTO t_ds_alertgroup(alert_instance_ids, create_user_id, group_name, description, create_time, update_time)
VALUES ('1,2', 1, 'default admin warning group', 'default admin warning group', '2018-11-29 10:20:39', '2018-11-29 10:20:39');

-- Records of t_ds_queue, default queue name : default
INSERT INTO t_ds_queue(queue_name, queue, create_time, update_time)
VALUES ('default', 'default', '2018-11-29 10:22:33', '2018-11-29 10:22:33');

-- Records of t_ds_version
INSERT INTO t_ds_version(version) VALUES ('1.4.0');

--
-- Table structure for table t_ds_plugin_define
--
DROP TABLE IF EXISTS t_ds_plugin_define;
CREATE TABLE t_ds_plugin_define (
  id serial NOT NULL,
  plugin_name varchar(100) NOT NULL,
  plugin_type varchar(100) NOT NULL,
  plugin_params text NULL,
  create_time timestamp NULL,
  update_time timestamp NULL,
  CONSTRAINT t_ds_plugin_define_pk PRIMARY KEY (id),
  CONSTRAINT t_ds_plugin_define_un UNIQUE (plugin_name, plugin_type)
);

--
-- Table structure for table t_ds_alert_plugin_instance
--
DROP TABLE IF EXISTS t_ds_alert_plugin_instance;
CREATE TABLE t_ds_alert_plugin_instance (
  id serial NOT NULL,
  plugin_define_id int4 NOT NULL,
  plugin_instance_params text NULL,
  create_time timestamp NULL,
  update_time timestamp NULL,
  instance_name varchar(200) NULL,
  CONSTRAINT t_ds_alert_plugin_instance_pk PRIMARY KEY (id)
);

--
-- Table structure for table t_ds_environment
--
DROP TABLE IF EXISTS t_ds_environment;
CREATE TABLE t_ds_environment (
  id serial NOT NULL,
  code bigint NOT NULL,
  name varchar(100) DEFAULT NULL,
  config text DEFAULT NULL,
  description text,
  operator int DEFAULT NULL,
  create_time timestamp DEFAULT NULL,
  update_time timestamp DEFAULT NULL,
  PRIMARY KEY (id),
  CONSTRAINT environment_name_unique UNIQUE (name),
  CONSTRAINT environment_code_unique UNIQUE (code)
);

--
-- Table structure for table t_ds_environment_worker_group_relation
--
DROP TABLE IF EXISTS t_ds_environment_worker_group_relation;
CREATE TABLE t_ds_environment_worker_group_relation (
  id serial NOT NULL,
  environment_code bigint NOT NULL,
  worker_group varchar(255) NOT NULL,
  operator int DEFAULT NULL,
  create_time timestamp DEFAULT NULL,
  update_time timestamp DEFAULT NULL,
  PRIMARY KEY (id),
  CONSTRAINT environment_worker_group_unique UNIQUE (environment_code,worker_group)
);

--
-- Table structure for table t_ds_task_group_queue
--
DROP TABLE IF EXISTS t_ds_task_group_queue;
CREATE TABLE t_ds_task_group_queue (
  id serial NOT NULL,
  task_id int DEFAULT NULL,
  task_name VARCHAR(100) DEFAULT NULL,
  group_id int DEFAULT NULL,
  process_id int DEFAULT NULL,
  priority int DEFAULT '0',
  status int DEFAULT '-1',
  force_start int DEFAULT '0',
  in_queue int DEFAULT '0',
  create_time timestamp DEFAULT NULL,
  update_time timestamp DEFAULT NULL,
  PRIMARY KEY (id)
);

--
-- Table structure for table t_ds_task_group
--
DROP TABLE IF EXISTS t_ds_task_group;
CREATE TABLE t_ds_task_group (
  id serial NOT NULL,
  name varchar(100) DEFAULT NULL,
  description varchar(200) DEFAULT NULL,
  group_size int NOT NULL,
  project_code bigint DEFAULT '0',
  use_size int DEFAULT '0',
  user_id int DEFAULT NULL,
  project_id int DEFAULT NULL,
  status int DEFAULT '1',
  create_time timestamp DEFAULT NULL,
  update_time timestamp DEFAULT NULL,
  PRIMARY KEY(id)
);
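Every `t_ds_*` table above gets its `id` filled by a dedicated PostgreSQL sequence (via the `ALTER TABLE ... SET DEFAULT NEXTVAL(...)` statements), so application code never supplies the id and can read it back after an insert. A minimal JDBC sketch of that round trip against `t_ds_queue`, assuming placeholder connection settings (URL, user, password are illustrative, not from the source):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class QueueInsertDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder connection settings -- adjust to your environment.
        String url = "jdbc:postgresql://localhost:5432/dolphinscheduler";
        try (Connection conn = DriverManager.getConnection(url, "postgres", "postgres");
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO t_ds_queue (queue_name, queue, create_time, update_time) "
                             + "VALUES (?, ?, now(), now())",
                     Statement.RETURN_GENERATED_KEYS)) {
            ps.setString(1, "demo");
            ps.setString(2, "demo");
            ps.executeUpdate();
            // The id column is filled by t_ds_queue_id_sequence; read it back here.
            try (ResultSet keys = ps.getGeneratedKeys()) {
                if (keys.next()) {
                    System.out.println("generated id = " + keys.getLong("id"));
                }
            }
        }
    }
}
```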
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,686
[Bug] [MasterServer] server restart fail after force killed
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened When I force-kill the worker server and then restart it, the new server stops because it is treated as a dead server. I also found that the registry session timeout still fired as usual within a few seconds, even though this was already a new server, so the master accepted the remove-node event and handled the host as a dead server. ### What you expected to happen The server can restart successfully. ### How to reproduce Force-kill the server and restart it. ### Anything else _No response_ ### Version dev ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7686
https://github.com/apache/dolphinscheduler/pull/7688
71ccb42697e197b2668a08a83eb9bb19261b4f5a
d3bd7309fb623d87ae7f35a01bdaec727a672be4
"2021-12-28T11:57:34Z"
java
"2021-12-29T07:35:03Z"
dolphinscheduler-master/src/main/java/org/apache/dolphinscheduler/server/master/registry/MasterRegistryClient.java
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.dolphinscheduler.server.master.registry;

import static org.apache.dolphinscheduler.common.Constants.REGISTRY_DOLPHINSCHEDULER_MASTERS;
import static org.apache.dolphinscheduler.common.Constants.REGISTRY_DOLPHINSCHEDULER_NODE;
import static org.apache.dolphinscheduler.common.Constants.SLEEP_TIME_MILLIS;

import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.IStoppable;
import org.apache.dolphinscheduler.common.enums.ExecutionStatus;
import org.apache.dolphinscheduler.common.enums.NodeType;
import org.apache.dolphinscheduler.common.enums.StateEvent;
import org.apache.dolphinscheduler.common.enums.StateEventType;
import org.apache.dolphinscheduler.common.model.Server;
import org.apache.dolphinscheduler.common.thread.ThreadUtils;
import org.apache.dolphinscheduler.common.utils.NetUtils;
import org.apache.dolphinscheduler.dao.entity.ProcessInstance;
import org.apache.dolphinscheduler.dao.entity.TaskInstance;
import org.apache.dolphinscheduler.registry.api.ConnectionState;
import org.apache.dolphinscheduler.remote.utils.NamedThreadFactory;
import org.apache.dolphinscheduler.server.builder.TaskExecutionContextBuilder;
import org.apache.dolphinscheduler.server.master.config.MasterConfig;
import org.apache.dolphinscheduler.server.master.runner.WorkflowExecuteThreadPool;
import org.apache.dolphinscheduler.server.registry.HeartBeatTask;
import org.apache.dolphinscheduler.server.utils.ProcessUtils;
import org.apache.dolphinscheduler.service.process.ProcessService;
import org.apache.dolphinscheduler.service.queue.entity.TaskExecutionContext;
import org.apache.dolphinscheduler.service.registry.RegistryClient;

import org.apache.commons.collections4.CollectionUtils;
import org.apache.commons.lang.StringUtils;

import java.time.Duration;
import java.util.Collections;
import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

import com.google.common.collect.Sets;

/**
 * zookeeper master client
 * <p>
 * single instance
 */
@Component
public class MasterRegistryClient {

    /**
     * logger
     */
    private static final Logger logger = LoggerFactory.getLogger(MasterRegistryClient.class);

    /**
     * process service
     */
    @Autowired
    private ProcessService processService;

    @Autowired
    private RegistryClient registryClient;

    /**
     * master config
     */
    @Autowired
    private MasterConfig masterConfig;

    /**
     * heartbeat executor
     */
    private ScheduledExecutorService heartBeatExecutor;

    @Autowired
    private WorkflowExecuteThreadPool workflowExecuteThreadPool;

    /**
     * master startup time, ms
     */
    private long startupTime;

    private String localNodePath;

    public void init() {
        this.startupTime = System.currentTimeMillis();
        this.heartBeatExecutor = Executors.newSingleThreadScheduledExecutor(new NamedThreadFactory("HeartBeatExecutor"));
    }

    public void start() {
        String nodeLock = Constants.REGISTRY_DOLPHINSCHEDULER_LOCK_FAILOVER_STARTUP_MASTERS;
        try {
            // create distributed lock with the root node path of the lock space as /dolphinscheduler/lock/failover/startup-masters
            registryClient.getLock(nodeLock);
            // master registry
            registry();
            String registryPath = getMasterPath();
            registryClient.handleDeadServer(Collections.singleton(registryPath), NodeType.MASTER, Constants.DELETE_OP);

            // init system node
            while (!registryClient.checkNodeExists(NetUtils.getHost(), NodeType.MASTER)) {
                ThreadUtils.sleep(SLEEP_TIME_MILLIS);
            }

            registryClient.subscribe(REGISTRY_DOLPHINSCHEDULER_NODE, new MasterRegistryDataListener());
        } catch (Exception e) {
            logger.error("master start up exception", e);
        } finally {
            registryClient.releaseLock(nodeLock);
        }
    }

    public void setRegistryStoppable(IStoppable stoppable) {
        registryClient.setStoppable(stoppable);
    }

    public void closeRegistry() {
        // TODO unsubscribe MasterRegistryDataListener
        deregister();
    }

    /**
     * remove master node path
     *
     * @param path     node path
     * @param nodeType node type
     * @param failover is failover
     */
    public void removeMasterNodePath(String path, NodeType nodeType, boolean failover) {
        logger.info("{} node deleted : {}", nodeType, path);

        if (StringUtils.isEmpty(path)) {
            logger.error("server down error: empty path: {}, nodeType:{}", path, nodeType);
            return;
        }

        String serverHost = registryClient.getHostByEventDataPath(path);
        if (StringUtils.isEmpty(serverHost)) {
            logger.error("server down error: unknown path: {}, nodeType:{}", path, nodeType);
            return;
        }

        String failoverPath = getFailoverLockPath(nodeType, serverHost);
        try {
            registryClient.getLock(failoverPath);

            if (!registryClient.exists(path)) {
                logger.info("path: {} not exists", path);
                // handle dead server
                registryClient.handleDeadServer(Collections.singleton(path), nodeType, Constants.ADD_OP);
            }

            //failover server
            if (failover) {
                failoverServerWhenDown(serverHost, nodeType);
            }
        } catch (Exception e) {
            logger.error("{} server failover failed, host:{}", nodeType, serverHost, e);
        } finally {
            registryClient.releaseLock(failoverPath);
        }
    }

    /**
     * remove worker node path
     *
     * @param path     node path
     * @param nodeType node type
     * @param failover is failover
     */
    public void removeWorkerNodePath(String path, NodeType nodeType, boolean failover) {
        logger.info("{} node deleted : {}", nodeType, path);
        try {
            String serverHost = null;
            if (!StringUtils.isEmpty(path)) {
                serverHost = registryClient.getHostByEventDataPath(path);
                if (StringUtils.isEmpty(serverHost)) {
                    logger.error("server down error: unknown path: {}", path);
                    return;
                }
                if (!registryClient.exists(path)) {
                    logger.info("path: {} not exists", path);
                    // handle dead server
                    registryClient.handleDeadServer(Collections.singleton(path), nodeType, Constants.ADD_OP);
                }
            }
            //failover server
            if (failover) {
                failoverServerWhenDown(serverHost, nodeType);
            }
        } catch (Exception e) {
            logger.error("{} server failover failed", nodeType, e);
        }
    }

    private boolean isNeedToHandleDeadServer(String host, NodeType nodeType, Duration sessionTimeout) {
        long sessionTimeoutMillis = Math.max(Constants.REGISTRY_SESSION_TIMEOUT, sessionTimeout.toMillis());
        List<Server> serverList = registryClient.getServerList(nodeType);
        if (CollectionUtils.isEmpty(serverList)) {
            return true;
        }
        Date startupTime = getServerStartupTime(serverList, host);
        if (startupTime == null) {
            return true;
        }
        if (System.currentTimeMillis() - startupTime.getTime() > sessionTimeoutMillis) {
            return true;
        }
        return false;
    }

    /**
     * failover server when server down
     *
     * @param serverHost server host
     * @param nodeType   zookeeper node type
     */
    private void failoverServerWhenDown(String serverHost, NodeType nodeType) {
        switch (nodeType) {
            case MASTER:
                failoverMaster(serverHost);
                break;
            case WORKER:
                failoverWorker(serverHost);
                break;
            default:
                break;
        }
    }

    /**
     * get failover lock path
     *
     * @param nodeType zookeeper node type
     * @return fail over lock path
     */
    public String getFailoverLockPath(NodeType nodeType, String host) {
        switch (nodeType) {
            case MASTER:
                return Constants.REGISTRY_DOLPHINSCHEDULER_LOCK_FAILOVER_MASTERS + "/" + host;
            case WORKER:
                return Constants.REGISTRY_DOLPHINSCHEDULER_LOCK_FAILOVER_WORKERS + "/" + host;
            default:
                return "";
        }
    }

    /**
     * task needs failover if task start before worker starts
     *
     * @param workerServers worker servers
     * @param taskInstance  task instance
     * @return true if task instance need fail over
     */
    private boolean checkTaskInstanceNeedFailover(List<Server> workerServers, TaskInstance taskInstance) {
        boolean taskNeedFailover = true;

        //now no host will execute this task instance,so no need to failover the task
        if (taskInstance.getHost() == null) {
            return false;
        }

        //if task start after worker starts, there is no need to failover the task.
        if (checkTaskAfterWorkerStart(workerServers, taskInstance)) {
            taskNeedFailover = false;
        }

        return taskNeedFailover;
    }

    /**
     * check task start after the worker server starts.
     *
     * @param taskInstance task instance
     * @return true if task instance start time after worker server start date
     */
    private boolean checkTaskAfterWorkerStart(List<Server> workerServers, TaskInstance taskInstance) {
        if (StringUtils.isEmpty(taskInstance.getHost())) {
            return false;
        }
        Date workerServerStartDate = getServerStartupTime(workerServers, taskInstance.getHost());
        if (workerServerStartDate != null) {
            if (taskInstance.getStartTime() == null) {
                return taskInstance.getSubmitTime().after(workerServerStartDate);
            } else {
                return taskInstance.getStartTime().after(workerServerStartDate);
            }
        }
        return false;
    }

    /**
     * get server startup time
     */
    private Date getServerStartupTime(List<Server> servers, String host) {
        if (CollectionUtils.isEmpty(servers)) {
            return null;
        }
        Date serverStartupTime = null;
        for (Server server : servers) {
            if (host.equals(server.getHost() + Constants.COLON + server.getPort())) {
                serverStartupTime = server.getCreateTime();
                break;
            }
        }
        return serverStartupTime;
    }

    /**
     * get server startup time
     */
    private Date getServerStartupTime(NodeType nodeType, String host) {
        if (StringUtils.isEmpty(host)) {
            return null;
        }
        List<Server> servers = registryClient.getServerList(nodeType);
        return getServerStartupTime(servers, host);
    }

    /**
     * failover worker tasks
     * <p>
     * 1. kill yarn job if there are yarn jobs in tasks.
     * 2. change task state from running to need failover.
     * 3. failover all tasks when workerHost is null
     *
     * @param workerHost worker host
     */
    private void failoverWorker(String workerHost) {
        if (StringUtils.isEmpty(workerHost)) {
            return;
        }

        List<Server> workerServers = registryClient.getServerList(NodeType.WORKER);

        long startTime = System.currentTimeMillis();
        List<TaskInstance> needFailoverTaskInstanceList = processService.queryNeedFailoverTaskInstances(workerHost);
        Map<Integer, ProcessInstance> processInstanceCacheMap = new HashMap<>();
        logger.info("start worker[{}] failover, task list size:{}", workerHost, needFailoverTaskInstanceList.size());
        for (TaskInstance taskInstance : needFailoverTaskInstanceList) {
            ProcessInstance processInstance = processInstanceCacheMap.get(taskInstance.getProcessInstanceId());
            if (processInstance == null) {
                processInstance = processService.findProcessInstanceDetailById(taskInstance.getProcessInstanceId());
                if (processInstance == null) {
                    logger.error("failover task instance error, processInstance {} of taskInstance {} is null",
                            taskInstance.getProcessInstanceId(), taskInstance.getId());
                    continue;
                }
                processInstanceCacheMap.put(processInstance.getId(), processInstance);
            }
            if (!checkTaskInstanceNeedFailover(workerServers, taskInstance)) {
                continue;
            }

            // only failover the task owned myself if worker down.
            if (!processInstance.getHost().equalsIgnoreCase(getLocalAddress())) {
                continue;
            }

            logger.info("failover task instance id: {}, process instance id: {}", taskInstance.getId(), taskInstance.getProcessInstanceId());
            failoverTaskInstance(processInstance, taskInstance);
        }
        logger.info("end worker[{}] failover, useTime:{}ms", workerHost, System.currentTimeMillis() - startTime);
    }

    /**
     * failover master
     * <p>
     * failover process instance and associated task instance
     *
     * @param masterHost master host
     */
    public void failoverMaster(String masterHost) {
        if (StringUtils.isEmpty(masterHost)) {
            return;
        }

        Date serverStartupTime = getServerStartupTime(NodeType.MASTER, masterHost);
        List<Server> workerServers = registryClient.getServerList(NodeType.WORKER);

        long startTime = System.currentTimeMillis();
        List<ProcessInstance> needFailoverProcessInstanceList = processService.queryNeedFailoverProcessInstances(masterHost);
        logger.info("start master[{}] failover, process list size:{}", masterHost, needFailoverProcessInstanceList.size());

        for (ProcessInstance processInstance : needFailoverProcessInstanceList) {
            if (Constants.NULL.equals(processInstance.getHost())) {
                continue;
            }

            List<TaskInstance> validTaskInstanceList = processService.findValidTaskListByProcessId(processInstance.getId());
            for (TaskInstance taskInstance : validTaskInstanceList) {
                if (Constants.NULL.equals(taskInstance.getHost())) {
                    continue;
                }
                if (taskInstance.getState().typeIsFinished()) {
                    continue;
                }
                if (!checkTaskInstanceNeedFailover(workerServers, taskInstance)) {
                    continue;
                }
                logger.info("failover task instance id: {}, process instance id: {}", taskInstance.getId(), taskInstance.getProcessInstanceId());
                failoverTaskInstance(processInstance, taskInstance);
            }

            if (serverStartupTime != null && processInstance.getRestartTime() != null
                    && processInstance.getRestartTime().after(serverStartupTime)) {
                continue;
            }

            logger.info("failover process instance id: {}", processInstance.getId());
            //updateProcessInstance host is null and insert into command
            processService.processNeedFailoverProcessInstances(processInstance);
        }

        logger.info("master[{}] failover end, useTime:{}ms", masterHost, System.currentTimeMillis() - startTime);
    }

    /**
     * failover task instance
     * <p>
     * 1. kill yarn job if there are yarn jobs in tasks.
     * 2. change task state from running to need failover.
     * 3. try to notify local master
     */
    private void failoverTaskInstance(ProcessInstance processInstance, TaskInstance taskInstance) {
        if (taskInstance == null) {
            logger.error("failover task instance error, taskInstance is null");
            return;
        }

        if (processInstance == null) {
            logger.error("failover task instance error, processInstance {} of taskInstance {} is null",
                    taskInstance.getProcessInstanceId(), taskInstance.getId());
            return;
        }

        taskInstance.setProcessInstance(processInstance);
        TaskExecutionContext taskExecutionContext = TaskExecutionContextBuilder.get()
                .buildTaskInstanceRelatedInfo(taskInstance)
                .buildProcessInstanceRelatedInfo(processInstance)
                .create();

        if (masterConfig.isKillYarnJobWhenTaskFailover()) {
            // only kill yarn job if exists , the local thread has exited
            ProcessUtils.killYarnJob(taskExecutionContext);
        }

        taskInstance.setState(ExecutionStatus.NEED_FAULT_TOLERANCE);
        processService.saveTaskInstance(taskInstance);

        StateEvent stateEvent = new StateEvent();
        stateEvent.setTaskInstanceId(taskInstance.getId());
        stateEvent.setType(StateEventType.TASK_STATE_CHANGE);
        stateEvent.setProcessInstanceId(processInstance.getId());
        stateEvent.setExecutionStatus(taskInstance.getState());
        workflowExecuteThreadPool.submitStateEvent(stateEvent);
    }

    /**
     * registry
     */
    public void registry() {
        String address = NetUtils.getAddr(masterConfig.getListenPort());
        localNodePath = getMasterPath();
        int masterHeartbeatInterval = masterConfig.getHeartbeatInterval();
        HeartBeatTask heartBeatTask = new HeartBeatTask(startupTime,
                masterConfig.getMaxCpuLoadAvg(),
                masterConfig.getReservedMemory(),
                Sets.newHashSet(getMasterPath()),
                Constants.MASTER_TYPE,
                registryClient);

        registryClient.persistEphemeral(localNodePath, heartBeatTask.getHeartBeatInfo());
        registryClient.addConnectionStateListener(this::handleConnectionState);
        this.heartBeatExecutor.scheduleAtFixedRate(heartBeatTask, masterHeartbeatInterval, masterHeartbeatInterval, TimeUnit.SECONDS);
        logger.info("master node : {} registry to ZK successfully with heartBeatInterval : {}s", address, masterHeartbeatInterval);
    }

    public void handleConnectionState(ConnectionState state) {
        switch (state) {
            case CONNECTED:
                logger.debug("registry connection state is {}", state);
                break;
            case SUSPENDED:
                logger.warn("registry connection state is {}, ready to retry connection", state);
                break;
            case RECONNECTED:
                logger.debug("registry connection state is {}, clean the node info", state);
                registryClient.persistEphemeral(localNodePath, "");
                break;
            case DISCONNECTED:
                logger.warn("registry connection state is {}, ready to stop myself", state);
                registryClient.getStoppable().stop("registry connection state is DISCONNECTED, stop myself");
                break;
            default:
        }
    }

    public void deregister() {
        try {
            String address = getLocalAddress();
            String localNodePath = getMasterPath();
            registryClient.remove(localNodePath);
            logger.info("master node : {} unRegistry to register center.", address);
            heartBeatExecutor.shutdown();
            logger.info("heartbeat executor shutdown");
            registryClient.close();
        } catch (Exception e) {
            logger.error("remove registry path exception ", e);
        }
    }

    /**
     * get master path
     */
    public String getMasterPath() {
        String address = getLocalAddress();
        return REGISTRY_DOLPHINSCHEDULER_MASTERS + "/" + address;
    }

    /**
     * get local address
     */
    public String getLocalAddress() {
        return NetUtils.getAddr(masterConfig.getListenPort());
    }
}
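The key guard introduced for this issue is `isNeedToHandleDeadServer`: a node-removed event is only treated as a real dead server when no live registration for the host exists, or when the registered startup time is older than one registry session timeout (so the expiry cannot belong to a freshly restarted instance). A self-contained sketch of that comparison, with a simplified `Server` holder standing in for `org.apache.dolphinscheduler.common.model.Server` and an assumed 60-second session timeout:

```java
import java.time.Duration;
import java.util.Date;
import java.util.List;

public class DeadServerCheckSketch {

    /** Simplified stand-in for org.apache.dolphinscheduler.common.model.Server. */
    static class Server {
        final String host;      // "ip:port"
        final Date createTime;  // startup time reported via heartbeat

        Server(String host, Date createTime) {
            this.host = host;
            this.createTime = createTime;
        }
    }

    /**
     * Returns true when the removed node should really be treated as dead:
     * either no live server re-registered under this host, or it has been up
     * longer than one session timeout, so the expiry event cannot belong to a
     * freshly restarted instance whose old session just timed out.
     */
    static boolean isNeedToHandleDeadServer(String host, List<Server> liveServers, Duration sessionTimeout) {
        Date startupTime = liveServers.stream()
                .filter(s -> s.host.equals(host))
                .map(s -> s.createTime)
                .findFirst()
                .orElse(null);
        if (startupTime == null) {
            return true; // nothing re-registered under this host -> dead
        }
        return System.currentTimeMillis() - startupTime.getTime() > sessionTimeout.toMillis();
    }

    public static void main(String[] args) {
        Duration sessionTimeout = Duration.ofSeconds(60); // assumed registry session timeout
        Server freshlyRestarted = new Server("192.168.0.2:1234", new Date());
        System.out.println(isNeedToHandleDeadServer("192.168.0.2:1234",
                List.of(freshlyRestarted), sessionTimeout)); // false: a restart, not a dead server
        System.out.println(isNeedToHandleDeadServer("192.168.0.3:1234",
                List.of(freshlyRestarted), sessionTimeout)); // true: no live registration
    }
}
```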
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,686
[Bug] [MasterServer] server restart fail after force killed
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened When I force-kill the worker server and then restart it, the new server stops because it is treated as a dead server. I also found that the registry session timeout still fired as usual within a few seconds, even though this was already a new server, so the master accepted the remove-node event and handled the host as a dead server. ### What you expected to happen The server can restart successfully. ### How to reproduce Force-kill the server and restart it. ### Anything else _No response_ ### Version dev ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7686
https://github.com/apache/dolphinscheduler/pull/7688
71ccb42697e197b2668a08a83eb9bb19261b4f5a
d3bd7309fb623d87ae7f35a01bdaec727a672be4
"2021-12-28T11:57:34Z"
java
"2021-12-29T07:35:03Z"
dolphinscheduler-worker/src/main/java/org/apache/dolphinscheduler/server/worker/registry/WorkerRegistryClient.java
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.dolphinscheduler.server.worker.registry;

import static org.apache.dolphinscheduler.common.Constants.DEFAULT_WORKER_GROUP;
import static org.apache.dolphinscheduler.common.Constants.REGISTRY_DOLPHINSCHEDULER_WORKERS;
import static org.apache.dolphinscheduler.common.Constants.SINGLE_SLASH;

import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.common.IStoppable;
import org.apache.dolphinscheduler.common.enums.NodeType;
import org.apache.dolphinscheduler.common.utils.NetUtils;
import org.apache.dolphinscheduler.remote.utils.NamedThreadFactory;
import org.apache.dolphinscheduler.server.registry.HeartBeatTask;
import org.apache.dolphinscheduler.server.worker.config.WorkerConfig;
import org.apache.dolphinscheduler.server.worker.runner.WorkerManagerThread;
import org.apache.dolphinscheduler.service.registry.RegistryClient;

import org.apache.commons.lang.StringUtils;

import java.io.IOException;
import java.util.Set;
import java.util.StringJoiner;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import javax.annotation.PostConstruct;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import com.google.common.collect.Sets;

/**
 * worker registry
 */
@Service
public class WorkerRegistryClient {

    private final Logger logger = LoggerFactory.getLogger(WorkerRegistryClient.class);

    /**
     * worker config
     */
    @Autowired
    private WorkerConfig workerConfig;

    /**
     * worker manager
     */
    @Autowired
    private WorkerManagerThread workerManagerThread;

    /**
     * heartbeat executor
     */
    private ScheduledExecutorService heartBeatExecutor;

    @Autowired
    private RegistryClient registryClient;

    /**
     * worker startup time, ms
     */
    private long startupTime;

    private Set<String> workerGroups;

    @PostConstruct
    public void initWorkRegistry() {
        this.workerGroups = workerConfig.getGroups();
        this.startupTime = System.currentTimeMillis();
        this.heartBeatExecutor = Executors.newSingleThreadScheduledExecutor(new NamedThreadFactory("HeartBeatExecutor"));
    }

    /**
     * registry
     */
    public void registry() {
        String address = NetUtils.getAddr(workerConfig.getListenPort());
        Set<String> workerZkPaths = getWorkerZkPaths();
        int workerHeartbeatInterval = workerConfig.getHeartbeatInterval();

        for (String workerZKPath : workerZkPaths) {
            registryClient.persistEphemeral(workerZKPath, "");
            logger.info("worker node : {} registry to ZK {} successfully", address, workerZKPath);
        }

        HeartBeatTask heartBeatTask = new HeartBeatTask(startupTime,
                workerConfig.getMaxCpuLoadAvg(),
                workerConfig.getReservedMemory(),
                workerConfig.getHostWeight(),
                workerZkPaths,
                Constants.WORKER_TYPE,
                registryClient,
                workerConfig.getExecThreads(),
                workerManagerThread.getThreadPoolQueueSize()
        );

        this.heartBeatExecutor.scheduleAtFixedRate(heartBeatTask, workerHeartbeatInterval, workerHeartbeatInterval, TimeUnit.SECONDS);
        logger.info("worker node : {} heartbeat interval {} s", address, workerHeartbeatInterval);
    }

    /**
     * remove registry info
     */
    public void unRegistry() throws IOException {
        try {
            String address = getLocalAddress();
            Set<String> workerZkPaths = getWorkerZkPaths();
            for (String workerZkPath : workerZkPaths) {
                registryClient.remove(workerZkPath);
                logger.info("worker node : {} unRegistry from ZK {}.", address, workerZkPath);
            }
        } catch (Exception ex) {
            logger.error("remove worker zk path exception", ex);
        }

        this.heartBeatExecutor.shutdownNow();
        logger.info("heartbeat executor shutdown");

        registryClient.close();
        logger.info("registry client closed");
    }

    /**
     * get worker path
     */
    public Set<String> getWorkerZkPaths() {
        Set<String> workerPaths = Sets.newHashSet();
        String address = getLocalAddress();

        for (String workGroup : this.workerGroups) {
            StringJoiner workerPathJoiner = new StringJoiner(SINGLE_SLASH);
            workerPathJoiner.add(REGISTRY_DOLPHINSCHEDULER_WORKERS);
            if (StringUtils.isEmpty(workGroup)) {
                workGroup = DEFAULT_WORKER_GROUP;
            }
            // trim and lower case is need
            workerPathJoiner.add(workGroup.trim().toLowerCase());
            workerPathJoiner.add(address);
            workerPaths.add(workerPathJoiner.toString());
        }
        return workerPaths;
    }

    public void handleDeadServer(Set<String> nodeSet, NodeType nodeType, String opType) {
        registryClient.handleDeadServer(nodeSet, nodeType, opType);
    }

    /**
     * get local address
     */
    private String getLocalAddress() {
        return NetUtils.getAddr(workerConfig.getListenPort());
    }

    public void setRegistryStoppable(IStoppable stoppable) {
        registryClient.setStoppable(stoppable);
    }
}
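`getWorkerZkPaths()` above registers one ephemeral path per worker group, normalizing group names before joining them into a path. A minimal sketch of that normalization, using an assumed root path constant (the real one is `REGISTRY_DOLPHINSCHEDULER_WORKERS`) and made-up group names:

```java
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.StringJoiner;

public class WorkerPathSketch {
    public static void main(String[] args) {
        String workersRoot = "/nodes/worker";          // assumed registry root for workers
        String address = "192.168.0.5:1234";           // worker ip:port
        Set<String> groups = Set.of("Default", " etl ", "");

        Set<String> paths = new LinkedHashSet<>();
        for (String group : groups) {
            // Empty group names fall back to "default"; names are trimmed and
            // lower-cased, mirroring the normalization in getWorkerZkPaths() above.
            String normalized = group == null || group.trim().isEmpty()
                    ? "default" : group.trim().toLowerCase();
            StringJoiner joiner = new StringJoiner("/");
            paths.add(joiner.add(workersRoot).add(normalized).add(address).toString());
        }
        paths.forEach(System.out::println); // e.g. /nodes/worker/default/192.168.0.5:1234
    }
}
```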
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,660
[Bug] [ui] Wrong create time of process definition version
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened On our process definition page, the version info dialog shows a wrong create time. It currently uses `process_definition.create_time`, while using `process_definition.update_time` would make more sense. ![image](https://user-images.githubusercontent.com/15820530/147523987-b5c82222-a6a5-47d1-a537-918fc46d6dcd.png) ### What you expected to happen We should display the process definition's `updateTime` instead of `createTime`. ### How to reproduce Go to web ui project -> process -> process definition, create a new process definition and save it, then add one more task and save again; you will see two versions in this process definition's `version info`, and both of them have the same create time. ### Anything else _No response_ ### Version 2.0.1 ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7660
https://github.com/apache/dolphinscheduler/pull/7662
d3bd7309fb623d87ae7f35a01bdaec727a672be4
abf3d001fc9cce2dc5a097a3b9902ce9d5e1007e
"2021-12-28T03:13:55Z"
java
"2021-12-29T09:25:00Z"
dolphinscheduler-ui/src/js/conf/home/pages/projects/pages/definition/pages/list/_source/versions.vue
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
<template>
  <div class="container">
    <div class="title-box">
      <span class="name">{{$t('Version Info')}}</span>
    </div>
    <div class="table-box" v-if="versionData.processDefinitionVersions.length > 0">
      <el-table :data="versionData.processDefinitionVersions" size="mini" style="width: 100%">
        <el-table-column type="index" :label="$t('#')" width="50"></el-table-column>
        <el-table-column prop="userName" :label="$t('Version')">
          <template slot-scope="scope">
            <span v-if="scope.row.version">
              <span v-if="scope.row.version === versionData.processDefinition.version" style="color: green"><strong>V{{scope.row.version}} {{$t('Current Version')}}</strong></span>
              <span v-else>V{{scope.row.version}}</span>
            </span>
            <span v-else>-</span>
          </template>
        </el-table-column>
        <el-table-column prop="description" :label="$t('Description')"></el-table-column>
        <el-table-column :label="$t('Create Time')" min-width="120">
          <template slot-scope="scope">
            <span>{{scope.row.createTime | formatDate}}</span>
          </template>
        </el-table-column>
        <el-table-column :label="$t('Operation')" width="100">
          <template slot-scope="scope">
            <el-tooltip :content="$t('Switch To This Version')" placement="top">
              <el-popconfirm
                :confirmButtonText="$t('Confirm')"
                :cancelButtonText="$t('Cancel')"
                icon="el-icon-info"
                iconColor="red"
                :title="$t('Confirm Switch To This Version?')"
                @onConfirm="_mVersionSwitchProcessDefinitionVersion(scope.row)"
              >
                <el-button :disabled="versionData.processDefinition.releaseState === 'ONLINE' || scope.row.version === versionData.processDefinition.version || isInstance" type="primary" size="mini" icon="el-icon-warning" circle slot="reference"></el-button>
              </el-popconfirm>
            </el-tooltip>
            <el-tooltip :content="$t('Delete')" placement="top">
              <el-popconfirm
                :confirmButtonText="$t('Confirm')"
                :cancelButtonText="$t('Cancel')"
                icon="el-icon-info"
                iconColor="red"
                :title="$t('Delete?')"
                @onConfirm="_mVersionDeleteProcessDefinitionVersion(scope.row)"
              >
                <el-button :disabled="scope.row.version === versionData.processDefinition.version || isInstance" type="danger" size="mini" icon="el-icon-delete" circle slot="reference"></el-button>
              </el-popconfirm>
            </el-tooltip>
          </template>
        </el-table-column>
      </el-table>
    </div>
    <div v-if="versionData.processDefinitionVersions.length === 0">
      <m-no-data><!----></m-no-data>
    </div>
    <div v-if="versionData.processDefinitionVersions.length > 0">
      <div class="bottom-box">
        <el-pagination
          style="float:right"
          background
          @current-change="_mVersionGetProcessDefinitionVersionsPage"
          layout="prev, pager, next"
          :total="versionData.total">
        </el-pagination>
        <el-button type="text" size="mini" @click="_close()" style="float:right">{{$t('Cancel')}}</el-button>
      </div>
    </div>
  </div>
</template>
<script>
  import mNoData from '@/module/components/noData/noData'

  export default {
    name: 'versions',
    data () {
      return {
        tableHeaders: [
          {
            label: 'version',
            prop: 'version'
          },
          {
            label: 'createTime',
            prop: 'createTime'
          }
        ]
      }
    },
    props: {
      isInstance: Boolean,
      versionData: Object
    },
    methods: {
      /**
       * switch version in process definition version list
       */
      _mVersionSwitchProcessDefinitionVersion (item) {
        this.$emit('mVersionSwitchProcessDefinitionVersion', {
          version: item.version,
          processDefinitionCode: this.versionData.processDefinition.code,
          fromThis: this
        })
      },
      /**
       * delete one version of process definition
       */
      _mVersionDeleteProcessDefinitionVersion (item) {
        this.$emit('mVersionDeleteProcessDefinitionVersion', {
          version: item.version,
          processDefinitionCode: this.versionData.processDefinition.code,
          fromThis: this
        })
      },
      /**
       * Paging event of process definition versions
       */
      _mVersionGetProcessDefinitionVersionsPage (val) {
        this.$emit('mVersionGetProcessDefinitionVersionsPage', {
          pageNo: val,
          pageSize: this.pageSize,
          processDefinitionCode: this.versionData.processDefinition.code,
          fromThis: this
        })
      },
      /**
       * Close and destroy component and component internal events
       */
      _close () {
        // flag Whether to delete a node
        this.$destroy()
        this.$emit('closeVersion')
      }
    },
    created () {
    },
    mounted () {
    },
    components: { mNoData }
  }
</script>

<style lang="scss" rel="stylesheet/scss">
  .container {
    width: 500px;
    position: relative;
    .title-box {
      height: 61px;
      border-bottom: 1px solid #DCDEDC;
      position: relative;
      .name {
        position: absolute;
        left: 24px;
        top: 18px;
        font-size: 16px;
      }
    }
    .bottom-box {
      position: absolute;
      bottom: 0;
      left: 0;
      width: 100%;
      text-align: right;
      height: 60px;
      line-height: 60px;
      border-top: 1px solid #DCDEDC;
      background: #fff;
      .ans-page {
        display: inline-block;
      }
    }
    .table-box {
      overflow-y: scroll;
      height: calc(100vh - 61px);
      padding-bottom: 60px;
    }
  }
</style>
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,512
[BUG][UI] dependent node ui dislocation
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened ![](https://files.catbox.moe/a88qre.png) ### What you expected to happen above ### How to reproduce above ### Anything else _No response_ ### Version 2.0.1-release ### Are you willing to submit PR? - [x] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7512
https://github.com/apache/dolphinscheduler/pull/7716
0bc4f9b5de4f7ab0275ee77e3d769f1e58792962
1cd5162e4e9ba1ab5a3c3b254f8bc89bb9da94b9
"2021-12-21T03:18:13Z"
java
"2021-12-30T02:34:10Z"
dolphinscheduler-ui/src/js/conf/home/pages/dag/_source/formModel/_source/dependentTimeout.vue
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
<template>
  <div class="timeout-alarm-model">
    <div class="clearfix list">
      <div class="text-box">
        <span>{{$t('Timeout alarm')}}</span>
      </div>
      <div class="cont-box">
        <label class="label-box">
          <div style="padding-top: 5px;">
            <el-switch v-model="enable" size="small" @change="_onSwitch(0, $event)" :disabled="isDetails"></el-switch>
          </div>
        </label>
      </div>
    </div>
    <div class="clearfix list" v-if="enable">
      <div class="text-box">
        <span>{{$t('Waiting Dependent start')}}</span>
      </div>
      <div class="cont-box">
        <label class="label-box">
          <div style="padding: 5px 0;">
            <el-switch v-model="waitStartTimeout.enable" size="small" @change="_onSwitch(1, $event)" :disabled="isDetails"></el-switch>
          </div>
        </label>
      </div>
    </div>
    <div class="clearfix list" v-if="enable && waitStartTimeout.enable">
      <div class="cont-box">
        <label class="label-box">
          <span class="text-box">
            <span>{{$t('Timeout period')}}</span>
          </span>
          <el-input v-model="waitStartTimeout.interval" size="small" style="width: 100px;" :disabled="isDetails" maxlength="9">
            <span slot="append">{{$t('Minute')}}</span>
          </el-input>
          <span class="text-box">
            <span>{{$t('Check interval')}}</span>
          </span>
          <el-input v-model="waitStartTimeout.checkInterval" size="small" style="width: 100px;" :disabled="isDetails" maxlength="9">
            <span slot="append">{{$t('Minute')}}</span>
          </el-input>
          <span class="text-box">
            <span>{{$t('Timeout strategy')}}</span>
          </span>
          <div style="padding-top: 6px;">
            <el-checkbox-group size="small" v-model="waitStartTimeout.strategy">
              <el-checkbox label="FAILED" :disabled="true">{{$t('Timeout failure')}}</el-checkbox>
            </el-checkbox-group>
          </div>
        </label>
      </div>
    </div>
    <div class="clearfix list" v-if="enable">
      <div class="text-box">
        <span>{{$t('Waiting Dependent complete')}}</span>
      </div>
      <div class="cont-box">
        <label class="label-box">
          <div style="padding: 5px 0;">
            <el-switch v-model="waitCompleteTimeout.enable" size="small" @change="_onSwitch(2, $event)" :disabled="isDetails"></el-switch>
          </div>
        </label>
      </div>
    </div>
    <div class="clearfix list" v-if="enable && waitCompleteTimeout.enable">
      <div class="cont-box">
        <label class="label-box">
          <span class="text-box">
            <span>{{$t('Timeout period')}}</span>
          </span>
          <el-input v-model="waitCompleteTimeout.interval" size="small" style="width: 100px;" :disabled="isDetails" maxlength="9">
            <span slot="append">{{$t('Minute')}}</span>
          </el-input>
          <span class="text-box">
            <span>{{$t('Timeout strategy')}}</span>
          </span>
          <div style="padding-top: 6px;">
            <el-checkbox-group size="small" v-model="waitCompleteTimeout.strategy">
              <el-checkbox label="WARN" :disabled="isDetails">{{$t('Timeout alarm')}}</el-checkbox>
              <el-checkbox label="FAILED" :disabled="isDetails">{{$t('Timeout failure')}}</el-checkbox>
            </el-checkbox-group>
          </div>
        </label>
      </div>
    </div>
  </div>
</template>
<script>
  import _ from 'lodash'
  import disabledState from '@/module/mixin/disabledState'

  export default {
    name: 'form-dependent-timeout',
    data () {
      return {
        // Timeout display hiding
        enable: false,
        waitStartTimeout: {
          enable: false,
          // Timeout strategy
          strategy: ['FAILED'],
          // Timeout period
          interval: null,
          checkInterval: null
        },
        waitCompleteTimeout: {
          enable: false,
          // Timeout strategy
          strategy: [],
          // Timeout period
          interval: null
        }
      }
    },
    mixins: [disabledState],
    props: {
      backfillItem: Object
    },
    methods: {
      _onSwitch (p, is) {
        // reset timeout setting when switch timeout on/off.
        // p = 0 for timeout switch; p = 1 for wait start timeout switch; p = 2 for wait complete timeout switch.
        if (p === 1 || p === 0) {
          this.waitStartTimeout.interval = is ? 30 : null
          this.waitStartTimeout.checkInterval = is ? 1 : null
        }
        if (p === 2 || p === 0) {
          this.waitCompleteTimeout.strategy = is ? ['WARN'] : []
          this.waitCompleteTimeout.interval = is ? 30 : null
        }
      },
      _verification () {
        // Verification timeout policy
        // (parentheses added: without them the `&&`/`||` precedence skipped the `enable` guard)
        if (this.enable &&
          ((this.waitCompleteTimeout.enable && !this.waitCompleteTimeout.strategy.length) ||
            (this.waitStartTimeout.enable && !this.waitStartTimeout.strategy.length))) {
          this.$message.warning(`${this.$t('Timeout strategy must be selected')}`)
          return false
        }
        // Verify timeout duration Non 0 positive integer
        const reg = /^[1-9]\d*$/
        if (this.enable &&
          ((this.waitCompleteTimeout.enable && !reg.test(this.waitCompleteTimeout.interval)) ||
            (this.waitStartTimeout.enable && (!reg.test(this.waitStartTimeout.interval) || !reg.test(this.waitStartTimeout.checkInterval))))) {
          this.$message.warning(`${this.$t('Timeout must be a positive integer')}`)
          return false
        }
        // Verify timeout duration longer than check interval
        if (this.enable && this.waitStartTimeout.enable && this.waitStartTimeout.checkInterval >= this.waitStartTimeout.interval) {
          this.$message.warning(`${this.$t('Timeout must be longer than check interval')}`)
          return false
        }
        this.$emit('on-timeout', {
          waitStartTimeout: {
            strategy: 'FAILED',
            interval: parseInt(this.waitStartTimeout.interval),
            checkInterval: parseInt(this.waitStartTimeout.checkInterval),
            enable: this.waitStartTimeout.enable
          },
          waitCompleteTimeout: {
            strategy: (() => {
              // Handling checkout sequence
              let strategy = this.waitCompleteTimeout.strategy
              if (strategy.length === 2 && strategy[0] === 'FAILED') {
                return [strategy[1], strategy[0]].join(',')
              } else {
                return strategy.join(',')
              }
            })(),
            interval: parseInt(this.waitCompleteTimeout.interval),
            enable: this.waitCompleteTimeout.enable
          }
        })
        return true
      }
    },
    watch: {
    },
    created () {
      let o = this.backfillItem
      // Non-null objects represent backfill
      if (!_.isEmpty(o)) {
        if (o.timeout) {
          this.enable = true
          this.waitCompleteTimeout.enable = o.timeout.enable || false
          this.waitCompleteTimeout.strategy = _.split(o.timeout.strategy, ',') || ['WARN']
          this.waitCompleteTimeout.interval = o.timeout.interval || null
        }
        if (o.waitStartTimeout) {
          this.enable = true
          this.waitStartTimeout.enable = o.waitStartTimeout.enable || false
          this.waitStartTimeout.strategy = ['FAILED']
          this.waitStartTimeout.interval = o.waitStartTimeout.interval || null
          this.waitStartTimeout.checkInterval = o.waitStartTimeout.checkInterval || null
        }
      }
    },
    mounted () {
    },
    components: {}
  }
</script>
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,713
[Feature] [Log] The data source password in the log is not encrypted
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened The data source password in the log is not encrypted. ### What you expected to happen It should be encrypted or replaced with "****" ### How to reproduce See "What happened" ### Anything else _No response_ ### Version dev ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7713
https://github.com/apache/dolphinscheduler/pull/7728
68906f1b31d60e38414de6a82feff5232e4a00b4
73993e98ee272ccbd1a0bb160eda5c7557ddc21e
"2021-12-29T10:47:52Z"
java
"2021-12-30T08:09:59Z"
dolphinscheduler-api/src/main/java/org/apache/dolphinscheduler/api/aspect/AccessLogAspect.java
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.dolphinscheduler.api.aspect;

import org.apache.dolphinscheduler.common.Constants;
import org.apache.dolphinscheduler.dao.entity.User;
import org.apache.dolphinscheduler.spi.utils.StringUtils;

import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Set;
import java.util.UUID;
import java.util.stream.Collectors;

import javax.servlet.http.HttpServletRequest;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.aspectj.lang.reflect.MethodSignature;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;
import org.springframework.web.context.request.RequestContextHolder;
import org.springframework.web.context.request.ServletRequestAttributes;

@Aspect
@Component
public class AccessLogAspect {
    private static final Logger logger = LoggerFactory.getLogger(AccessLogAspect.class);

    private static final String TRACE_ID = "traceId";

    @Pointcut("@annotation(org.apache.dolphinscheduler.api.aspect.AccessLogAnnotation)")
    public void logPointCut(){
        // Do nothing because of it's a pointcut
    }

    @Around("logPointCut()")
    public Object doAround(ProceedingJoinPoint proceedingJoinPoint) throws Throwable {
        long startTime = System.currentTimeMillis();

        // fetch AccessLogAnnotation
        MethodSignature sign = (MethodSignature) proceedingJoinPoint.getSignature();
        Method method = sign.getMethod();
        AccessLogAnnotation annotation = method.getAnnotation(AccessLogAnnotation.class);

        String traceId = UUID.randomUUID().toString();

        // log request
        if (!annotation.ignoreRequest()) {
            ServletRequestAttributes attributes = (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
            if (attributes != null) {
                HttpServletRequest request = attributes.getRequest();
                String traceIdFromHeader = request.getHeader(TRACE_ID);
                if (!StringUtils.isEmpty(traceIdFromHeader)) {
                    traceId = traceIdFromHeader;
                }
                // handle login info
                String userName = parseLoginInfo(request);

                // handle args
                String argsString = parseArgs(proceedingJoinPoint, annotation);
                logger.info("REQUEST TRACE_ID:{}, LOGIN_USER:{}, URI:{}, METHOD:{}, HANDLER:{}, ARGS:{}",
                        traceId,
                        userName,
                        request.getRequestURI(),
                        request.getMethod(),
                        proceedingJoinPoint.getSignature().getDeclaringTypeName() + "." + proceedingJoinPoint.getSignature().getName(),
                        argsString);
            }
        }

        Object ob = proceedingJoinPoint.proceed();

        // log response
        if (!annotation.ignoreResponse()) {
            logger.info("RESPONSE TRACE_ID:{}, BODY:{}, REQUEST DURATION:{} milliseconds", traceId, ob, (System.currentTimeMillis() - startTime));
        }

        return ob;
    }

    private String parseArgs(ProceedingJoinPoint proceedingJoinPoint, AccessLogAnnotation annotation) {
        Object[] args = proceedingJoinPoint.getArgs();
        String argsString = Arrays.toString(args);
        if (annotation.ignoreRequestArgs().length > 0) {
            String[] parameterNames = ((MethodSignature) proceedingJoinPoint.getSignature()).getParameterNames();
            if (parameterNames.length > 0) {
                Set<String> ignoreSet = Arrays.stream(annotation.ignoreRequestArgs()).collect(Collectors.toSet());
                HashMap<String, Object> argsMap = new HashMap<>();

                for (int i = 0; i < parameterNames.length; i++) {
                    if (!ignoreSet.contains(parameterNames[i])) {
                        argsMap.put(parameterNames[i], args[i]);
                    }
                }
                argsString = argsMap.toString();
            }
        }
        return argsString;
    }

    private String parseLoginInfo(HttpServletRequest request) {
        String userName = "NOT LOGIN";
        User loginUser = (User) (request.getAttribute(Constants.SESSION_USER));
        if (loginUser != null) {
            userName = loginUser.getUserName();
        }
        return userName;
    }
}
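Issue #7713 above targets exactly this aspect: `parseArgs` renders controller arguments (including data source definitions) verbatim, so a plaintext password ends up in the access log. The merged fix is in PR #7728; the snippet below is only a minimal, self-contained sketch of the general masking idea, assuming a `password=value` shape in the rendered args string. The class name, regex, and mask are illustrative assumptions, not the project's actual code.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SensitiveArgsMaskerSketch {

    // Assumed key name and mask; matches the value that follows "password="
    // up to the next delimiter in a Map/Arrays.toString() style rendering.
    private static final Pattern PASSWORD_VALUE =
            Pattern.compile("(?<=password=)[^,\\]}]*", Pattern.CASE_INSENSITIVE);

    /** Replaces any password value in a rendered args string with a fixed mask. */
    public static String mask(String argsString) {
        if (argsString == null) {
            return null;
        }
        Matcher matcher = PASSWORD_VALUE.matcher(argsString);
        return matcher.replaceAll("********");
    }

    public static void main(String[] args) {
        String rendered = "{name=mysql_ds, userName=root, password=root123, database=test}";
        // prints: {name=mysql_ds, userName=root, password=********, database=test}
        System.out.println(mask(rendered));
    }
}
```

In `doAround`, the result of `parseArgs(...)` would be passed through such a masker before the `logger.info` call, so the scrubbing happens in one place regardless of which controller method is annotated.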
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,713
[Feature] [Log] The data source password in the log is not encrypted
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened The data source password in the log is not encrypted. ### What you expected to happen It should be encrypted or replaced with "****" ### How to reproduce See "What happened" ### Anything else _No response_ ### Version dev ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7713
https://github.com/apache/dolphinscheduler/pull/7728
68906f1b31d60e38414de6a82feff5232e4a00b4
73993e98ee272ccbd1a0bb160eda5c7557ddc21e
"2021-12-29T10:47:52Z"
java
"2021-12-30T08:09:59Z"
dolphinscheduler-api/src/test/java/org/apache/dolphinscheduler/api/aspect/AccessLogAspectTest.java
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,510
[Bug] [UI] File Manage: when returning to the parent folder, the UI sends "undefined" to the backend
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened Issue: the parent folder cannot be displayed normally when it is clicked from a child folder. Create nested folders (at least 2 levels), then click the parent folder from within the child folder; an exception occurs ![image](https://user-images.githubusercontent.com/25214080/146863720-717c03f0-eb96-43da-9343-1d57819e4bbb.png) ### What you expected to happen The parent folder content should be displayed ### How to reproduce Create nested folders (at least 2 levels), then click the parent folder from within the child folder; the exception occurs. version: 2.0.0 release ### Anything else _No response_ ### Version 2.0.0 ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7510
https://github.com/apache/dolphinscheduler/pull/7747
b2e6b69b0902ddd0dac9aca12ca40224dd90cfaa
1f97a62411da2a2f9b43c6a54fe9a2a8c6929411
"2021-12-21T03:02:36Z"
java
"2021-12-31T05:14:07Z"
dolphinscheduler-ui/src/js/conf/home/pages/resource/pages/file/pages/subdirectory/index.vue
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
<template>
  <div class="home-main list-construction-model">
    <div class="content-title">
      <a class="bread" style="padding-left: 15px;" @click="() => $router.push({path: `/resource/file`})">{{$t('File Manage')}}</a>
      <a class="bread" v-for="(item,$index) in breadList" :key="$index" @click="_ckOperation($index)">{{'>'+item}}</a>
    </div>
    <div class="conditions-box">
      <m-conditions @on-conditions="_onConditions">
        <template slot="button-group">
          <el-button-group>
            <el-button size="mini" @click="() => $router.push({path: `/resource/file/subFileFolder/${searchParams.id}`})">{{$t('Create folder')}}</el-button>
            <el-button size="mini" @click="() => $router.push({path: `/resource/file/subFile/${searchParams.id}`})">{{$t('Create File')}}</el-button>
            <el-button size="mini" @click="_uploading">{{$t('Upload Files')}}</el-button>
          </el-button-group>
        </template>
      </m-conditions>
    </div>
    <div class="list-box">
      <template v-if="fileResourcesList.length || total>0">
        <m-list @on-update="_onUpdate" @on-updateList="_updateList" :file-resources-list="fileResourcesList" :page-no="searchParams.pageNo" :page-size="searchParams.pageSize">
        </m-list>
        <div class="page-box">
          <el-pagination
            background
            @current-change="_page"
            @size-change="_pageSize"
            :page-size="searchParams.pageSize"
            :current-page.sync="searchParams.pageNo"
            :page-sizes="[10, 30, 50]"
            layout="sizes, prev, pager, next, jumper"
            :total="total">
          </el-pagination>
        </div>
      </template>
      <template v-if="!fileResourcesList.length && total<=0">
        <m-no-data></m-no-data>
      </template>
      <m-spin :is-spin="isLoading" :is-left="isLeft">
      </m-spin>
    </div>
  </div>
</template>
<script>
  import _ from 'lodash'
  import { mapActions } from 'vuex'
  import mList from './_source/list'
  import localStore from '@/module/util/localStorage'
  import mSpin from '@/module/components/spin/spin'
  import { findComponentDownward } from '@/module/util/'
  import mNoData from '@/module/components/noData/noData'
  import listUrlParamHandle from '@/module/mixin/listUrlParamHandle'
  import mConditions from '@/module/components/conditions/conditions'

  export default {
    name: 'resource-list-index-FILE',
    data () {
      return {
        total: null,
        isLoading: false,
        fileResourcesList: [],
        searchParams: {
          id: this.$route.params.id,
          pageSize: 10,
          pageNo: 1,
          searchVal: '',
          type: 'FILE'
        },
        isLeft: true,
        breadList: []
      }
    },
    mixins: [listUrlParamHandle],
    props: {},
    methods: {
      ...mapActions('resource', ['getResourcesListP', 'getResourceId']),
      /**
       * File Upload
       */
      _uploading () {
        findComponentDownward(this.$root, 'roof-nav')._fileChildUpdate('FILE', this.searchParams.id)
      },
      _onConditions (o) {
        this.searchParams = _.assign(this.searchParams, o)
        this.searchParams.pageNo = 1
      },
      _page (val) {
        this.searchParams.pageNo = val
      },
      _pageSize (val) {
        this.searchParams.pageSize = val
      },
      _getList (flag) {
        if (sessionStorage.getItem('isLeft') === 0) {
          this.isLeft = false
        } else {
          this.isLeft = true
        }
        this.isLoading = !flag
        this.searchParams.id = this.$route.params.id
        this.getResourcesListP(this.searchParams).then(res => {
          if (this.searchParams.pageNo > 1 && res.totalList.length === 0) {
            this.searchParams.pageNo = this.searchParams.pageNo - 1
          } else {
            this.fileResourcesList = res.totalList
            this.total = res.total
            this.isLoading = false
          }
        }).catch(e => {
          this.isLoading = false
        })
      },
      _updateList (data) {
        this.searchParams.id = data
        this.searchParams.pageNo = 1
        this.searchParams.searchVal = ''
        this._debounceGET()
      },
      _onUpdate () {
        this.searchParams.id = this.$route.params.id
        this._debounceGET()
      },
      _ckOperation (index) {
        let breadName = ''
        this.breadList.forEach((item, i) => {
          if (i <= index) {
            breadName = breadName + '/' + item
          }
        })
        this.transferApi(breadName)
      },
      transferApi (api) {
        this.getResourceId({
          type: 'FILE',
          fullName: api
        }).then(res => {
          localStore.setItem('currentDir', `${res.fullName}`)
          this.$router.push({ path: `/resource/file/subdirectory/${res.id}` })
        }).catch(e => {
          this.$message.error(e.msg || '')
        })
      }
    },
    watch: {
      // router
      '$route' (a) {
        // url no params get instance list
        this.searchParams.pageNo = _.isEmpty(a.query) ? 1 : a.query.pageNo
        this.searchParams.id = a.params.id
        let dir = localStore.getItem('currentDir').split('/')
        dir.shift()
        this.breadList = dir
      }
    },
    created () {},
    mounted () {
      let dir = localStore.getItem('currentDir').split('/')
      dir.shift()
      this.breadList = dir
    },
    beforeDestroy () {
      sessionStorage.setItem('isLeft', 1)
    },
    components: { mConditions, mList, mSpin, mNoData }
  }
</script>

<style lang="scss" rel="stylesheet/scss">
  .bread {
    font-size: 22px;
    padding-top: 10px;
    color: #2a455b;
    display: inline-block;
    cursor: pointer;
  }
</style>
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,459
[Bug][Sqoop] running error in parallel
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened ![](https://files.catbox.moe/pxokv2.jpg) The sqoop task is missing --target-dir in 2.0.1-release, causing errors when running in parallel with the same source table in different databases. ### What you expected to happen The task runs successfully ### How to reproduce See above ### Anything else _No response_ ### Version 2.0.1-release ### Are you willing to submit PR? - [x] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7459
https://github.com/apache/dolphinscheduler/pull/7752
d7d13f7f51c5a2d6fd19076018ff5528f140c49a
c58dbefaa50224f60d581980a43ba3d2ee4634a2
"2021-12-17T03:43:39Z"
java
"2021-12-31T08:26:07Z"
dolphinscheduler-task-plugin/dolphinscheduler-task-sqoop/src/main/java/org/apache/dolphinscheduler/plugin/task/sqoop/generator/targets/HiveTargetGenerator.java
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.dolphinscheduler.plugin.task.sqoop.generator.targets;

import static org.apache.dolphinscheduler.plugin.task.sqoop.SqoopConstants.CREATE_HIVE_TABLE;
import static org.apache.dolphinscheduler.plugin.task.sqoop.SqoopConstants.DELETE_TARGET_DIR;
import static org.apache.dolphinscheduler.plugin.task.sqoop.SqoopConstants.HIVE_DATABASE;
import static org.apache.dolphinscheduler.plugin.task.sqoop.SqoopConstants.HIVE_DELIMS_REPLACEMENT;
import static org.apache.dolphinscheduler.plugin.task.sqoop.SqoopConstants.HIVE_DROP_IMPORT_DELIMS;
import static org.apache.dolphinscheduler.plugin.task.sqoop.SqoopConstants.HIVE_IMPORT;
import static org.apache.dolphinscheduler.plugin.task.sqoop.SqoopConstants.HIVE_OVERWRITE;
import static org.apache.dolphinscheduler.plugin.task.sqoop.SqoopConstants.HIVE_PARTITION_KEY;
import static org.apache.dolphinscheduler.plugin.task.sqoop.SqoopConstants.HIVE_PARTITION_VALUE;
import static org.apache.dolphinscheduler.plugin.task.sqoop.SqoopConstants.HIVE_TABLE;
import static org.apache.dolphinscheduler.spi.task.TaskConstants.SPACE;

import org.apache.dolphinscheduler.plugin.task.sqoop.generator.ITargetGenerator;
import org.apache.dolphinscheduler.plugin.task.sqoop.parameter.SqoopParameters;
import org.apache.dolphinscheduler.plugin.task.sqoop.parameter.targets.TargetHiveParameter;
import org.apache.dolphinscheduler.spi.task.request.TaskRequest;
import org.apache.dolphinscheduler.spi.utils.JSONUtils;
import org.apache.dolphinscheduler.spi.utils.StringUtils;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * hive target generator
 */
public class HiveTargetGenerator implements ITargetGenerator {

    private static final Logger logger = LoggerFactory.getLogger(HiveTargetGenerator.class);

    @Override
    public String generate(SqoopParameters sqoopParameters, TaskRequest taskExecutionContext) {

        StringBuilder hiveTargetSb = new StringBuilder();

        try {
            TargetHiveParameter targetHiveParameter =
                JSONUtils.parseObject(sqoopParameters.getTargetParams(), TargetHiveParameter.class);
            if (null != targetHiveParameter) {
                hiveTargetSb.append(SPACE).append(HIVE_IMPORT);

                if (StringUtils.isNotEmpty(targetHiveParameter.getHiveDatabase())
                    && StringUtils.isNotEmpty(targetHiveParameter.getHiveTable())) {
                    hiveTargetSb.append(SPACE).append(HIVE_DATABASE)
                        .append(SPACE).append(targetHiveParameter.getHiveDatabase())
                        .append(SPACE).append(HIVE_TABLE)
                        .append(SPACE).append(targetHiveParameter.getHiveTable());
                }

                if (targetHiveParameter.isCreateHiveTable()) {
                    hiveTargetSb.append(SPACE).append(CREATE_HIVE_TABLE);
                }

                if (targetHiveParameter.isDropDelimiter()) {
                    hiveTargetSb.append(SPACE).append(HIVE_DROP_IMPORT_DELIMS);
                }

                if (targetHiveParameter.isHiveOverWrite()) {
                    hiveTargetSb.append(SPACE).append(HIVE_OVERWRITE)
                        .append(SPACE).append(DELETE_TARGET_DIR);
                }

                if (StringUtils.isNotEmpty(targetHiveParameter.getReplaceDelimiter())) {
                    hiveTargetSb.append(SPACE).append(HIVE_DELIMS_REPLACEMENT)
                        .append(SPACE).append(targetHiveParameter.getReplaceDelimiter());
                }

                if (StringUtils.isNotEmpty(targetHiveParameter.getHivePartitionKey())
                    && StringUtils.isNotEmpty(targetHiveParameter.getHivePartitionValue())) {
                    hiveTargetSb.append(SPACE).append(HIVE_PARTITION_KEY)
                        .append(SPACE).append(targetHiveParameter.getHivePartitionKey())
                        .append(SPACE).append(HIVE_PARTITION_VALUE)
                        .append(SPACE).append(targetHiveParameter.getHivePartitionValue());
                }
            }
        } catch (Exception e) {
            logger.error(String.format("Sqoop hive target params build failed: [%s]", e.getMessage()));
        }

        return hiveTargetSb.toString();
    }
}
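Note that this generator only emits `--delete-target-dir` (and only when overwrite is enabled) but never an explicit `--target-dir`, which is what issue #7459 describes: two parallel imports of an identically named source table fall back to the same default staging path. Below is a minimal sketch of the kind of option the fix in PR #7752 would need to add; the helper class, constant placement, and directory value are illustrative assumptions, not the merged change.

```java
/** Sketch: building an explicit per-task staging-dir option for the sqoop command. */
public class TargetDirOptionSketch {

    private static final String SPACE = " ";
    // "--target-dir" is sqoop's real flag; where the constant lives is an assumption.
    private static final String TARGET_DIR = "--target-dir";

    /** Returns " --target-dir <dir>" when a directory is configured, else "". */
    static String targetDirOption(String targetDir) {
        if (targetDir == null || targetDir.trim().isEmpty()) {
            return "";
        }
        return SPACE + TARGET_DIR + SPACE + targetDir;
    }

    public static void main(String[] args) {
        // A unique staging path per task instance keeps parallel imports of the
        // same source table from colliding on the default HDFS directory.
        System.out.println("sqoop import ..." + targetDirOption("/tmp/dolphinscheduler/sqoop/task_123"));
    }
}
```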
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,459
[Bug][Sqoop] running error in parallel
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened ![](https://files.catbox.moe/pxokv2.jpg) The sqoop task is missing --target-dir in 2.0.1-release, causing errors when running in parallel with the same source table in different databases. ### What you expected to happen The task runs successfully ### How to reproduce See above ### Anything else _No response_ ### Version 2.0.1-release ### Are you willing to submit PR? - [x] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7459
https://github.com/apache/dolphinscheduler/pull/7752
d7d13f7f51c5a2d6fd19076018ff5528f140c49a
c58dbefaa50224f60d581980a43ba3d2ee4634a2
"2021-12-17T03:43:39Z"
java
"2021-12-31T08:26:07Z"
dolphinscheduler-task-plugin/dolphinscheduler-task-sqoop/src/main/java/org/apache/dolphinscheduler/plugin/task/sqoop/parameter/targets/TargetHiveParameter.java
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.dolphinscheduler.plugin.task.sqoop.parameter.targets;

/**
 * target hive parameter
 */
public class TargetHiveParameter {

    /**
     * hive database
     */
    private String hiveDatabase;
    /**
     * hive table
     */
    private String hiveTable;
    /**
     * create hive table
     */
    private boolean createHiveTable;
    /**
     * drop delimiter
     */
    private boolean dropDelimiter;
    /**
     * hive overwrite
     */
    private boolean hiveOverWrite;
    /**
     * replace delimiter
     */
    private String replaceDelimiter;
    /**
     * hive partition key
     */
    private String hivePartitionKey;
    /**
     * hive partition value
     */
    private String hivePartitionValue;

    public String getHiveDatabase() {
        return hiveDatabase;
    }

    public void setHiveDatabase(String hiveDatabase) {
        this.hiveDatabase = hiveDatabase;
    }

    public String getHiveTable() {
        return hiveTable;
    }

    public void setHiveTable(String hiveTable) {
        this.hiveTable = hiveTable;
    }

    public boolean isCreateHiveTable() {
        return createHiveTable;
    }

    public void setCreateHiveTable(boolean createHiveTable) {
        this.createHiveTable = createHiveTable;
    }

    public boolean isDropDelimiter() {
        return dropDelimiter;
    }

    public void setDropDelimiter(boolean dropDelimiter) {
        this.dropDelimiter = dropDelimiter;
    }

    public boolean isHiveOverWrite() {
        return hiveOverWrite;
    }

    public void setHiveOverWrite(boolean hiveOverWrite) {
        this.hiveOverWrite = hiveOverWrite;
    }

    public String getReplaceDelimiter() {
        return replaceDelimiter;
    }

    public void setReplaceDelimiter(String replaceDelimiter) {
        this.replaceDelimiter = replaceDelimiter;
    }

    public String getHivePartitionKey() {
        return hivePartitionKey;
    }

    public void setHivePartitionKey(String hivePartitionKey) {
        this.hivePartitionKey = hivePartitionKey;
    }

    public String getHivePartitionValue() {
        return hivePartitionValue;
    }

    public void setHivePartitionValue(String hivePartitionValue) {
        this.hivePartitionValue = hivePartitionValue;
    }
}
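If the fix threads an explicit staging directory through the template job as sketched earlier, this parameter class would need a matching property so the JSON carried in `targetParams` can hold it. A hedged sketch follows; the `targetDir` field name mirrors the sqoop flag and is an assumption, not necessarily what PR #7752 merged.

```java
/** Sketch of the extra property on the hive target parameter. */
public class TargetHiveParameterSketch {

    /** Explicit HDFS staging directory for the import (assumed field name). */
    private String targetDir;

    public String getTargetDir() {
        return targetDir;
    }

    public void setTargetDir(String targetDir) {
        this.targetDir = targetDir;
    }
}
```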
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,459
[Bug][Sqoop] running error in parallel
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened ![](https://files.catbox.moe/pxokv2.jpg) sqoop task misssing --target-dir in 2.0.1-release. Causing error while running in parallel with the same source table in different database. ### What you expected to happen running successfully ### How to reproduce above ### Anything else _No response_ ### Version 2.0.1-release ### Are you willing to submit PR? - [x] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7459
https://github.com/apache/dolphinscheduler/pull/7752
d7d13f7f51c5a2d6fd19076018ff5528f140c49a
c58dbefaa50224f60d581980a43ba3d2ee4634a2
"2021-12-17T03:43:39Z"
java
"2021-12-31T08:26:07Z"
dolphinscheduler-ui/src/js/conf/home/pages/dag/_source/formModel/tasks/sqoop.vue
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ <template> <div class="sqoop-model"> <m-list-box> <div slot="text">{{$t('Custom Job')}}</div> <div slot="content"> <el-switch size="small" v-model="isCustomTask" @change="_onSwitch" :disabled="isDetails"></el-switch> </div> </m-list-box> <m-list-box v-show="isCustomTask"> <div slot="text">{{$t('Custom Script')}}</div> <div slot="content"> <div class="form-mirror"> <textarea id="code-shell-mirror" name="code-shell-mirror" style="opacity: 0;"></textarea> </div> </div> </m-list-box> <template v-if="!isCustomTask"> <m-list-box> <div slot="text">{{$t('Sqoop Job Name')}}</div> <div slot="content"> <el-input :disabled="isDetails" size="small" type="text" v-model="jobName" :placeholder="$t('Please enter Job Name(required)')"></el-input> </div> </m-list-box> <m-list-box> <div slot="text">{{$t('Direct')}}</div> <div slot="content"> <el-select style="width: 130px;" size="small" v-model="modelType" :disabled="isDetails" @change="_handleModelTypeChange"> <el-option v-for="city in modelTypeList" :key="city.code" :value="city.code" :label="city.code"> </el-option> </el-select> </div> </m-list-box> <m-list-box> <div slot="text" style="width: 110px;">{{$t('Hadoop Custom Params')}}</div> <div slot="content"> <m-local-params ref="refMapColumnHadoopParams" @on-local-params="_onHadoopCustomParams" :udp-list="hadoopCustomParams" :hide="false"> </m-local-params> </div> </m-list-box> <m-list-box> <div slot="text" style="width: 100px;">{{$t('Sqoop Advanced Parameters')}}</div> <div slot="content"> <m-local-params ref="refMapColumnAdvancedParams" @on-local-params="_onSqoopAdvancedParams" :udp-list="sqoopAdvancedParams" :hide="false"> </m-local-params> </div> </m-list-box> <m-list-box> <div slot="text" style="font-weight:bold">{{$t('Data Source')}}</div> </m-list-box> <hr style="margin-left: 60px;"> <m-list-box> <div slot="text">{{$t('Type')}}</div> <div slot="content"> <el-select style="width: 130px;" size="small" v-model="sourceType" :disabled="isDetails" @change="_handleSourceTypeChange"> <el-option v-for="city in sourceTypeList" :key="city.code" :value="city.code" :label="city.code"> </el-option> </el-select> </div> </m-list-box> <template v-if="sourceType === 'MYSQL'"> <m-list-box> <div slot="text">{{$t('Datasource')}}</div> <div slot="content"> <m-datasource ref="refSourceDs" @on-dsData="_onSourceDsData" :data="{type:sourceMysqlParams.srcType, typeList: [{id: 0, code: 'MYSQL', disabled: false}], datasource:sourceMysqlParams.srcDatasource }" > </m-datasource> </div> </m-list-box> <m-list-box> <div slot="text">{{$t('ModelType')}}</div> <div slot="content"> <el-radio-group v-model="srcQueryType" size="small" @change="_handleQueryType"> <el-radio label="0">{{$t('Form')}}</el-radio> <el-radio label="1">SQL</el-radio> </el-radio-group> </div> 
</m-list-box> <template v-if="sourceMysqlParams.srcQueryType === '0'"> <m-list-box> <div slot="text">{{$t('Table')}}</div> <div slot="content"> <el-input :disabled="isDetails" type="text" size="small" v-model="sourceMysqlParams.srcTable" :placeholder="$t('Please enter Mysql Table(required)')"> </el-input> </div> </m-list-box> <m-list-box> <div slot="text">{{$t('ColumnType')}}</div> <div slot="content"> <el-radio-group v-model="sourceMysqlParams.srcColumnType" size="small" style="vertical-align: sub;"> <el-radio label="0">{{$t('All Columns')}}</el-radio> <el-radio label="1">{{$t('Some Columns')}}</el-radio> </el-radio-group> </div> </m-list-box> <m-list-box v-if="sourceMysqlParams.srcColumnType === '1'"> <div slot="text">{{$t('Column')}}</div> <div slot="content"> <el-input :disabled="isDetails" type="text" size="small" v-model="sourceMysqlParams.srcColumns" :placeholder="$t('Please enter Columns (Comma separated)')"> </el-input> </div> </m-list-box> </template> </template> <template v-if="sourceType === 'HIVE'"> <m-list-box> <div slot="text">{{$t('Database')}}</div> <div slot="content"> <el-input :disabled="isDetails" type="text" size="small" v-model="sourceHiveParams.hiveDatabase" :placeholder="$t('Please enter Hive Database(required)')"> </el-input> </div> </m-list-box> <m-list-box> <div slot="text">{{$t('Table')}}</div> <div slot="content"> <el-input :disabled="isDetails" type="text" size="small" v-model="sourceHiveParams.hiveTable" :placeholder="$t('Please enter Hive Table(required)')"> </el-input> </div> </m-list-box> <m-list-box> <div slot="text">{{$t('Hive partition Keys')}}</div> <div slot="content"> <el-input :disabled="isDetails" type="text" size="small" v-model="sourceHiveParams.hivePartitionKey" :placeholder="$t('Please enter Hive Partition Keys')"> </el-input> </div> </m-list-box> <m-list-box> <div slot="text">{{$t('Hive partition Values')}}</div> <div slot="content"> <el-input :disabled="isDetails" type="text" size="small" v-model="sourceHiveParams.hivePartitionValue" :placeholder="$t('Please enter Hive Partition Values')"> </el-input> </div> </m-list-box> </template> <template v-if="sourceType === 'HDFS'"> <m-list-box> <div slot="text">{{$t('Export Dir')}}</div> <div slot="content"> <el-input :disabled="isDetails" type="text" size="small" v-model="sourceHdfsParams.exportDir" :placeholder="$t('Please enter Export Dir(required)')"> </el-input> </div> </m-list-box> </template> <template v-if="sourceType === 'MYSQL'"> <m-list-box v-show="srcQueryType === '1'"> <div slot="text">{{$t('SQL Statement')}}</div> <div slot="content"> <div class="form-mirror"> <textarea id="code-sqoop-mirror" name="code-sqoop-mirror" style="opacity: 0;"> </textarea> <a class="ans-modal-box-max"> <em class="el-icon-full-screen" @click="setEditorVal"></em> </a> </div> </div> </m-list-box> <m-list-box> <div slot="text">{{$t('Map Column Hive')}}</div> <div slot="content"> <m-local-params ref="refMapColumnHiveParams" @on-local-params="_onMapColumnHive" :udp-list="sourceMysqlParams.mapColumnHive" :hide="false"> </m-local-params> </div> </m-list-box> <m-list-box> <div slot="text">{{$t('Map Column Java')}}</div> <div slot="content"> <m-local-params ref="refMapColumnJavaParams" @on-local-params="_onMapColumnJava" :udp-list="sourceMysqlParams.mapColumnJava" :hide="false"> </m-local-params> </div> </m-list-box> </template> <m-list-box> <div slot="text" style="font-weight:bold">{{$t('Data Target')}}</div> </m-list-box> <hr style="margin-left: 60px;"> <m-list-box> <div slot="text">{{$t('Type')}}</div> <div 
slot="content"> <el-select style="width: 130px;" size="small" v-model="targetType" :disabled="isDetails"> <el-option v-for="city in targetTypeList" :key="city.code" :value="city.code" :label="city.code"> </el-option> </el-select> </div> </m-list-box> <template v-if="targetType === 'HIVE'"> <m-list-box> <div slot="text">{{$t('Database')}}</div> <div slot="content"> <el-input :disabled="isDetails" type="text" size="small" v-model="targetHiveParams.hiveDatabase" :placeholder="$t('Please enter Hive Database(required)')"> </el-input> </div> </m-list-box> <m-list-box> <div slot="text">{{$t('Table')}}</div> <div slot="content"> <el-input :disabled="isDetails" type="text" size="small" v-model="targetHiveParams.hiveTable" :placeholder="$t('Please enter Hive Table(required)')"> </el-input> </div> </m-list-box> <m-list-box> <div slot="text">{{$t('CreateHiveTable')}}</div> <div slot="content"> <el-switch v-model="targetHiveParams.createHiveTable" size="small"></el-switch> </div> </m-list-box> <m-list-box> <div slot="text">{{$t('DropDelimiter')}}</div> <div slot="content"> <el-switch v-model="targetHiveParams.dropDelimiter" size="small"></el-switch> </div> </m-list-box> <m-list-box> <div slot="text">{{$t('OverWriteSrc')}}</div> <div slot="content"> <el-switch v-model="targetHiveParams.hiveOverWrite" size="small"></el-switch> </div> </m-list-box> <m-list-box> <div slot="text">{{$t('ReplaceDelimiter')}}</div> <div slot="content"> <el-input :disabled="isDetails" type="text" size="small" v-model="targetHiveParams.replaceDelimiter" :placeholder="$t('Please enter Replace Delimiter')"> </el-input> </div> </m-list-box> <m-list-box> <div slot="text">{{$t('Hive partition Keys')}}</div> <div slot="content"> <el-input :disabled="isDetails" type="text" size="small" v-model="targetHiveParams.hivePartitionKey" :placeholder="$t('Please enter Hive Partition Keys')"> </el-input> </div> </m-list-box> <m-list-box> <div slot="text">{{$t('Hive partition Values')}}</div> <div slot="content"> <el-input :disabled="isDetails" type="text" size="small" v-model="targetHiveParams.hivePartitionValue" :placeholder="$t('Please enter Hive Partition Values')"> </el-input> </div> </m-list-box> </template> <template v-if="targetType === 'HDFS'"> <m-list-box> <div slot="text">{{$t('Target Dir')}}</div> <div slot="content"> <el-input :disabled="isDetails" type="text" size="small" v-model="targetHdfsParams.targetPath" :placeholder="$t('Please enter Target Dir(required)')"> </el-input> </div> </m-list-box> <m-list-box> <div slot="text">{{$t('DeleteTargetDir')}}</div> <div slot="content"> <el-switch v-model="targetHdfsParams.deleteTargetDir" size="small"></el-switch> </div> </m-list-box> <m-list-box> <div slot="text">{{$t('CompressionCodec')}}</div> <div slot="content"> <el-radio-group v-model="targetHdfsParams.compressionCodec" size="small"> <el-radio label="snappy">snappy</el-radio> <el-radio label="lzo">lzo</el-radio> <el-radio label="gzip">gzip</el-radio> <el-radio label="">no</el-radio> </el-radio-group> </div> </m-list-box> <m-list-box> <div slot="text">{{$t('FileType')}}</div> <div slot="content"> <el-radio-group v-model="targetHdfsParams.fileType" size="small"> <el-radio label="--as-avrodatafile">avro</el-radio> <el-radio label="--as-sequencefile">sequence</el-radio> <el-radio label="--as-textfile">text</el-radio> <el-radio label="--as-parquetfile">parquet</el-radio> </el-radio-group> </div> </m-list-box> <m-list-box> <div slot="text">{{$t('FieldsTerminated')}}</div> <div slot="content"> <el-input :disabled="isDetails" type="text" 
size="small" v-model="targetHdfsParams.fieldsTerminated" :placeholder="$t('Please enter Fields Terminated')"> </el-input> </div> </m-list-box> <m-list-box> <div slot="text">{{$t('LinesTerminated')}}</div> <div slot="content"> <el-input :disabled="isDetails" type="text" size="small" v-model="targetHdfsParams.linesTerminated" :placeholder="$t('Please enter Lines Terminated')"> </el-input> </div> </m-list-box> </template> <template v-if="targetType === 'MYSQL'"> <m-list-box> <div slot="text">{{$t('Datasource')}}</div> <div slot="content"> <m-datasource ref="refTargetDs" @on-dsData="_onTargetDsData" :data="{ type:targetMysqlParams.targetType, typeList: [{id: 0, code: 'MYSQL', disabled: false}], datasource:targetMysqlParams.targetDatasource }" > </m-datasource> </div> </m-list-box> <m-list-box> <div slot="text">{{$t('Table')}}</div> <div slot="content"> <el-input :disabled="isDetails" type="text" size="small" v-model="targetMysqlParams.targetTable" :placeholder="$t('Please enter Mysql Table(required)')"> </el-input> </div> </m-list-box> <m-list-box> <div slot="text">{{$t('Column')}}</div> <div slot="content"> <el-input :disabled="isDetails" type="text" size="small" v-model="targetMysqlParams.targetColumns" :placeholder="$t('Please enter Columns (Comma separated)')"> </el-input> </div> </m-list-box> <m-list-box> <div slot="text">{{$t('FieldsTerminated')}}</div> <div slot="content"> <el-input :disabled="isDetails" type="text" size="small" v-model="targetMysqlParams.fieldsTerminated" :placeholder="$t('Please enter Fields Terminated')"> </el-input> </div> </m-list-box> <m-list-box> <div slot="text">{{$t('LinesTerminated')}}</div> <div slot="content"> <el-input :disabled="isDetails" type="text" size="small" v-model="targetMysqlParams.linesTerminated" :placeholder="$t('Please enter Lines Terminated')"> </el-input> </div> </m-list-box> <m-list-box> <div slot="text">{{$t('IsUpdate')}}</div> <div slot="content"> <el-switch v-model="targetMysqlParams.isUpdate" size="small"></el-switch> </div> </m-list-box> <m-list-box v-show="targetMysqlParams.isUpdate"> <div slot="text">{{$t('UpdateKey')}}</div> <div slot="content"> <el-input :disabled="isDetails" type="text" size="small" v-model="targetMysqlParams.targetUpdateKey" :placeholder="$t('Please enter Update Key')"> </el-input> </div> </m-list-box> <m-list-box v-show="targetMysqlParams.isUpdate"> <div slot="text">{{$t('UpdateMode')}}</div> <div slot="content"> <el-radio-group v-model="targetMysqlParams.targetUpdateMode" size="small"> <el-radio label="updateonly">{{$t('OnlyUpdate')}}</el-radio> <el-radio label="allowinsert">{{$t('AllowInsert')}}</el-radio> </el-radio-group> </div> </m-list-box> </template> <m-list-box> <div slot="text">{{$t('Concurrency')}}</div> <div slot="content"> <el-input :disabled="isDetails" type="text" size="small" v-model="concurrency" :placeholder="$t('Please enter Concurrency')"> </el-input> </div> </m-list-box> </template> <m-list-box> <div slot="text">{{$t('Custom Parameters')}}</div> <div slot="content"> <m-local-params ref="refLocalParams" @on-local-params="_onLocalParams" :udp-list="localParams" :hide="false"> </m-local-params> </div> </m-list-box> <el-dialog :visible.sync="scriptBoxDialog" append-to-body="true" width="80%"> <m-script-box :item="item" @getSriptBoxValue="getSriptBoxValue" @closeAble="closeAble"></m-script-box> </el-dialog> </div> </template> <script> import _ from 'lodash' import i18n from '@/module/i18n' import mListBox from './_source/listBox' import mScriptBox from './_source/scriptBox' import mDatasource 
from './_source/datasource' import mLocalParams from './_source/localParams' import disabledState from '@/module/mixin/disabledState' import codemirror from '@/conf/home/pages/resource/pages/file/pages/_source/codemirror' let editor let shellEditor export default { name: 'sql', data () { return { /** * Is Custom Task */ isCustomTask: false, /** * Customer Params */ localParams: [], /** * Hadoop Custom Params */ hadoopCustomParams: [], /** * Sqoop Advanced Params */ sqoopAdvancedParams: [], /** * script */ customShell: '', /** * task name */ jobName: '', /** * mysql query type */ srcQueryType: '1', /** * source data source */ srcDatasource: '', /** * target data source */ targetDatasource: '', /** * concurrency */ concurrency: 1, /** * default job type */ jobType: 'TEMPLATE', /** * direct model type */ modelType: 'import', modelTypeList: [{ code: 'import' }, { code: 'export' }], sourceTypeList: [ { code: 'MYSQL' } ], targetTypeList: [ { code: 'HIVE' }, { code: 'HDFS' } ], sourceType: 'MYSQL', targetType: 'HDFS', sourceMysqlParams: { srcType: 'MYSQL', srcDatasource: '', srcTable: '', srcQueryType: '1', srcQuerySql: '', srcColumnType: '0', srcColumns: '', srcConditionList: [], mapColumnHive: [], mapColumnJava: [] }, sourceHdfsParams: { exportDir: '' }, sourceHiveParams: { hiveDatabase: '', hiveTable: '', hivePartitionKey: '', hivePartitionValue: '' }, targetHdfsParams: { targetPath: '', deleteTargetDir: true, fileType: '--as-avrodatafile', compressionCodec: 'snappy', fieldsTerminated: '', linesTerminated: '' }, targetMysqlParams: { targetType: 'MYSQL', targetDatasource: '', targetTable: '', targetColumns: '', fieldsTerminated: '', linesTerminated: '', preQuery: '', isUpdate: false, targetUpdateKey: '', targetUpdateMode: 'allowinsert' }, targetHiveParams: { hiveDatabase: '', hiveTable: '', createHiveTable: false, dropDelimiter: false, hiveOverWrite: true, replaceDelimiter: '', hivePartitionKey: '', hivePartitionValue: '' }, item: '', scriptBoxDialog: false } }, mixins: [disabledState], props: { backfillItem: Object }, methods: { setEditorVal () { this.item = editor.getValue() this.scriptBoxDialog = true }, getSriptBoxValue (val) { editor.setValue(val) }, _onSwitch (is) { if (is) { this.jobType = 'CUSTOM' this.isCustomTask = true setTimeout(() => { this._handlerShellEditor() }, 200) } else { this.jobType = 'TEMPLATE' this.isCustomTask = false if (this.srcQueryType === '1') { setTimeout(() => { this._handlerEditor() }, 200) } } }, _handleQueryType (o) { this.sourceMysqlParams.srcQueryType = this.srcQueryType this._getTargetTypeList(this.sourceType) this.targetType = this.targetTypeList[0].code if (this.srcQueryType === '1') { setTimeout(() => { this._handlerEditor() }, 200) } }, _handleModelTypeChange (a) { this._getSourceTypeList(a) this.sourceType = this.sourceTypeList[0].code this._handleSourceTypeChange({ label: this.sourceType, value: this.sourceType }) }, _handleSourceTypeChange (a) { this._getTargetTypeList(a.label) this.targetType = this.targetTypeList[0].code }, _getSourceTypeList (data) { switch (data) { case 'import': this.sourceTypeList = [ { code: 'MYSQL' } ] break case 'export': this.sourceTypeList = [ { code: 'HDFS' }, { code: 'HIVE' } ] break default: this.sourceTypeList = [ { code: 'MYSQL' }, { code: 'HIVE' }, { code: 'HDFS' } ] break } }, _getTargetTypeList (data) { switch (data) { case 'MYSQL': if (this.srcQueryType === '1') { this.targetTypeList = [ { code: 'HDFS' }] } else { this.targetTypeList = [ { code: 'HIVE' }, { code: 'HDFS' } ] } break case 'HDFS': this.targetTypeList 
= [ { code: 'MYSQL' } ] break case 'HIVE': this.targetTypeList = [ { code: 'MYSQL' } ] break default: this.targetTypeList = [ { code: 'HIVE' }, { code: 'HDFS' } ] break } }, _onMapColumnHive (a) { this.sourceMysqlParams.mapColumnHive = a }, _onMapColumnJava (a) { this.sourceMysqlParams.mapColumnJava = a }, /** * return data source */ _onSourceDsData (o) { this.sourceMysqlParams.srcType = o.type this.sourceMysqlParams.srcDatasource = o.datasource }, /** * return data source */ _onTargetDsData (o) { this.targetMysqlParams.targetType = o.type this.targetMysqlParams.targetDatasource = o.datasource }, /** * stringify the source params */ _handleSourceParams () { let params = null switch (this.sourceType) { case 'MYSQL': this.sourceMysqlParams.srcQuerySql = this.sourceMysqlParams.srcQueryType === '1' && editor ? editor.getValue() : this.sourceMysqlParams.srcQuerySql params = JSON.stringify(this.sourceMysqlParams) break case 'ORACLE': params = JSON.stringify(this.sourceOracleParams) break case 'HDFS': params = JSON.stringify(this.sourceHdfsParams) break case 'HIVE': params = JSON.stringify(this.sourceHiveParams) break default: params = '' break } return params }, /** * stringify the target params */ _handleTargetParams () { let params = null switch (this.targetType) { case 'HIVE': params = JSON.stringify(this.targetHiveParams) break case 'HDFS': params = JSON.stringify(this.targetHdfsParams) break case 'MYSQL': params = JSON.stringify(this.targetMysqlParams) break default: params = '' break } return params }, /** * get source params by source type */ _getSourceParams (data) { switch (this.sourceType) { case 'MYSQL': this.sourceMysqlParams = JSON.parse(data) this.srcDatasource = this.sourceMysqlParams.srcDatasource break case 'ORACLE': this.sourceOracleParams = JSON.parse(data) break case 'HDFS': this.sourceHdfsParams = JSON.parse(data) break case 'HIVE': this.sourceHiveParams = JSON.parse(data) break default: break } }, /** * get target params by target type */ _getTargetParams (data) { switch (this.targetType) { case 'HIVE': this.targetHiveParams = JSON.parse(data) break case 'HDFS': this.targetHdfsParams = JSON.parse(data) break case 'MYSQL': this.targetMysqlParams = JSON.parse(data) this.targetDatasource = this.targetMysqlParams.targetDatasource break default: break } }, /** * verification */ _verification () { // localParams Subcomponent verification if (!this.$refs.refLocalParams._verifProp()) { return false } let sqoopParams = { jobType: this.jobType, localParams: this.localParams } if (this.jobType === 'CUSTOM') { if (!shellEditor.getValue()) { this.$message.warning(`${i18n.$t('Please enter Custom Shell(required)')}`) return false } sqoopParams.customShell = shellEditor.getValue() } else { if (!this.jobName) { this.$message.warning(`${i18n.$t('Please enter Job Name(required)')}`) return false } switch (this.sourceType) { case 'MYSQL': if (!this.$refs.refSourceDs._verifDatasource()) { return false } if (this.srcQueryType === '1') { if (!editor.getValue()) { this.$message.warning(`${i18n.$t('Please enter a SQL Statement(required)')}`) return false } this.sourceMysqlParams.srcTable = '' this.sourceMysqlParams.srcColumnType = '0' this.sourceMysqlParams.srcColumns = '' } else { if (this.sourceMysqlParams.srcTable === '') { this.$message.warning(`${i18n.$t('Please enter Mysql Table(required)')}`) return false } this.sourceMysqlParams.srcQuerySql = '' if (this.sourceMysqlParams.srcColumnType === '1' && this.sourceMysqlParams.srcColumns === '') { this.$message.warning(`${i18n.$t('Please enter 
Columns (Comma separated)')}`) return false } if (this.sourceMysqlParams.srcColumnType === '0') { this.sourceMysqlParams.srcColumns = '' } } break case 'HDFS': if (this.sourceHdfsParams.exportDir === '') { this.$message.warning(`${i18n.$t('Please enter Export Dir(required)')}`) return false } break case 'HIVE': if (this.sourceHiveParams.hiveDatabase === '') { this.$message.warning(`${i18n.$t('Please enter Hive Database(required)')}`) return false } if (this.sourceHiveParams.hiveTable === '') { this.$message.warning(`${i18n.$t('Please enter Hive Table(required)')}`) return false } break default: break } switch (this.targetType) { case 'HIVE': if (this.targetHiveParams.hiveDatabase === '') { this.$message.warning(`${i18n.$t('Please enter Hive Database(required)')}`) return false } if (this.targetHiveParams.hiveTable === '') { this.$message.warning(`${i18n.$t('Please enter Hive Table(required)')}`) return false } break case 'HDFS': if (this.targetHdfsParams.targetPath === '') { this.$message.warning(`${i18n.$t('Please enter Target Dir(required)')}`) return false } break case 'MYSQL': if (!this.$refs.refTargetDs._verifDatasource()) { return false } if (this.targetMysqlParams.targetTable === '') { this.$message.warning(`${i18n.$t('Please enter Mysql Table(required)')}`) return false } break default: break } sqoopParams.jobName = this.jobName sqoopParams.hadoopCustomParams = this.hadoopCustomParams sqoopParams.sqoopAdvancedParams = this.sqoopAdvancedParams sqoopParams.concurrency = this.concurrency sqoopParams.modelType = this.modelType sqoopParams.sourceType = this.sourceType sqoopParams.targetType = this.targetType sqoopParams.targetParams = this._handleTargetParams() sqoopParams.sourceParams = this._handleSourceParams() } // storage this.$emit('on-params', sqoopParams) return true }, /** * Processing code highlighting */ _handlerEditor () { this._destroyEditor() editor = codemirror('code-sqoop-mirror', { mode: 'sql', readOnly: this.isDetails }) this.keypress = () => { if (!editor.getOption('readOnly')) { editor.showHint({ completeSingle: false }) } } this.changes = () => { this._cacheParams() } // Monitor keyboard editor.on('keypress', this.keypress) editor.on('changes', this.changes) editor.setValue(this.sourceMysqlParams.srcQuerySql) return editor }, /** * Processing code highlighting */ _handlerShellEditor () { this._destroyShellEditor() // shellEditor shellEditor = codemirror('code-shell-mirror', { mode: 'shell', readOnly: this.isDetails }) this.keypress = () => { if (!shellEditor.getOption('readOnly')) { shellEditor.showHint({ completeSingle: false }) } } // Monitor keyboard shellEditor.on('keypress', this.keypress) shellEditor.setValue(this.customShell) return shellEditor }, /** * return localParams */ _onLocalParams (a) { this.localParams = a }, /** * return hadoopParams */ _onHadoopCustomParams (a) { this.hadoopCustomParams = a }, /** * return sqoopAdvancedParams */ _onSqoopAdvancedParams (a) { this.sqoopAdvancedParams = a }, _cacheParams () { this.$emit('on-cache-params', { concurrency: this.concurrency, modelType: this.modelType, sourceType: this.sourceType, targetType: this.targetType, sourceParams: this._handleSourceParams(), targetParams: this._handleTargetParams(), localParams: this.localParams }) }, _destroyEditor () { if (editor) { editor.toTextArea() // Uninstall editor.off($('.code-sqoop-mirror'), 'keypress', this.keypress) editor.off($('.code-sqoop-mirror'), 'changes', this.changes) editor = null } }, _destroyShellEditor () { if (shellEditor) { shellEditor.toTextArea() // 
Uninstall shellEditor.off($('.code-shell-mirror'), 'keypress', this.keypress) shellEditor.off($('.code-shell-mirror'), 'changes', this.changes) } } }, watch: { // Listening to sqlType sqlType (val) { if (val === 0) { this.showType = [] } if (val !== 0) { this.title = '' } }, // Listening data source type (val) { if (val !== 'HIVE') { this.connParams = '' } }, // Watch the cacheParams cacheParams (val) { this._cacheParams() } }, created () { this._destroyEditor() let o = this.backfillItem // Non-null objects represent backfill if (!_.isEmpty(o)) { this.jobType = o.params.jobType this.isCustomTask = false if (this.jobType === 'CUSTOM') { this.customShell = o.params.customShell this.isCustomTask = true } else { this.jobName = o.params.jobName this.hadoopCustomParams = o.params.hadoopCustomParams this.sqoopAdvancedParams = o.params.sqoopAdvancedParams this.concurrency = o.params.concurrency || 1 this.modelType = o.params.modelType this.sourceType = o.params.sourceType this._getTargetTypeList(this.sourceType) this.targetType = o.params.targetType this._getSourceParams(o.params.sourceParams) this._getTargetParams(o.params.targetParams) this.localParams = o.params.localParams } } }, mounted () { setTimeout(() => { this._handlerEditor() }, 200) setTimeout(() => { this._handlerShellEditor() }, 200) setTimeout(() => { this.srcQueryType = this.sourceMysqlParams.srcQueryType }, 500) }, destroyed () { /** * Destroy the editor instance */ if (editor) { editor.toTextArea() // Uninstall editor.off($('.code-sqoop-mirror'), 'keypress', this.keypress) editor.off($('.code-sqoop-mirror'), 'changes', this.changes) editor = null } }, computed: { cacheParams () { return { concurrency: this.concurrency, modelType: this.modelType, sourceType: this.sourceType, targetType: this.targetType, localParams: this.localParams, sourceMysqlParams: this.sourceMysqlParams, sourceHdfsParams: this.sourceHdfsParams, sourceHiveParams: this.sourceHiveParams, targetHdfsParams: this.targetHdfsParams, targetMysqlParams: this.targetMysqlParams, targetHiveParams: this.targetHiveParams } } }, components: { mListBox, mDatasource, mLocalParams, mScriptBox } } </script>
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,459
[Bug][Sqoop] running error in parallel
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened ![](https://files.catbox.moe/pxokv2.jpg) The sqoop task is missing --target-dir in 2.0.1-release, causing errors while running in parallel with the same source table in different databases. ### What you expected to happen Runs successfully. ### How to reproduce See above. ### Anything else _No response_ ### Version 2.0.1-release ### Are you willing to submit PR? - [x] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
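For context on why the missing flag only bites in parallel: when `--target-dir` is omitted, `sqoop import` stages data under an HDFS directory derived from the table name (`/user/<user>/<table>`), so two concurrent imports of the same table name, even from different source databases, race on the same path. A minimal sketch of the collision; the names below (`buildImportArgs`, `targetDir`) are illustrative assumptions, not DolphinScheduler's actual SqoopTask API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: shows why omitting --target-dir makes
// parallel imports of the same table name collide on HDFS.
public class SqoopArgsSketch {

    static List<String> buildImportArgs(String jdbcUrl, String table, String targetDir) {
        List<String> args = new ArrayList<>();
        args.add("import");
        args.add("--connect");
        args.add(jdbcUrl);
        args.add("--table");
        args.add(table);
        if (targetDir != null && !targetDir.isEmpty()) {
            // Explicit staging dir: each task writes to its own path.
            args.add("--target-dir");
            args.add(targetDir);
        }
        // Without --target-dir, Sqoop stages under /user/<user>/<table>,
        // so two parallel imports of table "orders" from db_a and db_b
        // both write to /user/<user>/orders and one of them fails.
        return args;
    }

    public static void main(String[] args) {
        System.out.println(buildImportArgs("jdbc:mysql://db_a/shop", "orders", null));
        System.out.println(buildImportArgs("jdbc:mysql://db_b/shop", "orders", null));
        // Identical default staging path for both => collision in parallel runs.
    }
}
```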
https://github.com/apache/dolphinscheduler/issues/7459
https://github.com/apache/dolphinscheduler/pull/7752
d7d13f7f51c5a2d6fd19076018ff5528f140c49a
c58dbefaa50224f60d581980a43ba3d2ee4634a2
"2021-12-17T03:43:39Z"
java
"2021-12-31T08:26:07Z"
dolphinscheduler-ui/src/js/module/i18n/locale/en_US.js
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ export default { 'User Name': 'User Name', 'Please enter user name': 'Please enter user name', Password: 'Password', 'Please enter your password': 'Please enter your password', 'Password consists of at least two combinations of numbers, letters, and characters, and the length is between 6-22': 'Password consists of at least two combinations of numbers, letters, and characters, and the length is between 6-22', Login: 'Login', Home: 'Home', 'Failed to create node to save': 'Failed to create node to save', 'Global parameters': 'Global parameters', 'Local parameters': 'Local parameters', 'Copy success': 'Copy success', 'The browser does not support automatic copying': 'The browser does not support automatic copying', 'Whether to save the DAG graph': 'Whether to save the DAG graph', 'Current node settings': 'Current node settings', 'View history': 'View history', 'View log': 'View log', 'Force success': 'Force success', 'Enter this child node': 'Enter this child node', 'Node name': 'Node name', 'Please enter name (required)': 'Please enter name (required)', 'Run flag': 'Run flag', Normal: 'Normal', 'Prohibition execution': 'Prohibition execution', 'Please enter description': 'Please enter description', 'Number of failed retries': 'Number of failed retries', Times: 'Times', 'Failed retry interval': 'Failed retry interval', Minute: 'Minute', 'Delay execution time': 'Delay execution time', 'Delay execution': 'Delay execution', 'Forced success': 'Forced success', Cancel: 'Cancel', 'Confirm add': 'Confirm add', 'The newly created sub-Process has not yet been executed and cannot enter the sub-Process': 'The newly created sub-Process has not yet been executed and cannot enter the sub-Process', 'The task has not been executed and cannot enter the sub-Process': 'The task has not been executed and cannot enter the sub-Process', 'Name already exists': 'Name already exists', 'Download Log': 'Download Log', 'Refresh Log': 'Refresh Log', 'Enter full screen': 'Enter full screen', 'Cancel full screen': 'Cancel full screen', Close: 'Close', 'Update log success': 'Update log success', 'No more logs': 'No more logs', 'No log': 'No log', 'Loading Log...': 'Loading Log...', 'Set the DAG diagram name': 'Set the DAG diagram name', 'Please enter description(optional)': 'Please enter description(optional)', 'Set global': 'Set global', 'Whether to go online the process definition': 'Whether to go online the process definition', 'Whether to update the process definition': 'Whether to update the process definition', Add: 'Add', 'DAG graph name cannot be empty': 'DAG graph name cannot be empty', 'Create Datasource': 'Create Datasource', 'Project Home': 'Workflow Monitor', 'Project Manage': 'Project', 'Create Project': 'Create Project', 'Cron Manage': 'Cron 
Manage', 'Copy Workflow': 'Copy Workflow', 'Tenant Manage': 'Tenant Manage', 'Create Tenant': 'Create Tenant', 'User Manage': 'User Manage', 'Create User': 'Create User', 'User Information': 'User Information', 'Edit Password': 'Edit Password', Success: 'Success', Failed: 'Failed', Delete: 'Delete', 'Please choose': 'Please choose', 'Please enter a positive integer': 'Please enter a positive integer', 'Program Type': 'Program Type', 'Main Class': 'Main Class', 'Main Package': 'Main Package', 'Please enter main package': 'Please enter main package', 'Please enter main class': 'Please enter main class', 'Main Arguments': 'Main Arguments', 'Please enter main arguments': 'Please enter main arguments', 'Option Parameters': 'Option Parameters', 'Please enter option parameters': 'Please enter option parameters', Resources: 'Resources', 'Custom Parameters': 'Custom Parameters', 'Custom template': 'Custom template', Datasource: 'Datasource', methods: 'methods', 'Please enter the procedure method': 'Please enter the procedure script \n\ncall procedure:{call <procedure-name>[(<arg1>,<arg2>, ...)]}\n\ncall function:{?= call <procedure-name>[(<arg1>,<arg2>, ...)]} ', 'The procedure method script example': 'example:{call <procedure-name>[(?,?, ...)]} or {?= call <procedure-name>[(?,?, ...)]}', Script: 'Script', 'Please enter script(required)': 'Please enter script(required)', 'Deploy Mode': 'Deploy Mode', 'Driver Cores': 'Driver Cores', 'Please enter Driver cores': 'Please enter Driver cores', 'Driver Memory': 'Driver Memory', 'Please enter Driver memory': 'Please enter Driver memory', 'Executor Number': 'Executor Number', 'Please enter Executor number': 'Please enter Executor number', 'The Executor number should be a positive integer': 'The Executor number should be a positive integer', 'Executor Memory': 'Executor Memory', 'Please enter Executor memory': 'Please enter Executor memory', 'Executor Cores': 'Executor Cores', 'Please enter Executor cores': 'Please enter Executor cores', 'Memory should be a positive integer': 'Memory should be a positive integer', 'Core number should be positive integer': 'Core number should be positive integer', 'Flink Version': 'Flink Version', 'JobManager Memory': 'JobManager Memory', 'Please enter JobManager memory': 'Please enter JobManager memory', 'TaskManager Memory': 'TaskManager Memory', 'Please enter TaskManager memory': 'Please enter TaskManager memory', 'Slot Number': 'Slot Number', 'Please enter Slot number': 'Please enter Slot number', Parallelism: 'Parallelism', 'Custom Parallelism': 'Configure parallelism', 'Please enter Parallelism': 'Please enter Parallelism', 'Parallelism tip': 'If there are a large number of tasks requiring complement, you can use the custom parallelism to ' + 'set the complement task thread to a reasonable value to avoid too large impact on the server.', 'Parallelism number should be positive integer': 'Parallelism number should be positive integer', 'TaskManager Number': 'TaskManager Number', 'Please enter TaskManager number': 'Please enter TaskManager number', 'App Name': 'App Name', 'Please enter app name(optional)': 'Please enter app name(optional)', 'SQL Type': 'SQL Type', 'Send Email': 'Send Email', 'Log display': 'Log display', 'rows of result': 'rows of result', Title: 'Title', 'Please enter the title of email': 'Please enter the title of email', Table: 'Table', TableMode: 'Table', Attachment: 'Attachment', 'SQL Parameter': 'SQL Parameter', 'SQL Statement': 'SQL Statement', 'UDF Function': 'UDF Function', 'Please enter a SQL 
Statement(required)': 'Please enter a SQL Statement(required)', 'Please enter a JSON Statement(required)': 'Please enter a JSON Statement(required)', 'One form or attachment must be selected': 'One form or attachment must be selected', 'Mail subject required': 'Mail subject required', 'Child Node': 'Child Node', 'Please select a sub-Process': 'Please select a sub-Process', Edit: 'Edit', 'Switch To This Version': 'Switch To This Version', 'Datasource Name': 'Datasource Name', 'Please enter datasource name': 'Please enter datasource name', IP: 'IP', 'Please enter IP': 'Please enter IP', Port: 'Port', 'Please enter port': 'Please enter port', 'Database Name': 'Database Name', 'Please enter database name': 'Please enter database name', 'Oracle Connect Type': 'ServiceName or SID', 'Oracle Service Name': 'ServiceName', 'Oracle SID': 'SID', 'jdbc connect parameters': 'jdbc connect parameters', 'Test Connect': 'Test Connect', 'Please enter resource name': 'Please enter resource name', 'Please enter resource folder name': 'Please enter resource folder name', 'Please enter a non-query SQL statement': 'Please enter a non-query SQL statement', 'Please enter IP/hostname': 'Please enter IP/hostname', 'jdbc connection parameters is not a correct JSON format': 'jdbc connection parameters is not a correct JSON format', '#': '#', 'Datasource Type': 'Datasource Type', 'Datasource Parameter': 'Datasource Parameter', 'Create Time': 'Create Time', 'Update Time': 'Update Time', Operation: 'Operation', 'Current Version': 'Current Version', 'Click to view': 'Click to view', 'Delete?': 'Delete?', 'Switch Version Successfully': 'Switch Version Successfully', 'Confirm Switch To This Version?': 'Confirm Switch To This Version?', Confirm: 'Confirm', 'Task status statistics': 'Task Status Statistics', Number: 'Number', State: 'State', 'Dry-run flag': 'Dry-run flag', 'Process Status Statistics': 'Process Status Statistics', 'Process Definition Statistics': 'Process Definition Statistics', 'Project Name': 'Project Name', 'Please enter name': 'Please enter name', 'Owned Users': 'Owned Users', 'Process Pid': 'Process Pid', 'Zk registration directory': 'Zk registration directory', cpuUsage: 'cpuUsage', memoryUsage: 'memoryUsage', 'Last heartbeat time': 'Last heartbeat time', 'Edit Tenant': 'Edit Tenant', 'OS Tenant Code': 'OS Tenant Code', 'Tenant Name': 'Tenant Name', Queue: 'Yarn Queue', 'Please select a queue': 'default is tenant association queue', 'Please enter the os tenant code in English': 'Please enter the os tenant code in English', 'Please enter os tenant code in English': 'Please enter os tenant code in English', 'Please enter os tenant code': 'Please enter os tenant code', 'Please enter tenant Name': 'Please enter tenant Name', 'The os tenant code. Only letters or a combination of letters and numbers are allowed': 'The os tenant code. 
Only letters or a combination of letters and numbers are allowed', 'Edit User': 'Edit User', Tenant: 'Tenant', Email: 'Email', Phone: 'Phone', 'User Type': 'User Type', 'Please enter phone number': 'Please enter phone number', 'Please enter email': 'Please enter email', 'Please enter the correct email format': 'Please enter the correct email format', 'Please enter the correct mobile phone format': 'Please enter the correct mobile phone format', Project: 'Project', Authorize: 'Authorize', 'File resources': 'File resources', 'UDF resources': 'UDF resources', 'UDF resources directory': 'UDF resources directory', 'Please select UDF resources directory': 'Please select UDF resources directory', 'Alarm group': 'Alarm group', 'Alarm group required': 'Alarm group required', 'Edit alarm group': 'Edit alarm group', 'Create alarm group': 'Create alarm group', 'Create Alarm Instance': 'Create Alarm Instance', 'Edit Alarm Instance': 'Edit Alarm Instance', 'Group Name': 'Group Name', 'Alarm instance name': 'Alarm instance name', 'Alarm plugin name': 'Alarm plugin name', 'Select plugin': 'Select plugin', 'Select Alarm plugin': 'Please select an Alarm plugin', 'Please enter group name': 'Please enter group name', 'Instance parameter exception': 'Instance parameter exception', 'Group Type': 'Group Type', 'Alarm plugin instance': 'Alarm plugin instance', 'Select Alarm plugin instance': 'Please select an Alarm plugin instance', Remarks: 'Remarks', SMS: 'SMS', 'Managing Users': 'Managing Users', Permission: 'Permission', Administrator: 'Administrator', 'Confirm Password': 'Confirm Password', 'Please enter confirm password': 'Please enter confirm password', 'Password cannot be in Chinese': 'Password cannot be in Chinese', 'Please enter a password (6-22) character password': 'Please enter a password (6-22) character password', 'Confirmation password cannot be in Chinese': 'Confirmation password cannot be in Chinese', 'Please enter a confirmation password (6-22) character password': 'Please enter a confirmation password (6-22) character password', 'The password is inconsistent with the confirmation password': 'The password is inconsistent with the confirmation password', 'Please select the datasource': 'Please select the datasource', 'Please select resources': 'Please select resources', Query: 'Query', 'Non Query': 'Non Query', 'prop(required)': 'prop(required)', 'value(optional)': 'value(optional)', 'value(required)': 'value(required)', 'prop is empty': 'prop is empty', 'value is empty': 'value is empty', 'prop is repeat': 'prop is repeat', 'Start Time': 'Start Time', 'End Time': 'End Time', crontab: 'crontab', 'Failure Strategy': 'Failure Strategy', online: 'online', offline: 'offline', 'Task Status': 'Task Status', 'Process Instance': 'Process Instance', 'Task Instance': 'Task Instance', 'Select date range': 'Select date range', startDate: 'startDate', endDate: 'endDate', Date: 'Date', Waiting: 'Waiting', Execution: 'Execution', Finish: 'Finish', 'Create File': 'Create File', 'Create folder': 'Create folder', 'File Name': 'File Name', 'Folder Name': 'Folder Name', 'File Format': 'File Format', 'Folder Format': 'Folder Format', 'File Content': 'File Content', 'Upload File Size': 'Upload File size cannot exceed 1g', Create: 'Create', 'Please enter the resource content': 'Please enter the resource content', 'Resource content cannot exceed 3000 lines': 'Resource content cannot exceed 3000 lines', 'File Details': 'File Details', 'Download Details': 'Download Details', Return: 'Return', Save: 'Save', 'File Manage': 
'File Manage', 'Upload Files': 'Upload Files', 'Create UDF Function': 'Create UDF Function', 'Upload UDF Resources': 'Upload UDF Resources', 'Service-Master': 'Service-Master', 'Service-Worker': 'Service-Worker', 'Process Name': 'Process Name', Executor: 'Executor', 'Run Type': 'Run Type', 'Scheduling Time': 'Scheduling Time', 'Run Times': 'Run Times', host: 'host', 'fault-tolerant sign': 'fault-tolerant sign', Rerun: 'Rerun', 'Recovery Failed': 'Recovery Failed', Stop: 'Stop', Pause: 'Pause', 'Recovery Suspend': 'Recovery Suspend', Gantt: 'Gantt', 'Node Type': 'Node Type', 'Submit Time': 'Submit Time', Duration: 'Duration', 'Retry Count': 'Retry Count', 'Task Name': 'Task Name', 'Task Date': 'Task Date', 'Source Table': 'Source Table', 'Record Number': 'Record Number', 'Target Table': 'Target Table', 'Online viewing type is not supported': 'Online viewing type is not supported', Size: 'Size', Rename: 'Rename', Download: 'Download', Export: 'Export', 'Version Info': 'Version Info', Submit: 'Submit', 'Edit UDF Function': 'Edit UDF Function', type: 'type', 'UDF Function Name': 'UDF Function Name', FILE: 'FILE', UDF: 'UDF', 'File Subdirectory': 'File Subdirectory', 'Please enter a function name': 'Please enter a function name', 'Package Name': 'Package Name', 'Please enter a Package name': 'Please enter a Package name', Parameter: 'Parameter', 'Please enter a parameter': 'Please enter a parameter', 'UDF Resources': 'UDF Resources', 'Upload Resources': 'Upload Resources', Instructions: 'Instructions', 'Please enter a instructions': 'Please enter a instructions', 'Please enter a UDF function name': 'Please enter a UDF function name', 'Select UDF Resources': 'Select UDF Resources', 'Class Name': 'Class Name', 'Jar Package': 'Jar Package', 'Library Name': 'Library Name', 'UDF Resource Name': 'UDF Resource Name', 'File Size': 'File Size', Description: 'Description', 'Drag Nodes and Selected Items': 'Drag Nodes and Selected Items', 'Select Line Connection': 'Select Line Connection', 'Delete selected lines or nodes': 'Delete selected lines or nodes', 'Full Screen': 'Full Screen', Unpublished: 'Unpublished', 'Start Process': 'Start Process', 'Execute from the current node': 'Execute from the current node', 'Recover tolerance fault process': 'Recover tolerance fault process', 'Resume the suspension process': 'Resume the suspension process', 'Execute from the failed nodes': 'Execute from the failed nodes', 'Complement Data': 'Complement Data', 'Scheduling execution': 'Scheduling execution', 'Recovery waiting thread': 'Recovery waiting thread', 'Submitted successfully': 'Submitted successfully', Executing: 'Executing', 'Ready to pause': 'Ready to pause', 'Ready to stop': 'Ready to stop', 'Need fault tolerance': 'Need fault tolerance', Kill: 'Kill', 'Waiting for thread': 'Waiting for thread', 'Waiting for dependence': 'Waiting for dependence', Start: 'Start', Copy: 'Copy', 'Copy name': 'Copy name', 'Copy path': 'Copy path', 'Please enter keyword': 'Please enter keyword', 'File Upload': 'File Upload', 'Drag the file into the current upload window': 'Drag the file into the current upload window', 'Drag area upload': 'Drag area upload', Upload: 'Upload', 'ReUpload File': 'ReUpload File', 'Please enter file name': 'Please enter file name', 'Please select the file to upload': 'Please select the file to upload', 'Resources manage': 'Resources', Security: 'Security', Logout: 'Logout', 'No data': 'No data', 'Uploading...': 'Uploading...', 'Loading...': 'Loading...', List: 'List', 'Unable to download without 
proper url': 'Unable to download without proper url', Process: 'Process', 'Process definition': 'Process definition', 'Task record': 'Task record', 'Warning group manage': 'Warning group manage', 'Warning instance manage': 'Warning instance manage', 'Servers manage': 'Servers manage', 'UDF manage': 'UDF manage', 'Resource manage': 'Resource manage', 'Function manage': 'Function manage', 'Edit password': 'Edit password', 'Ordinary users': 'Ordinary users', 'Create process': 'Create process', 'Import process': 'Import process', 'Timing state': 'Timing state', Timing: 'Timing', Timezone: 'Timezone', TreeView: 'TreeView', 'Mailbox already exists! Recipients and copyers cannot repeat': 'Mailbox already exists! Recipients and copyers cannot repeat', 'Mailbox input is illegal': 'Mailbox input is illegal', 'Please set the parameters before starting': 'Please set the parameters before starting', Continue: 'Continue', End: 'End', 'Node execution': 'Node execution', 'Backward execution': 'Backward execution', 'Forward execution': 'Forward execution', 'Execute only the current node': 'Execute only the current node', 'Notification strategy': 'Notification strategy', 'Notification group': 'Notification group', 'Please select a notification group': 'Please select a notification group', 'Whether it is a complement process?': 'Whether it is a complement process?', 'Schedule date': 'Schedule date', 'Mode of execution': 'Mode of execution', 'Serial execution': 'Serial execution', 'Parallel execution': 'Parallel execution', 'Set parameters before timing': 'Set parameters before timing', 'Start and stop time': 'Start and stop time', 'Please select time': 'Please select time', 'Please enter crontab': 'Please enter crontab', none_1: 'none', success_1: 'success', failure_1: 'failure', All_1: 'All', Toolbar: 'Toolbar', 'View variables': 'View variables', 'Format DAG': 'Format DAG', 'Refresh DAG status': 'Refresh DAG status', Return_1: 'Return', 'Please enter format': 'Please enter format', 'connection parameter': 'connection parameter', 'Process definition details': 'Process definition details', 'Create process definition': 'Create process definition', 'Scheduled task list': 'Scheduled task list', 'Process instance details': 'Process instance details', 'Create Resource': 'Create Resource', 'User Center': 'User Center', AllStatus: 'All', None: 'None', Name: 'Name', 'Process priority': 'Process priority', 'Task priority': 'Task priority', 'Task timeout alarm': 'Task timeout alarm', 'Timeout strategy': 'Timeout strategy', 'Timeout alarm': 'Timeout alarm', 'Timeout failure': 'Timeout failure', 'Timeout period': 'Timeout period', 'Waiting Dependent complete': 'Waiting Dependent complete', 'Waiting Dependent start': 'Waiting Dependent start', 'Check interval': 'Check interval', 'Timeout must be longer than check interval': 'Timeout must be longer than check interval', 'Timeout strategy must be selected': 'Timeout strategy must be selected', 'Timeout must be a positive integer': 'Timeout must be a positive integer', 'Add dependency': 'Add dependency', 'Whether dry-run': 'Whether dry-run', and: 'and', or: 'or', month: 'month', week: 'week', day: 'day', hour: 'hour', Running: 'Running', 'Waiting for dependency to complete': 'Waiting for dependency to complete', Selected: 'Selected', CurrentHour: 'CurrentHour', Last1Hour: 'Last1Hour', Last2Hours: 'Last2Hours', Last3Hours: 'Last3Hours', Last24Hours: 'Last24Hours', today: 'today', Last1Days: 'Last1Days', Last2Days: 'Last2Days', Last3Days: 'Last3Days', Last7Days: 'Last7Days', 
ThisWeek: 'ThisWeek', LastWeek: 'LastWeek', LastMonday: 'LastMonday', LastTuesday: 'LastTuesday', LastWednesday: 'LastWednesday', LastThursday: 'LastThursday', LastFriday: 'LastFriday', LastSaturday: 'LastSaturday', LastSunday: 'LastSunday', ThisMonth: 'ThisMonth', LastMonth: 'LastMonth', LastMonthBegin: 'LastMonthBegin', LastMonthEnd: 'LastMonthEnd', 'Refresh status succeeded': 'Refresh status succeeded', 'Queue manage': 'Yarn Queue manage', 'Create queue': 'Create queue', 'Edit queue': 'Edit queue', 'Datasource manage': 'Datasource', 'History task record': 'History task record', 'Please go online': 'Please go online', 'Queue value': 'Queue value', 'Please enter queue value': 'Please enter queue value', 'Worker group manage': 'Worker group manage', 'Create worker group': 'Create worker group', 'Edit worker group': 'Edit worker group', 'Token manage': 'Token manage', 'Create token': 'Create token', 'Edit token': 'Edit token', Addresses: 'Addresses', 'Worker Addresses': 'Worker Addresses', 'Please select the worker addresses': 'Please select the worker addresses', 'Failure time': 'Failure time', 'Expiration time': 'Expiration time', User: 'User', 'Please enter token': 'Please enter token', 'Generate token': 'Generate token', Monitor: 'Monitor', Group: 'Group', 'Queue statistics': 'Queue statistics', 'Command status statistics': 'Command status statistics', 'Task kill': 'Task Kill', 'Task queue': 'Task queue', 'Error command count': 'Error command count', 'Normal command count': 'Normal command count', Manage: ' Manage', 'Number of connections': 'Number of connections', Sent: 'Sent', Received: 'Received', 'Min latency': 'Min latency', 'Avg latency': 'Avg latency', 'Max latency': 'Max latency', 'Node count': 'Node count', 'Query time': 'Query time', 'Node self-test status': 'Node self-test status', 'Health status': 'Health status', 'Max connections': 'Max connections', 'Threads connections': 'Threads connections', 'Max used connections': 'Max used connections', 'Threads running connections': 'Threads running connections', 'Worker group': 'Worker group', 'Please enter a positive integer greater than 0': 'Please enter a positive integer greater than 0', 'Pre Statement': 'Pre Statement', 'Post Statement': 'Post Statement', 'Statement cannot be empty': 'Statement cannot be empty', 'Process Define Count': 'Work flow Define Count', 'Process Instance Running Count': 'Process Instance Running Count', 'command number of waiting for running': 'command number of waiting for running', 'failure command number': 'failure command number', 'tasks number of waiting running': 'tasks number of waiting running', 'task number of ready to kill': 'task number of ready to kill', 'Statistics manage': 'Statistics Manage', statistics: 'Statistics', 'select tenant': 'select tenant', 'Please enter Principal': 'Please enter Principal', 'Please enter the kerberos authentication parameter java.security.krb5.conf': 'Please enter the kerberos authentication parameter java.security.krb5.conf', 'Please enter the kerberos authentication parameter login.user.keytab.username': 'Please enter the kerberos authentication parameter login.user.keytab.username', 'Please enter the kerberos authentication parameter login.user.keytab.path': 'Please enter the kerberos authentication parameter login.user.keytab.path', 'The start time must not be the same as the end': 'The start time must not be the same as the end', 'Startup parameter': 'Startup parameter', 'Startup type': 'Startup type', 'warning of timeout': 'warning of timeout', 'Next 
five execution times': 'Next five execution times', 'Execute time': 'Execute time', 'Complement range': 'Complement range', 'Http Url': 'Http Url', 'Http Method': 'Http Method', 'Http Parameters': 'Http Parameters', 'Http Parameters Key': 'Http Parameters Key', 'Http Parameters Position': 'Http Parameters Position', 'Http Parameters Value': 'Http Parameters Value', 'Http Check Condition': 'Http Check Condition', 'Http Condition': 'Http Condition', 'Please Enter Http Url': 'Please Enter Http Url(required)', 'Please Enter Http Condition': 'Please Enter Http Condition', 'There is no data for this period of time': 'There is no data for this period of time', 'Worker addresses cannot be empty': 'Worker addresses cannot be empty', 'Please generate token': 'Please generate token', 'Please Select token': 'Please select the expiration time of token', 'Spark Version': 'Spark Version', TargetDataBase: 'target database', TargetTable: 'target table', TargetJobName: 'target job name', 'Please enter Pigeon job name': 'Please enter Pigeon job name', 'Please enter the table of target': 'Please enter the table of target', 'Please enter a Target Table(required)': 'Please enter a Target Table(required)', SpeedByte: 'speed(byte count)', SpeedRecord: 'speed(record count)', '0 means unlimited by byte': '0 means unlimited', '0 means unlimited by count': '0 means unlimited', 'Modify User': 'Modify User', 'Whether directory': 'Whether directory', Yes: 'Yes', No: 'No', 'Hadoop Custom Params': 'Hadoop Params', 'Sqoop Advanced Parameters': 'Sqoop Params', 'Sqoop Job Name': 'Job Name', 'Please enter Mysql Database(required)': 'Please enter Mysql Database(required)', 'Please enter Mysql Table(required)': 'Please enter Mysql Table(required)', 'Please enter Columns (Comma separated)': 'Please enter Columns (Comma separated)', 'Please enter Target Dir(required)': 'Please enter Target Dir(required)', 'Please enter Export Dir(required)': 'Please enter Export Dir(required)', 'Please enter Hive Database(required)': 'Please enter Hive Databasec(required)', 'Please enter Hive Table(required)': 'Please enter Hive Table(required)', 'Please enter Hive Partition Keys': 'Please enter Hive Partition Key', 'Please enter Hive Partition Values': 'Please enter Partition Value', 'Please enter Replace Delimiter': 'Please enter Replace Delimiter', 'Please enter Fields Terminated': 'Please enter Fields Terminated', 'Please enter Lines Terminated': 'Please enter Lines Terminated', 'Please enter Concurrency': 'Please enter Concurrency', 'Please enter Update Key': 'Please enter Update Key', 'Please enter Job Name(required)': 'Please enter Job Name(required)', 'Please enter Custom Shell(required)': 'Please enter Custom Shell(required)', Direct: 'Direct', Type: 'Type', ModelType: 'ModelType', ColumnType: 'ColumnType', Database: 'Database', Column: 'Column', 'Map Column Hive': 'Map Column Hive', 'Map Column Java': 'Map Column Java', 'Export Dir': 'Export Dir', 'Hive partition Keys': 'Hive partition Keys', 'Hive partition Values': 'Hive partition Values', FieldsTerminated: 'FieldsTerminated', LinesTerminated: 'LinesTerminated', IsUpdate: 'IsUpdate', UpdateKey: 'UpdateKey', UpdateMode: 'UpdateMode', 'Target Dir': 'Target Dir', DeleteTargetDir: 'DeleteTargetDir', FileType: 'FileType', CompressionCodec: 'CompressionCodec', CreateHiveTable: 'CreateHiveTable', DropDelimiter: 'DropDelimiter', OverWriteSrc: 'OverWriteSrc', ReplaceDelimiter: 'ReplaceDelimiter', Concurrency: 'Concurrency', Form: 'Form', OnlyUpdate: 'OnlyUpdate', AllowInsert: 'AllowInsert', 
'Data Source': 'Data Source', 'Data Target': 'Data Target', 'All Columns': 'All Columns', 'Some Columns': 'Some Columns', 'Branch flow': 'Branch flow', 'Custom Job': 'Custom Job', 'Custom Script': 'Custom Script', 'Cannot select the same node for successful branch flow and failed branch flow': 'Cannot select the same node for successful branch flow and failed branch flow', 'Successful branch flow and failed branch flow are required': 'conditions node Successful and failed branch flow are required', 'No resources exist': 'No resources exist', 'Please delete all non-existing resources': 'Please delete all non-existing resources', 'Unauthorized or deleted resources': 'Unauthorized or deleted resources', 'Please delete all non-existent resources': 'Please delete all non-existent resources', Kinship: 'Workflow relationship', Reset: 'Reset', KinshipStateActive: 'Current selection', KinshipState1: 'Online', KinshipState0: 'Workflow is not online', KinshipState10: 'Scheduling is not online', 'Dag label display control': 'Dag label display control', Enable: 'Enable', Disable: 'Disable', 'The Worker group no longer exists, please select the correct Worker group!': 'The Worker group no longer exists, please select the correct Worker group!', 'Please confirm whether the workflow has been saved before downloading': 'Please confirm whether the workflow has been saved before downloading', 'User name length is between 3 and 39': 'User name length is between 3 and 39', 'Timeout Settings': 'Timeout Settings', 'Connect Timeout': 'Connect Timeout', 'Socket Timeout': 'Socket Timeout', 'Connect timeout be a positive integer': 'Connect timeout be a positive integer', 'Socket Timeout be a positive integer': 'Socket Timeout be a positive integer', ms: 'ms', 'Please Enter Url': 'Please Enter Url eg. 
127.0.0.1:7077', Master: 'Master', 'Please select the waterdrop resources': 'Please select the waterdrop resources', zkDirectory: 'zkDirectory', 'Directory detail': 'Directory detail', 'Connection name': 'Connection name', 'Current connection settings': 'Current connection settings', 'Please save the DAG before formatting': 'Please save the DAG before formatting', 'Batch copy': 'Batch copy', 'Related items': 'Related items', 'Project name is required': 'Project name is required', 'Batch move': 'Batch move', Version: 'Version', 'Pre tasks': 'Pre tasks', 'Running Memory': 'Running Memory', 'Max Memory': 'Max Memory', 'Min Memory': 'Min Memory', 'The workflow canvas is abnormal and cannot be saved, please recreate': 'The workflow canvas is abnormal and cannot be saved, please recreate', Info: 'Info', 'Datasource userName': 'owner', 'Resource userName': 'owner', 'Environment manage': 'Environment manage', 'Create environment': 'Create environment', 'Edit environment': 'Edit environment', 'Environment value': 'Environment value', 'Environment Name': 'Environment Name', 'Environment Code': 'Environment Code', 'Environment Config': 'Environment Config', 'Environment Desc': 'Environment Desc', 'Environment Worker Group': 'Worker Groups', 'Please enter environment config': 'Please enter environment config', 'Please enter environment desc': 'Please enter environment desc', 'Please select worker groups': 'Please select worker groups', condition: 'condition', 'The condition content cannot be empty': 'The condition content cannot be empty', 'Reference from': 'Reference from', 'No more...': 'No more...', 'Task Definition': 'Task Definition', 'Create task': 'Create task', 'Task Type': 'Task Type', 'Process execute type': 'Process execute type', parallel: 'parallel', 'Serial wait': 'Serial wait', 'Serial discard': 'Serial discard', 'Serial priority': 'Serial priority', 'Recover serial wait': 'Recover serial wait', IsEnableProxy: 'Enable Proxy', WebHook: 'WebHook', webHook: 'WebHook', Keyword: 'Keyword', Proxy: 'Proxy', receivers: 'Receivers', receiverCcs: 'ReceiverCcs', transportProtocol: 'Transport Protocol', serverHost: 'SMTP Host', serverPort: 'SMTP Port', sender: 'Sender', enableSmtpAuth: 'SMTP Auth', starttlsEnable: 'SMTP STARTTLS Enable', sslEnable: 'SMTP SSL Enable', smtpSslTrust: 'SMTP SSL Trust', url: 'URL', requestType: 'Request Type', headerParams: 'Headers', bodyParams: 'Body', contentField: 'Content Field', path: 'Script Path', userParams: 'User Params', corpId: 'CorpId', secret: 'Secret', userSendMsg: 'UserSendMsg', agentId: 'AgentId', users: 'Users', Username: 'Username', username: 'Username', showType: 'Show Type', 'Please select a task type (required)': 'Please select a task type (required)', layoutType: 'Layout Type', gridLayout: 'Grid', dagreLayout: 'Dagre', rows: 'Rows', cols: 'Cols', processOnline: 'Online', searchNode: 'Search Node', dagScale: 'Scale', workflowName: 'Workflow Name', scheduleStartTime: 'Schedule Start Time', scheduleEndTime: 'Schedule End Time', crontabExpression: 'Crontab', workflowPublishStatus: 'Workflow Publish Status', schedulePublishStatus: 'Schedule Publish Status', 'Task group manage': 'Task group manage', 'Task group option': 'Task group option', 'Create task group': 'Create task group', 'Edit task group': 'Edit task group', 'Delete task group': 'Delete task group', 'Task group code': 'Task group code', 'Task group name': 'Task group name', 'Task group resource pool size': 'Resource pool size', 'Task group resource pool size be a number': 'The size of the 
task group resource pool should be more than 1', 'Task group resource used pool size': 'Used resource', 'Task group desc': 'Task group desc', 'Task group status': 'Task group status', 'Task group enable status': 'Enable', 'Task group disable status': 'Disable', 'Please enter task group desc': 'Please enter task group description', 'Please enter task group resource pool size': 'Please enter task group resource pool size', 'Please select project': 'Please select a project', 'Task group queue': 'Task group queue', 'Task group queue priority': 'Priority', 'Task group queue priority be a number': 'The priority of the task group queue should be a positive number', 'Task group queue force starting status': 'Starting status', 'Task group in queue': 'In queue', 'Task group queue status': 'Task status', 'View task group queue': 'View task group queue', 'Task group queue the status of waiting': 'Waiting into the queue', 'Task group queue the status of queuing': 'Queuing', 'Task group queue the status of releasing': 'Released', 'Modify task group queue priority': 'Edit the priority of the task group queue', 'Priority not empty': 'The value of priority can not be empty', 'Priority must be number': 'The value of priority should be number' }
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,459
[Bug][Sqoop] running error in parallel
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened ![](https://files.catbox.moe/pxokv2.jpg) The sqoop task is missing --target-dir in 2.0.1-release, causing errors while running in parallel with the same source table in different databases. ### What you expected to happen Runs successfully. ### How to reproduce See above. ### Anything else _No response_ ### Version 2.0.1-release ### Are you willing to submit PR? - [x] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
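One common mitigation for this class of bug is to derive a unique staging directory per task instance so parallel imports never share an HDFS path. The sketch below is under assumed names (`uniqueTargetDir`, `taskInstanceId`) and is not the code merged in the referenced pull request:

```java
// Hedged sketch: a unique per-instance staging dir guarantees that
// parallel Sqoop-to-Hive imports never write to the same HDFS path.
public class TargetDirSketch {

    static String uniqueTargetDir(String baseDir, String jobName, long taskInstanceId) {
        // e.g. /tmp/sqoop/ordersJob_42 -- distinct for every task instance.
        return String.format("%s/%s_%d", baseDir, jobName, taskInstanceId);
    }

    public static void main(String[] args) {
        System.out.println(uniqueTargetDir("/tmp/sqoop", "ordersJob", 42));
        System.out.println(uniqueTargetDir("/tmp/sqoop", "ordersJob", 43));
    }
}
```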
https://github.com/apache/dolphinscheduler/issues/7459
https://github.com/apache/dolphinscheduler/pull/7752
d7d13f7f51c5a2d6fd19076018ff5528f140c49a
c58dbefaa50224f60d581980a43ba3d2ee4634a2
"2021-12-17T03:43:39Z"
java
"2021-12-31T08:26:07Z"
dolphinscheduler-ui/src/js/module/i18n/locale/zh_CN.js
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ export default { 'User Name': '用户名', 'Please enter user name': '请输入用户名', Password: '密码', 'Please enter your password': '请输入密码', 'Password consists of at least two combinations of numbers, letters, and characters, and the length is between 6-22': '密码至少包含数字,字母和字符的两种组合,长度在6-22之间', Login: '登录', Home: '首页', 'Failed to create node to save': '未创建节点保存失败', 'Global parameters': '全局参数', 'Local parameters': '局部参数', 'Copy success': '复制成功', 'The browser does not support automatic copying': '该浏览器不支持自动复制', 'Whether to save the DAG graph': '是否保存DAG图', 'Current node settings': '当前节点设置', 'View history': '查看历史', 'View log': '查看日志', 'Force success': '强制成功', 'Enter this child node': '进入该子节点', 'Node name': '节点名称', 'Please enter name (required)': '请输入名称(必填)', 'Run flag': '运行标志', Normal: '正常', 'Prohibition execution': '禁止执行', 'Please enter description': '请输入描述', 'Number of failed retries': '失败重试次数', Times: '次', 'Failed retry interval': '失败重试间隔', Minute: '分', 'Delay execution time': '延时执行时间', 'Delay execution': '延时执行', 'Forced success': '强制成功', Cancel: '取消', 'Confirm add': '确认添加', 'The newly created sub-Process has not yet been executed and cannot enter the sub-Process': '新创建子工作流还未执行,不能进入子工作流', 'The task has not been executed and cannot enter the sub-Process': '该任务还未执行,不能进入子工作流', 'Name already exists': '名称已存在请重新输入', 'Download Log': '下载日志', 'Refresh Log': '刷新日志', 'Enter full screen': '进入全屏', 'Cancel full screen': '取消全屏', Close: '关闭', 'Update log success': '更新日志成功', 'No more logs': '暂无更多日志', 'No log': '暂无日志', 'Loading Log...': '正在努力请求日志中...', 'Set the DAG diagram name': '设置DAG图名称', 'Please enter description(optional)': '请输入描述(选填)', 'Set global': '设置全局', 'Whether to go online the process definition': '是否上线流程定义', 'Whether to update the process definition': '是否更新流程定义', Add: '添加', 'DAG graph name cannot be empty': 'DAG图名称不能为空', 'Create Datasource': '创建数据源', 'Project Home': '工作流监控', 'Project Manage': '项目管理', 'Create Project': '创建项目', 'Cron Manage': '定时管理', 'Copy Workflow': '复制工作流', 'Tenant Manage': '租户管理', 'Create Tenant': '创建租户', 'User Manage': '用户管理', 'Create User': '创建用户', 'User Information': '用户信息', 'Edit Password': '密码修改', Success: '成功', Failed: '失败', Delete: '删除', 'Please choose': '请选择', 'Please enter a positive integer': '请输入正整数', 'Program Type': '程序类型', 'Main Class': '主函数的Class', 'Main Package': '主程序包', 'Please enter main package': '请选择主程序包', 'Please enter main class': '请填写主函数的Class', 'Main Arguments': '主程序参数', 'Please enter main arguments': '请输入主程序参数', 'Option Parameters': '选项参数', 'Please enter option parameters': '请输入选项参数', Resources: '资源', 'Custom Parameters': '自定义参数', 'Custom template': '自定义模版', Datasource: '数据源', methods: '方法', 'Please enter the procedure method': '请输入存储脚本 \n\n调用存储过程:{call <procedure-name>[(<arg1>,<arg2>, ...)]}\n\n调用存储函数:{?= 
call <procedure-name>[(<arg1>,<arg2>, ...)]} ', 'The procedure method script example': '示例:{call <procedure-name>[(?,?, ...)]} 或 {?= call <procedure-name>[(?,?, ...)]}', Script: '脚本', 'Please enter script(required)': '请输入脚本(必填)', 'Deploy Mode': '部署方式', 'Driver Cores': 'Driver核心数', 'Please enter Driver cores': '请输入Driver核心数', 'Driver Memory': 'Driver内存数', 'Please enter Driver memory': '请输入Driver内存数', 'Executor Number': 'Executor数量', 'Please enter Executor number': '请输入Executor数量', 'The Executor number should be a positive integer': 'Executor数量为正整数', 'Executor Memory': 'Executor内存数', 'Please enter Executor memory': '请输入Executor内存数', 'Executor Cores': 'Executor核心数', 'Please enter Executor cores': '请输入Executor核心数', 'Memory should be a positive integer': '内存数为数字', 'Core number should be positive integer': '核心数为正整数', 'Flink Version': 'Flink版本', 'JobManager Memory': 'JobManager内存数', 'Please enter JobManager memory': '请输入JobManager内存数', 'TaskManager Memory': 'TaskManager内存数', 'Please enter TaskManager memory': '请输入TaskManager内存数', 'Slot Number': 'Slot数量', 'Please enter Slot number': '请输入Slot数量', Parallelism: '并行度', 'Custom Parallelism': '自定义并行度', 'Please enter Parallelism': '请输入并行度', 'Parallelism number should be positive integer': '并行度必须为正整数', 'Parallelism tip': '如果存在大量任务需要补数时,可以利用自定义并行度将补数的任务线程设置成合理的数值,避免对服务器造成过大的影响', 'TaskManager Number': 'TaskManager数量', 'Please enter TaskManager number': '请输入TaskManager数量', 'App Name': '任务名称', 'Please enter app name(optional)': '请输入任务名称(选填)', 'SQL Type': 'sql类型', 'Send Email': '发送邮件', 'Log display': '日志显示', 'rows of result': '行查询结果', Title: '主题', 'Please enter the title of email': '请输入邮件主题', Table: '表名', TableMode: '表格', Attachment: '附件', 'SQL Parameter': 'sql参数', 'SQL Statement': 'sql语句', 'UDF Function': 'UDF函数', 'Please enter a SQL Statement(required)': '请输入sql语句(必填)', 'Please enter a JSON Statement(required)': '请输入json语句(必填)', 'One form or attachment must be selected': '表格、附件必须勾选一个', 'Mail subject required': '邮件主题必填', 'Child Node': '子节点', 'Please select a sub-Process': '请选择子工作流', Edit: '编辑', 'Switch To This Version': '切换到该版本', 'Datasource Name': '数据源名称', 'Please enter datasource name': '请输入数据源名称', IP: 'IP主机名', 'Please enter IP': '请输入IP主机名', Port: '端口', 'Please enter port': '请输入端口', 'Database Name': '数据库名', 'Please enter database name': '请输入数据库名', 'Oracle Connect Type': '服务名或SID', 'Oracle Service Name': '服务名', 'Oracle SID': 'SID', 'jdbc connect parameters': 'jdbc连接参数', 'Test Connect': '测试连接', 'Please enter resource name': '请输入数据源名称', 'Please enter resource folder name': '请输入资源文件夹名称', 'Please enter a non-query SQL statement': '请输入非查询sql语句', 'Please enter IP/hostname': '请输入IP/主机名', 'jdbc connection parameters is not a correct JSON format': 'jdbc连接参数不是一个正确的JSON格式', '#': '编号', 'Datasource Type': '数据源类型', 'Datasource Parameter': '数据源参数', 'Create Time': '创建时间', 'Update Time': '更新时间', Operation: '操作', 'Current Version': '当前版本', 'Click to view': '点击查看', 'Delete?': '确定删除吗?', 'Switch Version Successfully': '切换版本成功', 'Confirm Switch To This Version?': '确定切换到该版本吗?', Confirm: '确定', 'Task status statistics': '任务状态统计', Number: '数量', State: '状态', 'Dry-run flag': '空跑标识', 'Process Status Statistics': '流程状态统计', 'Process Definition Statistics': '流程定义统计', 'Project Name': '项目名称', 'Please enter name': '请输入名称', 'Owned Users': '所属用户', 'Process Pid': '进程Pid', 'Zk registration directory': 'zk注册目录', cpuUsage: 'cpuUsage', memoryUsage: 'memoryUsage', 'Last heartbeat time': '最后心跳时间', 'Edit Tenant': '编辑租户', 'OS Tenant Code': '操作系统租户', 'Tenant Name': '租户名称', Queue: '队列', 'Please select a 
queue': '默认为租户关联队列', 'Please enter the os tenant code in English': '请输入操作系统租户只允许英文', 'Please enter os tenant code in English': '请输入英文操作系统租户', 'Please enter os tenant code': '请输入操作系统租户', 'Please enter tenant Name': '请输入租户名称', 'The os tenant code. Only letters or a combination of letters and numbers are allowed': '操作系统租户只允许字母或字母与数字组合', 'Edit User': '编辑用户', Tenant: '租户', Email: '邮件', Phone: '手机', 'User Type': '用户类型', 'Please enter phone number': '请输入手机', 'Please enter email': '请输入邮箱', 'Please enter the correct email format': '请输入正确的邮箱格式', 'Please enter the correct mobile phone format': '请输入正确的手机格式', Project: '项目', Authorize: '授权', 'File resources': '文件资源', 'UDF resources': 'UDF资源', 'UDF resources directory': 'UDF资源目录', 'Please select UDF resources directory': '请选择UDF资源目录', 'Alarm group': '告警组', 'Alarm group required': '告警组必填', 'Edit alarm group': '编辑告警组', 'Create alarm group': '创建告警组', 'Create Alarm Instance': '创建告警实例', 'Edit Alarm Instance': '编辑告警实例', 'Group Name': '组名称', 'Alarm instance name': '告警实例名称', 'Alarm plugin name': '告警插件名称', 'Select plugin': '选择插件', 'Select Alarm plugin': '请选择告警插件', 'Please enter group name': '请输入组名称', 'Instance parameter exception': '实例参数异常', 'Group Type': '组类型', 'Alarm plugin instance': '告警插件实例', 'Select Alarm plugin instance': '请选择告警插件实例', Remarks: '备注', SMS: '短信', 'Managing Users': '管理用户', Permission: '权限', Administrator: '管理员', 'Confirm Password': '确认密码', 'Please enter confirm password': '请输入确认密码', 'Password cannot be in Chinese': '密码不能为中文', 'Please enter a password (6-22) character password': '请输入密码(6-22)字符密码', 'Confirmation password cannot be in Chinese': '确认密码不能为中文', 'Please enter a confirmation password (6-22) character password': '请输入确认密码(6-22)字符密码', 'The password is inconsistent with the confirmation password': '密码与确认密码不一致,请重新确认', 'Please select the datasource': '请选择数据源', 'Please select resources': '请选择资源', Query: '查询', 'Non Query': '非查询', 'prop(required)': 'prop(必填)', 'value(optional)': 'value(选填)', 'value(required)': 'value(必填)', 'prop is empty': '自定义参数prop不能为空', 'value is empty': 'value不能为空', 'prop is repeat': 'prop中有重复', 'Start Time': '开始时间', 'End Time': '结束时间', crontab: 'crontab', 'Failure Strategy': '失败策略', online: '上线', offline: '下线', 'Task Status': '任务状态', 'Process Instance': '工作流实例', 'Task Instance': '任务实例', 'Select date range': '选择日期区间', startDate: '开始日期', endDate: '结束日期', Date: '日期', Waiting: '等待', Execution: '执行中', Finish: '完成', 'Create File': '创建文件', 'Create folder': '创建文件夹', 'File Name': '文件名称', 'Folder Name': '文件夹名称', 'File Format': '文件格式', 'Folder Format': '文件夹格式', 'File Content': '文件内容', 'Upload File Size': '文件大小不能超过1G', Create: '创建', 'Please enter the resource content': '请输入资源内容', 'Resource content cannot exceed 3000 lines': '资源内容不能超过3000行', 'File Details': '文件详情', 'Download Details': '下载详情', Return: '返回', Save: '保存', 'File Manage': '文件管理', 'Upload Files': '上传文件', 'Create UDF Function': '创建UDF函数', 'Upload UDF Resources': '上传UDF资源', 'Service-Master': '服务管理-Master', 'Service-Worker': '服务管理-Worker', 'Process Name': '工作流名称', Executor: '执行用户', 'Run Type': '运行类型', 'Scheduling Time': '调度时间', 'Run Times': '运行次数', host: 'host', 'fault-tolerant sign': '容错标识', Rerun: '重跑', 'Recovery Failed': '恢复失败', Stop: '停止', Pause: '暂停', 'Recovery Suspend': '恢复运行', Gantt: '甘特图', 'Node Type': '节点类型', 'Submit Time': '提交时间', Duration: '运行时长', 'Retry Count': '重试次数', 'Task Name': '任务名称', 'Task Date': '任务日期', 'Source Table': '源表', 'Record Number': '记录数', 'Target Table': '目标表', 'Online viewing type is not supported': '不支持在线查看类型', Size: '大小', Rename: '重命名', Download: 
'下载', Export: '导出', 'Version Info': '版本信息', Submit: '提交', 'Edit UDF Function': '编辑UDF函数', type: '类型', 'UDF Function Name': 'UDF函数名称', FILE: '文件', UDF: 'UDF', 'File Subdirectory': '文件子目录', 'Please enter a function name': '请输入函数名', 'Package Name': '包名类名', 'Please enter a Package name': '请输入包名类名', Parameter: '参数', 'Please enter a parameter': '请输入参数', 'UDF Resources': 'UDF资源', 'Upload Resources': '上传资源', Instructions: '使用说明', 'Please enter a instructions': '请输入使用说明', 'Please enter a UDF function name': '请输入UDF函数名称', 'Select UDF Resources': '请选择UDF资源', 'Class Name': '类名', 'Jar Package': 'jar包', 'Library Name': '库名', 'UDF Resource Name': 'UDF资源名称', 'File Size': '文件大小', Description: '描述', 'Drag Nodes and Selected Items': '拖动节点和选中项', 'Select Line Connection': '选择线条连接', 'Delete selected lines or nodes': '删除选中的线或节点', 'Full Screen': '全屏', Unpublished: '未发布', 'Start Process': '启动工作流', 'Execute from the current node': '从当前节点开始执行', 'Recover tolerance fault process': '恢复被容错的工作流', 'Resume the suspension process': '恢复运行流程', 'Execute from the failed nodes': '从失败节点开始执行', 'Complement Data': '补数', 'Scheduling execution': '调度执行', 'Recovery waiting thread': '恢复等待线程', 'Submitted successfully': '提交成功', Executing: '正在执行', 'Ready to pause': '准备暂停', 'Ready to stop': '准备停止', 'Need fault tolerance': '需要容错', Kill: 'Kill', 'Waiting for thread': '等待线程', 'Waiting for dependence': '等待依赖', Start: '运行', Copy: '复制节点', 'Copy name': '复制名称', 'Copy path': '复制路径', 'Please enter keyword': '请输入关键词', 'File Upload': '文件上传', 'Drag the file into the current upload window': '请将文件拖拽到当前上传窗口内!', 'Drag area upload': '拖动区域上传', Upload: '上传', 'ReUpload File': '重新上传文件', 'Please enter file name': '请输入文件名', 'Please select the file to upload': '请选择要上传的文件', 'Resources manage': '资源中心', Security: '安全中心', Logout: '退出', 'No data': '查询无数据', 'Uploading...': '文件上传中', 'Loading...': '正在努力加载中...', List: '列表', 'Unable to download without proper url': '无下载url无法下载', Process: '工作流', 'Process definition': '工作流定义', 'Task record': '任务记录', 'Warning group manage': '告警组管理', 'Warning instance manage': '告警实例管理', 'Servers manage': '服务管理', 'UDF manage': 'UDF管理', 'Resource manage': '资源管理', 'Function manage': '函数管理', 'Edit password': '修改密码', 'Ordinary users': '普通用户', 'Create process': '创建工作流', 'Import process': '导入工作流', 'Timing state': '定时状态', Timing: '定时', Timezone: '时区', TreeView: '树形图', 'Mailbox already exists! 
Recipients and copyers cannot repeat': '邮箱已存在!收件人和抄送人不能重复', 'Mailbox input is illegal': '邮箱输入不合法', 'Please set the parameters before starting': '启动前请先设置参数', Continue: '继续', End: '结束', 'Node execution': '节点执行', 'Backward execution': '向后执行', 'Forward execution': '向前执行', 'Execute only the current node': '仅执行当前节点', 'Notification strategy': '通知策略', 'Notification group': '通知组', 'Please select a notification group': '请选择通知组', 'Whether it is a complement process?': '是否补数', 'Schedule date': '调度日期', 'Mode of execution': '执行方式', 'Serial execution': '串行执行', 'Parallel execution': '并行执行', 'Set parameters before timing': '定时前请先设置参数', 'Start and stop time': '起止时间', 'Please select time': '请选择时间', 'Please enter crontab': '请输入crontab', none_1: '都不发', success_1: '成功发', failure_1: '失败发', All_1: '成功或失败都发', Toolbar: '工具栏', 'View variables': '查看变量', 'Format DAG': '格式化DAG', 'Refresh DAG status': '刷新DAG状态', Return_1: '返回上一节点', 'Please enter format': '请输入格式为', 'connection parameter': '连接参数', 'Process definition details': '流程定义详情', 'Create process definition': '创建流程定义', 'Scheduled task list': '定时任务列表', 'Process instance details': '流程实例详情', 'Create Resource': '创建资源', 'User Center': '用户中心', AllStatus: '全部状态', None: '无', Name: '名称', 'Process priority': '流程优先级', 'Task priority': '任务优先级', 'Task timeout alarm': '任务超时告警', 'Timeout strategy': '超时策略', 'Timeout alarm': '超时告警', 'Timeout failure': '超时失败', 'Timeout period': '超时时长', 'Waiting Dependent complete': '等待依赖完成', 'Waiting Dependent start': '等待依赖启动', 'Check interval': '检查间隔', 'Timeout must be longer than check interval': '超时时间必须比检查间隔长', 'Timeout strategy must be selected': '超时策略必须选一个', 'Timeout must be a positive integer': '超时时长必须为正整数', 'Add dependency': '添加依赖', 'Whether dry-run': '是否空跑', and: '且', or: '或', month: '月', week: '周', day: '日', hour: '时', Running: '正在运行', 'Waiting for dependency to complete': '等待依赖完成', Selected: '已选', CurrentHour: '当前小时', Last1Hour: '前1小时', Last2Hours: '前2小时', Last3Hours: '前3小时', Last24Hours: '前24小时', today: '今天', Last1Days: '昨天', Last2Days: '前两天', Last3Days: '前三天', Last7Days: '前七天', ThisWeek: '本周', LastWeek: '上周', LastMonday: '上周一', LastTuesday: '上周二', LastWednesday: '上周三', LastThursday: '上周四', LastFriday: '上周五', LastSaturday: '上周六', LastSunday: '上周日', ThisMonth: '本月', LastMonth: '上月', LastMonthBegin: '上月初', LastMonthEnd: '上月末', 'Refresh status succeeded': '刷新状态成功', 'Queue manage': 'Yarn 队列管理', 'Create queue': '创建队列', 'Edit queue': '编辑队列', 'Datasource manage': '数据源中心', 'History task record': '历史任务记录', 'Please go online': '不要忘记上线', 'Queue value': '队列值', 'Please enter queue value': '请输入队列值', 'Worker group manage': 'Worker分组管理', 'Create worker group': '创建Worker分组', 'Edit worker group': '编辑Worker分组', 'Token manage': '令牌管理', 'Create token': '创建令牌', 'Edit token': '编辑令牌', Addresses: '地址', 'Worker Addresses': 'Worker地址', 'Please select the worker addresses': '请选择Worker地址', 'Failure time': '失效时间', 'Expiration time': '失效时间', User: '用户', 'Please enter token': '请输入令牌', 'Generate token': '生成令牌', Monitor: '监控中心', Group: '分组', 'Queue statistics': '队列统计', 'Command status statistics': '命令状态统计', 'Task kill': '等待kill任务', 'Task queue': '等待执行任务', 'Error command count': '错误指令数', 'Normal command count': '正确指令数', Manage: '管理', 'Number of connections': '连接数', Sent: '发送量', Received: '接收量', 'Min latency': '最低延时', 'Avg latency': '平均延时', 'Max latency': '最大延时', 'Node count': '节点数', 'Query time': '当前查询时间', 'Node self-test status': '节点自检状态', 'Health status': '健康状态', 'Max connections': '最大连接数', 'Threads connections': '当前连接数', 'Max used connections': '同时使用连接最大数', 'Threads 
running connections': '数据库当前活跃连接数', 'Worker group': 'Worker分组', 'Please enter a positive integer greater than 0': '请输入大于 0 的正整数', 'Pre Statement': '前置sql', 'Post Statement': '后置sql', 'Statement cannot be empty': '语句不能为空', 'Process Define Count': '工作流定义数', 'Process Instance Running Count': '正在运行的流程数', 'command number of waiting for running': '待执行的命令数', 'failure command number': '执行失败的命令数', 'tasks number of waiting running': '待运行任务数', 'task number of ready to kill': '待杀死任务数', 'Statistics manage': '统计管理', statistics: '统计', 'select tenant': '选择租户', 'Please enter Principal': '请输入Principal', 'Please enter the kerberos authentication parameter java.security.krb5.conf': '请输入kerberos认证参数 java.security.krb5.conf', 'Please enter the kerberos authentication parameter login.user.keytab.username': '请输入kerberos认证参数 login.user.keytab.username', 'Please enter the kerberos authentication parameter login.user.keytab.path': '请输入kerberos认证参数 login.user.keytab.path', 'The start time must not be the same as the end': '开始时间和结束时间不能相同', 'Startup parameter': '启动参数', 'Startup type': '启动类型', 'warning of timeout': '超时告警', 'Next five execution times': '接下来五次执行时间', 'Execute time': '执行时间', 'Complement range': '补数范围', 'Http Url': '请求地址', 'Http Method': '请求类型', 'Http Parameters': '请求参数', 'Http Parameters Key': '参数名', 'Http Parameters Position': '参数位置', 'Http Parameters Value': '参数值', 'Http Check Condition': '校验条件', 'Http Condition': '校验内容', 'Please Enter Http Url': '请填写请求地址(必填)', 'Please Enter Http Condition': '请填写校验内容', 'There is no data for this period of time': '该时间段无数据', 'Worker addresses cannot be empty': 'Worker地址不能为空', 'Please generate token': '请生成Token', 'Please Select token': '请选择Token失效时间', 'Spark Version': 'Spark版本', TargetDataBase: '目标库', TargetTable: '目标表', TargetJobName: '目标任务名', 'Please enter Pigeon job name': '请输入Pigeon任务名', 'Please enter the table of target': '请输入目标表名', 'Please enter a Target Table(required)': '请输入目标表(必填)', SpeedByte: '限流(字节数)', SpeedRecord: '限流(记录数)', '0 means unlimited by byte': 'KB,0代表不限制', '0 means unlimited by count': '0代表不限制', 'Modify User': '修改用户', 'Whether directory': '是否文件夹', Yes: '是', No: '否', 'Hadoop Custom Params': 'Hadoop参数', 'Sqoop Advanced Parameters': 'Sqoop参数', 'Sqoop Job Name': '任务名称', 'Please enter Mysql Database(required)': '请输入Mysql数据库(必填)', 'Please enter Mysql Table(required)': '请输入Mysql表名(必填)', 'Please enter Columns (Comma separated)': '请输入列名,用 , 隔开', 'Please enter Target Dir(required)': '请输入目标路径(必填)', 'Please enter Export Dir(required)': '请输入数据源路径(必填)', 'Please enter Hive Database(required)': '请输入Hive数据库(必填)', 'Please enter Hive Table(required)': '请输入Hive表名(必填)', 'Please enter Hive Partition Keys': '请输入分区键', 'Please enter Hive Partition Values': '请输入分区值', 'Please enter Replace Delimiter': '请输入替换分隔符', 'Please enter Fields Terminated': '请输入列分隔符', 'Please enter Lines Terminated': '请输入行分隔符', 'Please enter Concurrency': '请输入并发度', 'Please enter Update Key': '请输入更新列', 'Please enter Job Name(required)': '请输入任务名称(必填)', 'Please enter Custom Shell(required)': '请输入自定义脚本', Direct: '流向', Type: '类型', ModelType: '模式', ColumnType: '列类型', Database: '数据库', Column: '列', 'Map Column Hive': 'Hive类型映射', 'Map Column Java': 'Java类型映射', 'Export Dir': '数据源路径', 'Hive partition Keys': 'Hive 分区键', 'Hive partition Values': 'Hive 分区值', FieldsTerminated: '列分隔符', LinesTerminated: '行分隔符', IsUpdate: '是否更新', UpdateKey: '更新列', UpdateMode: '更新类型', 'Target Dir': '目标路径', DeleteTargetDir: '是否删除目录', FileType: '保存格式', CompressionCodec: '压缩类型', CreateHiveTable: '是否创建新表', DropDelimiter: '是否删除分隔符', OverWriteSrc: 
'是否覆盖数据源', ReplaceDelimiter: '替换分隔符', Concurrency: '并发度', Form: '表单', OnlyUpdate: '只更新', AllowInsert: '无更新便插入', 'Data Source': '数据来源', 'Data Target': '数据目的', 'All Columns': '全表导入', 'Some Columns': '选择列', 'Branch flow': '分支流转', 'Custom Job': '自定义任务', 'Custom Script': '自定义脚本', 'Cannot select the same node for successful branch flow and failed branch flow': '成功分支流转和失败分支流转不能选择同一个节点', 'Successful branch flow and failed branch flow are required': 'conditions节点成功和失败分支流转必填', 'No resources exist': '不存在资源', 'Please delete all non-existing resources': '请删除所有不存在资源', 'Unauthorized or deleted resources': '未授权或已删除资源', 'Please delete all non-existent resources': '请删除所有未授权或已删除资源', Kinship: '工作流关系', Reset: '重置', KinshipStateActive: '当前选择', KinshipState1: '已上线', KinshipState0: '工作流未上线', KinshipState10: '调度未上线', 'Dag label display control': 'Dag节点名称显隐', Enable: '启用', Disable: '停用', 'The Worker group no longer exists, please select the correct Worker group!': '该Worker分组已经不存在,请选择正确的Worker分组!', 'Please confirm whether the workflow has been saved before downloading': '下载前请确定工作流是否已保存', 'User name length is between 3 and 39': '用户名长度在3~39之间', 'Timeout Settings': '超时设置', 'Connect Timeout': '连接超时', 'Socket Timeout': 'Socket超时', 'Connect timeout be a positive integer': '连接超时必须为数字', 'Socket Timeout be a positive integer': 'Socket超时必须为数字', ms: '毫秒', 'Please Enter Url': '请直接填写地址,例如:127.0.0.1:7077', Master: 'Master', 'Please select the waterdrop resources': '请选择waterdrop配置文件', zkDirectory: 'zk注册目录', 'Directory detail': '查看目录详情', 'Connection name': '连线名', 'Current connection settings': '当前连线设置', 'Please save the DAG before formatting': '格式化前请先保存DAG', 'Batch copy': '批量复制', 'Related items': '关联项目', 'Project name is required': '项目名称必填', 'Batch move': '批量移动', Version: '版本', 'Pre tasks': '前置任务', 'Running Memory': '运行内存', 'Max Memory': '最大内存', 'Min Memory': '最小内存', 'The workflow canvas is abnormal and cannot be saved, please recreate': '该工作流画布异常,无法保存,请重新创建', Info: '提示', 'Datasource userName': '所属用户', 'Resource userName': '所属用户', 'Environment manage': '环境管理', 'Create environment': '创建环境', 'Edit environment': '编辑', 'Environment value': 'Environment value', 'Environment Name': '环境名称', 'Environment Code': '环境编码', 'Environment Config': '环境配置', 'Environment Desc': '详细描述', 'Environment Worker Group': 'Worker组', 'Please enter environment config': '请输入环境配置信息', 'Please enter environment desc': '请输入详细描述', 'Please select worker groups': '请选择Worker分组', condition: '条件', 'The condition content cannot be empty': '条件内容不能为空', 'Reference from': '使用已有任务', 'No more...': '没有更多了...', 'Task Definition': '任务定义', 'Create task': '创建任务', 'Task Type': '任务类型', 'Process execute type': '执行策略', parallel: '并行', 'Serial wait': '串行等待', 'Serial discard': '串行抛弃', 'Serial priority': '串行优先', 'Recover serial wait': '串行恢复', IsEnableProxy: '启用代理', WebHook: 'Web钩子', webHook: 'Web钩子', Keyword: '密钥', Proxy: '代理', receivers: '收件人', receiverCcs: '抄送人', transportProtocol: '邮件协议', serverHost: 'SMTP服务器', serverPort: 'SMTP端口', sender: '发件人', enableSmtpAuth: '请求认证', starttlsEnable: 'STARTTLS连接', sslEnable: 'SSL连接', smtpSslTrust: 'SSL证书信任', url: 'URL', requestType: '请求方式', headerParams: '请求头', bodyParams: '请求体', contentField: '内容字段', path: '脚本路径', userParams: '自定义参数', corpId: '企业ID', secret: '密钥', teamSendMsg: '群发信息', userSendMsg: '群员信息', agentId: '应用ID', users: '群员', Username: '用户名', username: '用户名', showType: '内容展示类型', 'Please select a task type (required)': '请选择任务类型(必选)', layoutType: '布局类型', gridLayout: '网格布局', dagreLayout: '层次布局', rows: '行数', cols: '列数', processOnline: '已上线', 
searchNode: '搜索节点', dagScale: '缩放', workflowName: '工作流名称', scheduleStartTime: '定时开始时间', scheduleEndTime: '定时结束时间', crontabExpression: 'Crontab', workflowPublishStatus: '工作流上线状态', schedulePublishStatus: '定时状态', 'Task group manage': '任务组管理', 'Task group option': '任务组配置', 'Create task group': '创建任务组', 'Edit task group': '编辑任务组', 'Delete task group': '删除任务组', 'Task group code': '任务组编号', 'Task group name': '任务组名称', 'Task group resource pool size': '资源容量', 'Task group resource used pool size': '已用资源', 'Task group desc': '描述信息', 'Task group status': '任务组状态', 'Task group enable status': '启用', 'Task group disable status': '不可用', 'Please enter task group desc': '请输入任务组描述', 'Please enter task group resource pool size': '请输入资源容量大小', 'Task group resource pool size be a number': '资源容量大小必须大于等于1的数值', 'Please select project': '请选择项目', 'Task group queue': '任务组队列', 'Task group queue priority': '组内优先级', 'Task group queue priority be a number': '优先级必须是大于等于0的数值', 'Task group queue force starting status': '是否强制启动', 'Task group in queue': '是否排队中', 'Task group queue status': '任务状态', 'View task group queue': '查看任务组队列', 'Task group queue the status of waiting': '等待入队', 'Task group queue the status of queuing': '排队中', 'Task group queue the status of releasing': '已释放', 'Modify task group queue priority': '修改优先级', 'Force to start task': '强制启动', 'Priority not empty': '优先级不能为空', 'Priority must be number': '优先级必须是数值' }
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,740
[Bug] [dolphinscheduler-dao] Upgrade to 2.0.1 failed, 2.0.0/dolphinscheduler_ddl.sql has invalid SQL
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened ![image](https://user-images.githubusercontent.com/96870549/147748783-e9bb62aa-d920-4e37-9e64-60902312b709.png) The SQL has a syntax error, and because it is executed inside a stored routine, the exception information is hidden (see the illustrative sketch after this record's file content). ### What you expected to happen The upgrade completes normally. ### How to reproduce Run a normal upgrade from 2.0.0 to 2.0.1. ### Anything else _No response_ ### Version 2.0.1 ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7740
https://github.com/apache/dolphinscheduler/pull/7761
4514eaeb8b8e209beb8c7e10109d2c5412323378
3e34e69cfbe80016d17af5be753aa4ae8afc9176
"2021-12-30T11:44:07Z"
java
"2021-12-31T10:04:59Z"
dolphinscheduler-dao/src/main/resources/sql/upgrade/2.0.0_schema/mysql/dolphinscheduler_ddl.sql
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ SET sql_mode=(SELECT REPLACE(@@sql_mode,'ONLY_FULL_GROUP_BY','')); -- uc_dolphin_T_t_ds_user_A_state drop PROCEDURE if EXISTS uc_dolphin_T_t_ds_user_A_state; delimiter d// CREATE PROCEDURE uc_dolphin_T_t_ds_user_A_state() BEGIN IF NOT EXISTS (SELECT 1 FROM information_schema.COLUMNS WHERE TABLE_NAME='t_ds_user' AND TABLE_SCHEMA=(SELECT DATABASE()) AND COLUMN_NAME ='state') THEN ALTER TABLE t_ds_user ADD `state` tinyint(4) DEFAULT '1' COMMENT 'state 0:disable 1:enable'; END IF; END; d// delimiter ; CALL uc_dolphin_T_t_ds_user_A_state; DROP PROCEDURE uc_dolphin_T_t_ds_user_A_state; -- uc_dolphin_T_t_ds_tenant_A_tenant_name drop PROCEDURE if EXISTS uc_dolphin_T_t_ds_tenant_A_tenant_name; delimiter d// CREATE PROCEDURE uc_dolphin_T_t_ds_tenant_A_tenant_name() BEGIN IF EXISTS (SELECT 1 FROM information_schema.COLUMNS WHERE TABLE_NAME='t_ds_tenant' AND TABLE_SCHEMA=(SELECT DATABASE()) AND COLUMN_NAME ='tenant_name') THEN ALTER TABLE t_ds_tenant DROP `tenant_name`; END IF; END; d// delimiter ; CALL uc_dolphin_T_t_ds_tenant_A_tenant_name; DROP PROCEDURE uc_dolphin_T_t_ds_tenant_A_tenant_name; -- uc_dolphin_T_t_ds_alertgroup_A_alert_instance_ids drop PROCEDURE if EXISTS uc_dolphin_T_t_ds_alertgroup_A_alert_instance_ids; delimiter d// CREATE PROCEDURE uc_dolphin_T_t_ds_alertgroup_A_alert_instance_ids() BEGIN IF NOT EXISTS (SELECT 1 FROM information_schema.COLUMNS WHERE TABLE_NAME='t_ds_alertgroup' AND TABLE_SCHEMA=(SELECT DATABASE()) AND COLUMN_NAME ='alert_instance_ids') THEN ALTER TABLE t_ds_alertgroup ADD COLUMN `alert_instance_ids` varchar (255) DEFAULT NULL COMMENT 'alert instance ids' AFTER `id`; END IF; END; d// delimiter ; CALL uc_dolphin_T_t_ds_alertgroup_A_alert_instance_ids(); DROP PROCEDURE uc_dolphin_T_t_ds_alertgroup_A_alert_instance_ids; -- uc_dolphin_T_t_ds_alertgroup_A_create_user_id drop PROCEDURE if EXISTS uc_dolphin_T_t_ds_alertgroup_A_create_user_id; delimiter d// CREATE PROCEDURE uc_dolphin_T_t_ds_alertgroup_A_create_user_id() BEGIN IF NOT EXISTS (SELECT 1 FROM information_schema.COLUMNS WHERE TABLE_NAME='t_ds_alertgroup' AND TABLE_SCHEMA=(SELECT DATABASE()) AND COLUMN_NAME ='create_user_id') THEN ALTER TABLE t_ds_alertgroup ADD COLUMN `create_user_id` int(11) DEFAULT NULL COMMENT 'create user id' AFTER `alert_instance_ids`; END IF; END; d// delimiter ; CALL uc_dolphin_T_t_ds_alertgroup_A_create_user_id(); DROP PROCEDURE uc_dolphin_T_t_ds_alertgroup_A_create_user_id; -- uc_dolphin_T_t_ds_alertgroup_A_add_UN_groupName drop PROCEDURE if EXISTS uc_dolphin_T_t_ds_alertgroup_A_add_UN_groupName; delimiter d// CREATE PROCEDURE uc_dolphin_T_t_ds_alertgroup_A_add_UN_groupName() BEGIN IF NOT EXISTS (SELECT 1 FROM information_schema.STATISTICS WHERE TABLE_NAME='t_ds_alertgroup' AND TABLE_SCHEMA=(SELECT DATABASE()) AND 
INDEX_NAME ='t_ds_alertgroup_name_un') THEN ALTER TABLE t_ds_alertgroup ADD UNIQUE KEY `t_ds_alertgroup_name_un` (`group_name`); END IF; END; d// delimiter ; CALL uc_dolphin_T_t_ds_alertgroup_A_add_UN_groupName(); DROP PROCEDURE uc_dolphin_T_t_ds_alertgroup_A_add_UN_groupName; -- uc_dolphin_T_t_ds_datasource_A_add_UN_datasourceName drop PROCEDURE if EXISTS uc_dolphin_T_t_ds_datasource_A_add_UN_datasourceName; delimiter d// CREATE PROCEDURE uc_dolphin_T_t_ds_datasource_A_add_UN_datasourceName() BEGIN IF NOT EXISTS (SELECT 1 FROM information_schema.STATISTICS WHERE TABLE_NAME='t_ds_datasource' AND TABLE_SCHEMA=(SELECT DATABASE()) AND INDEX_NAME ='t_ds_datasource_name_un') THEN ALTER TABLE t_ds_datasource ADD UNIQUE KEY `t_ds_datasource_name_un` (`name`, `type`); END IF; END; d// delimiter ; CALL uc_dolphin_T_t_ds_datasource_A_add_UN_datasourceName(); DROP PROCEDURE uc_dolphin_T_t_ds_datasource_A_add_UN_datasourceName; -- uc_dolphin_T_t_ds_project_A_add_code drop PROCEDURE if EXISTS uc_dolphin_T_t_ds_project_A_add_code; delimiter d// CREATE PROCEDURE uc_dolphin_T_t_ds_project_A_add_code() BEGIN IF NOT EXISTS (SELECT 1 FROM information_schema.COLUMNS WHERE TABLE_NAME='t_ds_project' AND TABLE_SCHEMA=(SELECT DATABASE()) AND COLUMN_NAME ='code') THEN alter table t_ds_project add `code` bigint(20) NOT NULL COMMENT 'encoding' AFTER `name`; END IF; END; d// delimiter ; CALL uc_dolphin_T_t_ds_project_A_add_code(); DROP PROCEDURE uc_dolphin_T_t_ds_project_A_add_code; -- ---------------------------- -- Table structure for t_ds_plugin_define -- ---------------------------- SET sql_mode=(SELECT REPLACE(@@sql_mode,'ONLY_FULL_GROUP_BY','')); DROP TABLE IF EXISTS `t_ds_plugin_define`; CREATE TABLE `t_ds_plugin_define` ( `id` int NOT NULL AUTO_INCREMENT, `plugin_name` varchar(100) NOT NULL COMMENT 'the name of plugin eg: email', `plugin_type` varchar(100) NOT NULL COMMENT 'plugin type . alert=alert plugin, job=job plugin', `plugin_params` text COMMENT 'plugin params', `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP, `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, PRIMARY KEY (`id`), UNIQUE KEY `t_ds_plugin_define_UN` (`plugin_name`,`plugin_type`) ) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8; -- ---------------------------- -- Table structure for t_ds_alert_plugin_instance -- ---------------------------- DROP TABLE IF EXISTS `t_ds_alert_plugin_instance`; CREATE TABLE `t_ds_alert_plugin_instance` ( `id` int NOT NULL AUTO_INCREMENT, `plugin_define_id` int NOT NULL, `plugin_instance_params` text COMMENT 'plugin instance params. 
Also contain the params value which user input in web ui.', `create_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP, `update_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, `instance_name` varchar(200) DEFAULT NULL COMMENT 'alert instance name', PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; -- ---------------------------- -- Table structure for t_ds_environment -- ---------------------------- DROP TABLE IF EXISTS `t_ds_environment`; CREATE TABLE `t_ds_environment` ( `id` bigint(11) NOT NULL AUTO_INCREMENT COMMENT 'id', `code` bigint(20) DEFAULT NULL COMMENT 'encoding', `name` varchar(100) NOT NULL COMMENT 'environment name', `config` text NULL DEFAULT NULL COMMENT 'this config contains many environment variables config', `description` text NULL DEFAULT NULL COMMENT 'the details', `operator` int(11) DEFAULT NULL COMMENT 'operator user id', `create_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP, `update_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, PRIMARY KEY (`id`), UNIQUE KEY `environment_name_unique` (`name`), UNIQUE KEY `environment_code_unique` (`code`) ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8; -- ---------------------------- -- Table structure for t_ds_environment_worker_group_relation -- ---------------------------- DROP TABLE IF EXISTS `t_ds_environment_worker_group_relation`; CREATE TABLE `t_ds_environment_worker_group_relation` ( `id` bigint(11) NOT NULL AUTO_INCREMENT COMMENT 'id', `environment_code` bigint(20) NOT NULL COMMENT 'environment code', `worker_group` varchar(255) NOT NULL COMMENT 'worker group id', `operator` int(11) DEFAULT NULL COMMENT 'operator user id', `create_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP, `update_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP, PRIMARY KEY (`id`), UNIQUE KEY `environment_worker_group_unique` (`environment_code`,`worker_group`) ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8; -- ---------------------------- -- Table structure for t_ds_process_definition_log -- ---------------------------- DROP TABLE IF EXISTS `t_ds_process_definition_log`; CREATE TABLE `t_ds_process_definition_log` ( `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id', `code` bigint(20) NOT NULL COMMENT 'encoding', `name` varchar(200) DEFAULT NULL COMMENT 'process definition name', `version` int(11) DEFAULT '0' COMMENT 'process definition version', `description` text COMMENT 'description', `project_code` bigint(20) NOT NULL COMMENT 'project code', `release_state` tinyint(4) DEFAULT NULL COMMENT 'process definition release state:0:offline,1:online', `user_id` int(11) DEFAULT NULL COMMENT 'process definition creator id', `global_params` text COMMENT 'global parameters', `flag` tinyint(4) DEFAULT NULL COMMENT '0 not available, 1 available', `locations` text COMMENT 'Node location information', `warning_group_id` int(11) DEFAULT NULL COMMENT 'alert group id', `timeout` int(11) DEFAULT '0' COMMENT 'time out,unit: minute', `tenant_id` int(11) NOT NULL DEFAULT '-1' COMMENT 'tenant id', `operator` int(11) DEFAULT NULL COMMENT 'operator user id', `operate_time` datetime DEFAULT NULL COMMENT 'operate time', `create_time` datetime NOT NULL COMMENT 'create time', `update_time` datetime NOT NULL COMMENT 'update time', PRIMARY KEY (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8; -- ---------------------------- -- Table structure for t_ds_task_definition -- ---------------------------- DROP TABLE IF EXISTS `t_ds_task_definition`; CREATE 
TABLE `t_ds_task_definition` ( `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id', `code` bigint(20) NOT NULL COMMENT 'encoding', `name` varchar(200) DEFAULT NULL COMMENT 'task definition name', `version` int(11) DEFAULT '0' COMMENT 'task definition version', `description` text COMMENT 'description', `project_code` bigint(20) NOT NULL COMMENT 'project code', `user_id` int(11) DEFAULT NULL COMMENT 'task definition creator id', `task_type` varchar(50) NOT NULL COMMENT 'task type', `task_params` longtext COMMENT 'job custom parameters', `flag` tinyint(2) DEFAULT NULL COMMENT '0 not available, 1 available', `task_priority` tinyint(4) DEFAULT NULL COMMENT 'job priority', `worker_group` varchar(200) DEFAULT NULL COMMENT 'worker grouping', `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code', `fail_retry_times` int(11) DEFAULT NULL COMMENT 'number of failed retries', `fail_retry_interval` int(11) DEFAULT NULL COMMENT 'failed retry interval', `timeout_flag` tinyint(2) DEFAULT '0' COMMENT 'timeout flag:0 close, 1 open', `timeout_notify_strategy` tinyint(4) DEFAULT NULL COMMENT 'timeout notification policy: 0 warning, 1 fail', `timeout` int(11) DEFAULT '0' COMMENT 'timeout length,unit: minute', `delay_time` int(11) DEFAULT '0' COMMENT 'delay execution time,unit: minute', `resource_ids` text COMMENT 'resource id, separated by comma', `create_time` datetime NOT NULL COMMENT 'create time', `update_time` datetime NOT NULL COMMENT 'update time', PRIMARY KEY (`id`,`code`) ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8; -- ---------------------------- -- Table structure for t_ds_task_definition_log -- ---------------------------- DROP TABLE IF EXISTS `t_ds_task_definition_log`; CREATE TABLE `t_ds_task_definition_log` ( `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id', `code` bigint(20) NOT NULL COMMENT 'encoding', `name` varchar(200) DEFAULT NULL COMMENT 'task definition name', `version` int(11) DEFAULT '0' COMMENT 'task definition version', `description` text COMMENT 'description', `project_code` bigint(20) NOT NULL COMMENT 'project code', `user_id` int(11) DEFAULT NULL COMMENT 'task definition creator id', `task_type` varchar(50) NOT NULL COMMENT 'task type', `task_params` longtext COMMENT 'job custom parameters', `flag` tinyint(2) DEFAULT NULL COMMENT '0 not available, 1 available', `task_priority` tinyint(4) DEFAULT NULL COMMENT 'job priority', `worker_group` varchar(200) DEFAULT NULL COMMENT 'worker grouping', `environment_code` bigint(20) DEFAULT '-1' COMMENT 'environment code', `fail_retry_times` int(11) DEFAULT NULL COMMENT 'number of failed retries', `fail_retry_interval` int(11) DEFAULT NULL COMMENT 'failed retry interval', `timeout_flag` tinyint(2) DEFAULT '0' COMMENT 'timeout flag:0 close, 1 open', `timeout_notify_strategy` tinyint(4) DEFAULT NULL COMMENT 'timeout notification policy: 0 warning, 1 fail', `timeout` int(11) DEFAULT '0' COMMENT 'timeout length,unit: minute', `delay_time` int(11) DEFAULT '0' COMMENT 'delay execution time,unit: minute', `resource_ids` text DEFAULT NULL COMMENT 'resource id, separated by comma', `operator` int(11) DEFAULT NULL COMMENT 'operator user id', `task_group_id` int(11) DEFAULT NULL COMMENT 'task group id', `operate_time` datetime DEFAULT NULL COMMENT 'operate time', `create_time` datetime NOT NULL COMMENT 'create time', `update_time` datetime NOT NULL COMMENT 'update time', PRIMARY KEY (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8; -- ---------------------------- -- Table structure for 
t_ds_process_task_relation -- ---------------------------- DROP TABLE IF EXISTS `t_ds_process_task_relation`; CREATE TABLE `t_ds_process_task_relation` ( `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id', `name` varchar(200) DEFAULT NULL COMMENT 'relation name', `project_code` bigint(20) NOT NULL COMMENT 'project code', `process_definition_code` bigint(20) NOT NULL COMMENT 'process code', `process_definition_version` int(11) NOT NULL COMMENT 'process version', `pre_task_code` bigint(20) NOT NULL COMMENT 'pre task code', `pre_task_version` int(11) NOT NULL COMMENT 'pre task version', `post_task_code` bigint(20) NOT NULL COMMENT 'post task code', `post_task_version` int(11) NOT NULL COMMENT 'post task version', `condition_type` tinyint(2) DEFAULT NULL COMMENT 'condition type : 0 none, 1 judge 2 delay', `condition_params` text COMMENT 'condition params(json)', `create_time` datetime NOT NULL COMMENT 'create time', `update_time` datetime NOT NULL COMMENT 'update time', PRIMARY KEY (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8; -- ---------------------------- -- Table structure for t_ds_process_task_relation_log -- ---------------------------- DROP TABLE IF EXISTS `t_ds_process_task_relation_log`; CREATE TABLE `t_ds_process_task_relation_log` ( `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'self-increasing id', `name` varchar(200) DEFAULT NULL COMMENT 'relation name', `project_code` bigint(20) NOT NULL COMMENT 'project code', `process_definition_code` bigint(20) NOT NULL COMMENT 'process code', `process_definition_version` int(11) NOT NULL COMMENT 'process version', `pre_task_code` bigint(20) NOT NULL COMMENT 'pre task code', `pre_task_version` int(11) NOT NULL COMMENT 'pre task version', `post_task_code` bigint(20) NOT NULL COMMENT 'post task code', `post_task_version` int(11) NOT NULL COMMENT 'post task version', `condition_type` tinyint(2) DEFAULT NULL COMMENT 'condition type : 0 none, 1 judge 2 delay', `condition_params` text COMMENT 'condition params(json)', `operator` int(11) DEFAULT NULL COMMENT 'operator user id', `operate_time` datetime DEFAULT NULL COMMENT 'operate time', `create_time` datetime NOT NULL COMMENT 'create time', `update_time` datetime NOT NULL COMMENT 'update time', PRIMARY KEY (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8; -- t_ds_worker_group DROP TABLE IF EXISTS `t_ds_worker_group`; CREATE TABLE `t_ds_worker_group` ( `id` bigint(11) NOT NULL AUTO_INCREMENT COMMENT 'id', `name` varchar(255) NOT NULL COMMENT 'worker group name', `addr_list` text NULL DEFAULT NULL COMMENT 'worker addr list. 
split by [,]', `create_time` datetime NULL DEFAULT NULL COMMENT 'create time', `update_time` datetime NULL DEFAULT NULL COMMENT 'update time', PRIMARY KEY (`id`), UNIQUE KEY `name_unique` (`name`) ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8; -- t_ds_command alter table t_ds_command change process_definition_id process_definition_code bigint(20) NOT NULL COMMENT 'process definition code'; alter table t_ds_command add environment_code bigint(20) DEFAULT '-1' COMMENT 'environment code' AFTER worker_group; alter table t_ds_command add dry_run tinyint(4) DEFAULT '0' COMMENT 'dry run flag:0 normal, 1 dry run' AFTER environment_code; alter table t_ds_command add process_definition_version int(11) DEFAULT '0' COMMENT 'process definition version' AFTER process_definition_code; alter table t_ds_command add process_instance_id int(11) DEFAULT '0' COMMENT 'process instance id' AFTER process_definition_version; alter table t_ds_command add KEY `priority_id_index` (`process_instance_priority`,`id`) USING BTREE; -- t_ds_error_command alter table t_ds_error_command change process_definition_id process_definition_code bigint(20) NOT NULL COMMENT 'process definition code'; alter table t_ds_error_command add environment_code bigint(20) DEFAULT '-1' COMMENT 'environment code' AFTER worker_group; alter table t_ds_error_command add dry_run tinyint(4) DEFAULT '0' COMMENT 'dry run flag:0 normal, 1 dry run' AFTER message; alter table t_ds_error_command add process_definition_version int(11) DEFAULT '0' COMMENT 'process definition version' AFTER process_definition_code; alter table t_ds_error_command add process_instance_id int(11) DEFAULT '0' COMMENT 'process instance id' AFTER process_definition_version; -- t_ds_process_instance note: Data migration is not supported alter table t_ds_process_instance change process_definition_id process_definition_code bigint(20) NOT NULL COMMENT 'process definition code'; alter table t_ds_process_instance add process_definition_version int(11) DEFAULT '0' COMMENT 'process definition version' AFTER process_definition_code; alter table t_ds_process_instance add environment_code bigint(20) DEFAULT '-1' COMMENT 'environment code' AFTER worker_group; alter table t_ds_process_instance add var_pool longtext COMMENT 'var_pool' AFTER tenant_id; alter table t_ds_process_instance add dry_run tinyint(4) DEFAULT '0' COMMENT 'dry run flag:0 normal, 1 dry run' AFTER var_pool; alter table t_ds_process_instance drop KEY `process_instance_index`; alter table t_ds_process_instance add KEY `process_instance_index` (`process_definition_code`,`id`) USING BTREE; alter table t_ds_process_instance drop process_instance_json; alter table t_ds_process_instance drop locations; alter table t_ds_process_instance drop connects; alter table t_ds_process_instance drop dependence_schedule_times; -- t_ds_task_instance note: Data migration is not supported alter table t_ds_task_instance change process_definition_id task_code bigint(20) NOT NULL COMMENT 'task definition code'; alter table t_ds_task_instance add task_definition_version int(11) DEFAULT '0' COMMENT 'task definition version' AFTER task_code; alter table t_ds_task_instance add task_params text COMMENT 'job custom parameters' AFTER app_link; alter table t_ds_task_instance add environment_code bigint(20) DEFAULT '-1' COMMENT 'environment code' AFTER worker_group; alter table t_ds_task_instance add environment_config text COMMENT 'this config contains many environment variables config' AFTER environment_code; alter table t_ds_task_instance add 
first_submit_time datetime DEFAULT NULL COMMENT 'task first submit time' AFTER executor_id; alter table t_ds_task_instance add delay_time int(4) DEFAULT '0' COMMENT 'task delay execution time' AFTER first_submit_time; alter table t_ds_task_instance add var_pool longtext COMMENT 'var_pool' AFTER delay_time; alter table t_ds_task_instance add dry_run tinyint(4) DEFAULT '0' COMMENT 'dry run flag:0 normal, 1 dry run' AFTER var_pool; alter table t_ds_task_instance drop KEY `task_instance_index`; alter table t_ds_task_instance drop task_json; -- t_ds_schedules alter table t_ds_schedules change process_definition_id process_definition_code bigint(20) NOT NULL COMMENT 'process definition code'; alter table t_ds_schedules add timezone_id varchar(40) DEFAULT NULL COMMENT 'timezoneId' AFTER end_time; alter table t_ds_schedules add environment_code bigint(20) DEFAULT '-1' COMMENT 'environment code' AFTER worker_group; -- t_ds_process_definition alter table t_ds_process_definition add `code` bigint(20) NOT NULL COMMENT 'encoding' AFTER `id`; alter table t_ds_process_definition change project_id project_code bigint(20) NOT NULL COMMENT 'project code' AFTER `description`; alter table t_ds_process_definition add `warning_group_id` int(11) DEFAULT NULL COMMENT 'alert group id' AFTER `locations`; alter table t_ds_process_definition add UNIQUE KEY `process_unique` (`name`,`project_code`) USING BTREE; alter table t_ds_process_definition modify `description` text COMMENT 'description' after `version`; alter table t_ds_process_definition modify `release_state` tinyint(4) DEFAULT NULL COMMENT 'process definition release state:0:offline,1:online' after `project_code`; alter table t_ds_process_definition modify `create_time` datetime DEFAULT NULL COMMENT 'create time' after `tenant_id`;
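The MySQL upgrade script above wraps each DDL change in a stored procedure, and the report's complaint is that failures inside such routines surface poorly. As a minimal, hypothetical sketch (the procedure and table names below are illustrative, not taken from DolphinScheduler), an `EXIT HANDLER` that issues `RESIGNAL` preserves the original SQLSTATE and message, so a broken statement aborts the upgrade with a readable error instead of a bare failed `CALL`:

```sql
-- Hypothetical names throughout; this mirrors the delimiter/procedure
-- pattern of the script above, plus an explicit re-raise on failure.
DROP PROCEDURE IF EXISTS demo_upgrade_step;
delimiter d//
CREATE PROCEDURE demo_upgrade_step()
BEGIN
    -- Re-raise any SQL error with its original diagnostics instead of
    -- letting the failure be reported only as a failed CALL.
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
    BEGIN
        RESIGNAL;
    END;
    ALTER TABLE t_demo ADD COLUMN demo_col int DEFAULT 0;
END;
d//
delimiter ;
CALL demo_upgrade_step();
DROP PROCEDURE demo_upgrade_step;
```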
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,740
[Bug] [dolphinscheduler-dao] Upgrade to 2.0.1 failed, 2.0.0/dolphinscheduler_ddl.sql has invalid SQL
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar issues. ### What happened ![image](https://user-images.githubusercontent.com/96870549/147748783-e9bb62aa-d920-4e37-9e64-60902312b709.png) The SQL has a syntax error, and because it is executed inside a stored routine, the exception information is hidden (see the illustrative sketch after this record's file content). ### What you expected to happen The upgrade completes normally. ### How to reproduce Run a normal upgrade from 2.0.0 to 2.0.1. ### Anything else _No response_ ### Version 2.0.1 ### Are you willing to submit PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7740
https://github.com/apache/dolphinscheduler/pull/7761
4514eaeb8b8e209beb8c7e10109d2c5412323378
3e34e69cfbe80016d17af5be753aa4ae8afc9176
"2021-12-30T11:44:07Z"
java
"2021-12-31T10:04:59Z"
dolphinscheduler-dao/src/main/resources/sql/upgrade/2.0.0_schema/postgresql/dolphinscheduler_ddl.sql
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ delimiter d// CREATE OR REPLACE FUNCTION public.dolphin_update_metadata( ) RETURNS character varying LANGUAGE 'plpgsql' COST 100 VOLATILE PARALLEL UNSAFE AS $BODY$ DECLARE v_schema varchar; BEGIN ---get schema name v_schema =current_schema(); --- rename columns EXECUTE 'ALTER TABLE IF EXISTS ' || quote_ident(v_schema) ||'.t_ds_command RENAME COLUMN process_definition_id to process_definition_code'; EXECUTE 'ALTER TABLE IF EXISTS ' || quote_ident(v_schema) ||'.t_ds_error_command RENAME COLUMN process_definition_id to process_definition_code'; EXECUTE 'ALTER TABLE IF EXISTS ' || quote_ident(v_schema) ||'.t_ds_process_instance RENAME COLUMN process_definition_id to process_definition_code'; EXECUTE 'ALTER TABLE IF EXISTS ' || quote_ident(v_schema) ||'.t_ds_task_instance RENAME COLUMN process_definition_id to task_code'; EXECUTE 'ALTER TABLE IF EXISTS ' || quote_ident(v_schema) ||'.t_ds_schedules RENAME COLUMN process_definition_id to process_definition_code'; EXECUTE 'ALTER TABLE IF EXISTS ' || quote_ident(v_schema) ||'.t_ds_process_definition RENAME COLUMN project_id to project_code'; --- alter column type EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_command ALTER COLUMN process_definition_code TYPE bigint'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_error_command ALTER COLUMN process_definition_code TYPE bigint'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_process_instance ALTER COLUMN process_definition_code TYPE bigint'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_task_instance ALTER COLUMN task_code TYPE bigint'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_schedules ALTER COLUMN process_definition_code TYPE bigint'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_process_definition ALTER COLUMN project_code TYPE bigint'; --- add columns EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_user ADD COLUMN IF NOT EXISTS "state" int DEFAULT 1'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_alertgroup ADD COLUMN IF NOT EXISTS "alert_instance_ids" varchar(255) DEFAULT NULL'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_alertgroup ADD COLUMN IF NOT EXISTS "create_user_id" int4 DEFAULT NULL'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_project ADD COLUMN IF NOT EXISTS "code" bigint NOT NULL'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_command ADD COLUMN IF NOT EXISTS "environment_code" bigint DEFAULT -1'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_command ADD COLUMN IF NOT EXISTS "dry_run" int DEFAULT 0'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_command ADD COLUMN IF NOT EXISTS "process_definition_version" int DEFAULT 0'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) 
||'.t_ds_command ADD COLUMN IF NOT EXISTS "process_instance_id" int DEFAULT 0'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_error_command ADD COLUMN IF NOT EXISTS "environment_code" bigint DEFAULT -1'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_error_command ADD COLUMN IF NOT EXISTS "dry_run" int DEFAULT 0'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_error_command ADD COLUMN IF NOT EXISTS "process_definition_version" int DEFAULT 0'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_error_command ADD COLUMN IF NOT EXISTS "process_instance_id" int DEFAULT 0'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_process_instance ADD COLUMN IF NOT EXISTS "process_definition_version" int DEFAULT 0'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_process_instance ADD COLUMN IF NOT EXISTS "environment_code" bigint DEFAULT -1'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_process_instance ADD COLUMN IF NOT EXISTS "var_pool" text'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_process_instance ADD COLUMN IF NOT EXISTS "dry_run" int DEFAULT 0'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_process_instance ADD COLUMN IF NOT EXISTS "next_process_instance_id" int DEFAULT 0'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_task_instance ADD COLUMN IF NOT EXISTS "task_definition_version" int DEFAULT 0'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_task_instance ADD COLUMN IF NOT EXISTS "task_params" text'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_task_instance ADD COLUMN IF NOT EXISTS "environment_code" bigint DEFAULT -1'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_task_instance ADD COLUMN IF NOT EXISTS "environment_config" text'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_task_instance ADD COLUMN IF NOT EXISTS "first_submit_time" timestamp DEFAULT NULL'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_task_instance ADD COLUMN IF NOT EXISTS "delay_time" int DEFAULT 0'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_task_instance ADD COLUMN IF NOT EXISTS "var_pool" text'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_task_instance ADD COLUMN IF NOT EXISTS "dry_run" int DEFAULT 0'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_task_instance ADD COLUMN IF NOT EXISTS "task_group_id" int DEFAULT NULL'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_schedules ADD COLUMN IF NOT EXISTS "timezone_id" varchar(40) DEFAULT NULL'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_schedules ADD COLUMN IF NOT EXISTS "environment_code" int DEFAULT -1'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_process_definition ADD COLUMN IF NOT EXISTS "code" bigint'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_process_definition ADD COLUMN IF NOT EXISTS "warning_group_id" int'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_process_definition ADD COLUMN IF NOT EXISTS "execution_type" int DEFAULT 0'; ---drop columns EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_tenant DROP COLUMN IF EXISTS "tenant_name"'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_process_instance DROP COLUMN IF EXISTS "process_instance_json"'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_process_instance DROP COLUMN IF EXISTS "locations"'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_process_instance DROP COLUMN IF EXISTS "connects"'; EXECUTE 'ALTER TABLE ' || 
quote_ident(v_schema) ||'.t_ds_process_instance DROP COLUMN IF EXISTS "dependence_schedule_times"'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'.t_ds_task_instance DROP COLUMN IF EXISTS "task_json"'; -- add CONSTRAINT EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'."t_ds_alertgroup" ADD CONSTRAINT "t_ds_alertgroup_name_un" UNIQUE ("group_name")'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'."t_ds_datasource" ADD CONSTRAINT "t_ds_datasource_name_un" UNIQUE ("name","type")'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'."t_ds_command" ALTER COLUMN "process_definition_code" SET NOT NULL'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'."t_ds_process_instance" ALTER COLUMN "process_definition_code" SET NOT NULL'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'."t_ds_task_instance" ALTER COLUMN "task_code" SET NOT NULL'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'."t_ds_schedules" ALTER COLUMN "process_definition_code" SET NOT NULL'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'."t_ds_process_definition" ALTER COLUMN "code" SET NOT NULL'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'."t_ds_process_definition" ALTER COLUMN "project_code" SET NOT NULL'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'."t_ds_process_definition" ADD CONSTRAINT "process_unique" UNIQUE ("name","project_code")'; EXECUTE 'ALTER TABLE ' || quote_ident(v_schema) ||'."t_ds_process_definition" ALTER COLUMN "description" SET NOT NULL'; --- drop index EXECUTE 'DROP INDEX IF EXISTS "process_instance_index"'; EXECUTE 'DROP INDEX IF EXISTS "task_instance_index"'; --- create index EXECUTE 'CREATE INDEX IF NOT EXISTS priority_id_index ON ' || quote_ident(v_schema) ||'.t_ds_command USING Btree("process_instance_priority","id")'; EXECUTE 'CREATE INDEX IF NOT EXISTS process_instance_index ON ' || quote_ident(v_schema) ||'.t_ds_process_instance USING Btree("process_definition_code","id")'; ---add comment EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_user.state is ''state 0:disable 1:enable'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_alertgroup.alert_instance_ids is ''alert instance ids'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_alertgroup.create_user_id is ''create user id'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_project.code is ''coding'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_command.process_definition_code is ''process definition code'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_command.environment_code is ''environment code'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_command.dry_run is ''dry run flag:0 normal, 1 dry run'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_command.process_definition_version is ''process definition version'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_command.process_instance_id is ''process instance id'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_error_command.process_definition_code is ''process definition code'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_error_command.environment_code is ''environment code'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_error_command.dry_run is ''dry run flag:0 normal, 1 dry run'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_error_command.process_definition_version is ''process definition version'''; EXECUTE 'comment on column ' || 
quote_ident(v_schema) ||'.t_ds_error_command.process_instance_id is ''process instance id'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_process_instance.process_definition_code is ''process instance code'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_process_instance.process_definition_version is ''process instance version'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_process_instance.environment_code is ''environment code'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_process_instance.var_pool is ''var pool'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_process_instance.dry_run is ''dry run flag:0 normal, 1 dry run'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_task_instance.task_code is ''task definition code'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_task_instance.task_definition_version is ''task definition version'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_task_instance.task_params is ''task params'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_task_instance.environment_code is ''environment code'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_task_instance.environment_config is ''this config contains many environment variables config'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_task_instance.first_submit_time is ''task first submit time'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_task_instance.delay_time is ''task delay execution time'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_task_instance.var_pool is ''var pool'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_task_instance.dry_run is ''dry run flag:0 normal, 1 dry run'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_schedules.process_definition_code is ''process definition code'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_schedules.timezone_id is ''timezone id'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_schedules.environment_code is ''environment code'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_process_definition.code is ''encoding'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_process_definition.project_code is ''project code'''; EXECUTE 'comment on column ' || quote_ident(v_schema) ||'.t_ds_process_definition.warning_group_id is ''alert group id'''; --create table EXECUTE 'CREATE TABLE IF NOT EXISTS '|| quote_ident(v_schema) ||'."t_ds_plugin_define" ( id serial NOT NULL, plugin_name varchar(100) NOT NULL, plugin_type varchar(100) NOT NULL, plugin_params text NULL, create_time timestamp NULL, update_time timestamp NULL, CONSTRAINT t_ds_plugin_define_pk PRIMARY KEY (id), CONSTRAINT t_ds_plugin_define_un UNIQUE (plugin_name, plugin_type) )'; EXECUTE 'CREATE TABLE IF NOT EXISTS '|| quote_ident(v_schema) ||'."t_ds_alert_plugin_instance" ( id serial NOT NULL, plugin_define_id int4 NOT NULL, plugin_instance_params text NULL, create_time timestamp NULL, update_time timestamp NULL, instance_name varchar(200) NULL, CONSTRAINT t_ds_alert_plugin_instance_pk PRIMARY KEY (id) )'; EXECUTE 'CREATE TABLE IF NOT EXISTS '|| quote_ident(v_schema) ||'."t_ds_environment" ( id serial NOT NULL, code bigint NOT NULL, name varchar(100) DEFAULT NULL, config text DEFAULT NULL, description text, operator int DEFAULT NULL, create_time timestamp DEFAULT NULL, 
update_time timestamp DEFAULT NULL, PRIMARY KEY (id), CONSTRAINT environment_name_unique UNIQUE (name), CONSTRAINT environment_code_unique UNIQUE (code) )'; EXECUTE 'CREATE TABLE IF NOT EXISTS '|| quote_ident(v_schema) ||'."t_ds_environment_worker_group_relation" ( id serial NOT NULL, environment_code bigint NOT NULL, worker_group varchar(255) NOT NULL, operator int DEFAULT NULL, create_time timestamp DEFAULT NULL, update_time timestamp DEFAULT NULL, PRIMARY KEY (id) , CONSTRAINT environment_worker_group_unique UNIQUE (environment_code,worker_group) )'; EXECUTE 'CREATE TABLE IF NOT EXISTS '|| quote_ident(v_schema) ||'."t_ds_process_definition_log" ( id serial NOT NULL , code bigint NOT NULL, name varchar(255) DEFAULT NULL , version int NOT NULL , description text , project_code bigint DEFAULT NULL , release_state int DEFAULT NULL , user_id int DEFAULT NULL , global_params text , locations text , warning_group_id int DEFAULT NULL , flag int DEFAULT NULL , timeout int DEFAULT 0 , tenant_id int DEFAULT -1 , execution_type int DEFAULT 0, operator int DEFAULT NULL , operate_time timestamp DEFAULT NULL , create_time timestamp DEFAULT NULL , update_time timestamp DEFAULT NULL , PRIMARY KEY (id) )'; EXECUTE 'CREATE TABLE IF NOT EXISTS '|| quote_ident(v_schema) ||'."t_ds_task_definition" ( id serial NOT NULL , code bigint NOT NULL, name varchar(255) DEFAULT NULL , version int NOT NULL , description text , project_code bigint DEFAULT NULL , user_id int DEFAULT NULL , task_type varchar(50) DEFAULT NULL , task_params text , flag int DEFAULT NULL , task_priority int DEFAULT NULL , worker_group varchar(255) DEFAULT NULL , environment_code bigint DEFAULT -1, fail_retry_times int DEFAULT NULL , fail_retry_interval int DEFAULT NULL , timeout_flag int DEFAULT NULL , timeout_notify_strategy int DEFAULT NULL , timeout int DEFAULT 0 , delay_time int DEFAULT 0 , task_group_id int DEFAULT NULL, resource_ids text , create_time timestamp DEFAULT NULL , update_time timestamp DEFAULT NULL , PRIMARY KEY (id) )'; EXECUTE 'CREATE TABLE IF NOT EXISTS '|| quote_ident(v_schema) ||'."t_ds_task_definition_log" ( id serial NOT NULL , code bigint NOT NULL, name varchar(255) DEFAULT NULL , version int NOT NULL , description text , project_code bigint DEFAULT NULL , user_id int DEFAULT NULL , task_type varchar(50) DEFAULT NULL , task_params text , flag int DEFAULT NULL , task_priority int DEFAULT NULL , worker_group varchar(255) DEFAULT NULL , environment_code bigint DEFAULT -1, fail_retry_times int DEFAULT NULL , fail_retry_interval int DEFAULT NULL , timeout_flag int DEFAULT NULL , timeout_notify_strategy int DEFAULT NULL , timeout int DEFAULT 0 , delay_time int DEFAULT 0 , task_group_id int DEFAULT NULL, resource_ids text , operator int DEFAULT NULL , operate_time timestamp DEFAULT NULL , create_time timestamp DEFAULT NULL , update_time timestamp DEFAULT NULL , PRIMARY KEY (id) )'; EXECUTE 'CREATE TABLE IF NOT EXISTS '|| quote_ident(v_schema) ||'."t_ds_process_task_relation" ( id serial NOT NULL , name varchar(255) DEFAULT NULL , project_code bigint DEFAULT NULL , process_definition_code bigint DEFAULT NULL , process_definition_version int DEFAULT NULL , pre_task_code bigint DEFAULT NULL , pre_task_version int DEFAULT 0 , post_task_code bigint DEFAULT NULL , post_task_version int DEFAULT 0 , condition_type int DEFAULT NULL , condition_params text , create_time timestamp DEFAULT NULL , update_time timestamp DEFAULT NULL , PRIMARY KEY (id) )'; EXECUTE 'CREATE TABLE IF NOT EXISTS '|| quote_ident(v_schema) 
||'."t_ds_process_task_relation_log" ( id serial NOT NULL , name varchar(255) DEFAULT NULL , project_code bigint DEFAULT NULL , process_definition_code bigint DEFAULT NULL , process_definition_version int DEFAULT NULL , pre_task_code bigint DEFAULT NULL , pre_task_version int DEFAULT 0 , post_task_code bigint DEFAULT NULL , post_task_version int DEFAULT 0 , condition_type int DEFAULT NULL , condition_params text , operator int DEFAULT NULL , operate_time timestamp DEFAULT NULL , create_time timestamp DEFAULT NULL , update_time timestamp DEFAULT NULL , PRIMARY KEY (id) )'; EXECUTE 'CREATE TABLE IF NOT EXISTS '|| quote_ident(v_schema) ||'."t_ds_worker_group" ( id serial NOT NULL, name varchar(255) NOT NULL, addr_list text DEFAULT NULL, create_time timestamp DEFAULT NULL, update_time timestamp DEFAULT NULL, PRIMARY KEY (id), CONSTRAINT name_unique UNIQUE (name) )'; return 'Success!'; exception when others then ---Raise EXCEPTION '(%)',SQLERRM; return SQLERRM; END; $BODY$; select dolphin_update_metadata(); d//
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,714
[Feature][UI Next] Home page.
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description - [ ] Home > - [x] Task State Statistics > - [x] Process State Statistics > - [x] Process Definition Statistics ### Use case _No response_ ### Related issues [#7332](https://github.com/apache/dolphinscheduler/issues/7332) ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7714
https://github.com/apache/dolphinscheduler/pull/7765
3e34e69cfbe80016d17af5be753aa4ae8afc9176
7e378ea6728f881232824d0cb95d547d21a47759
"2021-12-29T13:08:22Z"
java
"2021-12-31T15:27:09Z"
dolphinscheduler-ui-next/src/components/chart/index.ts
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import { getCurrentInstance, onMounted, onBeforeUnmount, watch } from 'vue' import { useThemeStore } from '@/store/theme/theme' import { throttle } from 'echarts' import { useI18n } from 'vue-i18n' import type { Ref } from 'vue' import type { ECharts } from 'echarts' import type { ECBasicOption } from 'echarts/types/dist/shared' function initChart<Opt extends ECBasicOption>( domRef: Ref<HTMLDivElement | null>, option: Opt ): ECharts | null { let chart: ECharts | null = null const themeStore = useThemeStore() const { locale } = useI18n() const globalProperties = getCurrentInstance()?.appContext.config.globalProperties option['backgroundColor'] = '' const init = () => { chart = globalProperties?.echarts.init( domRef.value, themeStore.darkTheme ? 'dark-bold' : 'macarons' ) chart && chart.setOption(option) } const resize = throttle(() => { chart && chart.resize() }, 20) watch( () => themeStore.darkTheme, () => { chart?.dispose() init() } ) watch( () => locale.value, () => { chart?.dispose() init() } ) onMounted(() => { init() addEventListener('resize', resize) }) onBeforeUnmount(() => { removeEventListener('resize', resize) }) return chart } export default initChart
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,714
[Feature][UI Next] Home page.
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description - [ ] Home > - [x] Task State Statistics > - [x] Process State Statistics > - [x] Process Definition Statistics ### Use case _No response_ ### Related issues [#7332](https://github.com/apache/dolphinscheduler/issues/7332) ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7714
https://github.com/apache/dolphinscheduler/pull/7765
3e34e69cfbe80016d17af5be753aa4ae8afc9176
7e378ea6728f881232824d0cb95d547d21a47759
"2021-12-29T13:08:22Z"
java
"2021-12-31T15:27:09Z"
dolphinscheduler-ui-next/src/components/chart/modules/Pie.tsx
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import { defineComponent, PropType, ref } from 'vue' import initChart from '@/components/chart' import type { Ref } from 'vue' const props = { height: { type: [String, Number] as PropType<string | number>, default: 400, }, width: { type: [String, Number] as PropType<string | number>, default: 400, }, } const PieChart = defineComponent({ name: 'PieChart', props, setup() { const pieChartRef: Ref<HTMLDivElement | null> = ref(null) const option = { tooltip: { trigger: 'item', backgroundColor: '#fff', }, legend: { top: '5%', left: 'center', }, series: [ { name: 'Access From', type: 'pie', radius: ['40%', '70%'], avoidLabelOverlap: false, label: { show: false, position: 'center', }, labelLine: { show: false, }, data: [ { value: 1048, name: 'Search Engine' }, { value: 735, name: 'Direct' }, { value: 580, name: 'Email' }, { value: 484, name: 'Union Ads' }, { value: 300, name: 'Video Ads' }, ], }, ], } initChart(pieChartRef, option) return { pieChartRef } }, render() { const { height, width } = this return ( <div ref='pieChartRef' style={{ height: typeof height === 'number' ? height + 'px' : height, width: typeof width === 'number' ? width + 'px' : width, }} /> ) }, }) export default PieChart
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,714
[Feature][UI Next] Home page.
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description - [ ] Home > - [x] Task State Statistics > - [x] Process State Statistics > - [x] Process Definition Statistics ### Use case _No response_ ### Related issues [#7332](https://github.com/apache/dolphinscheduler/issues/7332) ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7714
https://github.com/apache/dolphinscheduler/pull/7765
3e34e69cfbe80016d17af5be753aa4ae8afc9176
7e378ea6728f881232824d0cb95d547d21a47759
"2021-12-29T13:08:22Z"
java
"2021-12-31T15:27:09Z"
dolphinscheduler-ui-next/src/views/home/index.module.scss
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ .card-table { display: flex; justify-content: space-between; }
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,714
[Feature][UI Next] Home page.
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description - [ ] Home > - [x] Task State Statistics > - [x] Process State Statistics > - [x] Process Definition Statistics ### Use case _No response_ ### Related issues [#7332](https://github.com/apache/dolphinscheduler/issues/7332) ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7714
https://github.com/apache/dolphinscheduler/pull/7765
3e34e69cfbe80016d17af5be753aa4ae8afc9176
7e378ea6728f881232824d0cb95d547d21a47759
"2021-12-29T13:08:22Z"
java
"2021-12-31T15:27:09Z"
dolphinscheduler-ui-next/src/views/home/index.tsx
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ import { defineComponent, Ref, onMounted, ref, watch } from 'vue' import { NGrid, NGi } from 'naive-ui' import { startOfToday, getTime } from 'date-fns' import { useI18n } from 'vue-i18n' import { useTaskState } from './use-task-state' import StateCard from './state-card' import DefinitionCard from './definition-card' import styles from './index.module.scss' export default defineComponent({ name: 'home', setup() { const { t } = useI18n() const dateRef = ref([getTime(startOfToday()), Date.now()]) const { getTaskState } = useTaskState() let taskStateRef: Ref<any> = ref([]) onMounted(() => { taskStateRef.value = getTaskState(dateRef.value) }) const handleTaskDate = (val: any) => { taskStateRef.value = getTaskState(val) } return { t, dateRef, handleTaskDate, taskStateRef } }, render() { const { t, dateRef, handleTaskDate } = this return ( <div> <NGrid x-gap={12} cols={2}> <NGi> <StateCard title={t('home.task_state_statistics')} date={dateRef} tableData={this.taskStateRef.value} onUpdateDatePickerValue={handleTaskDate} /> </NGi> <NGi class={styles['card-table']}> <StateCard title={t('home.process_state_statistics')} date={dateRef} /> </NGi> </NGrid> <NGrid cols={1} style='margin-top: 12px;'> <NGi> <DefinitionCard title={t('home.process_definition_statistics')} /> </NGi> </NGrid> </div> ) }, })
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,714
[Feature][UI Next] Home page.
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description - [ ] Home > - [x] Task State Statistics > - [x] Process State Statistics > - [x] Process Definition Statistics ### Use case _No response_ ### Related issues [#7332](https://github.com/apache/dolphinscheduler/issues/7332) ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7714
https://github.com/apache/dolphinscheduler/pull/7765
3e34e69cfbe80016d17af5be753aa4ae8afc9176
7e378ea6728f881232824d0cb95d547d21a47759
"2021-12-29T13:08:22Z"
java
"2021-12-31T15:27:09Z"
dolphinscheduler-ui-next/src/views/home/state-card.tsx
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import { defineComponent, PropType } from 'vue'
import { useTable } from './use-table'
import styles from '@/views/home/index.module.scss'
import PieChart from '@/components/chart/modules/Pie'
import { NDataTable, NDatePicker } from 'naive-ui'
import Card from '@/components/card'

const props = {
  title: {
    type: String as PropType<string>,
  },
  date: {
    type: Array as PropType<Array<any>>,
  },
  tableData: {
    type: [Array, Boolean] as PropType<Array<any> | false>,
  },
}

const StateCard = defineComponent({
  name: 'StateCard',
  props,
  emits: ['updateDatePickerValue'],
  setup(props, ctx) {
    const onUpdateDatePickerValue = (val: any) => {
      ctx.emit('updateDatePickerValue', val)
    }

    return { onUpdateDatePickerValue }
  },
  render() {
    const { title, date, tableData, onUpdateDatePickerValue } = this
    const { columnsRef } = useTable()

    return (
      <Card title={title}>
        {{
          default: () => (
            <div class={styles['card-table']}>
              <PieChart />
              {tableData && (
                <NDataTable
                  columns={columnsRef}
                  data={tableData}
                  striped
                  size={'small'}
                />
              )}
            </div>
          ),
          'header-extra': () => (
            <NDatePicker
              default-value={date}
              onUpdateValue={onUpdateDatePickerValue}
              size='small'
              type='datetimerange'
              clearable
            />
          ),
        }}
      </Card>
    )
  },
})

export default StateCard
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,714
[Feature][UI Next] Home page.
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description - [ ] Home > - [x] Task State Statistics > - [x] Process State Statistics > - [x] Process Definition Statistics ### Use case _No response_ ### Related issues [#7332](https://github.com/apache/dolphinscheduler/issues/7332) ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7714
https://github.com/apache/dolphinscheduler/pull/7765
3e34e69cfbe80016d17af5be753aa4ae8afc9176
7e378ea6728f881232824d0cb95d547d21a47759
"2021-12-29T13:08:22Z"
java
"2021-12-31T15:27:09Z"
dolphinscheduler-ui-next/src/views/home/types.ts
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

interface ChartData {
  xAxisData: Array<string>
  seriesData: Array<number>
}

interface TaskStateTableData {
  id: number
  number: number
  state: string
}

export { ChartData, TaskStateTableData }
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,714
[Feature][UI Next] Home page.
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description - [ ] Home > - [x] Task State Statistics > - [x] Process State Statistics > - [x] Process Definition Statistics ### Use case _No response_ ### Related issues [#7332](https://github.com/apache/dolphinscheduler/issues/7332) ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7714
https://github.com/apache/dolphinscheduler/pull/7765
3e34e69cfbe80016d17af5be753aa4ae8afc9176
7e378ea6728f881232824d0cb95d547d21a47759
"2021-12-29T13:08:22Z"
java
"2021-12-31T15:27:09Z"
dolphinscheduler-ui-next/src/views/home/use-process-definition.ts
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import { useAsyncState } from '@vueuse/core'
import { countDefinitionByUser } from '@/service/modules/projects-analysis'
import type { ProcessDefinitionRes } from '@/service/modules/projects-analysis/types'
import type { ChartData } from './types'

export function useProcessDefinition() {
  const getProcessDefinition = () => {
    const { state } = useAsyncState(
      countDefinitionByUser({ projectCode: 0 }).then(
        (res: ProcessDefinitionRes): ChartData => {
          const xAxisData = res.userList.map((item) => item.userName)
          const seriesData = res.userList.map((item) => item.count)
          return { xAxisData, seriesData }
        }
      ),
      { xAxisData: [], seriesData: [] }
    )

    return state
  }

  return { getProcessDefinition }
}
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,714
[Feature][UI Next] Home page.
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description - [ ] Home > - [x] Task State Statistics > - [x] Process State Statistics > - [x] Process Definition Statistics ### Use case _No response_ ### Related issues [#7332](https://github.com/apache/dolphinscheduler/issues/7332) ### Are you willing to submit a PR? - [X] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7714
https://github.com/apache/dolphinscheduler/pull/7765
3e34e69cfbe80016d17af5be753aa4ae8afc9176
7e378ea6728f881232824d0cb95d547d21a47759
"2021-12-29T13:08:22Z"
java
"2021-12-31T15:27:09Z"
dolphinscheduler-ui-next/src/views/home/use-task-state.ts
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import { useAsyncState } from '@vueuse/core'
import { format } from 'date-fns'
import { countTaskState } from '@/service/modules/projects-analysis'
import type { TaskStateRes } from '@/service/modules/projects-analysis/types'
import type { TaskStateTableData } from './types'

export function useTaskState() {
  const getTaskState = (date: Array<number>) => {
    const { state } = useAsyncState(
      countTaskState({
        startDate: format(date[0], 'yyyy-MM-dd HH:mm:ss'),
        endDate: format(date[1], 'yyyy-MM-dd HH:mm:ss'),
        projectCode: 0,
      }).then((res: TaskStateRes): Array<TaskStateTableData> => {
        return res.taskCountDtos.map((item, index) => {
          return {
            id: index + 1,
            state: item.taskStateType,
            number: item.count,
          }
        })
      }),
      []
    )

    return state
  }

  return { getTaskState }
}
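Editor's note: both composables above (`use-process-definition.ts` and `use-task-state.ts`) return the `state` ref produced by `useAsyncState` from `@vueuse/core`, which holds the seed value (`[]` here) immediately and is updated in place once the request resolves. A minimal, hypothetical consumer (not part of this PR) illustrating that contract:

```ts
import { watch } from 'vue'
import { useTaskState } from '@/views/home/use-task-state'

const { getTaskState } = useTaskState()

// `state` is a Ref<TaskStateTableData[]>: it is [] right away and is
// replaced with the mapped rows once countTaskState settles.
const state = getTaskState([Date.now() - 24 * 60 * 60 * 1000, Date.now()])

watch(state, (rows) => {
  // rows: [{ id: 1, state: 'SUCCESS', number: 42 }, ...]
  console.log(`loaded ${rows.length} task-state rows`)
})
```

This is also why `index.tsx` assigns the result into an outer `taskStateRef` and re-calls `getTaskState` on every date change rather than mutating the data itself.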
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,493
[Feature][UI]Fuzzy search and case-insensitive search should be supported in process definition details page
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description 1. Click a process and enter the process definition details page, e.g. http://XXXXX:12345/dolphinscheduler/ui/#/projects/3880463913664/definition/list/3880470637248 2. Search for a node with the search button in the upper right corner of the page 3. Enter a partial name, or enter it regardless of case, and click the search button [expect] 1. The node is found [actual] 1. The corresponding node is not found ### Use case Fuzzy search and case-insensitive search should be supported. This is useful when there are many nodes. ### Related issues _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
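Editor's note: the requested behavior amounts to replacing an exact, case-sensitive name comparison with a case-insensitive substring test when the toolbar search (see `canvas.navigateTo(this.searchText)` in `toolbar.vue` below) looks up nodes. A minimal sketch of such a predicate; the `DagNode` shape and `nodes` source are illustrative assumptions, not the actual `dag-canvas` API:

```ts
interface DagNode {
  id: string
  name: string
}

// Case-insensitive fuzzy match: a node qualifies when its name contains
// the query as a substring, ignoring case and surrounding whitespace.
function matchNodes(nodes: DagNode[], query: string): DagNode[] {
  const q = query.trim().toLowerCase()
  if (!q) return []
  return nodes.filter((node) => node.name.toLowerCase().includes(q))
}

// e.g. matchNodes([{ id: '1', name: 'Shell_Task_A' }], 'shell_task')
// finds the node even though the case differs and the name is partial.
```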
https://github.com/apache/dolphinscheduler/issues/7493
https://github.com/apache/dolphinscheduler/pull/7766
7e378ea6728f881232824d0cb95d547d21a47759
4203304b08847d57218f9e9cb4edf7447f6eab8b
"2021-12-19T09:53:48Z"
java
"2022-01-01T04:46:40Z"
dolphinscheduler-ui/src/js/conf/home/pages/dag/_source/canvas/toolbar.vue
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
<template>
  <div class="dag-toolbar">
    <h3>{{ dagChart.name || $t("Create process") }}</h3>
    <el-tooltip
      v-if="dagChart.name"
      class="toolbar-operation"
      :content="$t('Copy name')"
      placement="bottom"
    >
      <em class="el-icon-copy-document" @click="copyName"></em>
    </el-tooltip>
    <textarea ref="textarea" cols="30" rows="10" class="transparent"></textarea>
    <div class="toolbar-left">
      <el-tag
        class="process-online-tag"
        size="small"
        v-if="dagChart.type === 'definition' && releaseState === 'ONLINE'"
        >{{ $t("processOnline") }}</el-tag
      >
      <el-tooltip
        :content="$t('View variables')"
        placement="bottom"
        class="toolbar-operation"
      >
        <em
          class="custom-ico view-variables"
          v-if="$route.name === 'projects-instance-details'"
          @click="toggleVariableView"
        ></em>
      </el-tooltip>
      <el-tooltip
        :content="$t('Startup parameter')"
        placement="bottom"
        class="toolbar-operation"
      >
        <em
          class="custom-ico startup-parameters"
          v-if="$route.name === 'projects-instance-details'"
          @click="toggleParamView"
        ></em>
      </el-tooltip>
    </div>
    <div class="toolbar-right">
      <el-tooltip
        class="toolbar-operation"
        :content="$t('searchNode')"
        placement="bottom"
        v-if="!searchInputVisible"
      >
        <em class="el-icon-search" @click="showSearchInput"></em>
      </el-tooltip>
      <div :class="{ 'search-box': true, 'visible': searchInputVisible }">
        <el-input
          v-model="searchText"
          placeholder=""
          prefix-icon="el-icon-search"
          size="mini"
          @keyup.enter.native="onSearch"
          clearable
          @blur="searchInputBlur"
          ref="searchInput"
        ></el-input>
      </div>
      <el-tooltip
        class="toolbar-operation"
        :content="$t('Delete selected lines or nodes')"
        placement="bottom"
        v-if="!isDetails"
      >
        <em class="el-icon-delete" @click="removeCells"></em>
      </el-tooltip>
      <el-tooltip
        class="toolbar-operation"
        :content="$t('Download')"
        placement="bottom"
      >
        <em class="el-icon-download" @click="downloadPNG"></em>
      </el-tooltip>
      <el-tooltip
        class="toolbar-operation"
        :content="$t('Refresh DAG status')"
        placement="bottom"
        v-if="dagChart.type === 'instance'"
      >
        <em class="el-icon-refresh" @click="refreshTaskStatus"></em>
      </el-tooltip>
      <el-tooltip
        class="toolbar-operation"
        :content="$t('Format DAG')"
        placement="bottom"
        v-if="!isDetails"
      >
        <em class="custom-ico graph-format" @click="chartFormat"></em>
      </el-tooltip>
      <el-tooltip
        class="toolbar-operation last"
        :content="$t('Full Screen')"
        placement="bottom"
      >
        <em
          :class="[
            'custom-ico',
            dagChart.fullScreen ? 'full-screen-close' : 'full-screen-open',
          ]"
          @click="toggleFullScreen"
        ></em>
      </el-tooltip>
      <el-button
        class="toolbar-el-btn"
        type="primary"
        size="mini"
        v-if="dagChart.type === 'definition'"
        @click="showVersions"
        icon="el-icon-info"
        >{{ $t("Version Info") }}</el-button
      >
      <el-button
        class="toolbar-el-btn"
        type="primary"
        size="mini"
        @click="saveProcess"
        id="btnSave"
        >{{ $t("Save") }}</el-button
      >
      <el-button
        class="toolbar-el-btn"
        v-if="$route.query.subs"
        type="primary"
        size="mini"
        icon="el-icon-back"
        @click="dagChart.returnToPrevProcess"
      >
        {{ $t("Return_1") }}
      </el-button>
      <el-button
        class="toolbar-el-btn"
        type="primary"
        icon="el-icon-switch-button"
        size="mini"
        @click="returnToListPage"
      >
        {{ $t("Close") }}
      </el-button>
    </div>
  </div>
</template>
<script>
import { findComponentDownward } from '@/module/util/'
import { mapState } from 'vuex'

export default {
  name: 'dag-toolbar',
  inject: ['dagChart'],
  data () {
    return {
      canvasRef: null,
      searchText: '',
      searchInputVisible: false
    }
  },
  computed: {
    ...mapState('dag', ['isDetails', 'releaseState'])
  },
  methods: {
    onSearch () {
      const canvas = this.getDagCanvasRef()
      canvas.navigateTo(this.searchText)
    },
    showSearchInput () {
      this.searchInputVisible = true
      this.$refs.searchInput.focus()
    },
    searchInputBlur () {
      if (!this.searchText) {
        this.searchInputVisible = false
      }
    },
    getDagCanvasRef () {
      if (this.canvasRef) {
        return this.canvasRef
      } else {
        const canvas = findComponentDownward(this.dagChart, 'dag-canvas')
        this.canvasRef = canvas
        return canvas
      }
    },
    toggleVariableView () {
      findComponentDownward(this.$root, 'assist-dag-index')._toggleView()
    },
    toggleParamView () {
      findComponentDownward(
        this.$root,
        'starting-params-dag-index'
      )._toggleParam()
    },
    toggleFullScreen () {
      this.dagChart.toggleFullScreen()
    },
    saveProcess () {
      const canvas = this.getDagCanvasRef()
      const nodes = canvas.getNodes()
      if (!nodes.length) {
        this.$message.error(this.$t('Failed to create node to save'))
        return
      }
      this.dagChart.toggleSaveDialog(true)
    },
    downloadPNG () {
      const canvas = this.getDagCanvasRef()
      canvas.downloadPNG(this.processName)
    },
    removeCells () {
      const canvas = this.getDagCanvasRef()
      const selections = canvas.getSelections()
      canvas.removeCells(selections)
    },
    copyName () {
      const textarea = this.$refs.textarea
      textarea.value = this.dagChart.name
      textarea.select()
      document.execCommand('copy')
      this.$message(this.$t('Copy success'))
    },
    chartFormat () {
      const canvas = this.getDagCanvasRef()
      canvas.showLayoutModal()
    },
    refreshTaskStatus () {
      this.dagChart.refreshTaskStatus()
    },
    returnToListPage () {
      let $name = this.$route.name
      if ($name && $name.indexOf('definition') !== -1) {
        this.$router.push({ name: 'projects-definition-list' })
      } else {
        this.$router.push({ name: 'projects-instance-list' })
      }
    },
    showVersions () {
      this.dagChart.showVersions()
    }
  }
}
</script>
<style lang="scss" scoped>
@import "./toolbar";
</style>
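Editor's note: `copyName` above copies via a hidden `<textarea>` plus `document.execCommand('copy')`, which is deprecated in current browsers. A hedged alternative sketch using the standard asynchronous Clipboard API; this is a suggestion, not what the PR changes, and it requires a secure context (HTTPS or localhost):

```ts
// Hypothetical helper, not part of the PR: copy a workflow name with
// navigator.clipboard instead of document.execCommand('copy').
async function copyWorkflowName(name: string): Promise<boolean> {
  try {
    await navigator.clipboard.writeText(name)
    return true // caller can then show the 'Copy success' message
  } catch {
    return false // e.g. insecure context or clipboard permission denied
  }
}
```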
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,493
[Feature][UI]Fuzzy search and case-insensitive search should be supported in process definition details page
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description 1. Click a process and enter the process definition details page, e.g. http://XXXXX:12345/dolphinscheduler/ui/#/projects/3880463913664/definition/list/3880470637248 2. Search for a node with the search button in the upper right corner of the page 3. Enter a partial name, or enter it regardless of case, and click the search button [expect] 1. The node is found [actual] 1. The corresponding node is not found ### Use case Fuzzy search and case-insensitive search should be supported. This is useful when there are many nodes. ### Related issues _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7493
https://github.com/apache/dolphinscheduler/pull/7766
7e378ea6728f881232824d0cb95d547d21a47759
4203304b08847d57218f9e9cb4edf7447f6eab8b
"2021-12-19T09:53:48Z"
java
"2022-01-01T04:46:40Z"
dolphinscheduler-ui/src/js/module/i18n/locale/en_US.js
/* * Licensed to the Apache Software Foundation (ASF) under one or more * contributor license agreements. See the NOTICE file distributed with * this work for additional information regarding copyright ownership. * The ASF licenses this file to You under the Apache License, Version 2.0 * (the "License"); you may not use this file except in compliance with * the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ export default { 'User Name': 'User Name', 'Please enter user name': 'Please enter user name', Password: 'Password', 'Please enter your password': 'Please enter your password', 'Password consists of at least two combinations of numbers, letters, and characters, and the length is between 6-22': 'Password consists of at least two combinations of numbers, letters, and characters, and the length is between 6-22', Login: 'Login', Home: 'Home', 'Failed to create node to save': 'Failed to create node to save', 'Global parameters': 'Global parameters', 'Local parameters': 'Local parameters', 'Copy success': 'Copy success', 'The browser does not support automatic copying': 'The browser does not support automatic copying', 'Whether to save the DAG graph': 'Whether to save the DAG graph', 'Current node settings': 'Current node settings', 'View history': 'View history', 'View log': 'View log', 'Force success': 'Force success', 'Enter this child node': 'Enter this child node', 'Node name': 'Node name', 'Please enter name (required)': 'Please enter name (required)', 'Run flag': 'Run flag', Normal: 'Normal', 'Prohibition execution': 'Prohibition execution', 'Please enter description': 'Please enter description', 'Number of failed retries': 'Number of failed retries', Times: 'Times', 'Failed retry interval': 'Failed retry interval', Minute: 'Minute', 'Delay execution time': 'Delay execution time', 'Delay execution': 'Delay execution', 'Forced success': 'Forced success', Cancel: 'Cancel', 'Confirm add': 'Confirm add', 'The newly created sub-Process has not yet been executed and cannot enter the sub-Process': 'The newly created sub-Process has not yet been executed and cannot enter the sub-Process', 'The task has not been executed and cannot enter the sub-Process': 'The task has not been executed and cannot enter the sub-Process', 'Name already exists': 'Name already exists', 'Download Log': 'Download Log', 'Refresh Log': 'Refresh Log', 'Enter full screen': 'Enter full screen', 'Cancel full screen': 'Cancel full screen', Close: 'Close', 'Update log success': 'Update log success', 'No more logs': 'No more logs', 'No log': 'No log', 'Loading Log...': 'Loading Log...', 'Set the DAG diagram name': 'Set the DAG diagram name', 'Please enter description(optional)': 'Please enter description(optional)', 'Set global': 'Set global', 'Whether to go online the process definition': 'Whether to go online the process definition', 'Whether to update the process definition': 'Whether to update the process definition', Add: 'Add', 'DAG graph name cannot be empty': 'DAG graph name cannot be empty', 'Create Datasource': 'Create Datasource', 'Project Home': 'Workflow Monitor', 'Project Manage': 'Project', 'Create Project': 'Create Project', 'Cron Manage': 'Cron 
Manage', 'Copy Workflow': 'Copy Workflow', 'Tenant Manage': 'Tenant Manage', 'Create Tenant': 'Create Tenant', 'User Manage': 'User Manage', 'Create User': 'Create User', 'User Information': 'User Information', 'Edit Password': 'Edit Password', Success: 'Success', Failed: 'Failed', Delete: 'Delete', 'Please choose': 'Please choose', 'Please enter a positive integer': 'Please enter a positive integer', 'Program Type': 'Program Type', 'Main Class': 'Main Class', 'Main Package': 'Main Package', 'Please enter main package': 'Please enter main package', 'Please enter main class': 'Please enter main class', 'Main Arguments': 'Main Arguments', 'Please enter main arguments': 'Please enter main arguments', 'Option Parameters': 'Option Parameters', 'Please enter option parameters': 'Please enter option parameters', Resources: 'Resources', 'Custom Parameters': 'Custom Parameters', 'Custom template': 'Custom template', Datasource: 'Datasource', methods: 'methods', 'Please enter the procedure method': 'Please enter the procedure script \n\ncall procedure:{call <procedure-name>[(<arg1>,<arg2>, ...)]}\n\ncall function:{?= call <procedure-name>[(<arg1>,<arg2>, ...)]} ', 'The procedure method script example': 'example:{call <procedure-name>[(?,?, ...)]} or {?= call <procedure-name>[(?,?, ...)]}', Script: 'Script', 'Please enter script(required)': 'Please enter script(required)', 'Deploy Mode': 'Deploy Mode', 'Driver Cores': 'Driver Cores', 'Please enter Driver cores': 'Please enter Driver cores', 'Driver Memory': 'Driver Memory', 'Please enter Driver memory': 'Please enter Driver memory', 'Executor Number': 'Executor Number', 'Please enter Executor number': 'Please enter Executor number', 'The Executor number should be a positive integer': 'The Executor number should be a positive integer', 'Executor Memory': 'Executor Memory', 'Please enter Executor memory': 'Please enter Executor memory', 'Executor Cores': 'Executor Cores', 'Please enter Executor cores': 'Please enter Executor cores', 'Memory should be a positive integer': 'Memory should be a positive integer', 'Core number should be positive integer': 'Core number should be positive integer', 'Flink Version': 'Flink Version', 'JobManager Memory': 'JobManager Memory', 'Please enter JobManager memory': 'Please enter JobManager memory', 'TaskManager Memory': 'TaskManager Memory', 'Please enter TaskManager memory': 'Please enter TaskManager memory', 'Slot Number': 'Slot Number', 'Please enter Slot number': 'Please enter Slot number', Parallelism: 'Parallelism', 'Custom Parallelism': 'Configure parallelism', 'Please enter Parallelism': 'Please enter Parallelism', 'Parallelism tip': 'If there are a large number of tasks requiring complement, you can use the custom parallelism to ' + 'set the complement task thread to a reasonable value to avoid too large impact on the server.', 'Parallelism number should be positive integer': 'Parallelism number should be positive integer', 'TaskManager Number': 'TaskManager Number', 'Please enter TaskManager number': 'Please enter TaskManager number', 'App Name': 'App Name', 'Please enter app name(optional)': 'Please enter app name(optional)', 'SQL Type': 'SQL Type', 'Send Email': 'Send Email', 'Log display': 'Log display', 'rows of result': 'rows of result', Title: 'Title', 'Please enter the title of email': 'Please enter the title of email', Table: 'Table', TableMode: 'Table', Attachment: 'Attachment', 'SQL Parameter': 'SQL Parameter', 'SQL Statement': 'SQL Statement', 'UDF Function': 'UDF Function', 'Please enter a SQL 
Statement(required)': 'Please enter a SQL Statement(required)', 'Please enter a JSON Statement(required)': 'Please enter a JSON Statement(required)', 'One form or attachment must be selected': 'One form or attachment must be selected', 'Mail subject required': 'Mail subject required', 'Child Node': 'Child Node', 'Please select a sub-Process': 'Please select a sub-Process', Edit: 'Edit', 'Switch To This Version': 'Switch To This Version', 'Datasource Name': 'Datasource Name', 'Please enter datasource name': 'Please enter datasource name', IP: 'IP', 'Please enter IP': 'Please enter IP', Port: 'Port', 'Please enter port': 'Please enter port', 'Database Name': 'Database Name', 'Please enter database name': 'Please enter database name', 'Oracle Connect Type': 'ServiceName or SID', 'Oracle Service Name': 'ServiceName', 'Oracle SID': 'SID', 'jdbc connect parameters': 'jdbc connect parameters', 'Test Connect': 'Test Connect', 'Please enter resource name': 'Please enter resource name', 'Please enter resource folder name': 'Please enter resource folder name', 'Please enter a non-query SQL statement': 'Please enter a non-query SQL statement', 'Please enter IP/hostname': 'Please enter IP/hostname', 'jdbc connection parameters is not a correct JSON format': 'jdbc connection parameters is not a correct JSON format', '#': '#', 'Datasource Type': 'Datasource Type', 'Datasource Parameter': 'Datasource Parameter', 'Create Time': 'Create Time', 'Update Time': 'Update Time', Operation: 'Operation', 'Current Version': 'Current Version', 'Click to view': 'Click to view', 'Delete?': 'Delete?', 'Switch Version Successfully': 'Switch Version Successfully', 'Confirm Switch To This Version?': 'Confirm Switch To This Version?', Confirm: 'Confirm', 'Task status statistics': 'Task Status Statistics', Number: 'Number', State: 'State', 'Dry-run flag': 'Dry-run flag', 'Process Status Statistics': 'Process Status Statistics', 'Process Definition Statistics': 'Process Definition Statistics', 'Project Name': 'Project Name', 'Please enter name': 'Please enter name', 'Owned Users': 'Owned Users', 'Process Pid': 'Process Pid', 'Zk registration directory': 'Zk registration directory', cpuUsage: 'cpuUsage', memoryUsage: 'memoryUsage', 'Last heartbeat time': 'Last heartbeat time', 'Edit Tenant': 'Edit Tenant', 'OS Tenant Code': 'OS Tenant Code', 'Tenant Name': 'Tenant Name', Queue: 'Yarn Queue', 'Please select a queue': 'default is tenant association queue', 'Please enter the os tenant code in English': 'Please enter the os tenant code in English', 'Please enter os tenant code in English': 'Please enter os tenant code in English', 'Please enter os tenant code': 'Please enter os tenant code', 'Please enter tenant Name': 'Please enter tenant Name', 'The os tenant code. Only letters or a combination of letters and numbers are allowed': 'The os tenant code. 
Only letters or a combination of letters and numbers are allowed', 'Edit User': 'Edit User', Tenant: 'Tenant', Email: 'Email', Phone: 'Phone', 'User Type': 'User Type', 'Please enter phone number': 'Please enter phone number', 'Please enter email': 'Please enter email', 'Please enter the correct email format': 'Please enter the correct email format', 'Please enter the correct mobile phone format': 'Please enter the correct mobile phone format', Project: 'Project', Authorize: 'Authorize', 'File resources': 'File resources', 'UDF resources': 'UDF resources', 'UDF resources directory': 'UDF resources directory', 'Please select UDF resources directory': 'Please select UDF resources directory', 'Alarm group': 'Alarm group', 'Alarm group required': 'Alarm group required', 'Edit alarm group': 'Edit alarm group', 'Create alarm group': 'Create alarm group', 'Create Alarm Instance': 'Create Alarm Instance', 'Edit Alarm Instance': 'Edit Alarm Instance', 'Group Name': 'Group Name', 'Alarm instance name': 'Alarm instance name', 'Alarm plugin name': 'Alarm plugin name', 'Select plugin': 'Select plugin', 'Select Alarm plugin': 'Please select an Alarm plugin', 'Please enter group name': 'Please enter group name', 'Instance parameter exception': 'Instance parameter exception', 'Group Type': 'Group Type', 'Alarm plugin instance': 'Alarm plugin instance', 'Select Alarm plugin instance': 'Please select an Alarm plugin instance', Remarks: 'Remarks', SMS: 'SMS', 'Managing Users': 'Managing Users', Permission: 'Permission', Administrator: 'Administrator', 'Confirm Password': 'Confirm Password', 'Please enter confirm password': 'Please enter confirm password', 'Password cannot be in Chinese': 'Password cannot be in Chinese', 'Please enter a password (6-22) character password': 'Please enter a password (6-22) character password', 'Confirmation password cannot be in Chinese': 'Confirmation password cannot be in Chinese', 'Please enter a confirmation password (6-22) character password': 'Please enter a confirmation password (6-22) character password', 'The password is inconsistent with the confirmation password': 'The password is inconsistent with the confirmation password', 'Please select the datasource': 'Please select the datasource', 'Please select resources': 'Please select resources', Query: 'Query', 'Non Query': 'Non Query', 'prop(required)': 'prop(required)', 'value(optional)': 'value(optional)', 'value(required)': 'value(required)', 'prop is empty': 'prop is empty', 'value is empty': 'value is empty', 'prop is repeat': 'prop is repeat', 'Start Time': 'Start Time', 'End Time': 'End Time', crontab: 'crontab', 'Failure Strategy': 'Failure Strategy', online: 'online', offline: 'offline', 'Task Status': 'Task Status', 'Process Instance': 'Process Instance', 'Task Instance': 'Task Instance', 'Select date range': 'Select date range', startDate: 'startDate', endDate: 'endDate', Date: 'Date', Waiting: 'Waiting', Execution: 'Execution', Finish: 'Finish', 'Create File': 'Create File', 'Create folder': 'Create folder', 'File Name': 'File Name', 'Folder Name': 'Folder Name', 'File Format': 'File Format', 'Folder Format': 'Folder Format', 'File Content': 'File Content', 'Upload File Size': 'Upload File size cannot exceed 1g', Create: 'Create', 'Please enter the resource content': 'Please enter the resource content', 'Resource content cannot exceed 3000 lines': 'Resource content cannot exceed 3000 lines', 'File Details': 'File Details', 'Download Details': 'Download Details', Return: 'Return', Save: 'Save', 'File Manage': 
'File Manage', 'Upload Files': 'Upload Files', 'Create UDF Function': 'Create UDF Function', 'Upload UDF Resources': 'Upload UDF Resources', 'Service-Master': 'Service-Master', 'Service-Worker': 'Service-Worker', 'Process Name': 'Process Name', Executor: 'Executor', 'Run Type': 'Run Type', 'Scheduling Time': 'Scheduling Time', 'Run Times': 'Run Times', host: 'host', 'fault-tolerant sign': 'fault-tolerant sign', Rerun: 'Rerun', 'Recovery Failed': 'Recovery Failed', Stop: 'Stop', Pause: 'Pause', 'Recovery Suspend': 'Recovery Suspend', Gantt: 'Gantt', 'Node Type': 'Node Type', 'Submit Time': 'Submit Time', Duration: 'Duration', 'Retry Count': 'Retry Count', 'Task Name': 'Task Name', 'Task Date': 'Task Date', 'Source Table': 'Source Table', 'Record Number': 'Record Number', 'Target Table': 'Target Table', 'Online viewing type is not supported': 'Online viewing type is not supported', Size: 'Size', Rename: 'Rename', Download: 'Download', Export: 'Export', 'Version Info': 'Version Info', Submit: 'Submit', 'Edit UDF Function': 'Edit UDF Function', type: 'type', 'UDF Function Name': 'UDF Function Name', FILE: 'FILE', UDF: 'UDF', 'File Subdirectory': 'File Subdirectory', 'Please enter a function name': 'Please enter a function name', 'Package Name': 'Package Name', 'Please enter a Package name': 'Please enter a Package name', Parameter: 'Parameter', 'Please enter a parameter': 'Please enter a parameter', 'UDF Resources': 'UDF Resources', 'Upload Resources': 'Upload Resources', Instructions: 'Instructions', 'Please enter a instructions': 'Please enter a instructions', 'Please enter a UDF function name': 'Please enter a UDF function name', 'Select UDF Resources': 'Select UDF Resources', 'Class Name': 'Class Name', 'Jar Package': 'Jar Package', 'Library Name': 'Library Name', 'UDF Resource Name': 'UDF Resource Name', 'File Size': 'File Size', Description: 'Description', 'Drag Nodes and Selected Items': 'Drag Nodes and Selected Items', 'Select Line Connection': 'Select Line Connection', 'Delete selected lines or nodes': 'Delete selected lines or nodes', 'Full Screen': 'Full Screen', Unpublished: 'Unpublished', 'Start Process': 'Start Process', 'Execute from the current node': 'Execute from the current node', 'Recover tolerance fault process': 'Recover tolerance fault process', 'Resume the suspension process': 'Resume the suspension process', 'Execute from the failed nodes': 'Execute from the failed nodes', 'Complement Data': 'Complement Data', 'Scheduling execution': 'Scheduling execution', 'Recovery waiting thread': 'Recovery waiting thread', 'Submitted successfully': 'Submitted successfully', Executing: 'Executing', 'Ready to pause': 'Ready to pause', 'Ready to stop': 'Ready to stop', 'Need fault tolerance': 'Need fault tolerance', Kill: 'Kill', 'Waiting for thread': 'Waiting for thread', 'Waiting for dependence': 'Waiting for dependence', Start: 'Start', Copy: 'Copy', 'Copy name': 'Copy name', 'Copy path': 'Copy path', 'Please enter keyword': 'Please enter keyword', 'File Upload': 'File Upload', 'Drag the file into the current upload window': 'Drag the file into the current upload window', 'Drag area upload': 'Drag area upload', Upload: 'Upload', 'ReUpload File': 'ReUpload File', 'Please enter file name': 'Please enter file name', 'Please select the file to upload': 'Please select the file to upload', 'Resources manage': 'Resources', Security: 'Security', Logout: 'Logout', 'No data': 'No data', 'Uploading...': 'Uploading...', 'Loading...': 'Loading...', List: 'List', 'Unable to download without 
proper url': 'Unable to download without proper url', Process: 'Process', 'Process definition': 'Process definition', 'Task record': 'Task record', 'Warning group manage': 'Warning group manage', 'Warning instance manage': 'Warning instance manage', 'Servers manage': 'Servers manage', 'UDF manage': 'UDF manage', 'Resource manage': 'Resource manage', 'Function manage': 'Function manage', 'Edit password': 'Edit password', 'Ordinary users': 'Ordinary users', 'Create process': 'Create process', 'Import process': 'Import process', 'Timing state': 'Timing state', Timing: 'Timing', Timezone: 'Timezone', TreeView: 'TreeView', 'Mailbox already exists! Recipients and copyers cannot repeat': 'Mailbox already exists! Recipients and copyers cannot repeat', 'Mailbox input is illegal': 'Mailbox input is illegal', 'Please set the parameters before starting': 'Please set the parameters before starting', Continue: 'Continue', End: 'End', 'Node execution': 'Node execution', 'Backward execution': 'Backward execution', 'Forward execution': 'Forward execution', 'Execute only the current node': 'Execute only the current node', 'Notification strategy': 'Notification strategy', 'Notification group': 'Notification group', 'Please select a notification group': 'Please select a notification group', 'Whether it is a complement process?': 'Whether it is a complement process?', 'Schedule date': 'Schedule date', 'Mode of execution': 'Mode of execution', 'Serial execution': 'Serial execution', 'Parallel execution': 'Parallel execution', 'Set parameters before timing': 'Set parameters before timing', 'Start and stop time': 'Start and stop time', 'Please select time': 'Please select time', 'Please enter crontab': 'Please enter crontab', none_1: 'none', success_1: 'success', failure_1: 'failure', All_1: 'All', Toolbar: 'Toolbar', 'View variables': 'View variables', 'Format DAG': 'Format DAG', 'Refresh DAG status': 'Refresh DAG status', Return_1: 'Return', 'Please enter format': 'Please enter format', 'connection parameter': 'connection parameter', 'Process definition details': 'Process definition details', 'Create process definition': 'Create process definition', 'Scheduled task list': 'Scheduled task list', 'Process instance details': 'Process instance details', 'Create Resource': 'Create Resource', 'User Center': 'User Center', AllStatus: 'All', None: 'None', Name: 'Name', 'Process priority': 'Process priority', 'Task priority': 'Task priority', 'Task timeout alarm': 'Task timeout alarm', 'Timeout strategy': 'Timeout strategy', 'Timeout alarm': 'Timeout alarm', 'Timeout failure': 'Timeout failure', 'Timeout period': 'Timeout period', 'Waiting Dependent complete': 'Waiting Dependent complete', 'Waiting Dependent start': 'Waiting Dependent start', 'Check interval': 'Check interval', 'Timeout must be longer than check interval': 'Timeout must be longer than check interval', 'Timeout strategy must be selected': 'Timeout strategy must be selected', 'Timeout must be a positive integer': 'Timeout must be a positive integer', 'Add dependency': 'Add dependency', 'Whether dry-run': 'Whether dry-run', and: 'and', or: 'or', month: 'month', week: 'week', day: 'day', hour: 'hour', Running: 'Running', 'Waiting for dependency to complete': 'Waiting for dependency to complete', Selected: 'Selected', CurrentHour: 'CurrentHour', Last1Hour: 'Last1Hour', Last2Hours: 'Last2Hours', Last3Hours: 'Last3Hours', Last24Hours: 'Last24Hours', today: 'today', Last1Days: 'Last1Days', Last2Days: 'Last2Days', Last3Days: 'Last3Days', Last7Days: 'Last7Days', 
ThisWeek: 'ThisWeek', LastWeek: 'LastWeek', LastMonday: 'LastMonday', LastTuesday: 'LastTuesday', LastWednesday: 'LastWednesday', LastThursday: 'LastThursday', LastFriday: 'LastFriday', LastSaturday: 'LastSaturday', LastSunday: 'LastSunday', ThisMonth: 'ThisMonth', LastMonth: 'LastMonth', LastMonthBegin: 'LastMonthBegin', LastMonthEnd: 'LastMonthEnd', 'Refresh status succeeded': 'Refresh status succeeded', 'Queue manage': 'Yarn Queue manage', 'Create queue': 'Create queue', 'Edit queue': 'Edit queue', 'Datasource manage': 'Datasource', 'History task record': 'History task record', 'Please go online': 'Please go online', 'Queue value': 'Queue value', 'Please enter queue value': 'Please enter queue value', 'Worker group manage': 'Worker group manage', 'Create worker group': 'Create worker group', 'Edit worker group': 'Edit worker group', 'Token manage': 'Token manage', 'Create token': 'Create token', 'Edit token': 'Edit token', Addresses: 'Addresses', 'Worker Addresses': 'Worker Addresses', 'Please select the worker addresses': 'Please select the worker addresses', 'Failure time': 'Failure time', 'Expiration time': 'Expiration time', User: 'User', 'Please enter token': 'Please enter token', 'Generate token': 'Generate token', Monitor: 'Monitor', Group: 'Group', 'Queue statistics': 'Queue statistics', 'Command status statistics': 'Command status statistics', 'Task kill': 'Task Kill', 'Task queue': 'Task queue', 'Error command count': 'Error command count', 'Normal command count': 'Normal command count', Manage: ' Manage', 'Number of connections': 'Number of connections', Sent: 'Sent', Received: 'Received', 'Min latency': 'Min latency', 'Avg latency': 'Avg latency', 'Max latency': 'Max latency', 'Node count': 'Node count', 'Query time': 'Query time', 'Node self-test status': 'Node self-test status', 'Health status': 'Health status', 'Max connections': 'Max connections', 'Threads connections': 'Threads connections', 'Max used connections': 'Max used connections', 'Threads running connections': 'Threads running connections', 'Worker group': 'Worker group', 'Please enter a positive integer greater than 0': 'Please enter a positive integer greater than 0', 'Pre Statement': 'Pre Statement', 'Post Statement': 'Post Statement', 'Statement cannot be empty': 'Statement cannot be empty', 'Process Define Count': 'Work flow Define Count', 'Process Instance Running Count': 'Process Instance Running Count', 'command number of waiting for running': 'command number of waiting for running', 'failure command number': 'failure command number', 'tasks number of waiting running': 'tasks number of waiting running', 'task number of ready to kill': 'task number of ready to kill', 'Statistics manage': 'Statistics Manage', statistics: 'Statistics', 'select tenant': 'select tenant', 'Please enter Principal': 'Please enter Principal', 'Please enter the kerberos authentication parameter java.security.krb5.conf': 'Please enter the kerberos authentication parameter java.security.krb5.conf', 'Please enter the kerberos authentication parameter login.user.keytab.username': 'Please enter the kerberos authentication parameter login.user.keytab.username', 'Please enter the kerberos authentication parameter login.user.keytab.path': 'Please enter the kerberos authentication parameter login.user.keytab.path', 'The start time must not be the same as the end': 'The start time must not be the same as the end', 'Startup parameter': 'Startup parameter', 'Startup type': 'Startup type', 'warning of timeout': 'warning of timeout', 'Next 
five execution times': 'Next five execution times', 'Execute time': 'Execute time', 'Complement range': 'Complement range', 'Http Url': 'Http Url', 'Http Method': 'Http Method', 'Http Parameters': 'Http Parameters', 'Http Parameters Key': 'Http Parameters Key', 'Http Parameters Position': 'Http Parameters Position', 'Http Parameters Value': 'Http Parameters Value', 'Http Check Condition': 'Http Check Condition', 'Http Condition': 'Http Condition', 'Please Enter Http Url': 'Please Enter Http Url(required)', 'Please Enter Http Condition': 'Please Enter Http Condition', 'There is no data for this period of time': 'There is no data for this period of time', 'Worker addresses cannot be empty': 'Worker addresses cannot be empty', 'Please generate token': 'Please generate token', 'Please Select token': 'Please select the expiration time of token', 'Spark Version': 'Spark Version', TargetDataBase: 'target database', TargetTable: 'target table', TargetJobName: 'target job name', 'Please enter Pigeon job name': 'Please enter Pigeon job name', 'Please enter the table of target': 'Please enter the table of target', 'Please enter a Target Table(required)': 'Please enter a Target Table(required)', SpeedByte: 'speed(byte count)', SpeedRecord: 'speed(record count)', '0 means unlimited by byte': '0 means unlimited', '0 means unlimited by count': '0 means unlimited', 'Modify User': 'Modify User', 'Whether directory': 'Whether directory', Yes: 'Yes', No: 'No', 'Hadoop Custom Params': 'Hadoop Params', 'Sqoop Advanced Parameters': 'Sqoop Params', 'Sqoop Job Name': 'Job Name', 'Please enter Mysql Database(required)': 'Please enter Mysql Database(required)', 'Please enter Mysql Table(required)': 'Please enter Mysql Table(required)', 'Please enter Columns (Comma separated)': 'Please enter Columns (Comma separated)', 'Please enter Target Dir(required)': 'Please enter Target Dir(required)', 'Please enter Export Dir(required)': 'Please enter Export Dir(required)', 'Please enter Hive Database(required)': 'Please enter Hive Databasec(required)', 'Please enter Hive Table(required)': 'Please enter Hive Table(required)', 'Please enter hive target dir': 'Please enter hive target dir', 'Please enter Hive Partition Keys': 'Please enter Hive Partition Key', 'Please enter Hive Partition Values': 'Please enter Partition Value', 'Please enter Replace Delimiter': 'Please enter Replace Delimiter', 'Please enter Fields Terminated': 'Please enter Fields Terminated', 'Please enter Lines Terminated': 'Please enter Lines Terminated', 'Please enter Concurrency': 'Please enter Concurrency', 'Please enter Update Key': 'Please enter Update Key', 'Please enter Job Name(required)': 'Please enter Job Name(required)', 'Please enter Custom Shell(required)': 'Please enter Custom Shell(required)', Direct: 'Direct', Type: 'Type', ModelType: 'ModelType', ColumnType: 'ColumnType', Database: 'Database', Column: 'Column', 'Map Column Hive': 'Map Column Hive', 'Map Column Java': 'Map Column Java', 'Export Dir': 'Export Dir', 'Hive partition Keys': 'Hive partition Keys', 'Hive partition Values': 'Hive partition Values', FieldsTerminated: 'FieldsTerminated', LinesTerminated: 'LinesTerminated', IsUpdate: 'IsUpdate', UpdateKey: 'UpdateKey', UpdateMode: 'UpdateMode', 'Target Dir': 'Target Dir', DeleteTargetDir: 'DeleteTargetDir', FileType: 'FileType', CompressionCodec: 'CompressionCodec', CreateHiveTable: 'CreateHiveTable', DropDelimiter: 'DropDelimiter', OverWriteSrc: 'OverWriteSrc', ReplaceDelimiter: 'ReplaceDelimiter', Concurrency: 'Concurrency', Form: 
'Form', OnlyUpdate: 'OnlyUpdate', AllowInsert: 'AllowInsert', 'Data Source': 'Data Source', 'Data Target': 'Data Target', 'All Columns': 'All Columns', 'Some Columns': 'Some Columns', 'Branch flow': 'Branch flow', 'Custom Job': 'Custom Job', 'Custom Script': 'Custom Script', 'Cannot select the same node for successful branch flow and failed branch flow': 'Cannot select the same node for successful branch flow and failed branch flow', 'Successful branch flow and failed branch flow are required': 'conditions node Successful and failed branch flow are required', 'No resources exist': 'No resources exist', 'Please delete all non-existing resources': 'Please delete all non-existing resources', 'Unauthorized or deleted resources': 'Unauthorized or deleted resources', 'Please delete all non-existent resources': 'Please delete all non-existent resources', Kinship: 'Workflow relationship', Reset: 'Reset', KinshipStateActive: 'Current selection', KinshipState1: 'Online', KinshipState0: 'Workflow is not online', KinshipState10: 'Scheduling is not online', 'Dag label display control': 'Dag label display control', Enable: 'Enable', Disable: 'Disable', 'The Worker group no longer exists, please select the correct Worker group!': 'The Worker group no longer exists, please select the correct Worker group!', 'Please confirm whether the workflow has been saved before downloading': 'Please confirm whether the workflow has been saved before downloading', 'User name length is between 3 and 39': 'User name length is between 3 and 39', 'Timeout Settings': 'Timeout Settings', 'Connect Timeout': 'Connect Timeout', 'Socket Timeout': 'Socket Timeout', 'Connect timeout be a positive integer': 'Connect timeout be a positive integer', 'Socket Timeout be a positive integer': 'Socket Timeout be a positive integer', ms: 'ms', 'Please Enter Url': 'Please Enter Url eg. 
127.0.0.1:7077', Master: 'Master', 'Please select the waterdrop resources': 'Please select the waterdrop resources', zkDirectory: 'zkDirectory', 'Directory detail': 'Directory detail', 'Connection name': 'Connection name', 'Current connection settings': 'Current connection settings', 'Please save the DAG before formatting': 'Please save the DAG before formatting', 'Batch copy': 'Batch copy', 'Related items': 'Related items', 'Project name is required': 'Project name is required', 'Batch move': 'Batch move', Version: 'Version', 'Pre tasks': 'Pre tasks', 'Running Memory': 'Running Memory', 'Max Memory': 'Max Memory', 'Min Memory': 'Min Memory', 'The workflow canvas is abnormal and cannot be saved, please recreate': 'The workflow canvas is abnormal and cannot be saved, please recreate', Info: 'Info', 'Datasource userName': 'owner', 'Resource userName': 'owner', 'Environment manage': 'Environment manage', 'Create environment': 'Create environment', 'Edit environment': 'Edit environment', 'Environment value': 'Environment value', 'Environment Name': 'Environment Name', 'Environment Code': 'Environment Code', 'Environment Config': 'Environment Config', 'Environment Desc': 'Environment Desc', 'Environment Worker Group': 'Worker Groups', 'Please enter environment config': 'Please enter environment config', 'Please enter environment desc': 'Please enter environment desc', 'Please select worker groups': 'Please select worker groups', condition: 'condition', 'The condition content cannot be empty': 'The condition content cannot be empty', 'Reference from': 'Reference from', 'No more...': 'No more...', 'Task Definition': 'Task Definition', 'Create task': 'Create task', 'Task Type': 'Task Type', 'Process execute type': 'Process execute type', parallel: 'parallel', 'Serial wait': 'Serial wait', 'Serial discard': 'Serial discard', 'Serial priority': 'Serial priority', 'Recover serial wait': 'Recover serial wait', IsEnableProxy: 'Enable Proxy', WebHook: 'WebHook', webHook: 'WebHook', Keyword: 'Keyword', Proxy: 'Proxy', receivers: 'Receivers', receiverCcs: 'ReceiverCcs', transportProtocol: 'Transport Protocol', serverHost: 'SMTP Host', serverPort: 'SMTP Port', sender: 'Sender', enableSmtpAuth: 'SMTP Auth', starttlsEnable: 'SMTP STARTTLS Enable', sslEnable: 'SMTP SSL Enable', smtpSslTrust: 'SMTP SSL Trust', url: 'URL', requestType: 'Request Type', headerParams: 'Headers', bodyParams: 'Body', contentField: 'Content Field', path: 'Script Path', userParams: 'User Params', corpId: 'CorpId', secret: 'Secret', userSendMsg: 'UserSendMsg', agentId: 'AgentId', users: 'Users', Username: 'Username', username: 'Username', showType: 'Show Type', 'Please select a task type (required)': 'Please select a task type (required)', layoutType: 'Layout Type', gridLayout: 'Grid', dagreLayout: 'Dagre', rows: 'Rows', cols: 'Cols', processOnline: 'Online', searchNode: 'Search Node', dagScale: 'Scale', workflowName: 'Workflow Name', scheduleStartTime: 'Schedule Start Time', scheduleEndTime: 'Schedule End Time', crontabExpression: 'Crontab', workflowPublishStatus: 'Workflow Publish Status', schedulePublishStatus: 'Schedule Publish Status', 'Task group manage': 'Task group manage', 'Task group option': 'Task group option', 'Create task group': 'Create task group', 'Edit task group': 'Edit task group', 'Delete task group': 'Delete task group', 'Task group code': 'Task group code', 'Task group name': 'Task group name', 'Task group resource pool size': 'Resource pool size', 'Task group resource pool size be a number': 'The size of the 
task group resource pool should be more than 1', 'Task group resource used pool size': 'Used resource', 'Task group desc': 'Task group desc', 'Task group status': 'Task group status', 'Task group enable status': 'Enable', 'Task group disable status': 'Disable', 'Please enter task group desc': 'Please enter task group description', 'Please enter task group resource pool size': 'Please enter task group resource pool size', 'Please select project': 'Please select a project', 'Task group queue': 'Task group queue', 'Task group queue priority': 'Priority', 'Task group queue priority be a number': 'The priority of the task group queue should be a positive number', 'Task group queue force starting status': 'Starting status', 'Task group in queue': 'In queue', 'Task group queue status': 'Task status', 'View task group queue': 'View task group queue', 'Task group queue the status of waiting': 'Waiting into the queue', 'Task group queue the status of queuing': 'Queuing', 'Task group queue the status of releasing': 'Released', 'Modify task group queue priority': 'Edit the priority of the task group queue', 'Priority not empty': 'The value of priority can not be empty', 'Priority must be number': 'The value of priority should be number' }
closed
apache/dolphinscheduler
https://github.com/apache/dolphinscheduler
7,493
[Feature][UI]Fuzzy search and case-insensitive search should be supported in process definition details page
### Search before asking - [X] I had searched in the [issues](https://github.com/apache/dolphinscheduler/issues?q=is%3Aissue) and found no similar feature requirement. ### Description 1. Click a process and enter the process definition details page, e.g. http://XXXXX:12345/dolphinscheduler/ui/#/projects/3880463913664/definition/list/3880470637248 2. Search for a node with the search button in the upper right corner of the page 3. Enter a partial name, or enter it regardless of case, and click the search button [expect] 1. The node is found [actual] 1. The corresponding node is not found ### Use case Fuzzy search and case-insensitive search should be supported. This is useful when there are many nodes. ### Related issues _No response_ ### Are you willing to submit a PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [X] I agree to follow this project's [Code of Conduct](https://www.apache.org/foundation/policies/conduct)
https://github.com/apache/dolphinscheduler/issues/7493
https://github.com/apache/dolphinscheduler/pull/7766
7e378ea6728f881232824d0cb95d547d21a47759
4203304b08847d57218f9e9cb4edf7447f6eab8b
"2021-12-19T09:53:48Z"
java
"2022-01-01T04:46:40Z"
dolphinscheduler-ui/src/js/module/i18n/locale/zh_CN.js
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

export default {
  'User Name': '用户名',
  'Please enter user name': '请输入用户名',
  Password: '密码',
  'Please enter your password': '请输入密码',
  'Password consists of at least two combinations of numbers, letters, and characters, and the length is between 6-22': '密码至少包含数字,字母和字符的两种组合,长度在6-22之间',
  Login: '登录',
  Home: '首页',
  'Failed to create node to save': '未创建节点保存失败',
  'Global parameters': '全局参数',
  'Local parameters': '局部参数',
  'Copy success': '复制成功',
  'The browser does not support automatic copying': '该浏览器不支持自动复制',
  'Whether to save the DAG graph': '是否保存DAG图',
  'Current node settings': '当前节点设置',
  'View history': '查看历史',
  'View log': '查看日志',
  'Force success': '强制成功',
  'Enter this child node': '进入该子节点',
  'Node name': '节点名称',
  'Please enter name (required)': '请输入名称(必填)',
  'Run flag': '运行标志',
  Normal: '正常',
  'Prohibition execution': '禁止执行',
  'Please enter description': '请输入描述',
  'Number of failed retries': '失败重试次数',
  Times: '次',
  'Failed retry interval': '失败重试间隔',
  Minute: '分',
  'Delay execution time': '延时执行时间',
  'Delay execution': '延时执行',
  'Forced success': '强制成功',
  Cancel: '取消',
  'Confirm add': '确认添加',
  'The newly created sub-Process has not yet been executed and cannot enter the sub-Process': '新创建子工作流还未执行,不能进入子工作流',
  'The task has not been executed and cannot enter the sub-Process': '该任务还未执行,不能进入子工作流',
  'Name already exists': '名称已存在,请重新输入',
  'Download Log': '下载日志',
  'Refresh Log': '刷新日志',
  'Enter full screen': '进入全屏',
  'Cancel full screen': '取消全屏',
  Close: '关闭',
  'Update log success': '更新日志成功',
  'No more logs': '暂无更多日志',
  'No log': '暂无日志',
  'Loading Log...': '正在努力请求日志中...',
  'Set the DAG diagram name': '设置DAG图名称',
  'Please enter description(optional)': '请输入描述(选填)',
  'Set global': '设置全局',
  'Whether to go online the process definition': '是否上线流程定义',
  'Whether to update the process definition': '是否更新流程定义',
  Add: '添加',
  'DAG graph name cannot be empty': 'DAG图名称不能为空',
  'Create Datasource': '创建数据源',
  'Project Home': '工作流监控',
  'Project Manage': '项目管理',
  'Create Project': '创建项目',
  'Cron Manage': '定时管理',
  'Copy Workflow': '复制工作流',
  'Tenant Manage': '租户管理',
  'Create Tenant': '创建租户',
  'User Manage': '用户管理',
  'Create User': '创建用户',
  'User Information': '用户信息',
  'Edit Password': '密码修改',
  Success: '成功',
  Failed: '失败',
  Delete: '删除',
  'Please choose': '请选择',
  'Please enter a positive integer': '请输入正整数',
  'Program Type': '程序类型',
  'Main Class': '主函数的Class',
  'Main Package': '主程序包',
  'Please enter main package': '请选择主程序包',
  'Please enter main class': '请填写主函数的Class',
  'Main Arguments': '主程序参数',
  'Please enter main arguments': '请输入主程序参数',
  'Option Parameters': '选项参数',
  'Please enter option parameters': '请输入选项参数',
  Resources: '资源',
  'Custom Parameters': '自定义参数',
  'Custom template': '自定义模版',
  Datasource: '数据源',
  methods: '方法',
  'Please enter the procedure method': '请输入存储脚本 \n\n调用存储过程:{call <procedure-name>[(<arg1>,<arg2>, ...)]}\n\n调用存储函数:{?= call <procedure-name>[(<arg1>,<arg2>, ...)]} ',
  'The procedure method script example': '示例:{call <procedure-name>[(?,?, ...)]} 或 {?= call <procedure-name>[(?,?, ...)]}',
  Script: '脚本',
  'Please enter script(required)': '请输入脚本(必填)',
  'Deploy Mode': '部署方式',
  'Driver Cores': 'Driver核心数',
  'Please enter Driver cores': '请输入Driver核心数',
  'Driver Memory': 'Driver内存数',
  'Please enter Driver memory': '请输入Driver内存数',
  'Executor Number': 'Executor数量',
  'Please enter Executor number': '请输入Executor数量',
  'The Executor number should be a positive integer': 'Executor数量为正整数',
  'Executor Memory': 'Executor内存数',
  'Please enter Executor memory': '请输入Executor内存数',
  'Executor Cores': 'Executor核心数',
  'Please enter Executor cores': '请输入Executor核心数',
  'Memory should be a positive integer': '内存数为数字',
  'Core number should be positive integer': '核心数为正整数',
  'Flink Version': 'Flink版本',
  'JobManager Memory': 'JobManager内存数',
  'Please enter JobManager memory': '请输入JobManager内存数',
  'TaskManager Memory': 'TaskManager内存数',
  'Please enter TaskManager memory': '请输入TaskManager内存数',
  'Slot Number': 'Slot数量',
  'Please enter Slot number': '请输入Slot数量',
  Parallelism: '并行度',
  'Custom Parallelism': '自定义并行度',
  'Please enter Parallelism': '请输入并行度',
  'Parallelism number should be positive integer': '并行度必须为正整数',
  'Parallelism tip': '如果存在大量任务需要补数时,可以利用自定义并行度将补数的任务线程设置成合理的数值,避免对服务器造成过大的影响',
  'TaskManager Number': 'TaskManager数量',
  'Please enter TaskManager number': '请输入TaskManager数量',
  'App Name': '任务名称',
  'Please enter app name(optional)': '请输入任务名称(选填)',
  'SQL Type': 'sql类型',
  'Send Email': '发送邮件',
  'Log display': '日志显示',
  'rows of result': '行查询结果',
  Title: '主题',
  'Please enter the title of email': '请输入邮件主题',
  Table: '表名',
  TableMode: '表格',
  Attachment: '附件',
  'SQL Parameter': 'sql参数',
  'SQL Statement': 'sql语句',
  'UDF Function': 'UDF函数',
  'Please enter a SQL Statement(required)': '请输入sql语句(必填)',
  'Please enter a JSON Statement(required)': '请输入json语句(必填)',
  'One form or attachment must be selected': '表格、附件必须勾选一个',
  'Mail subject required': '邮件主题必填',
  'Child Node': '子节点',
  'Please select a sub-Process': '请选择子工作流',
  Edit: '编辑',
  'Switch To This Version': '切换到该版本',
  'Datasource Name': '数据源名称',
  'Please enter datasource name': '请输入数据源名称',
  IP: 'IP主机名',
  'Please enter IP': '请输入IP主机名',
  Port: '端口',
  'Please enter port': '请输入端口',
  'Database Name': '数据库名',
  'Please enter database name': '请输入数据库名',
  'Oracle Connect Type': '服务名或SID',
  'Oracle Service Name': '服务名',
  'Oracle SID': 'SID',
  'jdbc connect parameters': 'jdbc连接参数',
  'Test Connect': '测试连接',
  'Please enter resource name': '请输入资源名称',
  'Please enter resource folder name': '请输入资源文件夹名称',
  'Please enter a non-query SQL statement': '请输入非查询sql语句',
  'Please enter IP/hostname': '请输入IP/主机名',
  'jdbc connection parameters is not a correct JSON format': 'jdbc连接参数不是一个正确的JSON格式',
  '#': '编号',
  'Datasource Type': '数据源类型',
  'Datasource Parameter': '数据源参数',
  'Create Time': '创建时间',
  'Update Time': '更新时间',
  Operation: '操作',
  'Current Version': '当前版本',
  'Click to view': '点击查看',
  'Delete?': '确定删除吗?',
  'Switch Version Successfully': '切换版本成功',
  'Confirm Switch To This Version?': '确定切换到该版本吗?',
  Confirm: '确定',
  'Task status statistics': '任务状态统计',
  Number: '数量',
  State: '状态',
  'Dry-run flag': '空跑标识',
  'Process Status Statistics': '流程状态统计',
  'Process Definition Statistics': '流程定义统计',
  'Project Name': '项目名称',
  'Please enter name': '请输入名称',
  'Owned Users': '所属用户',
  'Process Pid': '进程Pid',
  'Zk registration directory': 'zk注册目录',
  cpuUsage: 'cpuUsage',
  memoryUsage: 'memoryUsage',
  'Last heartbeat time': '最后心跳时间',
  'Edit Tenant': '编辑租户',
  'OS Tenant Code': '操作系统租户',
  'Tenant Name': '租户名称',
  Queue: '队列',
  'Please select a queue': '默认为租户关联队列',
  'Please enter the os tenant code in English': '请输入操作系统租户,只允许英文',
  'Please enter os tenant code in English': '请输入英文操作系统租户',
  'Please enter os tenant code': '请输入操作系统租户',
  'Please enter tenant Name': '请输入租户名称',
  'The os tenant code. Only letters or a combination of letters and numbers are allowed': '操作系统租户只允许字母或字母与数字组合',
  'Edit User': '编辑用户',
  Tenant: '租户',
  Email: '邮件',
  Phone: '手机',
  'User Type': '用户类型',
  'Please enter phone number': '请输入手机',
  'Please enter email': '请输入邮箱',
  'Please enter the correct email format': '请输入正确的邮箱格式',
  'Please enter the correct mobile phone format': '请输入正确的手机格式',
  Project: '项目',
  Authorize: '授权',
  'File resources': '文件资源',
  'UDF resources': 'UDF资源',
  'UDF resources directory': 'UDF资源目录',
  'Please select UDF resources directory': '请选择UDF资源目录',
  'Alarm group': '告警组',
  'Alarm group required': '告警组必填',
  'Edit alarm group': '编辑告警组',
  'Create alarm group': '创建告警组',
  'Create Alarm Instance': '创建告警实例',
  'Edit Alarm Instance': '编辑告警实例',
  'Group Name': '组名称',
  'Alarm instance name': '告警实例名称',
  'Alarm plugin name': '告警插件名称',
  'Select plugin': '选择插件',
  'Select Alarm plugin': '请选择告警插件',
  'Please enter group name': '请输入组名称',
  'Instance parameter exception': '实例参数异常',
  'Group Type': '组类型',
  'Alarm plugin instance': '告警插件实例',
  'Select Alarm plugin instance': '请选择告警插件实例',
  Remarks: '备注',
  SMS: '短信',
  'Managing Users': '管理用户',
  Permission: '权限',
  Administrator: '管理员',
  'Confirm Password': '确认密码',
  'Please enter confirm password': '请输入确认密码',
  'Password cannot be in Chinese': '密码不能为中文',
  'Please enter a password (6-22) character password': '请输入密码(6-22)字符密码',
  'Confirmation password cannot be in Chinese': '确认密码不能为中文',
  'Please enter a confirmation password (6-22) character password': '请输入确认密码(6-22)字符密码',
  'The password is inconsistent with the confirmation password': '密码与确认密码不一致,请重新确认',
  'Please select the datasource': '请选择数据源',
  'Please select resources': '请选择资源',
  Query: '查询',
  'Non Query': '非查询',
  'prop(required)': 'prop(必填)',
  'value(optional)': 'value(选填)',
  'value(required)': 'value(必填)',
  'prop is empty': '自定义参数prop不能为空',
  'value is empty': 'value不能为空',
  'prop is repeat': 'prop中有重复',
  'Start Time': '开始时间',
  'End Time': '结束时间',
  crontab: 'crontab',
  'Failure Strategy': '失败策略',
  online: '上线',
  offline: '下线',
  'Task Status': '任务状态',
  'Process Instance': '工作流实例',
  'Task Instance': '任务实例',
  'Select date range': '选择日期区间',
  startDate: '开始日期',
  endDate: '结束日期',
  Date: '日期',
  Waiting: '等待',
  Execution: '执行中',
  Finish: '完成',
  'Create File': '创建文件',
  'Create folder': '创建文件夹',
  'File Name': '文件名称',
  'Folder Name': '文件夹名称',
  'File Format': '文件格式',
  'Folder Format': '文件夹格式',
  'File Content': '文件内容',
  'Upload File Size': '文件大小不能超过1G',
  Create: '创建',
  'Please enter the resource content': '请输入资源内容',
  'Resource content cannot exceed 3000 lines': '资源内容不能超过3000行',
  'File Details': '文件详情',
  'Download Details': '下载详情',
  Return: '返回',
  Save: '保存',
  'File Manage': '文件管理',
  'Upload Files': '上传文件',
  'Create UDF Function': '创建UDF函数',
  'Upload UDF Resources': '上传UDF资源',
  'Service-Master': '服务管理-Master',
  'Service-Worker': '服务管理-Worker',
  'Process Name': '工作流名称',
  Executor: '执行用户',
  'Run Type': '运行类型',
  'Scheduling Time': '调度时间',
  'Run Times': '运行次数',
  host: 'host',
  'fault-tolerant sign': '容错标识',
  Rerun: '重跑',
  'Recovery Failed': '恢复失败',
  Stop: '停止',
  Pause: '暂停',
  'Recovery Suspend': '恢复运行',
  Gantt: '甘特图',
  'Node Type': '节点类型',
  'Submit Time': '提交时间',
  Duration: '运行时长',
  'Retry Count': '重试次数',
  'Task Name': '任务名称',
  'Task Date': '任务日期',
  'Source Table': '源表',
  'Record Number': '记录数',
  'Target Table': '目标表',
  'Online viewing type is not supported': '不支持在线查看类型',
  Size: '大小',
  Rename: '重命名',
  Download: '下载',
  Export: '导出',
  'Version Info': '版本信息',
  Submit: '提交',
  'Edit UDF Function': '编辑UDF函数',
  type: '类型',
  'UDF Function Name': 'UDF函数名称',
  FILE: '文件',
  UDF: 'UDF',
  'File Subdirectory': '文件子目录',
  'Please enter a function name': '请输入函数名',
  'Package Name': '包名类名',
  'Please enter a Package name': '请输入包名类名',
  Parameter: '参数',
  'Please enter a parameter': '请输入参数',
  'UDF Resources': 'UDF资源',
  'Upload Resources': '上传资源',
  Instructions: '使用说明',
  'Please enter a instructions': '请输入使用说明',
  'Please enter a UDF function name': '请输入UDF函数名称',
  'Select UDF Resources': '请选择UDF资源',
  'Class Name': '类名',
  'Jar Package': 'jar包',
  'Library Name': '库名',
  'UDF Resource Name': 'UDF资源名称',
  'File Size': '文件大小',
  Description: '描述',
  'Drag Nodes and Selected Items': '拖动节点和选中项',
  'Select Line Connection': '选择线条连接',
  'Delete selected lines or nodes': '删除选中的线或节点',
  'Full Screen': '全屏',
  Unpublished: '未发布',
  'Start Process': '启动工作流',
  'Execute from the current node': '从当前节点开始执行',
  'Recover tolerance fault process': '恢复被容错的工作流',
  'Resume the suspension process': '恢复运行流程',
  'Execute from the failed nodes': '从失败节点开始执行',
  'Complement Data': '补数',
  'Scheduling execution': '调度执行',
  'Recovery waiting thread': '恢复等待线程',
  'Submitted successfully': '提交成功',
  Executing: '正在执行',
  'Ready to pause': '准备暂停',
  'Ready to stop': '准备停止',
  'Need fault tolerance': '需要容错',
  Kill: 'Kill',
  'Waiting for thread': '等待线程',
  'Waiting for dependence': '等待依赖',
  Start: '运行',
  Copy: '复制节点',
  'Copy name': '复制名称',
  'Copy path': '复制路径',
  'Please enter keyword': '请输入关键词',
  'File Upload': '文件上传',
  'Drag the file into the current upload window': '请将文件拖拽到当前上传窗口内!',
  'Drag area upload': '拖动区域上传',
  Upload: '上传',
  'ReUpload File': '重新上传文件',
  'Please enter file name': '请输入文件名',
  'Please select the file to upload': '请选择要上传的文件',
  'Resources manage': '资源中心',
  Security: '安全中心',
  Logout: '退出',
  'No data': '查询无数据',
  'Uploading...': '文件上传中',
  'Loading...': '正在努力加载中...',
  List: '列表',
  'Unable to download without proper url': '无下载url无法下载',
  Process: '工作流',
  'Process definition': '工作流定义',
  'Task record': '任务记录',
  'Warning group manage': '告警组管理',
  'Warning instance manage': '告警实例管理',
  'Servers manage': '服务管理',
  'UDF manage': 'UDF管理',
  'Resource manage': '资源管理',
  'Function manage': '函数管理',
  'Edit password': '修改密码',
  'Ordinary users': '普通用户',
  'Create process': '创建工作流',
  'Import process': '导入工作流',
  'Timing state': '定时状态',
  Timing: '定时',
  Timezone: '时区',
  TreeView: '树形图',
  'Mailbox already exists! Recipients and copyers cannot repeat': '邮箱已存在!收件人和抄送人不能重复',
  'Mailbox input is illegal': '邮箱输入不合法',
  'Please set the parameters before starting': '启动前请先设置参数',
  Continue: '继续',
  End: '结束',
  'Node execution': '节点执行',
  'Backward execution': '向后执行',
  'Forward execution': '向前执行',
  'Execute only the current node': '仅执行当前节点',
  'Notification strategy': '通知策略',
  'Notification group': '通知组',
  'Please select a notification group': '请选择通知组',
  'Whether it is a complement process?': '是否补数',
  'Schedule date': '调度日期',
  'Mode of execution': '执行方式',
  'Serial execution': '串行执行',
  'Parallel execution': '并行执行',
  'Set parameters before timing': '定时前请先设置参数',
  'Start and stop time': '起止时间',
  'Please select time': '请选择时间',
  'Please enter crontab': '请输入crontab',
  none_1: '都不发',
  success_1: '成功发',
  failure_1: '失败发',
  All_1: '成功或失败都发',
  Toolbar: '工具栏',
  'View variables': '查看变量',
  'Format DAG': '格式化DAG',
  'Refresh DAG status': '刷新DAG状态',
  Return_1: '返回上一节点',
  'Please enter format': '请输入格式为',
  'connection parameter': '连接参数',
  'Process definition details': '流程定义详情',
  'Create process definition': '创建流程定义',
  'Scheduled task list': '定时任务列表',
  'Process instance details': '流程实例详情',
  'Create Resource': '创建资源',
  'User Center': '用户中心',
  AllStatus: '全部状态',
  None: '无',
  Name: '名称',
  'Process priority': '流程优先级',
  'Task priority': '任务优先级',
  'Task timeout alarm': '任务超时告警',
  'Timeout strategy': '超时策略',
  'Timeout alarm': '超时告警',
  'Timeout failure': '超时失败',
  'Timeout period': '超时时长',
  'Waiting Dependent complete': '等待依赖完成',
  'Waiting Dependent start': '等待依赖启动',
  'Check interval': '检查间隔',
  'Timeout must be longer than check interval': '超时时间必须比检查间隔长',
  'Timeout strategy must be selected': '超时策略必须选一个',
  'Timeout must be a positive integer': '超时时长必须为正整数',
  'Add dependency': '添加依赖',
  'Whether dry-run': '是否空跑',
  and: '且',
  or: '或',
  month: '月',
  week: '周',
  day: '日',
  hour: '时',
  Running: '正在运行',
  'Waiting for dependency to complete': '等待依赖完成',
  Selected: '已选',
  CurrentHour: '当前小时',
  Last1Hour: '前1小时',
  Last2Hours: '前2小时',
  Last3Hours: '前3小时',
  Last24Hours: '前24小时',
  today: '今天',
  Last1Days: '昨天',
  Last2Days: '前两天',
  Last3Days: '前三天',
  Last7Days: '前七天',
  ThisWeek: '本周',
  LastWeek: '上周',
  LastMonday: '上周一',
  LastTuesday: '上周二',
  LastWednesday: '上周三',
  LastThursday: '上周四',
  LastFriday: '上周五',
  LastSaturday: '上周六',
  LastSunday: '上周日',
  ThisMonth: '本月',
  LastMonth: '上月',
  LastMonthBegin: '上月初',
  LastMonthEnd: '上月末',
  'Refresh status succeeded': '刷新状态成功',
  'Queue manage': 'Yarn 队列管理',
  'Create queue': '创建队列',
  'Edit queue': '编辑队列',
  'Datasource manage': '数据源中心',
  'History task record': '历史任务记录',
  'Please go online': '不要忘记上线',
  'Queue value': '队列值',
  'Please enter queue value': '请输入队列值',
  'Worker group manage': 'Worker分组管理',
  'Create worker group': '创建Worker分组',
  'Edit worker group': '编辑Worker分组',
  'Token manage': '令牌管理',
  'Create token': '创建令牌',
  'Edit token': '编辑令牌',
  Addresses: '地址',
  'Worker Addresses': 'Worker地址',
  'Please select the worker addresses': '请选择Worker地址',
  'Failure time': '失效时间',
  'Expiration time': '失效时间',
  User: '用户',
  'Please enter token': '请输入令牌',
  'Generate token': '生成令牌',
  Monitor: '监控中心',
  Group: '分组',
  'Queue statistics': '队列统计',
  'Command status statistics': '命令状态统计',
  'Task kill': '等待kill任务',
  'Task queue': '等待执行任务',
  'Error command count': '错误指令数',
  'Normal command count': '正确指令数',
  Manage: '管理',
  'Number of connections': '连接数',
  Sent: '发送量',
  Received: '接收量',
  'Min latency': '最低延时',
  'Avg latency': '平均延时',
  'Max latency': '最大延时',
  'Node count': '节点数',
  'Query time': '当前查询时间',
  'Node self-test status': '节点自检状态',
  'Health status': '健康状态',
  'Max connections': '最大连接数',
  'Threads connections': '当前连接数',
  'Max used connections': '同时使用连接最大数',
  'Threads running connections': '数据库当前活跃连接数',
  'Worker group': 'Worker分组',
  'Please enter a positive integer greater than 0': '请输入大于 0 的正整数',
  'Pre Statement': '前置sql',
  'Post Statement': '后置sql',
  'Statement cannot be empty': '语句不能为空',
  'Process Define Count': '工作流定义数',
  'Process Instance Running Count': '正在运行的流程数',
  'command number of waiting for running': '待执行的命令数',
  'failure command number': '执行失败的命令数',
  'tasks number of waiting running': '待运行任务数',
  'task number of ready to kill': '待杀死任务数',
  'Statistics manage': '统计管理',
  statistics: '统计',
  'select tenant': '选择租户',
  'Please enter Principal': '请输入Principal',
  'Please enter the kerberos authentication parameter java.security.krb5.conf': '请输入kerberos认证参数 java.security.krb5.conf',
  'Please enter the kerberos authentication parameter login.user.keytab.username': '请输入kerberos认证参数 login.user.keytab.username',
  'Please enter the kerberos authentication parameter login.user.keytab.path': '请输入kerberos认证参数 login.user.keytab.path',
  'The start time must not be the same as the end': '开始时间和结束时间不能相同',
  'Startup parameter': '启动参数',
  'Startup type': '启动类型',
  'warning of timeout': '超时告警',
  'Next five execution times': '接下来五次执行时间',
  'Execute time': '执行时间',
  'Complement range': '补数范围',
  'Http Url': '请求地址',
  'Http Method': '请求类型',
  'Http Parameters': '请求参数',
  'Http Parameters Key': '参数名',
  'Http Parameters Position': '参数位置',
  'Http Parameters Value': '参数值',
  'Http Check Condition': '校验条件',
  'Http Condition': '校验内容',
  'Please Enter Http Url': '请填写请求地址(必填)',
  'Please Enter Http Condition': '请填写校验内容',
  'There is no data for this period of time': '该时间段无数据',
  'Worker addresses cannot be empty': 'Worker地址不能为空',
  'Please generate token': '请生成Token',
  'Please Select token': '请选择Token失效时间',
  'Spark Version': 'Spark版本',
  TargetDataBase: '目标库',
  TargetTable: '目标表',
  TargetJobName: '目标任务名',
  'Please enter Pigeon job name': '请输入Pigeon任务名',
  'Please enter the table of target': '请输入目标表名',
  'Please enter a Target Table(required)': '请输入目标表(必填)',
  SpeedByte: '限流(字节数)',
  SpeedRecord: '限流(记录数)',
  '0 means unlimited by byte': 'KB,0代表不限制',
  '0 means unlimited by count': '0代表不限制',
  'Modify User': '修改用户',
  'Whether directory': '是否文件夹',
  Yes: '是',
  No: '否',
  'Hadoop Custom Params': 'Hadoop参数',
  'Sqoop Advanced Parameters': 'Sqoop参数',
  'Sqoop Job Name': '任务名称',
  'Please enter Mysql Database(required)': '请输入Mysql数据库(必填)',
  'Please enter Mysql Table(required)': '请输入Mysql表名(必填)',
  'Please enter Columns (Comma separated)': '请输入列名,用 , 隔开',
  'Please enter Target Dir(required)': '请输入目标路径(必填)',
  'Please enter Export Dir(required)': '请输入数据源路径(必填)',
  'Please enter Hive Database(required)': '请输入Hive数据库(必填)',
  'Please enter Hive Table(required)': '请输入Hive表名(必填)',
  'Please enter hive target dir': '请输入Hive临时目录',
  'Please enter Hive Partition Keys': '请输入分区键',
  'Please enter Hive Partition Values': '请输入分区值',
  'Please enter Replace Delimiter': '请输入替换分隔符',
  'Please enter Fields Terminated': '请输入列分隔符',
  'Please enter Lines Terminated': '请输入行分隔符',
  'Please enter Concurrency': '请输入并发度',
  'Please enter Update Key': '请输入更新列',
  'Please enter Job Name(required)': '请输入任务名称(必填)',
  'Please enter Custom Shell(required)': '请输入自定义脚本',
  Direct: '流向',
  Type: '类型',
  ModelType: '模式',
  ColumnType: '列类型',
  Database: '数据库',
  Column: '列',
  'Map Column Hive': 'Hive类型映射',
  'Map Column Java': 'Java类型映射',
  'Export Dir': '数据源路径',
  'Hive partition Keys': 'Hive 分区键',
  'Hive partition Values': 'Hive 分区值',
  FieldsTerminated: '列分隔符',
  LinesTerminated: '行分隔符',
  IsUpdate: '是否更新',
  UpdateKey: '更新列',
  UpdateMode: '更新类型',
  'Target Dir': '目标路径',
  DeleteTargetDir: '是否删除目录',
  FileType: '保存格式',
  CompressionCodec: '压缩类型',
  CreateHiveTable: '是否创建新表',
  DropDelimiter: '是否删除分隔符',
  OverWriteSrc: '是否覆盖数据源',
  ReplaceDelimiter: '替换分隔符',
  Concurrency: '并发度',
  Form: '表单',
  OnlyUpdate: '只更新',
  AllowInsert: '无更新便插入',
  'Data Source': '数据来源',
  'Data Target': '数据目的',
  'All Columns': '全表导入',
  'Some Columns': '选择列',
  'Branch flow': '分支流转',
  'Custom Job': '自定义任务',
  'Custom Script': '自定义脚本',
  'Cannot select the same node for successful branch flow and failed branch flow': '成功分支流转和失败分支流转不能选择同一个节点',
  'Successful branch flow and failed branch flow are required': 'conditions节点成功和失败分支流转必填',
  'No resources exist': '不存在资源',
  'Please delete all non-existing resources': '请删除所有不存在资源',
  'Unauthorized or deleted resources': '未授权或已删除资源',
  'Please delete all non-existent resources': '请删除所有未授权或已删除资源',
  Kinship: '工作流关系',
  Reset: '重置',
  KinshipStateActive: '当前选择',
  KinshipState1: '已上线',
  KinshipState0: '工作流未上线',
  KinshipState10: '调度未上线',
  'Dag label display control': 'Dag节点名称显隐',
  Enable: '启用',
  Disable: '停用',
  'The Worker group no longer exists, please select the correct Worker group!': '该Worker分组已经不存在,请选择正确的Worker分组!',
  'Please confirm whether the workflow has been saved before downloading': '下载前请确定工作流是否已保存',
  'User name length is between 3 and 39': '用户名长度在3~39之间',
  'Timeout Settings': '超时设置',
  'Connect Timeout': '连接超时',
  'Socket Timeout': 'Socket超时',
  'Connect timeout be a positive integer': '连接超时必须为数字',
  'Socket Timeout be a positive integer': 'Socket超时必须为数字',
  ms: '毫秒',
  'Please Enter Url': '请直接填写地址,例如:127.0.0.1:7077',
  Master: 'Master',
  'Please select the waterdrop resources': '请选择waterdrop配置文件',
  zkDirectory: 'zk注册目录',
  'Directory detail': '查看目录详情',
  'Connection name': '连线名',
  'Current connection settings': '当前连线设置',
  'Please save the DAG before formatting': '格式化前请先保存DAG',
  'Batch copy': '批量复制',
  'Related items': '关联项目',
  'Project name is required': '项目名称必填',
  'Batch move': '批量移动',
  Version: '版本',
  'Pre tasks': '前置任务',
  'Running Memory': '运行内存',
  'Max Memory': '最大内存',
  'Min Memory': '最小内存',
  'The workflow canvas is abnormal and cannot be saved, please recreate': '该工作流画布异常,无法保存,请重新创建',
  Info: '提示',
  'Datasource userName': '所属用户',
  'Resource userName': '所属用户',
  'Environment manage': '环境管理',
  'Create environment': '创建环境',
  'Edit environment': '编辑环境',
  'Environment value': '环境值',
  'Environment Name': '环境名称',
  'Environment Code': '环境编码',
  'Environment Config': '环境配置',
  'Environment Desc': '详细描述',
  'Environment Worker Group': 'Worker组',
  'Please enter environment config': '请输入环境配置信息',
  'Please enter environment desc': '请输入详细描述',
  'Please select worker groups': '请选择Worker分组',
  condition: '条件',
  'The condition content cannot be empty': '条件内容不能为空',
  'Reference from': '使用已有任务',
  'No more...': '没有更多了...',
  'Task Definition': '任务定义',
  'Create task': '创建任务',
  'Task Type': '任务类型',
  'Process execute type': '执行策略',
  parallel: '并行',
  'Serial wait': '串行等待',
  'Serial discard': '串行抛弃',
  'Serial priority': '串行优先',
  'Recover serial wait': '串行恢复',
  IsEnableProxy: '启用代理',
  WebHook: 'Web钩子',
  webHook: 'Web钩子',
  Keyword: '密钥',
  Proxy: '代理',
  receivers: '收件人',
  receiverCcs: '抄送人',
  transportProtocol: '邮件协议',
  serverHost: 'SMTP服务器',
  serverPort: 'SMTP端口',
  sender: '发件人',
  enableSmtpAuth: '请求认证',
  starttlsEnable: 'STARTTLS连接',
  sslEnable: 'SSL连接',
  smtpSslTrust: 'SSL证书信任',
  url: 'URL',
  requestType: '请求方式',
  headerParams: '请求头',
  bodyParams: '请求体',
  contentField: '内容字段',
  path: '脚本路径',
  userParams: '自定义参数',
  corpId: '企业ID',
  secret: '密钥',
  teamSendMsg: '群发信息',
  userSendMsg: '群员信息',
  agentId: '应用ID',
  users: '群员',
  Username: '用户名',
  username: '用户名',
  showType: '内容展示类型',
  'Please select a task type (required)': '请选择任务类型(必选)',
  layoutType: '布局类型',
  gridLayout: '网格布局',
  dagreLayout: '层次布局',
  rows: '行数',
  cols: '列数',
  processOnline: '已上线',
  searchNode: '搜索节点',
  dagScale: '缩放',
  workflowName: '工作流名称',
  scheduleStartTime: '定时开始时间',
  scheduleEndTime: '定时结束时间',
  crontabExpression: 'Crontab',
  workflowPublishStatus: '工作流上线状态',
  schedulePublishStatus: '定时状态',
  'Task group manage': '任务组管理',
  'Task group option': '任务组配置',
  'Create task group': '创建任务组',
  'Edit task group': '编辑任务组',
  'Delete task group': '删除任务组',
  'Task group code': '任务组编号',
  'Task group name': '任务组名称',
  'Task group resource pool size': '资源容量',
  'Task group resource used pool size': '已用资源',
  'Task group desc': '描述信息',
  'Task group status': '任务组状态',
  'Task group enable status': '启用',
  'Task group disable status': '不可用',
  'Please enter task group desc': '请输入任务组描述',
  'Please enter task group resource pool size': '请输入资源容量大小',
  'Task group resource pool size be a number': '资源容量大小必须大于等于1的数值',
  'Please select project': '请选择项目',
  'Task group queue': '任务组队列',
  'Task group queue priority': '组内优先级',
  'Task group queue priority be a number': '优先级必须是大于等于0的数值',
  'Task group queue force starting status': '是否强制启动',
  'Task group in queue': '是否排队中',
  'Task group queue status': '任务状态',
  'View task group queue': '查看任务组队列',
  'Task group queue the status of waiting': '等待入队',
  'Task group queue the status of queuing': '排队中',
  'Task group queue the status of releasing': '已释放',
  'Modify task group queue priority': '修改优先级',
  'Force to start task': '强制启动',
  'Priority not empty': '优先级不能为空',
  'Priority must be number': '优先级必须是数值'
}
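
// Usage sketch, kept as a comment so the locale data above stays unchanged.
// It illustrates how a message bundle like this one might be registered with
// vue-i18n, the i18n library the UI uses. The relative import path './zh_CN'
// and the locale key 'zh_CN' are assumptions for the example, not guaranteed
// project structure; keys must match this file byte-for-byte when looked up.
//
//   import { createI18n } from 'vue-i18n'
//   import zhCN from './zh_CN'
//
//   const i18n = createI18n({
//     legacy: false,             // Composition API mode of vue-i18n 9
//     locale: 'zh_CN',           // active locale
//     messages: { zh_CN: zhCN }  // this module is the zh_CN bundle
//   })
//
//   // Inside a component, a key from this file resolves to its Chinese value:
//   //   const { t } = useI18n()
//   //   t('Create Tenant') // => '创建租户'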