Dataset schema (each record below lists these fields, in this order):

  idx             int64     range 0 to 522k
  project         string    631 distinct values
  commit_id       string    length 7 to 40
  project_url     string    630 distinct values
  commit_url      string    length 4 to 164
  commit_message  string    length 0 to 11.5k
  target          int64     range 0 to 1
  func            string    length 5 to 484k
  func_hash       float64   range 1,559,120,642,045,605,000,000,000 to 340,279,892,905,069,500,000,000,000,000
  file_name       string    length 4 to 45
  file_hash       float64   range 25,942,829,220,065,710,000,000,000 to 340,272,304,251,680,200,000,000,000,000
  cwe             sequence  length 0 to 1
  cve             string    length 4 to 16
  cve_desc        string    length 0 to 2.3k
  nvd_url         string    length 37 to 49
9,233
linux
6160968cee8b90a5dd95318d716e31d7775c4ef3
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/6160968cee8b90a5dd95318d716e31d7775c4ef3
userns: unshare_userns(&cred) should not populate cred on failure unshare_userns(new_cred) does *new_cred = prepare_creds() before create_user_ns() which can fail. However, the caller expects that it doesn't need to take care of new_cred if unshare_userns() fails. We could change the single caller, sys_unshare(), but I think it would be more clean to avoid the side effects on failure, so with this patch unshare_userns() does put_cred() itself and initializes *new_cred only if create_user_ns() succeeeds. Cc: stable@vger.kernel.org Signed-off-by: Oleg Nesterov <oleg@redhat.com> Reviewed-by: Andy Lutomirski <luto@amacapital.net> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1
int unshare_userns(unsigned long unshare_flags, struct cred **new_cred) { struct cred *cred; if (!(unshare_flags & CLONE_NEWUSER)) return 0; cred = prepare_creds(); if (!cred) return -ENOMEM; *new_cred = cred; return create_user_ns(cred); }
11,304,749,673,525,120,000,000,000,000,000,000,000
user_namespace.c
9,948,127,664,871,343,000,000,000,000,000,000,000
[ "CWE-399" ]
CVE-2013-4205
Memory leak in the unshare_userns function in kernel/user_namespace.c in the Linux kernel before 3.10.6 allows local users to cause a denial of service (memory consumption) via an invalid CLONE_NEWUSER unshare call.
https://nvd.nist.gov/vuln/detail/CVE-2013-4205
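The commit message above describes the fix: only publish the new cred once create_user_ns() has succeeded, and drop the reference otherwise. A minimal sketch of the repaired function along those lines (a sketch, not a verbatim copy of the upstream patch):

```c
int unshare_userns(unsigned long unshare_flags, struct cred **new_cred)
{
	struct cred *cred;
	int err = -ENOMEM;

	if (!(unshare_flags & CLONE_NEWUSER))
		return 0;

	cred = prepare_creds();
	if (cred) {
		err = create_user_ns(cred);
		if (err)
			put_cred(cred);		/* drop the ref ourselves on failure */
		else
			*new_cred = cred;	/* only populate *new_cred on success */
	}

	return err;
}
```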
9,235
linux
c8c499175f7d295ef867335bceb9a76a2c3cdc38
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/c8c499175f7d295ef867335bceb9a76a2c3cdc38
Bluetooth: SCO - Fix missing msg_namelen update in sco_sock_recvmsg() If the socket is in state BT_CONNECT2 and BT_SK_DEFER_SETUP is set in the flags, sco_sock_recvmsg() returns early with 0 without updating the possibly set msg_namelen member. This, in turn, leads to a 128 byte kernel stack leak in net/socket.c. Fix this by updating msg_namelen in this case. For all other cases it will be handled in bt_sock_recvmsg(). Cc: Marcel Holtmann <marcel@holtmann.org> Cc: Gustavo Padovan <gustavo@padovan.org> Cc: Johan Hedberg <johan.hedberg@gmail.com> Signed-off-by: Mathias Krause <minipli@googlemail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
1
static int sco_sock_recvmsg(struct kiocb *iocb, struct socket *sock, struct msghdr *msg, size_t len, int flags) { struct sock *sk = sock->sk; struct sco_pinfo *pi = sco_pi(sk); lock_sock(sk); if (sk->sk_state == BT_CONNECT2 && test_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags)) { hci_conn_accept(pi->conn->hcon, 0); sk->sk_state = BT_CONFIG; release_sock(sk); return 0; } release_sock(sk); return bt_sock_recvmsg(iocb, sock, msg, len, flags); }
241,725,848,163,111,900,000,000,000,000,000,000,000
sco.c
26,587,629,362,037,386,000,000,000,000,000,000,000
[ "CWE-200" ]
CVE-2013-3226
The sco_sock_recvmsg function in net/bluetooth/sco.c in the Linux kernel before 3.9-rc7 does not initialize a certain length variable, which allows local users to obtain sensitive information from kernel stack memory via a crafted recvmsg or recvfrom system call.
https://nvd.nist.gov/vuln/detail/CVE-2013-3226
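Per the commit message, the fix is a one-line change: set msg_namelen before the early return in the deferred-setup path so no uninitialized length reaches net/socket.c. A sketch of the changed branch, assuming the rest of the function is untouched:

```c
	if (sk->sk_state == BT_CONNECT2 &&
	    test_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags)) {
		hci_conn_accept(pi->conn->hcon, 0);
		sk->sk_state = BT_CONFIG;
		msg->msg_namelen = 0;	/* the fix: don't leave the caller's length untouched */

		release_sock(sk);
		return 0;
	}
```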
9,236
linux
72a763d805a48ac8c0bf48fdb510e84c12de51fe
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/72a763d805a48ac8c0bf48fdb510e84c12de51fe
crypto: algif - suppress sending source address information in recvmsg The current code does not set the msg_namelen member to 0 and therefore makes net/socket.c leak the local sockaddr_storage variable to userland -- 128 bytes of kernel stack memory. Fix that. Cc: <stable@vger.kernel.org> # 2.6.38 Signed-off-by: Mathias Krause <minipli@googlemail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
1
static int hash_recvmsg(struct kiocb *unused, struct socket *sock, struct msghdr *msg, size_t len, int flags) { struct sock *sk = sock->sk; struct alg_sock *ask = alg_sk(sk); struct hash_ctx *ctx = ask->private; unsigned ds = crypto_ahash_digestsize(crypto_ahash_reqtfm(&ctx->req)); int err; if (len > ds) len = ds; else if (len < ds) msg->msg_flags |= MSG_TRUNC; lock_sock(sk); if (ctx->more) { ctx->more = 0; ahash_request_set_crypt(&ctx->req, NULL, ctx->result, 0); err = af_alg_wait_for_completion(crypto_ahash_final(&ctx->req), &ctx->completion); if (err) goto unlock; } err = memcpy_toiovec(msg->msg_iov, ctx->result, len); unlock: release_sock(sk); return err ?: len; }
223,283,461,660,244,750,000,000,000,000,000,000,000
algif_hash.c
274,659,569,534,280,800,000,000,000,000,000,000,000
[ "CWE-200" ]
CVE-2013-3076
The crypto API in the Linux kernel through 3.9-rc8 does not initialize certain length variables, which allows local users to obtain sensitive information from kernel stack memory via a crafted recvmsg or recvfrom system call, related to the hash_recvmsg function in crypto/algif_hash.c and the skcipher_recvmsg function in crypto/algif_skcipher.c.
https://nvd.nist.gov/vuln/detail/CVE-2013-3076
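The fix described above is again a missing msg_namelen initialization. A sketch of where it plausibly lands in hash_recvmsg (exact placement in the upstream patch may differ):

```c
	unsigned ds = crypto_ahash_digestsize(crypto_ahash_reqtfm(&ctx->req));
	int err;

	msg->msg_namelen = 0;	/* the fix: never let net/socket.c copy a stale sockaddr length */

	if (len > ds)
		len = ds;
	else if (len < ds)
		msg->msg_flags |= MSG_TRUNC;
```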
9,241
linux
f1923820c447e986a9da0fc6bf60c1dccdf0408e
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/f1923820c447e986a9da0fc6bf60c1dccdf0408e
perf/x86: Fix offcore_rsp valid mask for SNB/IVB The valid mask for both offcore_response_0 and offcore_response_1 was wrong for SNB/SNB-EP, IVB/IVB-EP. It was possible to write to reserved bit and cause a GP fault crashing the kernel. This patch fixes the problem by correctly marking the reserved bits in the valid mask for all the processors mentioned above. A distinction between desktop and server parts is introduced because bits 24-30 are only available on the server parts. This version of the patch is just a rebase to perf/urgent tree and should apply to older kernels as well. Signed-off-by: Stephane Eranian <eranian@google.com> Cc: peterz@infradead.org Cc: jolsa@redhat.com Cc: gregkh@linuxfoundation.org Cc: security@kernel.org Cc: ak@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
1
__init int intel_pmu_init(void) { union cpuid10_edx edx; union cpuid10_eax eax; union cpuid10_ebx ebx; struct event_constraint *c; unsigned int unused; int version; if (!cpu_has(&boot_cpu_data, X86_FEATURE_ARCH_PERFMON)) { switch (boot_cpu_data.x86) { case 0x6: return p6_pmu_init(); case 0xb: return knc_pmu_init(); case 0xf: return p4_pmu_init(); } return -ENODEV; } /* * Check whether the Architectural PerfMon supports * Branch Misses Retired hw_event or not. */ cpuid(10, &eax.full, &ebx.full, &unused, &edx.full); if (eax.split.mask_length < ARCH_PERFMON_EVENTS_COUNT) return -ENODEV; version = eax.split.version_id; if (version < 2) x86_pmu = core_pmu; else x86_pmu = intel_pmu; x86_pmu.version = version; x86_pmu.num_counters = eax.split.num_counters; x86_pmu.cntval_bits = eax.split.bit_width; x86_pmu.cntval_mask = (1ULL << eax.split.bit_width) - 1; x86_pmu.events_maskl = ebx.full; x86_pmu.events_mask_len = eax.split.mask_length; x86_pmu.max_pebs_events = min_t(unsigned, MAX_PEBS_EVENTS, x86_pmu.num_counters); /* * Quirk: v2 perfmon does not report fixed-purpose events, so * assume at least 3 events: */ if (version > 1) x86_pmu.num_counters_fixed = max((int)edx.split.num_counters_fixed, 3); /* * v2 and above have a perf capabilities MSR */ if (version > 1) { u64 capabilities; rdmsrl(MSR_IA32_PERF_CAPABILITIES, capabilities); x86_pmu.intel_cap.capabilities = capabilities; } intel_ds_init(); x86_add_quirk(intel_arch_events_quirk); /* Install first, so it runs last */ /* * Install the hw-cache-events table: */ switch (boot_cpu_data.x86_model) { case 14: /* 65 nm core solo/duo, "Yonah" */ pr_cont("Core events, "); break; case 15: /* original 65 nm celeron/pentium/core2/xeon, "Merom"/"Conroe" */ x86_add_quirk(intel_clovertown_quirk); case 22: /* single-core 65 nm celeron/core2solo "Merom-L"/"Conroe-L" */ case 23: /* current 45 nm celeron/core2/xeon "Penryn"/"Wolfdale" */ case 29: /* six-core 45 nm xeon "Dunnington" */ memcpy(hw_cache_event_ids, core2_hw_cache_event_ids, sizeof(hw_cache_event_ids)); intel_pmu_lbr_init_core(); x86_pmu.event_constraints = intel_core2_event_constraints; x86_pmu.pebs_constraints = intel_core2_pebs_event_constraints; pr_cont("Core2 events, "); break; case 26: /* 45 nm nehalem, "Bloomfield" */ case 30: /* 45 nm nehalem, "Lynnfield" */ case 46: /* 45 nm nehalem-ex, "Beckton" */ memcpy(hw_cache_event_ids, nehalem_hw_cache_event_ids, sizeof(hw_cache_event_ids)); memcpy(hw_cache_extra_regs, nehalem_hw_cache_extra_regs, sizeof(hw_cache_extra_regs)); intel_pmu_lbr_init_nhm(); x86_pmu.event_constraints = intel_nehalem_event_constraints; x86_pmu.pebs_constraints = intel_nehalem_pebs_event_constraints; x86_pmu.enable_all = intel_pmu_nhm_enable_all; x86_pmu.extra_regs = intel_nehalem_extra_regs; /* UOPS_ISSUED.STALLED_CYCLES */ intel_perfmon_event_map[PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = X86_CONFIG(.event=0x0e, .umask=0x01, .inv=1, .cmask=1); /* UOPS_EXECUTED.CORE_ACTIVE_CYCLES,c=1,i=1 */ intel_perfmon_event_map[PERF_COUNT_HW_STALLED_CYCLES_BACKEND] = X86_CONFIG(.event=0xb1, .umask=0x3f, .inv=1, .cmask=1); x86_add_quirk(intel_nehalem_quirk); pr_cont("Nehalem events, "); break; case 28: /* Atom */ case 38: /* Lincroft */ case 39: /* Penwell */ case 53: /* Cloverview */ case 54: /* Cedarview */ memcpy(hw_cache_event_ids, atom_hw_cache_event_ids, sizeof(hw_cache_event_ids)); intel_pmu_lbr_init_atom(); x86_pmu.event_constraints = intel_gen_event_constraints; x86_pmu.pebs_constraints = intel_atom_pebs_event_constraints; pr_cont("Atom events, "); break; case 37: /* 32 nm nehalem, 
"Clarkdale" */ case 44: /* 32 nm nehalem, "Gulftown" */ case 47: /* 32 nm Xeon E7 */ memcpy(hw_cache_event_ids, westmere_hw_cache_event_ids, sizeof(hw_cache_event_ids)); memcpy(hw_cache_extra_regs, nehalem_hw_cache_extra_regs, sizeof(hw_cache_extra_regs)); intel_pmu_lbr_init_nhm(); x86_pmu.event_constraints = intel_westmere_event_constraints; x86_pmu.enable_all = intel_pmu_nhm_enable_all; x86_pmu.pebs_constraints = intel_westmere_pebs_event_constraints; x86_pmu.extra_regs = intel_westmere_extra_regs; x86_pmu.er_flags |= ERF_HAS_RSP_1; /* UOPS_ISSUED.STALLED_CYCLES */ intel_perfmon_event_map[PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = X86_CONFIG(.event=0x0e, .umask=0x01, .inv=1, .cmask=1); /* UOPS_EXECUTED.CORE_ACTIVE_CYCLES,c=1,i=1 */ intel_perfmon_event_map[PERF_COUNT_HW_STALLED_CYCLES_BACKEND] = X86_CONFIG(.event=0xb1, .umask=0x3f, .inv=1, .cmask=1); pr_cont("Westmere events, "); break; case 42: /* SandyBridge */ case 45: /* SandyBridge, "Romely-EP" */ x86_add_quirk(intel_sandybridge_quirk); memcpy(hw_cache_event_ids, snb_hw_cache_event_ids, sizeof(hw_cache_event_ids)); memcpy(hw_cache_extra_regs, snb_hw_cache_extra_regs, sizeof(hw_cache_extra_regs)); intel_pmu_lbr_init_snb(); x86_pmu.event_constraints = intel_snb_event_constraints; x86_pmu.pebs_constraints = intel_snb_pebs_event_constraints; x86_pmu.pebs_aliases = intel_pebs_aliases_snb; x86_pmu.extra_regs = intel_snb_extra_regs; /* all extra regs are per-cpu when HT is on */ x86_pmu.er_flags |= ERF_HAS_RSP_1; x86_pmu.er_flags |= ERF_NO_HT_SHARING; /* UOPS_ISSUED.ANY,c=1,i=1 to count stall cycles */ intel_perfmon_event_map[PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = X86_CONFIG(.event=0x0e, .umask=0x01, .inv=1, .cmask=1); /* UOPS_DISPATCHED.THREAD,c=1,i=1 to count stall cycles*/ intel_perfmon_event_map[PERF_COUNT_HW_STALLED_CYCLES_BACKEND] = X86_CONFIG(.event=0xb1, .umask=0x01, .inv=1, .cmask=1); pr_cont("SandyBridge events, "); break; case 58: /* IvyBridge */ case 62: /* IvyBridge EP */ memcpy(hw_cache_event_ids, snb_hw_cache_event_ids, sizeof(hw_cache_event_ids)); memcpy(hw_cache_extra_regs, snb_hw_cache_extra_regs, sizeof(hw_cache_extra_regs)); intel_pmu_lbr_init_snb(); x86_pmu.event_constraints = intel_ivb_event_constraints; x86_pmu.pebs_constraints = intel_ivb_pebs_event_constraints; x86_pmu.pebs_aliases = intel_pebs_aliases_snb; x86_pmu.extra_regs = intel_snb_extra_regs; /* all extra regs are per-cpu when HT is on */ x86_pmu.er_flags |= ERF_HAS_RSP_1; x86_pmu.er_flags |= ERF_NO_HT_SHARING; /* UOPS_ISSUED.ANY,c=1,i=1 to count stall cycles */ intel_perfmon_event_map[PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = X86_CONFIG(.event=0x0e, .umask=0x01, .inv=1, .cmask=1); pr_cont("IvyBridge events, "); break; default: switch (x86_pmu.version) { case 1: x86_pmu.event_constraints = intel_v1_event_constraints; pr_cont("generic architected perfmon v1, "); break; default: /* * default constraints for v2 and up */ x86_pmu.event_constraints = intel_gen_event_constraints; pr_cont("generic architected perfmon, "); break; } } if (x86_pmu.num_counters > INTEL_PMC_MAX_GENERIC) { WARN(1, KERN_ERR "hw perf events %d > max(%d), clipping!", x86_pmu.num_counters, INTEL_PMC_MAX_GENERIC); x86_pmu.num_counters = INTEL_PMC_MAX_GENERIC; } x86_pmu.intel_ctrl = (1 << x86_pmu.num_counters) - 1; if (x86_pmu.num_counters_fixed > INTEL_PMC_MAX_FIXED) { WARN(1, KERN_ERR "hw perf events fixed %d > max(%d), clipping!", x86_pmu.num_counters_fixed, INTEL_PMC_MAX_FIXED); x86_pmu.num_counters_fixed = INTEL_PMC_MAX_FIXED; } x86_pmu.intel_ctrl |= ((1LL << x86_pmu.num_counters_fixed)-1) 
<< INTEL_PMC_IDX_FIXED; if (x86_pmu.event_constraints) { /* * event on fixed counter2 (REF_CYCLES) only works on this * counter, so do not extend mask to generic counters */ for_each_event_constraint(c, x86_pmu.event_constraints) { if (c->cmask != X86_RAW_EVENT_MASK || c->idxmsk64 == INTEL_PMC_MSK_FIXED_REF_CYCLES) { continue; } c->idxmsk64 |= (1ULL << x86_pmu.num_counters) - 1; c->weight += x86_pmu.num_counters; } } return 0; }
168,687,725,266,084,830,000,000,000,000,000,000,000
perf_event_intel.c
333,137,357,155,144,160,000,000,000,000,000,000,000
[ "CWE-20" ]
CVE-2013-2146
arch/x86/kernel/cpu/perf_event_intel.c in the Linux kernel before 3.8.9, when the Performance Events Subsystem is enabled, specifies an incorrect bitmask, which allows local users to cause a denial of service (general protection fault and system crash) by attempting to set a reserved bit.
https://nvd.nist.gov/vuln/detail/CVE-2013-2146
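The fix here is a table change rather than new control flow: the valid-bit masks for the OFFCORE_RESPONSE extra MSRs are spelled out per model so a write touching a reserved bit is rejected before it reaches the hardware, with a wider mask for the server parts because bits 24 to 30 exist only there. The sketch below only illustrates that idea; the macro names and mask values are placeholders, not the kernel's:

```c
/* Placeholder masks, not the real bit layout: the upstream patch enumerates
 * every defined request/response bit for SNB/IVB client and server parts. */
#define OFFCORE_RSP_VALID_CLIENT	0x000000000000ffffULL	/* hypothetical */
#define OFFCORE_RSP_VALID_SERVER	(OFFCORE_RSP_VALID_CLIENT | (0x7fULL << 24))

/* Conceptual check: a config1 value for offcore_response_0/1 is only
 * accepted if it stays inside the model's valid mask. */
static bool offcore_rsp_value_ok(u64 config1, bool server_part)
{
	u64 valid = server_part ? OFFCORE_RSP_VALID_SERVER
				: OFFCORE_RSP_VALID_CLIENT;

	return (config1 & ~valid) == 0;	/* a reserved bit set means reject the event */
}
```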
9,242
linux
8176cced706b5e5d15887584150764894e94e02f
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/8176cced706b5e5d15887584150764894e94e02f
perf: Treat attr.config as u64 in perf_swevent_init() Trinity discovered that we fail to check all 64 bits of attr.config passed by user space, resulting to out-of-bounds access of the perf_swevent_enabled array in sw_perf_event_destroy(). Introduced in commit b0a873ebb ("perf: Register PMU implementations"). Signed-off-by: Tommi Rantala <tt.rantala@gmail.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: davej@redhat.com Cc: Paul Mackerras <paulus@samba.org> Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net> Link: http://lkml.kernel.org/r/1365882554-30259-1-git-send-email-tt.rantala@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
1
static int perf_swevent_init(struct perf_event *event) { int event_id = event->attr.config; if (event->attr.type != PERF_TYPE_SOFTWARE) return -ENOENT; /* * no branch sampling for software events */ if (has_branch_stack(event)) return -EOPNOTSUPP; switch (event_id) { case PERF_COUNT_SW_CPU_CLOCK: case PERF_COUNT_SW_TASK_CLOCK: return -ENOENT; default: break; } if (event_id >= PERF_COUNT_SW_MAX) return -ENOENT; if (!event->parent) { int err; err = swevent_hlist_get(event); if (err) return err; static_key_slow_inc(&perf_swevent_enabled[event_id]); event->destroy = sw_perf_event_destroy; } return 0; }
19,870,798,287,114,217,000,000,000,000,000,000,000
core.c
25,070,996,430,960,593,000,000,000,000,000,000,000
[ "CWE-189" ]
CVE-2013-2094
The perf_swevent_init function in kernel/events/core.c in the Linux kernel before 3.8.9 uses an incorrect integer data type, which allows local users to gain privileges via a crafted perf_event_open system call.
https://nvd.nist.gov/vuln/detail/CVE-2013-2094
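The fix is a data-type change: attr.config is a u64, so truncating it into an int lets a high-bit value slip past the PERF_COUNT_SW_MAX check and later index perf_swevent_enabled[] out of bounds. A sketch of the corrected declaration and check:

```c
static int perf_swevent_init(struct perf_event *event)
{
	u64 event_id = event->attr.config;	/* was: int event_id - truncated the u64 */

	if (event->attr.type != PERF_TYPE_SOFTWARE)
		return -ENOENT;

	/* ... unchanged ... */

	if (event_id >= PERF_COUNT_SW_MAX)	/* now compares the full 64-bit value */
		return -ENOENT;

	/* ... unchanged ... */
	return 0;
}
```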
9,246
linux
5f00110f7273f9ff04ac69a5f85bb535a4fd0987
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/5f00110f7273f9ff04ac69a5f85bb535a4fd0987
tmpfs: fix use-after-free of mempolicy object The tmpfs remount logic preserves filesystem mempolicy if the mpol=M option is not specified in the remount request. A new policy can be specified if mpol=M is given. Before this patch remounting an mpol bound tmpfs without specifying mpol= mount option in the remount request would set the filesystem's mempolicy object to a freed mempolicy object. To reproduce the problem boot a DEBUG_PAGEALLOC kernel and run: # mkdir /tmp/x # mount -t tmpfs -o size=100M,mpol=interleave nodev /tmp/x # grep /tmp/x /proc/mounts nodev /tmp/x tmpfs rw,relatime,size=102400k,mpol=interleave:0-3 0 0 # mount -o remount,size=200M nodev /tmp/x # grep /tmp/x /proc/mounts nodev /tmp/x tmpfs rw,relatime,size=204800k,mpol=??? 0 0 # note ? garbage in mpol=... output above # dd if=/dev/zero of=/tmp/x/f count=1 # panic here Panic: BUG: unable to handle kernel NULL pointer dereference at (null) IP: [< (null)>] (null) [...] Oops: 0010 [#1] SMP DEBUG_PAGEALLOC Call Trace: mpol_shared_policy_init+0xa5/0x160 shmem_get_inode+0x209/0x270 shmem_mknod+0x3e/0xf0 shmem_create+0x18/0x20 vfs_create+0xb5/0x130 do_last+0x9a1/0xea0 path_openat+0xb3/0x4d0 do_filp_open+0x42/0xa0 do_sys_open+0xfe/0x1e0 compat_sys_open+0x1b/0x20 cstar_dispatch+0x7/0x1f Non-debug kernels will not crash immediately because referencing the dangling mpol will not cause a fault. Instead the filesystem will reference a freed mempolicy object, which will cause unpredictable behavior. The problem boils down to a dropped mpol reference below if shmem_parse_options() does not allocate a new mpol: config = *sbinfo shmem_parse_options(data, &config, true) mpol_put(sbinfo->mpol) sbinfo->mpol = config.mpol /* BUG: saves unreferenced mpol */ This patch avoids the crash by not releasing the mempolicy if shmem_parse_options() doesn't create a new mpol. How far back does this issue go? I see it in both 2.6.36 and 3.3. I did not look back further. Signed-off-by: Greg Thelen <gthelen@google.com> Acked-by: Hugh Dickins <hughd@google.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1
static int shmem_remount_fs(struct super_block *sb, int *flags, char *data) { struct shmem_sb_info *sbinfo = SHMEM_SB(sb); struct shmem_sb_info config = *sbinfo; unsigned long inodes; int error = -EINVAL; if (shmem_parse_options(data, &config, true)) return error; spin_lock(&sbinfo->stat_lock); inodes = sbinfo->max_inodes - sbinfo->free_inodes; if (percpu_counter_compare(&sbinfo->used_blocks, config.max_blocks) > 0) goto out; if (config.max_inodes < inodes) goto out; /* * Those tests disallow limited->unlimited while any are in use; * but we must separately disallow unlimited->limited, because * in that case we have no record of how much is already in use. */ if (config.max_blocks && !sbinfo->max_blocks) goto out; if (config.max_inodes && !sbinfo->max_inodes) goto out; error = 0; sbinfo->max_blocks = config.max_blocks; sbinfo->max_inodes = config.max_inodes; sbinfo->free_inodes = config.max_inodes - inodes; mpol_put(sbinfo->mpol); sbinfo->mpol = config.mpol; /* transfers initial ref */ out: spin_unlock(&sbinfo->stat_lock); return error; }
69,198,739,652,828,040,000,000,000,000,000,000,000
shmem.c
324,391,302,440,655,720,000,000,000,000,000,000,000
[ "CWE-399" ]
CVE-2013-1767
Use-after-free vulnerability in the shmem_remount_fs function in mm/shmem.c in the Linux kernel before 3.7.10 allows local users to gain privileges or cause a denial of service (system crash) by remounting a tmpfs filesystem without specifying a required mpol (aka mempolicy) mount option.
https://nvd.nist.gov/vuln/detail/CVE-2013-1767
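As the commit message explains, the crash comes from unconditionally dropping sbinfo->mpol and storing config.mpol even when the remount request carried no mpol= option. A sketch of the fixed tail of shmem_remount_fs, assuming config.mpol starts out NULL when no new policy was parsed:

```c
	error = 0;
	sbinfo->max_blocks  = config.max_blocks;
	sbinfo->max_inodes  = config.max_inodes;
	sbinfo->free_inodes = config.max_inodes - inodes;

	/*
	 * Preserve the previous mempolicy unless mpol= was given in the
	 * remount request; only then is there a new reference to transfer.
	 */
	if (config.mpol) {
		mpol_put(sbinfo->mpol);
		sbinfo->mpol = config.mpol;	/* transfers initial ref */
	}
out:
	spin_unlock(&sbinfo->stat_lock);
	return error;
```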
9,247
linux
6e601a53566d84e1ffd25e7b6fe0b6894ffd79c0
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/6e601a53566d84e1ffd25e7b6fe0b6894ffd79c0
sock_diag: Fix out-of-bounds access to sock_diag_handlers[] Userland can send a netlink message requesting SOCK_DIAG_BY_FAMILY with a family greater or equal then AF_MAX -- the array size of sock_diag_handlers[]. The current code does not test for this condition therefore is vulnerable to an out-of-bound access opening doors for a privilege escalation. Signed-off-by: Mathias Krause <minipli@googlemail.com> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
1
static int __sock_diag_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh) { int err; struct sock_diag_req *req = nlmsg_data(nlh); const struct sock_diag_handler *hndl; if (nlmsg_len(nlh) < sizeof(*req)) return -EINVAL; hndl = sock_diag_lock_handler(req->sdiag_family); if (hndl == NULL) err = -ENOENT; else err = hndl->dump(skb, nlh); sock_diag_unlock_handler(hndl); return err; }
81,691,601,596,043,680,000,000,000,000,000,000,000
sock_diag.c
5,227,707,971,742,808,000,000,000,000,000,000,000
[ "CWE-20" ]
CVE-2013-1763
Array index error in the __sock_diag_rcv_msg function in net/core/sock_diag.c in the Linux kernel before 3.7.10 allows local users to gain privileges via a large family value in a Netlink message.
https://nvd.nist.gov/vuln/detail/CVE-2013-1763
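The fix bounds the user-supplied address family before it is used to index sock_diag_handlers[]. A sketch of the added check:

```c
	if (nlmsg_len(nlh) < sizeof(*req))
		return -EINVAL;

	if (req->sdiag_family >= AF_MAX)	/* the fix: reject out-of-range families */
		return -EINVAL;

	hndl = sock_diag_lock_handler(req->sdiag_family);
```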
9,252
linux
43da5f2e0d0c69ded3d51907d9552310a6b545e8
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/43da5f2e0d0c69ded3d51907d9552310a6b545e8
net: fix info leak in compat dev_ifconf() The implementation of dev_ifconf() for the compat ioctl interface uses an intermediate ifc structure allocated in userland for the duration of the syscall. Though, it fails to initialize the padding bytes inserted for alignment and that for leaks four bytes of kernel stack. Add an explicit memset(0) before filling the structure to avoid the info leak. Signed-off-by: Mathias Krause <minipli@googlemail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
1
static int dev_ifconf(struct net *net, struct compat_ifconf __user *uifc32) { struct compat_ifconf ifc32; struct ifconf ifc; struct ifconf __user *uifc; struct compat_ifreq __user *ifr32; struct ifreq __user *ifr; unsigned int i, j; int err; if (copy_from_user(&ifc32, uifc32, sizeof(struct compat_ifconf))) return -EFAULT; if (ifc32.ifcbuf == 0) { ifc32.ifc_len = 0; ifc.ifc_len = 0; ifc.ifc_req = NULL; uifc = compat_alloc_user_space(sizeof(struct ifconf)); } else { size_t len = ((ifc32.ifc_len / sizeof(struct compat_ifreq)) + 1) * sizeof(struct ifreq); uifc = compat_alloc_user_space(sizeof(struct ifconf) + len); ifc.ifc_len = len; ifr = ifc.ifc_req = (void __user *)(uifc + 1); ifr32 = compat_ptr(ifc32.ifcbuf); for (i = 0; i < ifc32.ifc_len; i += sizeof(struct compat_ifreq)) { if (copy_in_user(ifr, ifr32, sizeof(struct compat_ifreq))) return -EFAULT; ifr++; ifr32++; } } if (copy_to_user(uifc, &ifc, sizeof(struct ifconf))) return -EFAULT; err = dev_ioctl(net, SIOCGIFCONF, uifc); if (err) return err; if (copy_from_user(&ifc, uifc, sizeof(struct ifconf))) return -EFAULT; ifr = ifc.ifc_req; ifr32 = compat_ptr(ifc32.ifcbuf); for (i = 0, j = 0; i + sizeof(struct compat_ifreq) <= ifc32.ifc_len && j < ifc.ifc_len; i += sizeof(struct compat_ifreq), j += sizeof(struct ifreq)) { if (copy_in_user(ifr32, ifr, sizeof(struct compat_ifreq))) return -EFAULT; ifr32++; ifr++; } if (ifc32.ifcbuf == 0) { /* Translate from 64-bit structure multiple to * a 32-bit one. */ i = ifc.ifc_len; i = ((i / sizeof(struct ifreq)) * sizeof(struct compat_ifreq)); ifc32.ifc_len = i; } else { ifc32.ifc_len = i; } if (copy_to_user(uifc32, &ifc32, sizeof(struct compat_ifconf))) return -EFAULT; return 0; }
260,748,882,966,970,700,000,000,000,000,000,000,000
socket.c
117,057,233,606,366,900,000,000,000,000,000,000,000
[ "CWE-200" ]
CVE-2012-6539
The dev_ifconf function in net/socket.c in the Linux kernel before 3.6 does not initialize a certain structure, which allows local users to obtain sensitive information from kernel stack memory via a crafted application.
https://nvd.nist.gov/vuln/detail/CVE-2012-6539
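The leak is the padding inside the on-stack struct ifconf that gets copied out to the userland scratch area; the fix zeroes the structure before it is filled. A sketch, assuming the memset sits at the top of the function:

```c
	struct compat_ifconf ifc32;
	struct ifconf ifc;
	/* ... */

	memset(&ifc, 0, sizeof(ifc));	/* the fix: clear padding before copy_to_user(uifc, &ifc, ...) */

	if (copy_from_user(&ifc32, uifc32, sizeof(struct compat_ifconf)))
		return -EFAULT;
```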
9,253
linux
4c87308bdea31a7b4828a51f6156e6f721a1fcc9
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/4c87308bdea31a7b4828a51f6156e6f721a1fcc9
xfrm_user: fix info leak in copy_to_user_auth() copy_to_user_auth() fails to initialize the remainder of alg_name and therefore discloses up to 54 bytes of heap memory via netlink to userland. Use strncpy() instead of strcpy() to fill the trailing bytes of alg_name with null bytes. Signed-off-by: Mathias Krause <minipli@googlemail.com> Acked-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: David S. Miller <davem@davemloft.net>
1
static int copy_to_user_auth(struct xfrm_algo_auth *auth, struct sk_buff *skb) { struct xfrm_algo *algo; struct nlattr *nla; nla = nla_reserve(skb, XFRMA_ALG_AUTH, sizeof(*algo) + (auth->alg_key_len + 7) / 8); if (!nla) return -EMSGSIZE; algo = nla_data(nla); strcpy(algo->alg_name, auth->alg_name); memcpy(algo->alg_key, auth->alg_key, (auth->alg_key_len + 7) / 8); algo->alg_key_len = auth->alg_key_len; return 0; }
16,551,793,964,917,886,000,000,000,000,000,000,000
xfrm_user.c
146,233,539,701,132,340,000,000,000,000,000,000,000
[ "CWE-200" ]
CVE-2012-6538
The copy_to_user_auth function in net/xfrm/xfrm_user.c in the Linux kernel before 3.6 uses an incorrect C library function for copying a string, which allows local users to obtain sensitive information from kernel heap memory by leveraging the CAP_NET_ADMIN capability.
https://nvd.nist.gov/vuln/detail/CVE-2012-6538
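The fix swaps strcpy() for strncpy() so the unused tail of the fixed-size alg_name buffer is filled with NUL bytes instead of leftover heap contents. A sketch of the changed copy:

```c
	algo = nla_data(nla);
	strncpy(algo->alg_name, auth->alg_name, sizeof(algo->alg_name));	/* pads the remainder with '\0' */
	memcpy(algo->alg_key, auth->alg_key, (auth->alg_key_len + 7) / 8);
	algo->alg_key_len = auth->alg_key_len;
```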
9,258
krb5
d1f707024f1d0af6e54a18885322d70fa15ec4d3
https://github.com/krb5/krb5
https://github.com/krb5/krb5/commit/d1f707024f1d0af6e54a18885322d70fa15ec4d3
Fix LDAP misused policy name crash [CVE-2014-5353] In krb5_ldap_get_password_policy_from_dn, if LDAP_SEARCH returns successfully with no results, return KRB5_KDB_NOENTRY instead of returning success with a zeroed-out policy object. This fixes a null dereference when an admin attempts to use an LDAP ticket policy name as a password policy name. CVE-2014-5353: In MIT krb5, when kadmind is configured to use LDAP for the KDC database, an authenticated remote attacker can cause a NULL dereference by attempting to use a named ticket policy object as a password policy for a principal. The attacker needs to be authenticated as a user who has the elevated privilege for setting password policy by adding or modifying principals. Queries to LDAP scoped to the krbPwdPolicy object class will correctly not return entries of other classes, such as ticket policy objects, but may return success with no returned elements if an object with the requested DN exists in a different object class. In this case, the routine to retrieve a password policy returned success with a password policy object that consisted entirely of zeroed memory. In particular, accesses to the policy name will dereference a NULL pointer. KDC operation does not access the policy name field, but most kadmin operations involving the principal with incorrect password policy will trigger the crash. Thanks to Patrik Kis for reporting this problem. CVSSv2 Vector: AV:N/AC:M/Au:S/C:N/I:N/A:C/E:H/RL:OF/RC:C [kaduk@mit.edu: CVE description and CVSS score] ticket: 8051 (new) target_version: 1.13.1 tags: pullup
1
krb5_ldap_get_password_policy_from_dn(krb5_context context, char *pol_name, char *pol_dn, osa_policy_ent_t *policy) { krb5_error_code st=0, tempst=0; LDAP *ld=NULL; LDAPMessage *result=NULL,*ent=NULL; kdb5_dal_handle *dal_handle=NULL; krb5_ldap_context *ldap_context=NULL; krb5_ldap_server_handle *ldap_server_handle=NULL; /* Clear the global error string */ krb5_clear_error_message(context); /* validate the input parameters */ if (pol_dn == NULL) return EINVAL; *policy = NULL; SETUP_CONTEXT(); GET_HANDLE(); *(policy) = (osa_policy_ent_t) malloc(sizeof(osa_policy_ent_rec)); if (*policy == NULL) { st = ENOMEM; goto cleanup; } memset(*policy, 0, sizeof(osa_policy_ent_rec)); LDAP_SEARCH(pol_dn, LDAP_SCOPE_BASE, "(objectclass=krbPwdPolicy)", password_policy_attributes); ent=ldap_first_entry(ld, result); if (ent != NULL) { if ((st = populate_policy(context, ld, ent, pol_name, *policy)) != 0) goto cleanup; } cleanup: ldap_msgfree(result); if (st != 0) { if (*policy != NULL) { krb5_ldap_free_password_policy(context, *policy); *policy = NULL; } } krb5_ldap_put_handle_to_pool(ldap_context, ldap_server_handle); return st; }
312,745,617,712,492,800,000,000,000,000,000,000,000
ldap_pwd_policy.c
258,732,680,614,208,330,000,000,000,000,000,000,000
[ "CWE-476" ]
CVE-2014-5353
The krb5_ldap_get_password_policy_from_dn function in plugins/kdb/ldap/libkdb_ldap/ldap_pwd_policy.c in MIT Kerberos 5 (aka krb5) before 1.13.1, when the KDC uses LDAP, allows remote authenticated users to cause a denial of service (daemon crash) via a successful LDAP query with no results, as demonstrated by using an incorrect object type for a password policy.
https://nvd.nist.gov/vuln/detail/CVE-2014-5353
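Per the commit message, the fix returns KRB5_KDB_NOENTRY when LDAP_SEARCH succeeds but yields no entry, instead of handing back a zero-filled policy object. A sketch of the changed lookup:

```c
    LDAP_SEARCH(pol_dn, LDAP_SCOPE_BASE, "(objectclass=krbPwdPolicy)",
                password_policy_attributes);

    ent = ldap_first_entry(ld, result);
    if (ent == NULL) {
        st = KRB5_KDB_NOENTRY;   /* the fix: no match means "no entry", not a zeroed policy */
        goto cleanup;
    }
    if ((st = populate_policy(context, ld, ent, pol_name, *policy)) != 0)
        goto cleanup;
```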
9,260
linux
07f4d9d74a04aa7c72c5dae0ef97565f28f17b92
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/07f4d9d74a04aa7c72c5dae0ef97565f28f17b92
ALSA: control: Protect user controls against concurrent access The user-control put and get handlers as well as the tlv do not protect against concurrent access from multiple threads. Since the state of the control is not updated atomically it is possible that either two write operations or a write and a read operation race against each other. Both can lead to arbitrary memory disclosure. This patch introduces a new lock that protects user-controls from concurrent access. Since applications typically access controls sequentially than in parallel a single lock per card should be fine. Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Acked-by: Jaroslav Kysela <perex@perex.cz> Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de>
1
static int snd_ctl_elem_user_tlv(struct snd_kcontrol *kcontrol, int op_flag, unsigned int size, unsigned int __user *tlv) { struct user_element *ue = kcontrol->private_data; int change = 0; void *new_data; if (op_flag > 0) { if (size > 1024 * 128) /* sane value */ return -EINVAL; new_data = memdup_user(tlv, size); if (IS_ERR(new_data)) return PTR_ERR(new_data); change = ue->tlv_data_size != size; if (!change) change = memcmp(ue->tlv_data, new_data, size); kfree(ue->tlv_data); ue->tlv_data = new_data; ue->tlv_data_size = size; } else { if (! ue->tlv_data_size || ! ue->tlv_data) return -ENXIO; if (size < ue->tlv_data_size) return -ENOSPC; if (copy_to_user(tlv, ue->tlv_data, ue->tlv_data_size)) return -EFAULT; } return change; }
95,140,620,080,426,880,000,000,000,000,000,000,000
None
null
[ "CWE-362" ]
CVE-2014-4652
Race condition in the tlv handler functionality in the snd_ctl_elem_user_tlv function in sound/core/control.c in the ALSA control implementation in the Linux kernel before 3.15.2 allows local users to obtain sensitive information from kernel memory by leveraging /dev/snd/controlCX access.
https://nvd.nist.gov/vuln/detail/CVE-2014-4652
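The fix serializes user-control accesses with a per-card mutex so concurrent put/get/tlv calls can never observe a half-updated element. The sketch below assumes a card-level lock reachable from the user_element; the field and lock names follow the commit description and are not verified against the tree:

```c
	if (op_flag > 0) {
		if (size > 1024 * 128)	/* sane value */
			return -EINVAL;

		new_data = memdup_user(tlv, size);
		if (IS_ERR(new_data))
			return PTR_ERR(new_data);

		mutex_lock(&ue->card->user_ctl_lock);	/* assumed lock protecting tlv_data/size */
		change = ue->tlv_data_size != size;
		if (!change)
			change = memcmp(ue->tlv_data, new_data, size);
		kfree(ue->tlv_data);
		ue->tlv_data = new_data;
		ue->tlv_data_size = size;
		mutex_unlock(&ue->card->user_ctl_lock);
	} else {
		int ret = 0;

		mutex_lock(&ue->card->user_ctl_lock);
		if (!ue->tlv_data_size || !ue->tlv_data)
			ret = -ENXIO;
		else if (size < ue->tlv_data_size)
			ret = -ENOSPC;
		else if (copy_to_user(tlv, ue->tlv_data, ue->tlv_data_size))
			ret = -EFAULT;
		mutex_unlock(&ue->card->user_ctl_lock);
		if (ret)
			return ret;
	}
```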
9,262
krb5
dc7ed55c689d57de7f7408b34631bf06fec9dab1
https://github.com/krb5/krb5
https://github.com/krb5/krb5/commit/dc7ed55c689d57de7f7408b34631bf06fec9dab1
Fix LDAP key data segmentation [CVE-2014-4345] For principal entries having keys with multiple kvnos (due to use of -keepold), the LDAP KDB module makes an attempt to store all the keys having the same kvno into a single krbPrincipalKey attribute value. There is a fencepost error in the loop, causing currkvno to be set to the just-processed value instead of the next kvno. As a result, the second and all following groups of multiple keys by kvno are each stored in two krbPrincipalKey attribute values. Fix the loop to use the correct kvno value. CVE-2014-4345: In MIT krb5, when kadmind is configured to use LDAP for the KDC database, an authenticated remote attacker can cause it to perform an out-of-bounds write (buffer overrun) by performing multiple cpw -keepold operations. An off-by-one error while copying key information to the new database entry results in keys sharing a common kvno being written to different array buckets, in an array whose size is determined by the number of kvnos present. After sufficient iterations, the extra writes extend past the end of the (NULL-terminated) array. The NULL terminator is always written after the end of the loop, so no out-of-bounds data is read, it is only written. Historically, it has been possible to convert an out-of-bounds write into remote code execution in some cases, though the necessary exploits must be tailored to the individual application and are usually quite complicated. Depending on the allocated length of the array, an out-of-bounds write may also cause a segmentation fault and/or application crash. CVSSv2 Vector: AV:N/AC:M/Au:S/C:C/I:C/A:C/E:POC/RL:OF/RC:C [ghudson@mit.edu: clarified commit message] [kaduk@mit.edu: CVE summary, CVSSv2 vector] (cherry picked from commit 81c332e29f10887c6b9deb065f81ba259f4c7e03) ticket: 7980 version_fixed: 1.12.2 status: resolved
1
krb5_encode_krbsecretkey(krb5_key_data *key_data_in, int n_key_data, krb5_kvno mkvno) { struct berval **ret = NULL; int currkvno; int num_versions = 1; int i, j, last; krb5_error_code err = 0; krb5_key_data *key_data; if (n_key_data <= 0) return NULL; /* Make a shallow copy of the key data so we can alter it. */ key_data = k5calloc(n_key_data, sizeof(*key_data), &err); if (key_data_in == NULL) goto cleanup; memcpy(key_data, key_data_in, n_key_data * sizeof(*key_data)); /* Unpatched krb5 1.11 and 1.12 cannot decode KrbKey sequences with no salt * field. For compatibility, always encode a salt field. */ for (i = 0; i < n_key_data; i++) { if (key_data[i].key_data_ver == 1) { key_data[i].key_data_ver = 2; key_data[i].key_data_type[1] = KRB5_KDB_SALTTYPE_NORMAL; key_data[i].key_data_length[1] = 0; key_data[i].key_data_contents[1] = NULL; } } /* Find the number of key versions */ for (i = 0; i < n_key_data - 1; i++) if (key_data[i].key_data_kvno != key_data[i + 1].key_data_kvno) num_versions++; ret = (struct berval **) calloc (num_versions + 1, sizeof (struct berval *)); if (ret == NULL) { err = ENOMEM; goto cleanup; } for (i = 0, last = 0, j = 0, currkvno = key_data[0].key_data_kvno; i < n_key_data; i++) { krb5_data *code; if (i == n_key_data - 1 || key_data[i + 1].key_data_kvno != currkvno) { ret[j] = k5alloc(sizeof(struct berval), &err); if (ret[j] == NULL) goto cleanup; err = asn1_encode_sequence_of_keys(key_data + last, (krb5_int16)i - last + 1, mkvno, &code); if (err) goto cleanup; /*CHECK_NULL(ret[j]); */ ret[j]->bv_len = code->length; ret[j]->bv_val = code->data; free(code); j++; last = i + 1; currkvno = key_data[i].key_data_kvno; } } ret[num_versions] = NULL; cleanup: free(key_data); if (err != 0) { if (ret != NULL) { for (i = 0; i <= num_versions; i++) if (ret[i] != NULL) free (ret[i]); free (ret); ret = NULL; } } return ret; }
106,082,474,012,218,960,000,000,000,000,000,000,000
ldap_principal2.c
270,009,798,351,028,000,000,000,000,000,000,000,000
[ "CWE-189" ]
CVE-2014-4345
Off-by-one error in the krb5_encode_krbsecretkey function in plugins/kdb/ldap/libkdb_ldap/ldap_principal2.c in the LDAP KDB module in kadmind in MIT Kerberos 5 (aka krb5) 1.6.x through 1.11.x before 1.11.6 and 1.12.x before 1.12.2 allows remote authenticated users to cause a denial of service (buffer overflow) or possibly execute arbitrary code via a series of "cpw -keepold" commands.
https://nvd.nist.gov/vuln/detail/CVE-2014-4345
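The off-by-one lives in the kvno bookkeeping at the end of the grouping loop: currkvno must advance to the next element's kvno rather than stay on the one just written. A sketch of the corrected loop tail:

```c
            ret[j]->bv_len = code->length;
            ret[j]->bv_val = code->data;
            free(code);
            j++;
            last = i + 1;

            if (i + 1 < n_key_data)                     /* the fix: track the *next* kvno */
                currkvno = key_data[i + 1].key_data_kvno;
```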
9,267
linux
a642fc305053cc1c6e47e4f4df327895747ab485
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/a642fc305053cc1c6e47e4f4df327895747ab485
kvm: vmx: handle invvpid vm exit gracefully On systems with invvpid instruction support (corresponding bit in IA32_VMX_EPT_VPID_CAP MSR is set) guest invocation of invvpid causes vm exit, which is currently not handled and results in propagation of unknown exit to userspace. Fix this by installing an invvpid vm exit handler. This is CVE-2014-3646. Cc: stable@vger.kernel.org Signed-off-by: Petr Matousek <pmatouse@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
1
static bool nested_vmx_exit_handled(struct kvm_vcpu *vcpu) { u32 intr_info = vmcs_read32(VM_EXIT_INTR_INFO); struct vcpu_vmx *vmx = to_vmx(vcpu); struct vmcs12 *vmcs12 = get_vmcs12(vcpu); u32 exit_reason = vmx->exit_reason; trace_kvm_nested_vmexit(kvm_rip_read(vcpu), exit_reason, vmcs_readl(EXIT_QUALIFICATION), vmx->idt_vectoring_info, intr_info, vmcs_read32(VM_EXIT_INTR_ERROR_CODE), KVM_ISA_VMX); if (vmx->nested.nested_run_pending) return 0; if (unlikely(vmx->fail)) { pr_info_ratelimited("%s failed vm entry %x\n", __func__, vmcs_read32(VM_INSTRUCTION_ERROR)); return 1; } switch (exit_reason) { case EXIT_REASON_EXCEPTION_NMI: if (!is_exception(intr_info)) return 0; else if (is_page_fault(intr_info)) return enable_ept; else if (is_no_device(intr_info) && !(vmcs12->guest_cr0 & X86_CR0_TS)) return 0; return vmcs12->exception_bitmap & (1u << (intr_info & INTR_INFO_VECTOR_MASK)); case EXIT_REASON_EXTERNAL_INTERRUPT: return 0; case EXIT_REASON_TRIPLE_FAULT: return 1; case EXIT_REASON_PENDING_INTERRUPT: return nested_cpu_has(vmcs12, CPU_BASED_VIRTUAL_INTR_PENDING); case EXIT_REASON_NMI_WINDOW: return nested_cpu_has(vmcs12, CPU_BASED_VIRTUAL_NMI_PENDING); case EXIT_REASON_TASK_SWITCH: return 1; case EXIT_REASON_CPUID: if (kvm_register_read(vcpu, VCPU_REGS_RAX) == 0xa) return 0; return 1; case EXIT_REASON_HLT: return nested_cpu_has(vmcs12, CPU_BASED_HLT_EXITING); case EXIT_REASON_INVD: return 1; case EXIT_REASON_INVLPG: return nested_cpu_has(vmcs12, CPU_BASED_INVLPG_EXITING); case EXIT_REASON_RDPMC: return nested_cpu_has(vmcs12, CPU_BASED_RDPMC_EXITING); case EXIT_REASON_RDTSC: return nested_cpu_has(vmcs12, CPU_BASED_RDTSC_EXITING); case EXIT_REASON_VMCALL: case EXIT_REASON_VMCLEAR: case EXIT_REASON_VMLAUNCH: case EXIT_REASON_VMPTRLD: case EXIT_REASON_VMPTRST: case EXIT_REASON_VMREAD: case EXIT_REASON_VMRESUME: case EXIT_REASON_VMWRITE: case EXIT_REASON_VMOFF: case EXIT_REASON_VMON: case EXIT_REASON_INVEPT: /* * VMX instructions trap unconditionally. This allows L1 to * emulate them for its L2 guest, i.e., allows 3-level nesting! */ return 1; case EXIT_REASON_CR_ACCESS: return nested_vmx_exit_handled_cr(vcpu, vmcs12); case EXIT_REASON_DR_ACCESS: return nested_cpu_has(vmcs12, CPU_BASED_MOV_DR_EXITING); case EXIT_REASON_IO_INSTRUCTION: return nested_vmx_exit_handled_io(vcpu, vmcs12); case EXIT_REASON_MSR_READ: case EXIT_REASON_MSR_WRITE: return nested_vmx_exit_handled_msr(vcpu, vmcs12, exit_reason); case EXIT_REASON_INVALID_STATE: return 1; case EXIT_REASON_MWAIT_INSTRUCTION: return nested_cpu_has(vmcs12, CPU_BASED_MWAIT_EXITING); case EXIT_REASON_MONITOR_INSTRUCTION: return nested_cpu_has(vmcs12, CPU_BASED_MONITOR_EXITING); case EXIT_REASON_PAUSE_INSTRUCTION: return nested_cpu_has(vmcs12, CPU_BASED_PAUSE_EXITING) || nested_cpu_has2(vmcs12, SECONDARY_EXEC_PAUSE_LOOP_EXITING); case EXIT_REASON_MCE_DURING_VMENTRY: return 0; case EXIT_REASON_TPR_BELOW_THRESHOLD: return nested_cpu_has(vmcs12, CPU_BASED_TPR_SHADOW); case EXIT_REASON_APIC_ACCESS: return nested_cpu_has2(vmcs12, SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES); case EXIT_REASON_EPT_VIOLATION: /* * L0 always deals with the EPT violation. If nested EPT is * used, and the nested mmu code discovers that the address is * missing in the guest EPT table (EPT12), the EPT violation * will be injected with nested_ept_inject_page_fault() */ return 0; case EXIT_REASON_EPT_MISCONFIG: /* * L2 never uses directly L1's EPT, but rather L0's own EPT * table (shadow on EPT) or a merged EPT table that L0 built * (EPT on EPT). 
So any problems with the structure of the * table is L0's fault. */ return 0; case EXIT_REASON_WBINVD: return nested_cpu_has2(vmcs12, SECONDARY_EXEC_WBINVD_EXITING); case EXIT_REASON_XSETBV: return 1; default: return 1; } }
25,917,898,868,493,210,000,000,000,000,000,000,000
vmx.c
276,794,831,116,419,260,000,000,000,000,000,000,000
[ "CWE-264" ]
CVE-2014-3646
arch/x86/kvm/vmx.c in the KVM subsystem in the Linux kernel through 3.17.2 does not have an exit handler for the INVVPID instruction, which allows guest OS users to cause a denial of service (guest OS crash) via a crafted application.
https://nvd.nist.gov/vuln/detail/CVE-2014-3646
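The fix installs an exit handler for INVVPID so the exit is no longer reported to userspace as unknown; since nested VPID support is not exposed to the guest, the handler simply injects #UD, and nested_vmx_exit_handled learns the new reason alongside the other VMX instructions. A sketch of the added pieces (placement in the kvm_vmx_exit_handlers table assumed):

```c
/* Guest executed INVVPID: VPID is not exposed to the guest, so treat it
 * like any other unsupported VMX instruction and inject #UD. */
static int handle_invvpid(struct kvm_vcpu *vcpu)
{
	kvm_queue_exception(vcpu, UD_VECTOR);
	return 1;
}

/* ... in kvm_vmx_exit_handlers[]: */
	[EXIT_REASON_INVVPID] = handle_invvpid,

/* ... and in nested_vmx_exit_handled(), INVVPID joins the group of VMX
 * instructions that always reflect to L1: */
	case EXIT_REASON_INVEPT:
	case EXIT_REASON_INVVPID:
		/* VMX instructions trap unconditionally; L1 emulates them for L2. */
		return 1;
```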
9,268
linux
854e8bb1aa06c578c2c9145fa6bfe3680ef63b23
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/854e8bb1aa06c578c2c9145fa6bfe3680ef63b23
KVM: x86: Check non-canonical addresses upon WRMSR Upon WRMSR, the CPU should inject #GP if a non-canonical value (address) is written to certain MSRs. The behavior is "almost" identical for AMD and Intel (ignoring MSRs that are not implemented in either architecture since they would anyhow #GP). However, IA32_SYSENTER_ESP and IA32_SYSENTER_EIP cause #GP if non-canonical address is written on Intel but not on AMD (which ignores the top 32-bits). Accordingly, this patch injects a #GP on the MSRs which behave identically on Intel and AMD. To eliminate the differences between the architecutres, the value which is written to IA32_SYSENTER_ESP and IA32_SYSENTER_EIP is turned to canonical value before writing instead of injecting a #GP. Some references from Intel and AMD manuals: According to Intel SDM description of WRMSR instruction #GP is expected on WRMSR "If the source register contains a non-canonical address and ECX specifies one of the following MSRs: IA32_DS_AREA, IA32_FS_BASE, IA32_GS_BASE, IA32_KERNEL_GS_BASE, IA32_LSTAR, IA32_SYSENTER_EIP, IA32_SYSENTER_ESP." According to AMD manual instruction manual: LSTAR/CSTAR (SYSCALL): "The WRMSR instruction loads the target RIP into the LSTAR and CSTAR registers. If an RIP written by WRMSR is not in canonical form, a general-protection exception (#GP) occurs." IA32_GS_BASE and IA32_FS_BASE (WRFSBASE/WRGSBASE): "The address written to the base field must be in canonical form or a #GP fault will occur." IA32_KERNEL_GS_BASE (SWAPGS): "The address stored in the KernelGSbase MSR must be in canonical form." This patch fixes CVE-2014-3610. Cc: stable@vger.kernel.org Signed-off-by: Nadav Amit <namit@cs.technion.ac.il> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
1
static int wrmsr_interception(struct vcpu_svm *svm) { struct msr_data msr; u32 ecx = svm->vcpu.arch.regs[VCPU_REGS_RCX]; u64 data = (svm->vcpu.arch.regs[VCPU_REGS_RAX] & -1u) | ((u64)(svm->vcpu.arch.regs[VCPU_REGS_RDX] & -1u) << 32); msr.data = data; msr.index = ecx; msr.host_initiated = false; svm->next_rip = kvm_rip_read(&svm->vcpu) + 2; if (svm_set_msr(&svm->vcpu, &msr)) { trace_kvm_msr_write_ex(ecx, data); kvm_inject_gp(&svm->vcpu, 0); } else { trace_kvm_msr_write(ecx, data); skip_emulated_instruction(&svm->vcpu); } return 1; }
142,189,173,362,958,500,000,000,000,000,000,000,000
svm.c
123,664,188,883,176,400,000,000,000,000,000,000,000
[ "CWE-264" ]
CVE-2014-3610
The WRMSR processing functionality in the KVM subsystem in the Linux kernel through 3.17.2 does not properly handle the writing of a non-canonical address to a model-specific register, which allows guest OS users to cause a denial of service (host OS crash) by leveraging guest OS privileges, related to the wrmsr_interception function in arch/x86/kvm/svm.c and the handle_wrmsr function in arch/x86/kvm/vmx.c.
https://nvd.nist.gov/vuln/detail/CVE-2014-3610
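The fix lands in the common MSR path rather than in wrmsr_interception() itself: kvm_set_msr() rejects non-canonical addresses for the MSRs where both vendors raise #GP, and canonicalizes SYSENTER_EIP/ESP where Intel and AMD differ. A sketch of that common check, assuming helpers along the lines of is_noncanonical_address()/get_canonical():

```c
int kvm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
{
	switch (msr->index) {
	case MSR_FS_BASE:
	case MSR_GS_BASE:
	case MSR_KERNEL_GS_BASE:
	case MSR_CSTAR:
	case MSR_LSTAR:
		if (is_noncanonical_address(msr->data))
			return 1;	/* caller injects #GP, as wrmsr_interception() already does */
		break;
	case MSR_IA32_SYSENTER_EIP:
	case MSR_IA32_SYSENTER_ESP:
		/* Intel #GPs here, AMD ignores the top bits: write a canonical value instead. */
		msr->data = get_canonical(msr->data);
	}
	return kvm_x86_ops->set_msr(vcpu, msr);
}
```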
9,271
linux
05ab8f2647e4221cbdb3856dd7d32bd5407316b3
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/05ab8f2647e4221cbdb3856dd7d32bd5407316b3
filter: prevent nla extensions to peek beyond the end of the message The BPF_S_ANC_NLATTR and BPF_S_ANC_NLATTR_NEST extensions fail to check for a minimal message length before testing the supplied offset to be within the bounds of the message. This allows the subtraction of the nla header to underflow and therefore -- as the data type is unsigned -- allowing far to big offset and length values for the search of the netlink attribute. The remainder calculation for the BPF_S_ANC_NLATTR_NEST extension is also wrong. It has the minuend and subtrahend mixed up, therefore calculates a huge length value, allowing to overrun the end of the message while looking for the netlink attribute. The following three BPF snippets will trigger the bugs when attached to a UNIX datagram socket and parsing a message with length 1, 2 or 3. ,-[ PoC for missing size check in BPF_S_ANC_NLATTR ]-- | ld #0x87654321 | ldx #42 | ld #nla | ret a `--- ,-[ PoC for the same bug in BPF_S_ANC_NLATTR_NEST ]-- | ld #0x87654321 | ldx #42 | ld #nlan | ret a `--- ,-[ PoC for wrong remainder calculation in BPF_S_ANC_NLATTR_NEST ]-- | ; (needs a fake netlink header at offset 0) | ld #0 | ldx #42 | ld #nlan | ret a `--- Fix the first issue by ensuring the message length fulfills the minimal size constrains of a nla header. Fix the second bug by getting the math for the remainder calculation right. Fixes: 4738c1db15 ("[SKFILTER]: Add SKF_ADF_NLATTR instruction") Fixes: d214c7537b ("filter: add SKF_AD_NLATTR_NEST to look for nested..") Cc: Patrick McHardy <kaber@trash.net> Cc: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Mathias Krause <minipli@googlemail.com> Acked-by: Daniel Borkmann <dborkman@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
1
static u64 __skb_get_nlattr(u64 ctx, u64 A, u64 X, u64 r4, u64 r5) { struct sk_buff *skb = (struct sk_buff *)(long) ctx; struct nlattr *nla; if (skb_is_nonlinear(skb)) return 0; if (A > skb->len - sizeof(struct nlattr)) return 0; nla = nla_find((struct nlattr *) &skb->data[A], skb->len - A, X); if (nla) return (void *) nla - (void *) skb->data; return 0; }
257,641,762,135,738,580,000,000,000,000,000,000,000
filter.c
228,552,301,451,129,950,000,000,000,000,000,000,000
[ "CWE-189" ]
CVE-2014-3144
The (1) BPF_S_ANC_NLATTR and (2) BPF_S_ANC_NLATTR_NEST extension implementations in the sk_run_filter function in net/core/filter.c in the Linux kernel through 3.14.3 do not check whether a certain length value is sufficiently large, which allows local users to cause a denial of service (integer underflow and system crash) via crafted BPF instructions. NOTE: the affected code was moved to the __skb_get_nlattr and __skb_get_nlattr_nest functions before the vulnerability was announced.
https://nvd.nist.gov/vuln/detail/CVE-2014-3144
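The underflow happens when skb->len is smaller than sizeof(struct nlattr): the unsigned subtraction wraps and the bound check on A passes. The fix adds a minimal-length test first (the NEST variant additionally gets its remainder computation corrected). A sketch for __skb_get_nlattr:

```c
	if (skb_is_nonlinear(skb))
		return 0;

	if (skb->len < sizeof(struct nlattr))	/* the fix: avoid unsigned underflow below */
		return 0;

	if (A > skb->len - sizeof(struct nlattr))
		return 0;
```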
9,275
linux
2172fa709ab32ca60e86179dc67d0857be8e2c98
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/2172fa709ab32ca60e86179dc67d0857be8e2c98
SELinux: Fix kernel BUG on empty security contexts. Setting an empty security context (length=0) on a file will lead to incorrectly dereferencing the type and other fields of the security context structure, yielding a kernel BUG. As a zero-length security context is never valid, just reject all such security contexts whether coming from userspace via setxattr or coming from the filesystem upon a getxattr request by SELinux. Setting a security context value (empty or otherwise) unknown to SELinux in the first place is only possible for a root process (CAP_MAC_ADMIN), and, if running SELinux in enforcing mode, only if the corresponding SELinux mac_admin permission is also granted to the domain by policy. In Fedora policies, this is only allowed for specific domains such as livecd for setting down security contexts that are not defined in the build host policy. Reproducer: su setenforce 0 touch foo setfattr -n security.selinux foo Caveat: Relabeling or removing foo after doing the above may not be possible without booting with SELinux disabled. Any subsequent access to foo after doing the above will also trigger the BUG. BUG output from Matthew Thode: [ 473.893141] ------------[ cut here ]------------ [ 473.962110] kernel BUG at security/selinux/ss/services.c:654! [ 473.995314] invalid opcode: 0000 [#6] SMP [ 474.027196] Modules linked in: [ 474.058118] CPU: 0 PID: 8138 Comm: ls Tainted: G D I 3.13.0-grsec #1 [ 474.116637] Hardware name: Supermicro X8ST3/X8ST3, BIOS 2.0 07/29/10 [ 474.149768] task: ffff8805f50cd010 ti: ffff8805f50cd488 task.ti: ffff8805f50cd488 [ 474.183707] RIP: 0010:[<ffffffff814681c7>] [<ffffffff814681c7>] context_struct_compute_av+0xce/0x308 [ 474.219954] RSP: 0018:ffff8805c0ac3c38 EFLAGS: 00010246 [ 474.252253] RAX: 0000000000000000 RBX: ffff8805c0ac3d94 RCX: 0000000000000100 [ 474.287018] RDX: ffff8805e8aac000 RSI: 00000000ffffffff RDI: ffff8805e8aaa000 [ 474.321199] RBP: ffff8805c0ac3cb8 R08: 0000000000000010 R09: 0000000000000006 [ 474.357446] R10: 0000000000000000 R11: ffff8805c567a000 R12: 0000000000000006 [ 474.419191] R13: ffff8805c2b74e88 R14: 00000000000001da R15: 0000000000000000 [ 474.453816] FS: 00007f2e75220800(0000) GS:ffff88061fc00000(0000) knlGS:0000000000000000 [ 474.489254] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 474.522215] CR2: 00007f2e74716090 CR3: 00000005c085e000 CR4: 00000000000207f0 [ 474.556058] Stack: [ 474.584325] ffff8805c0ac3c98 ffffffff811b549b ffff8805c0ac3c98 ffff8805f1190a40 [ 474.618913] ffff8805a6202f08 ffff8805c2b74e88 00068800d0464990 ffff8805e8aac860 [ 474.653955] ffff8805c0ac3cb8 000700068113833a ffff880606c75060 ffff8805c0ac3d94 [ 474.690461] Call Trace: [ 474.723779] [<ffffffff811b549b>] ? lookup_fast+0x1cd/0x22a [ 474.778049] [<ffffffff81468824>] security_compute_av+0xf4/0x20b [ 474.811398] [<ffffffff8196f419>] avc_compute_av+0x2a/0x179 [ 474.843813] [<ffffffff8145727b>] avc_has_perm+0x45/0xf4 [ 474.875694] [<ffffffff81457d0e>] inode_has_perm+0x2a/0x31 [ 474.907370] [<ffffffff81457e76>] selinux_inode_getattr+0x3c/0x3e [ 474.938726] [<ffffffff81455cf6>] security_inode_getattr+0x1b/0x22 [ 474.970036] [<ffffffff811b057d>] vfs_getattr+0x19/0x2d [ 475.000618] [<ffffffff811b05e5>] vfs_fstatat+0x54/0x91 [ 475.030402] [<ffffffff811b063b>] vfs_lstat+0x19/0x1b [ 475.061097] [<ffffffff811b077e>] SyS_newlstat+0x15/0x30 [ 475.094595] [<ffffffff8113c5c1>] ? 
__audit_syscall_entry+0xa1/0xc3 [ 475.148405] [<ffffffff8197791e>] system_call_fastpath+0x16/0x1b [ 475.179201] Code: 00 48 85 c0 48 89 45 b8 75 02 0f 0b 48 8b 45 a0 48 8b 3d 45 d0 b6 00 8b 40 08 89 c6 ff ce e8 d1 b0 06 00 48 85 c0 49 89 c7 75 02 <0f> 0b 48 8b 45 b8 4c 8b 28 eb 1e 49 8d 7d 08 be 80 01 00 00 e8 [ 475.255884] RIP [<ffffffff814681c7>] context_struct_compute_av+0xce/0x308 [ 475.296120] RSP <ffff8805c0ac3c38> [ 475.328734] ---[ end trace f076482e9d754adc ]--- Reported-by: Matthew Thode <mthode@mthode.org> Signed-off-by: Stephen Smalley <sds@tycho.nsa.gov> Cc: stable@vger.kernel.org Signed-off-by: Paul Moore <pmoore@redhat.com>
1
static int security_context_to_sid_core(const char *scontext, u32 scontext_len, u32 *sid, u32 def_sid, gfp_t gfp_flags, int force) { char *scontext2, *str = NULL; struct context context; int rc = 0; if (!ss_initialized) { int i; for (i = 1; i < SECINITSID_NUM; i++) { if (!strcmp(initial_sid_to_string[i], scontext)) { *sid = i; return 0; } } *sid = SECINITSID_KERNEL; return 0; } *sid = SECSID_NULL; /* Copy the string so that we can modify the copy as we parse it. */ scontext2 = kmalloc(scontext_len + 1, gfp_flags); if (!scontext2) return -ENOMEM; memcpy(scontext2, scontext, scontext_len); scontext2[scontext_len] = 0; if (force) { /* Save another copy for storing in uninterpreted form */ rc = -ENOMEM; str = kstrdup(scontext2, gfp_flags); if (!str) goto out; } read_lock(&policy_rwlock); rc = string_to_context_struct(&policydb, &sidtab, scontext2, scontext_len, &context, def_sid); if (rc == -EINVAL && force) { context.str = str; context.len = scontext_len; str = NULL; } else if (rc) goto out_unlock; rc = sidtab_context_to_sid(&sidtab, &context, sid); context_destroy(&context); out_unlock: read_unlock(&policy_rwlock); out: kfree(scontext2); kfree(str); return rc; }
25,979,071,260,218,318,000,000,000,000,000,000,000
services.c
171,658,659,208,984,150,000,000,000,000,000,000,000
[ "CWE-20" ]
CVE-2014-1874
The security_context_to_sid_core function in security/selinux/ss/services.c in the Linux kernel before 3.13.4 allows local users to cause a denial of service (system crash) by leveraging the CAP_MAC_ADMIN capability to set a zero-length security context.
https://nvd.nist.gov/vuln/detail/CVE-2014-1874
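Per the commit message, a zero-length context is never valid, so the fix rejects it up front before any parsing. A sketch of the added guard at the top of security_context_to_sid_core:

```c
	/* An empty security context is never valid. */
	if (!scontext_len)
		return -EINVAL;

	if (!ss_initialized) {
		/* ... unchanged ... */
	}
```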
9,276
linux
ef87dbe7614341c2e7bfe8d32fcb7028cc97442c
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/ef87dbe7614341c2e7bfe8d32fcb7028cc97442c
floppy: ignore kernel-only members in FDRAWCMD ioctl input Always clear out these floppy_raw_cmd struct members after copying the entire structure from userspace so that the in-kernel version is always valid and never left in an interdeterminate state. Signed-off-by: Matthew Daley <mattd@bugfuzz.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1
static int raw_cmd_copyin(int cmd, void __user *param, struct floppy_raw_cmd **rcmd) { struct floppy_raw_cmd *ptr; int ret; int i; *rcmd = NULL; loop: ptr = kmalloc(sizeof(struct floppy_raw_cmd), GFP_USER); if (!ptr) return -ENOMEM; *rcmd = ptr; ret = copy_from_user(ptr, param, sizeof(*ptr)); if (ret) return -EFAULT; ptr->next = NULL; ptr->buffer_length = 0; param += sizeof(struct floppy_raw_cmd); if (ptr->cmd_count > 33) /* the command may now also take up the space * initially intended for the reply & the * reply count. Needed for long 82078 commands * such as RESTORE, which takes ... 17 command * bytes. Murphy's law #137: When you reserve * 16 bytes for a structure, you'll one day * discover that you really need 17... */ return -EINVAL; for (i = 0; i < 16; i++) ptr->reply[i] = 0; ptr->resultcode = 0; ptr->kernel_data = NULL; if (ptr->flags & (FD_RAW_READ | FD_RAW_WRITE)) { if (ptr->length <= 0) return -EINVAL; ptr->kernel_data = (char *)fd_dma_mem_alloc(ptr->length); fallback_on_nodma_alloc(&ptr->kernel_data, ptr->length); if (!ptr->kernel_data) return -ENOMEM; ptr->buffer_length = ptr->length; } if (ptr->flags & FD_RAW_WRITE) { ret = fd_copyin(ptr->data, ptr->kernel_data, ptr->length); if (ret) return ret; } if (ptr->flags & FD_RAW_MORE) { rcmd = &(ptr->next); ptr->rate &= 0x43; goto loop; } return 0; }
116,582,411,855,757,030,000,000,000,000,000,000,000
floppy.c
12,289,325,460,874,707,000,000,000,000,000,000,000
[ "CWE-264" ]
CVE-2014-1737
The raw_cmd_copyin function in drivers/block/floppy.c in the Linux kernel through 3.14.3 does not properly handle error conditions during processing of an FDRAWCMD ioctl call, which allows local users to trigger kfree operations and gain privileges by leveraging write access to a /dev/fd device.
https://nvd.nist.gov/vuln/detail/CVE-2014-1737
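The fix clears the kernel-only members of the copied floppy_raw_cmd immediately after copy_from_user(), before the error path can leave them holding user-controlled values. The exact set of members and ordering below follows the commit description and may not match the upstream diff line for line:

```c
	ret = copy_from_user(ptr, param, sizeof(*ptr));

	/* Sanitize kernel-only members regardless of copy_from_user()'s result,
	 * so *rcmd never points at a struct with user-controlled kernel fields. */
	ptr->next = NULL;
	ptr->buffer_length = 0;
	ptr->kernel_data = NULL;

	if (ret)
		return -EFAULT;
```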
9,277
torque
3ed749263abe3d69fa3626d142a5789dcb5a5684
https://github.com/adaptivecomputing/torque
https://github.com/adaptivecomputing/torque/commit/3ed749263abe3d69fa3626d142a5789dcb5a5684
Merge pull request #171 into 2.5-fixes.
1
int disrsi_( int stream, int *negate, unsigned *value, unsigned count) { int c; unsigned locval; unsigned ndigs; char *cp; char scratch[DIS_BUFSIZ+1]; assert(negate != NULL); assert(value != NULL); assert(count); assert(stream >= 0); assert(dis_getc != NULL); assert(dis_gets != NULL); memset(scratch, 0, DIS_BUFSIZ+1); if (dis_umaxd == 0) disiui_(); switch (c = (*dis_getc)(stream)) { case '-': case '+': *negate = c == '-'; if ((*dis_gets)(stream, scratch, count) != (int)count) { return(DIS_EOD); } if (count >= dis_umaxd) { if (count > dis_umaxd) goto overflow; if (memcmp(scratch, dis_umax, dis_umaxd) > 0) goto overflow; } cp = scratch; locval = 0; do { if (((c = *cp++) < '0') || (c > '9')) { return(DIS_NONDIGIT); } locval = 10 * locval + c - '0'; } while (--count); *value = locval; return (DIS_SUCCESS); break; case '0': return (DIS_LEADZRO); break; case '1': case '2': case '3': case '4': case '5': case '6': case '7': case '8': case '9': ndigs = c - '0'; if (count > 1) { if ((*dis_gets)(stream, scratch + 1, count - 1) != (int)count - 1) { return(DIS_EOD); } cp = scratch; if (count >= dis_umaxd) { if (count > dis_umaxd) break; *cp = c; if (memcmp(scratch, dis_umax, dis_umaxd) > 0) break; } while (--count) { if (((c = *++cp) < '0') || (c > '9')) { return(DIS_NONDIGIT); } ndigs = 10 * ndigs + c - '0'; } } /* END if (count > 1) */ return(disrsi_(stream, negate, value, ndigs)); /*NOTREACHED*/ break; case - 1: return(DIS_EOD); /*NOTREACHED*/ break; case -2: return(DIS_EOF); /*NOTREACHED*/ break; default: return(DIS_NONDIGIT); /*NOTREACHED*/ break; } *negate = FALSE; overflow: *value = UINT_MAX; return(DIS_OVERFLOW); } /* END disrsi_() */
119,468,526,822,998,800,000,000,000,000,000,000,000
None
null
[ "CWE-119" ]
CVE-2014-0749
Stack-based buffer overflow in lib/Libdis/disrsi_.c in Terascale Open-Source Resource and Queue Manager (aka TORQUE Resource Manager) 2.5.x through 2.5.13 allows remote attackers to execute arbitrary code via a large count value.
https://nvd.nist.gov/vuln/detail/CVE-2014-0749
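The commit message of this record is only a merge note, but the CVE description and the listing make the problem visible: in the '+'/'-' branch, (*dis_gets)(stream, scratch, count) fills the on-stack scratch[DIS_BUFSIZ+1] buffer with a caller-supplied count before any size check, so a large count overruns the stack (the later read into scratch + 1 in the digit branch needs the same bound). A minimal sketch of the missing check, assuming the constants from the listing; the concrete error code returned by the real fix may differ.

    /* Reject digit counts that cannot fit the on-stack buffer before
     * reading anything into it. */
    if (count > DIS_BUFSIZ)
        return (DIS_OVERFLOW);
    if ((*dis_gets)(stream, scratch, count) != (int)count)
        return (DIS_EOD);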
9,279
linux
edfbbf388f293d70bf4b7c0bc38774d05e6f711a
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/edfbbf388f293d70bf4b7c0bc38774d05e6f711a
aio: fix kernel memory disclosure in io_getevents() introduced in v3.10 A kernel memory disclosure was introduced in aio_read_events_ring() in v3.10 by commit a31ad380bed817aa25f8830ad23e1a0480fef797. The changes made to aio_read_events_ring() failed to correctly limit the index into ctx->ring_pages[], allowing an attacker to cause the subsequent kmap() of an arbitrary page with a copy_to_user() to copy the contents into userspace. This vulnerability has been assigned CVE-2014-0206. Thanks to Mateusz and Petr for disclosing this issue. This patch applies to v3.12+. A separate backport is needed for 3.10/3.11. Signed-off-by: Benjamin LaHaise <bcrl@kvack.org> Cc: Mateusz Guzik <mguzik@redhat.com> Cc: Petr Matousek <pmatouse@redhat.com> Cc: Kent Overstreet <kmo@daterainc.com> Cc: Jeff Moyer <jmoyer@redhat.com> Cc: stable@vger.kernel.org
1
static long aio_read_events_ring(struct kioctx *ctx, struct io_event __user *event, long nr) { struct aio_ring *ring; unsigned head, tail, pos; long ret = 0; int copy_ret; mutex_lock(&ctx->ring_lock); /* Access to ->ring_pages here is protected by ctx->ring_lock. */ ring = kmap_atomic(ctx->ring_pages[0]); head = ring->head; tail = ring->tail; kunmap_atomic(ring); pr_debug("h%u t%u m%u\n", head, tail, ctx->nr_events); if (head == tail) goto out; while (ret < nr) { long avail; struct io_event *ev; struct page *page; avail = (head <= tail ? tail : ctx->nr_events) - head; if (head == tail) break; avail = min(avail, nr - ret); avail = min_t(long, avail, AIO_EVENTS_PER_PAGE - ((head + AIO_EVENTS_OFFSET) % AIO_EVENTS_PER_PAGE)); pos = head + AIO_EVENTS_OFFSET; page = ctx->ring_pages[pos / AIO_EVENTS_PER_PAGE]; pos %= AIO_EVENTS_PER_PAGE; ev = kmap(page); copy_ret = copy_to_user(event + ret, ev + pos, sizeof(*ev) * avail); kunmap(page); if (unlikely(copy_ret)) { ret = -EFAULT; goto out; } ret += avail; head += avail; head %= ctx->nr_events; } ring = kmap_atomic(ctx->ring_pages[0]); ring->head = head; kunmap_atomic(ring); flush_dcache_page(ctx->ring_pages[0]); pr_debug("%li h%u t%u\n", ret, head, tail); out: mutex_unlock(&ctx->ring_lock); return ret; }
175,931,741,530,477,530,000,000,000,000,000,000,000
aio.c
192,927,533,272,323,800,000,000,000,000,000,000,000
[ "CWE-200" ]
CVE-2014-0206
Array index error in the aio_read_events_ring function in fs/aio.c in the Linux kernel through 3.15.1 allows local users to obtain sensitive information from kernel memory via a large head value.
https://nvd.nist.gov/vuln/detail/CVE-2014-0206
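The commit message above pins the bug to an unconstrained index into ctx->ring_pages[]: head is read from the shared aio_ring page and used to pick a ring page before it is ever reduced modulo ctx->nr_events. A minimal sketch of the constraint, using the variable names from the listing; whether the real patch also clamps tail at the same point is an assumption of this sketch.

    head = ring->head;
    tail = ring->tail;
    kunmap_atomic(ring);

    /* Values from the shared ring page are untrusted: wrap them into the
     * valid event range before they index ctx->ring_pages[]. */
    head %= ctx->nr_events;
    tail %= ctx->nr_events;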
9,281
linux
1fd819ecb90cc9b822cd84d3056ddba315d3340f
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/1fd819ecb90cc9b822cd84d3056ddba315d3340f
skbuff: skb_segment: orphan frags before copying skb_segment copies frags around, so we need to copy them carefully to avoid accessing user memory after reporting completion to userspace through a callback. skb_segment doesn't normally happen on datapath: TSO needs to be disabled - so disabling zero copy in this case does not look like a big deal. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: David S. Miller <davem@davemloft.net>
1
struct sk_buff *skb_segment(struct sk_buff *head_skb, netdev_features_t features) { struct sk_buff *segs = NULL; struct sk_buff *tail = NULL; struct sk_buff *list_skb = skb_shinfo(head_skb)->frag_list; skb_frag_t *frag = skb_shinfo(head_skb)->frags; unsigned int mss = skb_shinfo(head_skb)->gso_size; unsigned int doffset = head_skb->data - skb_mac_header(head_skb); unsigned int offset = doffset; unsigned int tnl_hlen = skb_tnl_header_len(head_skb); unsigned int headroom; unsigned int len; __be16 proto; bool csum; int sg = !!(features & NETIF_F_SG); int nfrags = skb_shinfo(head_skb)->nr_frags; int err = -ENOMEM; int i = 0; int pos; proto = skb_network_protocol(head_skb); if (unlikely(!proto)) return ERR_PTR(-EINVAL); csum = !!can_checksum_protocol(features, proto); __skb_push(head_skb, doffset); headroom = skb_headroom(head_skb); pos = skb_headlen(head_skb); do { struct sk_buff *nskb; skb_frag_t *nskb_frag; int hsize; int size; len = head_skb->len - offset; if (len > mss) len = mss; hsize = skb_headlen(head_skb) - offset; if (hsize < 0) hsize = 0; if (hsize > len || !sg) hsize = len; if (!hsize && i >= nfrags && skb_headlen(list_skb) && (skb_headlen(list_skb) == len || sg)) { BUG_ON(skb_headlen(list_skb) > len); i = 0; nfrags = skb_shinfo(list_skb)->nr_frags; frag = skb_shinfo(list_skb)->frags; pos += skb_headlen(list_skb); while (pos < offset + len) { BUG_ON(i >= nfrags); size = skb_frag_size(frag); if (pos + size > offset + len) break; i++; pos += size; frag++; } nskb = skb_clone(list_skb, GFP_ATOMIC); list_skb = list_skb->next; if (unlikely(!nskb)) goto err; if (unlikely(pskb_trim(nskb, len))) { kfree_skb(nskb); goto err; } hsize = skb_end_offset(nskb); if (skb_cow_head(nskb, doffset + headroom)) { kfree_skb(nskb); goto err; } nskb->truesize += skb_end_offset(nskb) - hsize; skb_release_head_state(nskb); __skb_push(nskb, doffset); } else { nskb = __alloc_skb(hsize + doffset + headroom, GFP_ATOMIC, skb_alloc_rx_flag(head_skb), NUMA_NO_NODE); if (unlikely(!nskb)) goto err; skb_reserve(nskb, headroom); __skb_put(nskb, doffset); } if (segs) tail->next = nskb; else segs = nskb; tail = nskb; __copy_skb_header(nskb, head_skb); nskb->mac_len = head_skb->mac_len; skb_headers_offset_update(nskb, skb_headroom(nskb) - headroom); skb_copy_from_linear_data_offset(head_skb, -tnl_hlen, nskb->data - tnl_hlen, doffset + tnl_hlen); if (nskb->len == len + doffset) goto perform_csum_check; if (!sg) { nskb->ip_summed = CHECKSUM_NONE; nskb->csum = skb_copy_and_csum_bits(head_skb, offset, skb_put(nskb, len), len, 0); continue; } nskb_frag = skb_shinfo(nskb)->frags; skb_copy_from_linear_data_offset(head_skb, offset, skb_put(nskb, hsize), hsize); skb_shinfo(nskb)->tx_flags = skb_shinfo(head_skb)->tx_flags & SKBTX_SHARED_FRAG; while (pos < offset + len) { if (i >= nfrags) { BUG_ON(skb_headlen(list_skb)); i = 0; nfrags = skb_shinfo(list_skb)->nr_frags; frag = skb_shinfo(list_skb)->frags; BUG_ON(!nfrags); list_skb = list_skb->next; } if (unlikely(skb_shinfo(nskb)->nr_frags >= MAX_SKB_FRAGS)) { net_warn_ratelimited( "skb_segment: too many frags: %u %u\n", pos, mss); goto err; } *nskb_frag = *frag; __skb_frag_ref(nskb_frag); size = skb_frag_size(nskb_frag); if (pos < offset) { nskb_frag->page_offset += offset - pos; skb_frag_size_sub(nskb_frag, offset - pos); } skb_shinfo(nskb)->nr_frags++; if (pos + size <= offset + len) { i++; frag++; pos += size; } else { skb_frag_size_sub(nskb_frag, pos + size - (offset + len)); goto skip_fraglist; } nskb_frag++; } skip_fraglist: nskb->data_len = len - hsize; nskb->len += 
nskb->data_len; nskb->truesize += nskb->data_len; perform_csum_check: if (!csum) { nskb->csum = skb_checksum(nskb, doffset, nskb->len - doffset, 0); nskb->ip_summed = CHECKSUM_NONE; } } while ((offset += len) < head_skb->len); return segs; err: kfree_skb_list(segs); return ERR_PTR(err); }
183,173,230,648,926,970,000,000,000,000,000,000,000
skbuff.c
97,493,718,567,580,850,000,000,000,000,000,000,000
[ "CWE-416" ]
CVE-2014-0131
Use-after-free vulnerability in the skb_segment function in net/core/skbuff.c in the Linux kernel through 3.13.6 allows attackers to obtain sensitive information from kernel memory by leveraging the absence of a certain orphaning operation.
https://nvd.nist.gov/vuln/detail/CVE-2014-0131
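The fix the commit message describes ("orphan frags before copying") amounts to detaching any userspace zero-copy references from the skb that currently supplies the fragments before those fragments are duplicated into the new segment. A minimal sketch placed just before the frag copy in the listing; frag_skb is an illustrative name for whichever skb (head_skb or the current fraglist member) owns the frag being copied, and skb_orphan_frags() is the existing helper the commit title refers to.

    /* Give up zero-copy user references on the source skb before its
     * fragments are shared with the new segment; drop on failure. */
    if (unlikely(skb_orphan_frags(frag_skb, GFP_ATOMIC)))
        goto err;

    *nskb_frag = *frag;
    __skb_frag_ref(nskb_frag);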
9,282
linux
ec0223ec48a90cb605244b45f7c62de856403729
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/ec0223ec48a90cb605244b45f7c62de856403729
net: sctp: fix sctp_sf_do_5_1D_ce to verify if we/peer is AUTH capable RFC4895 introduced AUTH chunks for SCTP; during the SCTP handshake RANDOM; CHUNKS; HMAC-ALGO are negotiated (CHUNKS being optional though): ---------- INIT[RANDOM; CHUNKS; HMAC-ALGO] ----------> <------- INIT-ACK[RANDOM; CHUNKS; HMAC-ALGO] --------- -------------------- COOKIE-ECHO --------------------> <-------------------- COOKIE-ACK --------------------- A special case is when an endpoint requires COOKIE-ECHO chunks to be authenticated: ---------- INIT[RANDOM; CHUNKS; HMAC-ALGO] ----------> <------- INIT-ACK[RANDOM; CHUNKS; HMAC-ALGO] --------- ------------------ AUTH; COOKIE-ECHO ----------------> <-------------------- COOKIE-ACK --------------------- RFC4895, section 6.3. Receiving Authenticated Chunks says: The receiver MUST use the HMAC algorithm indicated in the HMAC Identifier field. If this algorithm was not specified by the receiver in the HMAC-ALGO parameter in the INIT or INIT-ACK chunk during association setup, the AUTH chunk and all the chunks after it MUST be discarded and an ERROR chunk SHOULD be sent with the error cause defined in Section 4.1. [...] If no endpoint pair shared key has been configured for that Shared Key Identifier, all authenticated chunks MUST be silently discarded. [...] When an endpoint requires COOKIE-ECHO chunks to be authenticated, some special procedures have to be followed because the reception of a COOKIE-ECHO chunk might result in the creation of an SCTP association. If a packet arrives containing an AUTH chunk as a first chunk, a COOKIE-ECHO chunk as the second chunk, and possibly more chunks after them, and the receiver does not have an STCB for that packet, then authentication is based on the contents of the COOKIE-ECHO chunk. In this situation, the receiver MUST authenticate the chunks in the packet by using the RANDOM parameters, CHUNKS parameters and HMAC_ALGO parameters obtained from the COOKIE-ECHO chunk, and possibly a local shared secret as inputs to the authentication procedure specified in Section 6.3. If authentication fails, then the packet is discarded. If the authentication is successful, the COOKIE-ECHO and all the chunks after the COOKIE-ECHO MUST be processed. If the receiver has an STCB, it MUST process the AUTH chunk as described above using the STCB from the existing association to authenticate the COOKIE-ECHO chunk and all the chunks after it. [...] Commit bbd0d59809f9 introduced the possibility to receive and verification of AUTH chunk, including the edge case for authenticated COOKIE-ECHO. On reception of COOKIE-ECHO, the function sctp_sf_do_5_1D_ce() handles processing, unpacks and creates a new association if it passed sanity checks and also tests for authentication chunks being present. After a new association has been processed, it invokes sctp_process_init() on the new association and walks through the parameter list it received from the INIT chunk. It checks SCTP_PARAM_RANDOM, SCTP_PARAM_HMAC_ALGO and SCTP_PARAM_CHUNKS, and copies them into asoc->peer meta data (peer_random, peer_hmacs, peer_chunks) in case sysctl -w net.sctp.auth_enable=1 is set. If in INIT's SCTP_PARAM_SUPPORTED_EXT parameter SCTP_CID_AUTH is set, peer_random != NULL and peer_hmacs != NULL the peer is to be assumed asoc->peer.auth_capable=1, in any other case asoc->peer.auth_capable=0. 
Now, if in sctp_sf_do_5_1D_ce() chunk->auth_chunk is available, we set up a fake auth chunk and pass that on to sctp_sf_authenticate(), which at latest in sctp_auth_calculate_hmac() reliably dereferences a NULL pointer at position 0..0008 when setting up the crypto key in crypto_hash_setkey() by using asoc->asoc_shared_key that is NULL as condition key_id == asoc->active_key_id is true if the AUTH chunk was injected correctly from remote. This happens no matter what net.sctp.auth_enable sysctl says. The fix is to check for net->sctp.auth_enable and for asoc->peer.auth_capable before doing any operations like sctp_sf_authenticate() as no key is activated in sctp_auth_asoc_init_active_key() for each case. Now as RFC4895 section 6.3 states that if the used HMAC-ALGO passed from the INIT chunk was not used in the AUTH chunk, we SHOULD send an error; however in this case it would be better to just silently discard such a maliciously prepared handshake as we didn't even receive a parameter at all. Also, as our endpoint has no shared key configured, section 6.3 says that MUST silently discard, which we are doing from now onwards. Before calling sctp_sf_pdiscard(), we need not only to free the association, but also the chunk->auth_chunk skb, as commit bbd0d59809f9 created a skb clone in that case. I have tested this locally by using netfilter's nfqueue and re-injecting packets into the local stack after maliciously modifying the INIT chunk (removing RANDOM; HMAC-ALGO param) and the SCTP packet containing the COOKIE_ECHO (injecting AUTH chunk before COOKIE_ECHO). Fixed with this patch applied. Fixes: bbd0d59809f9 ("[SCTP]: Implement the receive and verification of AUTH chunk") Signed-off-by: Daniel Borkmann <dborkman@redhat.com> Cc: Vlad Yasevich <yasevich@gmail.com> Cc: Neil Horman <nhorman@tuxdriver.com> Acked-by: Vlad Yasevich <vyasevich@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
1
sctp_disposition_t sctp_sf_do_5_1D_ce(struct net *net, const struct sctp_endpoint *ep, const struct sctp_association *asoc, const sctp_subtype_t type, void *arg, sctp_cmd_seq_t *commands) { struct sctp_chunk *chunk = arg; struct sctp_association *new_asoc; sctp_init_chunk_t *peer_init; struct sctp_chunk *repl; struct sctp_ulpevent *ev, *ai_ev = NULL; int error = 0; struct sctp_chunk *err_chk_p; struct sock *sk; /* If the packet is an OOTB packet which is temporarily on the * control endpoint, respond with an ABORT. */ if (ep == sctp_sk(net->sctp.ctl_sock)->ep) { SCTP_INC_STATS(net, SCTP_MIB_OUTOFBLUES); return sctp_sf_tabort_8_4_8(net, ep, asoc, type, arg, commands); } /* Make sure that the COOKIE_ECHO chunk has a valid length. * In this case, we check that we have enough for at least a * chunk header. More detailed verification is done * in sctp_unpack_cookie(). */ if (!sctp_chunk_length_valid(chunk, sizeof(sctp_chunkhdr_t))) return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands); /* If the endpoint is not listening or if the number of associations * on the TCP-style socket exceed the max backlog, respond with an * ABORT. */ sk = ep->base.sk; if (!sctp_sstate(sk, LISTENING) || (sctp_style(sk, TCP) && sk_acceptq_is_full(sk))) return sctp_sf_tabort_8_4_8(net, ep, asoc, type, arg, commands); /* "Decode" the chunk. We have no optional parameters so we * are in good shape. */ chunk->subh.cookie_hdr = (struct sctp_signed_cookie *)chunk->skb->data; if (!pskb_pull(chunk->skb, ntohs(chunk->chunk_hdr->length) - sizeof(sctp_chunkhdr_t))) goto nomem; /* 5.1 D) Upon reception of the COOKIE ECHO chunk, Endpoint * "Z" will reply with a COOKIE ACK chunk after building a TCB * and moving to the ESTABLISHED state. */ new_asoc = sctp_unpack_cookie(ep, asoc, chunk, GFP_ATOMIC, &error, &err_chk_p); /* FIXME: * If the re-build failed, what is the proper error path * from here? * * [We should abort the association. --piggy] */ if (!new_asoc) { /* FIXME: Several errors are possible. A bad cookie should * be silently discarded, but think about logging it too. */ switch (error) { case -SCTP_IERROR_NOMEM: goto nomem; case -SCTP_IERROR_STALE_COOKIE: sctp_send_stale_cookie_err(net, ep, asoc, chunk, commands, err_chk_p); return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands); case -SCTP_IERROR_BAD_SIG: default: return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands); } } /* Delay state machine commands until later. * * Re-build the bind address for the association is done in * the sctp_unpack_cookie() already. */ /* This is a brand-new association, so these are not yet side * effects--it is safe to run them here. */ peer_init = &chunk->subh.cookie_hdr->c.peer_init[0]; if (!sctp_process_init(new_asoc, chunk, &chunk->subh.cookie_hdr->c.peer_addr, peer_init, GFP_ATOMIC)) goto nomem_init; /* SCTP-AUTH: Now that we've populate required fields in * sctp_process_init, set up the assocaition shared keys as * necessary so that we can potentially authenticate the ACK */ error = sctp_auth_asoc_init_active_key(new_asoc, GFP_ATOMIC); if (error) goto nomem_init; /* SCTP-AUTH: auth_chunk pointer is only set when the cookie-echo * is supposed to be authenticated and we have to do delayed * authentication. We've just recreated the association using * the information in the cookie and now it's much easier to * do the authentication. 
*/ if (chunk->auth_chunk) { struct sctp_chunk auth; sctp_ierror_t ret; /* set-up our fake chunk so that we can process it */ auth.skb = chunk->auth_chunk; auth.asoc = chunk->asoc; auth.sctp_hdr = chunk->sctp_hdr; auth.chunk_hdr = (sctp_chunkhdr_t *)skb_push(chunk->auth_chunk, sizeof(sctp_chunkhdr_t)); skb_pull(chunk->auth_chunk, sizeof(sctp_chunkhdr_t)); auth.transport = chunk->transport; ret = sctp_sf_authenticate(net, ep, new_asoc, type, &auth); /* We can now safely free the auth_chunk clone */ kfree_skb(chunk->auth_chunk); if (ret != SCTP_IERROR_NO_ERROR) { sctp_association_free(new_asoc); return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands); } } repl = sctp_make_cookie_ack(new_asoc, chunk); if (!repl) goto nomem_init; /* RFC 2960 5.1 Normal Establishment of an Association * * D) IMPLEMENTATION NOTE: An implementation may choose to * send the Communication Up notification to the SCTP user * upon reception of a valid COOKIE ECHO chunk. */ ev = sctp_ulpevent_make_assoc_change(new_asoc, 0, SCTP_COMM_UP, 0, new_asoc->c.sinit_num_ostreams, new_asoc->c.sinit_max_instreams, NULL, GFP_ATOMIC); if (!ev) goto nomem_ev; /* Sockets API Draft Section 5.3.1.6 * When a peer sends a Adaptation Layer Indication parameter , SCTP * delivers this notification to inform the application that of the * peers requested adaptation layer. */ if (new_asoc->peer.adaptation_ind) { ai_ev = sctp_ulpevent_make_adaptation_indication(new_asoc, GFP_ATOMIC); if (!ai_ev) goto nomem_aiev; } /* Add all the state machine commands now since we've created * everything. This way we don't introduce memory corruptions * during side-effect processing and correclty count established * associations. */ sctp_add_cmd_sf(commands, SCTP_CMD_NEW_ASOC, SCTP_ASOC(new_asoc)); sctp_add_cmd_sf(commands, SCTP_CMD_NEW_STATE, SCTP_STATE(SCTP_STATE_ESTABLISHED)); SCTP_INC_STATS(net, SCTP_MIB_CURRESTAB); SCTP_INC_STATS(net, SCTP_MIB_PASSIVEESTABS); sctp_add_cmd_sf(commands, SCTP_CMD_HB_TIMERS_START, SCTP_NULL()); if (new_asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE]) sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_START, SCTP_TO(SCTP_EVENT_TIMEOUT_AUTOCLOSE)); /* This will send the COOKIE ACK */ sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(repl)); /* Queue the ASSOC_CHANGE event */ sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP, SCTP_ULPEVENT(ev)); /* Send up the Adaptation Layer Indication event */ if (ai_ev) sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP, SCTP_ULPEVENT(ai_ev)); return SCTP_DISPOSITION_CONSUME; nomem_aiev: sctp_ulpevent_free(ev); nomem_ev: sctp_chunk_free(repl); nomem_init: sctp_association_free(new_asoc); nomem: return SCTP_DISPOSITION_NOMEM; }
212,126,296,819,801,930,000,000,000,000,000,000,000
sm_statefuns.c
46,353,273,935,311,330,000,000,000,000,000,000,000
[ "CWE-20" ]
CVE-2014-0101
The sctp_sf_do_5_1D_ce function in net/sctp/sm_statefuns.c in the Linux kernel through 3.13.6 does not validate certain auth_enable and auth_capable fields before making an sctp_sf_authenticate call, which allows remote attackers to cause a denial of service (NULL pointer dereference and system crash) via an SCTP handshake with a modified INIT chunk and a crafted AUTH chunk before a COOKIE_ECHO chunk.
https://nvd.nist.gov/vuln/detail/CVE-2014-0101
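Per the commit message, the missing step in the listing above is a capability check before the delayed authentication of an AUTH-protected COOKIE-ECHO: if AUTH was never negotiated (no RANDOM/HMAC-ALGO parameters, auth not enabled), no association key exists and sctp_sf_authenticate() ends up dereferencing NULL. A minimal sketch of the guard at the top of the chunk->auth_chunk block, using the names from the listing; the exact ordering of the frees follows this sketch, not necessarily the upstream diff.

    if (chunk->auth_chunk) {
        /* Silently discard if we or the peer are not AUTH capable:
         * no shared key was ever set up for this association. */
        if (!net->sctp.auth_enable || !new_asoc->peer.auth_capable) {
            kfree_skb(chunk->auth_chunk);
            sctp_association_free(new_asoc);
            return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands);
        }
        /* ... existing fake-chunk setup and sctp_sf_authenticate() ... */
    }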
9,283
linux
d8316f3991d207fe32881a9ac20241be8fa2bad0
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/d8316f3991d207fe32881a9ac20241be8fa2bad0
vhost: fix total length when packets are too short When mergeable buffers are disabled, and the incoming packet is too large for the rx buffer, get_rx_bufs returns success. This was intentional in order to make recvmsg truncate the packet, after which handle_rx would detect err != sock_len and drop it. Unfortunately we pass the original sock_len to recvmsg - which means we use parts of iov not fully validated. Fix this up by detecting this overrun and doing packet drop immediately. CVE-2014-0077 Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
1
static int get_rx_bufs(struct vhost_virtqueue *vq, struct vring_used_elem *heads, int datalen, unsigned *iovcount, struct vhost_log *log, unsigned *log_num, unsigned int quota) { unsigned int out, in; int seg = 0; int headcount = 0; unsigned d; int r, nlogs = 0; while (datalen > 0 && headcount < quota) { if (unlikely(seg >= UIO_MAXIOV)) { r = -ENOBUFS; goto err; } d = vhost_get_vq_desc(vq->dev, vq, vq->iov + seg, ARRAY_SIZE(vq->iov) - seg, &out, &in, log, log_num); if (d == vq->num) { r = 0; goto err; } if (unlikely(out || in <= 0)) { vq_err(vq, "unexpected descriptor format for RX: " "out %d, in %d\n", out, in); r = -EINVAL; goto err; } if (unlikely(log)) { nlogs += *log_num; log += *log_num; } heads[headcount].id = d; heads[headcount].len = iov_length(vq->iov + seg, in); datalen -= heads[headcount].len; ++headcount; seg += in; } heads[headcount - 1].len += datalen; *iovcount = seg; if (unlikely(log)) *log_num = nlogs; return headcount; err: vhost_discard_vq_desc(vq, headcount); return r; }
91,983,981,836,047,390,000,000,000,000,000,000,000
net.c
340,076,013,592,583,200,000,000,000,000,000,000,000
[ "CWE-20" ]
CVE-2014-0077
drivers/vhost/net.c in the Linux kernel before 3.13.10, when mergeable buffers are disabled, does not properly validate packet lengths, which allows guest OS users to cause a denial of service (memory corruption and host OS crash) or possibly gain privileges on the host OS via crafted packets, related to the handle_rx and get_rx_bufs functions.
https://nvd.nist.gov/vuln/detail/CVE-2014-0077
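The overrun the commit message talks about is visible at the end of get_rx_bufs() in the listing: when the descriptor loop exits with datalen still positive, the packet did not fit, yet the last head is silently inflated and handle_rx() goes on to call recvmsg() with the full sock_len over iov entries that were never validated. A minimal sketch of the early drop; the sentinel used to tell the caller "drop this packet" is an assumption of the sketch, not necessarily the value the upstream patch uses.

    /* Detect overrun: the available buffers were too small for the packet. */
    if (unlikely(datalen > 0)) {
        r = UIO_MAXIOV + 1;   /* out-of-range headcount => caller drops the packet */
        goto err;
    }
    /* datalen <= 0 here, so this only trims the last buffer down to the
     * true packet length. */
    heads[headcount - 1].len += datalen;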
9,288
linux
0305cd5f7fca85dae392b9ba85b116896eb7c1c7
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/0305cd5f7fca85dae392b9ba85b116896eb7c1c7
Btrfs: fix truncation of compressed and inlined extents When truncating a file to a smaller size which consists of an inline extent that is compressed, we did not discard (or made unusable) the data between the new file size and the old file size, wasting metadata space and allowing for the truncated data to be leaked and the data corruption/loss mentioned below. We were also not correctly decrementing the number of bytes used by the inode, we were setting it to zero, giving a wrong report for callers of the stat(2) syscall. The fsck tool also reported an error about a mismatch between the nbytes of the file versus the real space used by the file. Now because we weren't discarding the truncated region of the file, it was possible for a caller of the clone ioctl to actually read the data that was truncated, allowing for a security breach without requiring root access to the system, using only standard filesystem operations. The scenario is the following: 1) User A creates a file which consists of an inline and compressed extent with a size of 2000 bytes - the file is not accessible to any other users (no read, write or execution permission for anyone else); 2) The user truncates the file to a size of 1000 bytes; 3) User A makes the file world readable; 4) User B creates a file consisting of an inline extent of 2000 bytes; 5) User B issues a clone operation from user A's file into its own file (using a length argument of 0, clone the whole range); 6) User B now gets to see the 1000 bytes that user A truncated from its file before it made its file world readbale. User B also lost the bytes in the range [1000, 2000[ bytes from its own file, but that might be ok if his/her intention was reading stale data from user A that was never supposed to be public. Note that this contrasts with the case where we truncate a file from 2000 bytes to 1000 bytes and then truncate it back from 1000 to 2000 bytes. In this case reading any byte from the range [1000, 2000[ will return a value of 0x00, instead of the original data. This problem exists since the clone ioctl was added and happens both with and without my recent data loss and file corruption fixes for the clone ioctl (patch "Btrfs: fix file corruption and data loss after cloning inline extents"). So fix this by truncating the compressed inline extents as we do for the non-compressed case, which involves decompressing, if the data isn't already in the page cache, compressing the truncated version of the extent, writing the compressed content into the inline extent and then truncate it. The following test case for fstests reproduces the problem. In order for the test to pass both this fix and my previous fix for the clone ioctl that forbids cloning a smaller inline extent into a larger one, which is titled "Btrfs: fix file corruption and data loss after cloning inline extents", are needed. Without that other fix the test fails in a different way that does not leak the truncated data, instead part of destination file gets replaced with zeroes (because the destination file has a larger inline extent than the source). seq=`basename $0` seqres=$RESULT_DIR/$seq echo "QA output created by $seq" tmp=/tmp/$$ status=1 # failure is the default! trap "_cleanup; exit \$status" 0 1 2 3 15 _cleanup() { rm -f $tmp.* } # get standard environment, filters and checks . ./common/rc . 
./common/filter # real QA test starts here _need_to_be_root _supported_fs btrfs _supported_os Linux _require_scratch _require_cloner rm -f $seqres.full _scratch_mkfs >>$seqres.full 2>&1 _scratch_mount "-o compress" # Create our test files. File foo is going to be the source of a clone operation # and consists of a single inline extent with an uncompressed size of 512 bytes, # while file bar consists of a single inline extent with an uncompressed size of # 256 bytes. For our test's purpose, it's important that file bar has an inline # extent with a size smaller than foo's inline extent. $XFS_IO_PROG -f -c "pwrite -S 0xa1 0 128" \ -c "pwrite -S 0x2a 128 384" \ $SCRATCH_MNT/foo | _filter_xfs_io $XFS_IO_PROG -f -c "pwrite -S 0xbb 0 256" $SCRATCH_MNT/bar | _filter_xfs_io # Now durably persist all metadata and data. We do this to make sure that we get # on disk an inline extent with a size of 512 bytes for file foo. sync # Now truncate our file foo to a smaller size. Because it consists of a # compressed and inline extent, btrfs did not shrink the inline extent to the # new size (if the extent was not compressed, btrfs would shrink it to 128 # bytes), it only updates the inode's i_size to 128 bytes. $XFS_IO_PROG -c "truncate 128" $SCRATCH_MNT/foo # Now clone foo's inline extent into bar. # This clone operation should fail with errno EOPNOTSUPP because the source # file consists only of an inline extent and the file's size is smaller than # the inline extent of the destination (128 bytes < 256 bytes). However the # clone ioctl was not prepared to deal with a file that has a size smaller # than the size of its inline extent (something that happens only for compressed # inline extents), resulting in copying the full inline extent from the source # file into the destination file. # # Note that btrfs' clone operation for inline extents consists of removing the # inline extent from the destination inode and copy the inline extent from the # source inode into the destination inode, meaning that if the destination # inode's inline extent is larger (N bytes) than the source inode's inline # extent (M bytes), some bytes (N - M bytes) will be lost from the destination # file. Btrfs could copy the source inline extent's data into the destination's # inline extent so that we would not lose any data, but that's currently not # done due to the complexity that would be needed to deal with such cases # (specially when one or both extents are compressed), returning EOPNOTSUPP, as # it's normally not a very common case to clone very small files (only case # where we get inline extents) and copying inline extents does not save any # space (unlike for normal, non-inlined extents). $CLONER_PROG -s 0 -d 0 -l 0 $SCRATCH_MNT/foo $SCRATCH_MNT/bar # Now because the above clone operation used to succeed, and due to foo's inline # extent not being shinked by the truncate operation, our file bar got the whole # inline extent copied from foo, making us lose the last 128 bytes from bar # which got replaced by the bytes in range [128, 256[ from foo before foo was # truncated - in other words, data loss from bar and being able to read old and # stale data from foo that should not be possible to read anymore through normal # filesystem operations. Contrast with the case where we truncate a file from a # size N to a smaller size M, truncate it back to size N and then read the range # [M, N[, we should always get the value 0x00 for all the bytes in that range. 
# We expected the clone operation to fail with errno EOPNOTSUPP and therefore # not modify our file's bar data/metadata. So its content should be 256 bytes # long with all bytes having the value 0xbb. # # Without the btrfs bug fix, the clone operation succeeded and resulted in # leaking truncated data from foo, the bytes that belonged to its range # [128, 256[, and losing data from bar in that same range. So reading the # file gave us the following content: # # 0000000 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 a1 # * # 0000200 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a 2a # * # 0000400 echo "File bar's content after the clone operation:" od -t x1 $SCRATCH_MNT/bar # Also because the foo's inline extent was not shrunk by the truncate # operation, btrfs' fsck, which is run by the fstests framework everytime a # test completes, failed reporting the following error: # # root 5 inode 257 errors 400, nbytes wrong status=0 exit Cc: stable@vger.kernel.org Signed-off-by: Filipe Manana <fdmanana@suse.com>
1
int btrfs_truncate_inode_items(struct btrfs_trans_handle *trans, struct btrfs_root *root, struct inode *inode, u64 new_size, u32 min_type) { struct btrfs_path *path; struct extent_buffer *leaf; struct btrfs_file_extent_item *fi; struct btrfs_key key; struct btrfs_key found_key; u64 extent_start = 0; u64 extent_num_bytes = 0; u64 extent_offset = 0; u64 item_end = 0; u64 last_size = new_size; u32 found_type = (u8)-1; int found_extent; int del_item; int pending_del_nr = 0; int pending_del_slot = 0; int extent_type = -1; int ret; int err = 0; u64 ino = btrfs_ino(inode); u64 bytes_deleted = 0; bool be_nice = 0; bool should_throttle = 0; bool should_end = 0; BUG_ON(new_size > 0 && min_type != BTRFS_EXTENT_DATA_KEY); /* * for non-free space inodes and ref cows, we want to back off from * time to time */ if (!btrfs_is_free_space_inode(inode) && test_bit(BTRFS_ROOT_REF_COWS, &root->state)) be_nice = 1; path = btrfs_alloc_path(); if (!path) return -ENOMEM; path->reada = -1; /* * We want to drop from the next block forward in case this new size is * not block aligned since we will be keeping the last block of the * extent just the way it is. */ if (test_bit(BTRFS_ROOT_REF_COWS, &root->state) || root == root->fs_info->tree_root) btrfs_drop_extent_cache(inode, ALIGN(new_size, root->sectorsize), (u64)-1, 0); /* * This function is also used to drop the items in the log tree before * we relog the inode, so if root != BTRFS_I(inode)->root, it means * it is used to drop the loged items. So we shouldn't kill the delayed * items. */ if (min_type == 0 && root == BTRFS_I(inode)->root) btrfs_kill_delayed_inode_items(inode); key.objectid = ino; key.offset = (u64)-1; key.type = (u8)-1; search_again: /* * with a 16K leaf size and 128MB extents, you can actually queue * up a huge file in a single leaf. 
Most of the time that * bytes_deleted is > 0, it will be huge by the time we get here */ if (be_nice && bytes_deleted > 32 * 1024 * 1024) { if (btrfs_should_end_transaction(trans, root)) { err = -EAGAIN; goto error; } } path->leave_spinning = 1; ret = btrfs_search_slot(trans, root, &key, path, -1, 1); if (ret < 0) { err = ret; goto out; } if (ret > 0) { /* there are no items in the tree for us to truncate, we're * done */ if (path->slots[0] == 0) goto out; path->slots[0]--; } while (1) { fi = NULL; leaf = path->nodes[0]; btrfs_item_key_to_cpu(leaf, &found_key, path->slots[0]); found_type = found_key.type; if (found_key.objectid != ino) break; if (found_type < min_type) break; item_end = found_key.offset; if (found_type == BTRFS_EXTENT_DATA_KEY) { fi = btrfs_item_ptr(leaf, path->slots[0], struct btrfs_file_extent_item); extent_type = btrfs_file_extent_type(leaf, fi); if (extent_type != BTRFS_FILE_EXTENT_INLINE) { item_end += btrfs_file_extent_num_bytes(leaf, fi); } else if (extent_type == BTRFS_FILE_EXTENT_INLINE) { item_end += btrfs_file_extent_inline_len(leaf, path->slots[0], fi); } item_end--; } if (found_type > min_type) { del_item = 1; } else { if (item_end < new_size) break; if (found_key.offset >= new_size) del_item = 1; else del_item = 0; } found_extent = 0; /* FIXME, shrink the extent if the ref count is only 1 */ if (found_type != BTRFS_EXTENT_DATA_KEY) goto delete; if (del_item) last_size = found_key.offset; else last_size = new_size; if (extent_type != BTRFS_FILE_EXTENT_INLINE) { u64 num_dec; extent_start = btrfs_file_extent_disk_bytenr(leaf, fi); if (!del_item) { u64 orig_num_bytes = btrfs_file_extent_num_bytes(leaf, fi); extent_num_bytes = ALIGN(new_size - found_key.offset, root->sectorsize); btrfs_set_file_extent_num_bytes(leaf, fi, extent_num_bytes); num_dec = (orig_num_bytes - extent_num_bytes); if (test_bit(BTRFS_ROOT_REF_COWS, &root->state) && extent_start != 0) inode_sub_bytes(inode, num_dec); btrfs_mark_buffer_dirty(leaf); } else { extent_num_bytes = btrfs_file_extent_disk_num_bytes(leaf, fi); extent_offset = found_key.offset - btrfs_file_extent_offset(leaf, fi); /* FIXME blocksize != 4096 */ num_dec = btrfs_file_extent_num_bytes(leaf, fi); if (extent_start != 0) { found_extent = 1; if (test_bit(BTRFS_ROOT_REF_COWS, &root->state)) inode_sub_bytes(inode, num_dec); } } } else if (extent_type == BTRFS_FILE_EXTENT_INLINE) { /* * we can't truncate inline items that have had * special encodings */ if (!del_item && btrfs_file_extent_compression(leaf, fi) == 0 && btrfs_file_extent_encryption(leaf, fi) == 0 && btrfs_file_extent_other_encoding(leaf, fi) == 0) { u32 size = new_size - found_key.offset; if (test_bit(BTRFS_ROOT_REF_COWS, &root->state)) inode_sub_bytes(inode, item_end + 1 - new_size); /* * update the ram bytes to properly reflect * the new size of our item */ btrfs_set_file_extent_ram_bytes(leaf, fi, size); size = btrfs_file_extent_calc_inline_size(size); btrfs_truncate_item(root, path, size, 1); } else if (test_bit(BTRFS_ROOT_REF_COWS, &root->state)) { inode_sub_bytes(inode, item_end + 1 - found_key.offset); } } delete: if (del_item) { if (!pending_del_nr) { /* no pending yet, add ourselves */ pending_del_slot = path->slots[0]; pending_del_nr = 1; } else if (pending_del_nr && path->slots[0] + 1 == pending_del_slot) { /* hop on the pending chunk */ pending_del_nr++; pending_del_slot = path->slots[0]; } else { BUG(); } } else { break; } should_throttle = 0; if (found_extent && (test_bit(BTRFS_ROOT_REF_COWS, &root->state) || root == root->fs_info->tree_root)) { 
btrfs_set_path_blocking(path); bytes_deleted += extent_num_bytes; ret = btrfs_free_extent(trans, root, extent_start, extent_num_bytes, 0, btrfs_header_owner(leaf), ino, extent_offset, 0); BUG_ON(ret); if (btrfs_should_throttle_delayed_refs(trans, root)) btrfs_async_run_delayed_refs(root, trans->delayed_ref_updates * 2, 0); if (be_nice) { if (truncate_space_check(trans, root, extent_num_bytes)) { should_end = 1; } if (btrfs_should_throttle_delayed_refs(trans, root)) { should_throttle = 1; } } } if (found_type == BTRFS_INODE_ITEM_KEY) break; if (path->slots[0] == 0 || path->slots[0] != pending_del_slot || should_throttle || should_end) { if (pending_del_nr) { ret = btrfs_del_items(trans, root, path, pending_del_slot, pending_del_nr); if (ret) { btrfs_abort_transaction(trans, root, ret); goto error; } pending_del_nr = 0; } btrfs_release_path(path); if (should_throttle) { unsigned long updates = trans->delayed_ref_updates; if (updates) { trans->delayed_ref_updates = 0; ret = btrfs_run_delayed_refs(trans, root, updates * 2); if (ret && !err) err = ret; } } /* * if we failed to refill our space rsv, bail out * and let the transaction restart */ if (should_end) { err = -EAGAIN; goto error; } goto search_again; } else { path->slots[0]--; } } out: if (pending_del_nr) { ret = btrfs_del_items(trans, root, path, pending_del_slot, pending_del_nr); if (ret) btrfs_abort_transaction(trans, root, ret); } error: if (root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID) btrfs_ordered_update_i_size(inode, last_size, NULL); btrfs_free_path(path); if (be_nice && bytes_deleted > 32 * 1024 * 1024) { unsigned long updates = trans->delayed_ref_updates; if (updates) { trans->delayed_ref_updates = 0; ret = btrfs_run_delayed_refs(trans, root, updates * 2); if (ret && !err) err = ret; } } return err; }
202,426,911,853,200,160,000,000,000,000,000,000,000
inode.c
165,350,596,384,609,150,000,000,000,000,000,000,000
[ "CWE-200" ]
CVE-2015-8374
fs/btrfs/inode.c in the Linux kernel before 4.3.3 mishandles compressed inline extents, which allows local users to obtain sensitive pre-truncation information from a file via a clone action.
https://nvd.nist.gov/vuln/detail/CVE-2015-8374
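The commit message explains that the inline-extent branch of btrfs_truncate_inode_items() (visible in the listing) skips compressed inline extents entirely, leaving the truncated tail on disk where a later clone can read it. Conceptually the fix routes every surviving inline extent, compressed or not, through a truncation helper and keeps the byte accounting honest. A rough sketch of that dispatch; truncate_inline_extent() is the name used here for such a helper, and its signature and error handling are assumptions of this sketch.

    } else if (extent_type == BTRFS_FILE_EXTENT_INLINE) {
        if (!del_item) {
            /* Shrink the inline data itself (decompressing and recompressing
             * when needed) instead of only moving i_size. */
            err = truncate_inline_extent(inode, path, &found_key,
                                         found_key.offset, new_size);
            if (err)
                goto error;
            if (test_bit(BTRFS_ROOT_REF_COWS, &root->state))
                inode_sub_bytes(inode, item_end + 1 - new_size);
        } else if (test_bit(BTRFS_ROOT_REF_COWS, &root->state)) {
            inode_sub_bytes(inode, item_end + 1 - found_key.offset);
        }
    }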
9,292
linux
451a2886b6bf90e2fb378f7c46c655450fb96e81
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/451a2886b6bf90e2fb378f7c46c655450fb96e81
sg_start_req(): make sure that there are not too many elements in iovec Unfortunately, allowing an arbitrary 16bit value means a possibility of overflow in the calculation of the total number of pages in bio_map_user_iov() - we rely on there being no more than PAGE_SIZE members of sum in the first loop there. If that sum wraps around, we end up allocating too small an array of pointers to pages and it's easy to overflow it in the second loop. X-Coverup: TINC (and there's no lumber cartel either) Cc: stable@vger.kernel.org # way, way back Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
1
sg_start_req(Sg_request *srp, unsigned char *cmd) { int res; struct request *rq; Sg_fd *sfp = srp->parentfp; sg_io_hdr_t *hp = &srp->header; int dxfer_len = (int) hp->dxfer_len; int dxfer_dir = hp->dxfer_direction; unsigned int iov_count = hp->iovec_count; Sg_scatter_hold *req_schp = &srp->data; Sg_scatter_hold *rsv_schp = &sfp->reserve; struct request_queue *q = sfp->parentdp->device->request_queue; struct rq_map_data *md, map_data; int rw = hp->dxfer_direction == SG_DXFER_TO_DEV ? WRITE : READ; unsigned char *long_cmdp = NULL; SCSI_LOG_TIMEOUT(4, sg_printk(KERN_INFO, sfp->parentdp, "sg_start_req: dxfer_len=%d\n", dxfer_len)); if (hp->cmd_len > BLK_MAX_CDB) { long_cmdp = kzalloc(hp->cmd_len, GFP_KERNEL); if (!long_cmdp) return -ENOMEM; } /* * NOTE * * With scsi-mq enabled, there are a fixed number of preallocated * requests equal in number to shost->can_queue. If all of the * preallocated requests are already in use, then using GFP_ATOMIC with * blk_get_request() will return -EWOULDBLOCK, whereas using GFP_KERNEL * will cause blk_get_request() to sleep until an active command * completes, freeing up a request. Neither option is ideal, but * GFP_KERNEL is the better choice to prevent userspace from getting an * unexpected EWOULDBLOCK. * * With scsi-mq disabled, blk_get_request() with GFP_KERNEL usually * does not sleep except under memory pressure. */ rq = blk_get_request(q, rw, GFP_KERNEL); if (IS_ERR(rq)) { kfree(long_cmdp); return PTR_ERR(rq); } blk_rq_set_block_pc(rq); if (hp->cmd_len > BLK_MAX_CDB) rq->cmd = long_cmdp; memcpy(rq->cmd, cmd, hp->cmd_len); rq->cmd_len = hp->cmd_len; srp->rq = rq; rq->end_io_data = srp; rq->sense = srp->sense_b; rq->retries = SG_DEFAULT_RETRIES; if ((dxfer_len <= 0) || (dxfer_dir == SG_DXFER_NONE)) return 0; if (sg_allow_dio && hp->flags & SG_FLAG_DIRECT_IO && dxfer_dir != SG_DXFER_UNKNOWN && !iov_count && !sfp->parentdp->device->host->unchecked_isa_dma && blk_rq_aligned(q, (unsigned long)hp->dxferp, dxfer_len)) md = NULL; else md = &map_data; if (md) { if (!sg_res_in_use(sfp) && dxfer_len <= rsv_schp->bufflen) sg_link_reserve(sfp, srp, dxfer_len); else { res = sg_build_indirect(req_schp, sfp, dxfer_len); if (res) return res; } md->pages = req_schp->pages; md->page_order = req_schp->page_order; md->nr_entries = req_schp->k_use_sg; md->offset = 0; md->null_mapped = hp->dxferp ? 0 : 1; if (dxfer_dir == SG_DXFER_TO_FROM_DEV) md->from_user = 1; else md->from_user = 0; } if (iov_count) { int size = sizeof(struct iovec) * iov_count; struct iovec *iov; struct iov_iter i; iov = memdup_user(hp->dxferp, size); if (IS_ERR(iov)) return PTR_ERR(iov); iov_iter_init(&i, rw, iov, iov_count, min_t(size_t, hp->dxfer_len, iov_length(iov, iov_count))); res = blk_rq_map_user_iov(q, rq, md, &i, GFP_ATOMIC); kfree(iov); } else res = blk_rq_map_user(q, rq, md, hp->dxferp, hp->dxfer_len, GFP_ATOMIC); if (!res) { srp->bio = rq->bio; if (!md) { req_schp->dio_in_use = 1; hp->info |= SG_INFO_DIRECT_IO; } } return res; }
199,085,659,186,409,220,000,000,000,000,000,000,000
sg.c
259,994,801,544,706,120,000,000,000,000,000,000,000
[ "CWE-189" ]
CVE-2015-5707
Integer overflow in the sg_start_req function in drivers/scsi/sg.c in the Linux kernel 2.6.x through 4.x before 4.1 allows local users to cause a denial of service or possibly have unspecified other impact via a large iov_count value in a write request.
https://nvd.nist.gov/vuln/detail/CVE-2015-5707
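The commit message locates the danger outside the listing, in bio_map_user_iov()'s page-count arithmetic, but the entry point is the unchecked 16-bit hp->iovec_count above. A minimal sketch of the cap, placed near the top of sg_start_req(); UIO_MAXIOV is the limit other kernel iovec users already enforce, and choosing it here (rather than some driver-specific bound) is this sketch's assumption.

    if (hp->iovec_count > UIO_MAXIOV)
        return -EINVAL;   /* keep the later page-count sums far from overflow */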
9,293
linux
59c816c1f24df0204e01851431d3bab3eb76719c
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/59c816c1f24df0204e01851431d3bab3eb76719c
vhost/scsi: potential memory corruption This code in vhost_scsi_make_tpg() is confusing because we limit "tpgt" to UINT_MAX but the data type of "tpg->tport_tpgt" is a u16. I looked at the context and it turns out that in vhost_scsi_set_endpoint(), "tpg->tport_tpgt" is used as an offset into the vs_tpg[] array which has VHOST_SCSI_MAX_TARGET (256) elements, so anything higher than 255 is invalid. I have made that the limit now. In vhost_scsi_send_evt() we mask away values higher than 255, but now that the limit has changed, we don't need the mask. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
1
vhost_scsi_make_tpg(struct se_wwn *wwn, struct config_group *group, const char *name) { struct vhost_scsi_tport *tport = container_of(wwn, struct vhost_scsi_tport, tport_wwn); struct vhost_scsi_tpg *tpg; unsigned long tpgt; int ret; if (strstr(name, "tpgt_") != name) return ERR_PTR(-EINVAL); if (kstrtoul(name + 5, 10, &tpgt) || tpgt > UINT_MAX) return ERR_PTR(-EINVAL); tpg = kzalloc(sizeof(struct vhost_scsi_tpg), GFP_KERNEL); if (!tpg) { pr_err("Unable to allocate struct vhost_scsi_tpg"); return ERR_PTR(-ENOMEM); } mutex_init(&tpg->tv_tpg_mutex); INIT_LIST_HEAD(&tpg->tv_tpg_list); tpg->tport = tport; tpg->tport_tpgt = tpgt; ret = core_tpg_register(&vhost_scsi_fabric_configfs->tf_ops, wwn, &tpg->se_tpg, tpg, TRANSPORT_TPG_TYPE_NORMAL); if (ret < 0) { kfree(tpg); return NULL; } mutex_lock(&vhost_scsi_mutex); list_add_tail(&tpg->tv_tpg_list, &vhost_scsi_list); mutex_unlock(&vhost_scsi_mutex); return &tpg->se_tpg; }
138,793,578,128,477,900,000,000,000,000,000,000,000
scsi.c
220,191,379,760,540,220,000,000,000,000,000,000,000
[ "CWE-119" ]
CVE-2015-4036
Array index error in the tcm_vhost_make_tpg function in drivers/vhost/scsi.c in the Linux kernel before 4.0 might allow guest OS users to cause a denial of service (memory corruption) or possibly have unspecified other impact via a crafted VHOST_SCSI_SET_ENDPOINT ioctl call. NOTE: the affected function was renamed to vhost_scsi_make_tpg before the vulnerability was announced.
https://nvd.nist.gov/vuln/detail/CVE-2015-4036
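As the commit message says, tpg->tport_tpgt is a u16 used to index the 256-entry vs_tpg[] array, so the UINT_MAX bound in the listing lets invalid targets through. A minimal sketch of the tightened parse, assuming VHOST_SCSI_MAX_TARGET (256) from the commit message; kstrtou16() is an existing kernel string helper.

    u16 tpgt;

    if (strstr(name, "tpgt_") != name)
        return ERR_PTR(-EINVAL);
    /* Parse into the real storage type and refuse anything that cannot
     * index vs_tpg[VHOST_SCSI_MAX_TARGET]. */
    if (kstrtou16(name + 5, 10, &tpgt) || tpgt >= VHOST_SCSI_MAX_TARGET)
        return ERR_PTR(-EINVAL);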
9,294
linux
ee73f656a604d5aa9df86a97102e4e462dd79924
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/ee73f656a604d5aa9df86a97102e4e462dd79924
KVM: PIT: control word is write-only PIT control word (address 0x43) is write-only, reads are undefined. Cc: stable@kernel.org Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
1
static int pit_ioport_read(struct kvm_io_device *this, gpa_t addr, int len, void *data) { struct kvm_pit *pit = dev_to_pit(this); struct kvm_kpit_state *pit_state = &pit->pit_state; struct kvm *kvm = pit->kvm; int ret, count; struct kvm_kpit_channel_state *s; if (!pit_in_range(addr)) return -EOPNOTSUPP; addr &= KVM_PIT_CHANNEL_MASK; s = &pit_state->channels[addr]; mutex_lock(&pit_state->lock); if (s->status_latched) { s->status_latched = 0; ret = s->status; } else if (s->count_latched) { switch (s->count_latched) { default: case RW_STATE_LSB: ret = s->latched_count & 0xff; s->count_latched = 0; break; case RW_STATE_MSB: ret = s->latched_count >> 8; s->count_latched = 0; break; case RW_STATE_WORD0: ret = s->latched_count & 0xff; s->count_latched = RW_STATE_MSB; break; } } else { switch (s->read_state) { default: case RW_STATE_LSB: count = pit_get_count(kvm, addr); ret = count & 0xff; break; case RW_STATE_MSB: count = pit_get_count(kvm, addr); ret = (count >> 8) & 0xff; break; case RW_STATE_WORD0: count = pit_get_count(kvm, addr); ret = count & 0xff; s->read_state = RW_STATE_WORD1; break; case RW_STATE_WORD1: count = pit_get_count(kvm, addr); ret = (count >> 8) & 0xff; s->read_state = RW_STATE_WORD0; break; } } if (len > sizeof(ret)) len = sizeof(ret); memcpy(data, (char *)&ret, len); mutex_unlock(&pit_state->lock); return 0; }
56,541,777,842,879,500,000,000,000,000,000,000,000
i8254.c
206,424,261,813,794,500,000,000,000,000,000,000,000
[ "CWE-119" ]
CVE-2015-3214
The pit_ioport_read in i8254.c in the Linux kernel before 2.6.33 and QEMU before 2.3.1 does not distinguish between read lengths and write lengths, which might allow guest OS users to execute arbitrary code on the host OS by triggering use of an invalid index.
https://nvd.nist.gov/vuln/detail/CVE-2015-3214
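The listing masks the port address with KVM_PIT_CHANNEL_MASK and then indexes pit_state->channels[], but the PIT only has three counter channels; index 3 corresponds to the write-only control word at 0x43, so a read there walks off the end of the array. A minimal sketch of the missing guard; whether the handler should report such a read as handled (returning 0, as below) or bounce it back is a policy detail this sketch does not settle.

    addr &= KVM_PIT_CHANNEL_MASK;
    /* 0x43 is the write-only control word: there is no channels[3]. */
    if (addr == 3)
        return 0;
    s = &pit_state->channels[addr];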
9,302
openssl
4924b37ee01f71ae19c94a8934b80eeb2f677932
https://github.com/openssl/openssl
https://github.com/openssl/openssl/commit/4924b37ee01f71ae19c94a8934b80eeb2f677932
bn/bn_gf2m.c: avoid infinite loop with malformed ECParameters. CVE-2015-1788 Reviewed-by: Matt Caswell <matt@openssl.org>
1
int BN_GF2m_mod_inv(BIGNUM *r, const BIGNUM *a, const BIGNUM *p, BN_CTX *ctx) { BIGNUM *b, *c = NULL, *u = NULL, *v = NULL, *tmp; int ret = 0; bn_check_top(a); bn_check_top(p); BN_CTX_start(ctx); if ((b = BN_CTX_get(ctx)) == NULL) goto err; if ((c = BN_CTX_get(ctx)) == NULL) goto err; if ((u = BN_CTX_get(ctx)) == NULL) goto err; if ((v = BN_CTX_get(ctx)) == NULL) goto err; if (!BN_GF2m_mod(u, a, p)) goto err; if (BN_is_zero(u)) goto err; if (!BN_copy(v, p)) goto err; # if 0 if (!BN_one(b)) goto err; while (1) { while (!BN_is_odd(u)) { if (BN_is_zero(u)) goto err; if (!BN_rshift1(u, u)) goto err; if (BN_is_odd(b)) { if (!BN_GF2m_add(b, b, p)) goto err; } if (!BN_rshift1(b, b)) goto err; } if (BN_abs_is_word(u, 1)) break; if (BN_num_bits(u) < BN_num_bits(v)) { tmp = u; u = v; v = tmp; tmp = b; b = c; c = tmp; } if (!BN_GF2m_add(u, u, v)) goto err; if (!BN_GF2m_add(b, b, c)) goto err; } # else { int i, ubits = BN_num_bits(u), vbits = BN_num_bits(v), /* v is copy * of p */ top = p->top; BN_ULONG *udp, *bdp, *vdp, *cdp; bn_wexpand(u, top); udp = u->d; for (i = u->top; i < top; i++) udp[i] = 0; u->top = top; bn_wexpand(b, top); bdp = b->d; bdp[0] = 1; for (i = 1; i < top; i++) bdp[i] = 0; b->top = top; bn_wexpand(c, top); cdp = c->d; for (i = 0; i < top; i++) cdp[i] = 0; c->top = top; vdp = v->d; /* It pays off to "cache" *->d pointers, * because it allows optimizer to be more * aggressive. But we don't have to "cache" * p->d, because *p is declared 'const'... */ while (1) { while (ubits && !(udp[0] & 1)) { BN_ULONG u0, u1, b0, b1, mask; u0 = udp[0]; b0 = bdp[0]; mask = (BN_ULONG)0 - (b0 & 1); b0 ^= p->d[0] & mask; for (i = 0; i < top - 1; i++) { u1 = udp[i + 1]; udp[i] = ((u0 >> 1) | (u1 << (BN_BITS2 - 1))) & BN_MASK2; u0 = u1; b1 = bdp[i + 1] ^ (p->d[i + 1] & mask); bdp[i] = ((b0 >> 1) | (b1 << (BN_BITS2 - 1))) & BN_MASK2; b0 = b1; } udp[i] = u0 >> 1; bdp[i] = b0 >> 1; ubits--; } if (ubits <= BN_BITS2 && udp[0] == 1) break; if (ubits < vbits) { i = ubits; ubits = vbits; vbits = i; tmp = u; u = v; v = tmp; tmp = b; b = c; c = tmp; udp = vdp; vdp = v->d; bdp = cdp; cdp = c->d; } for (i = 0; i < top; i++) { udp[i] ^= vdp[i]; bdp[i] ^= cdp[i]; } if (ubits == vbits) { BN_ULONG ul; int utop = (ubits - 1) / BN_BITS2; while ((ul = udp[utop]) == 0 && utop) utop--; ubits = utop * BN_BITS2 + BN_num_bits_word(ul); } } bn_correct_top(b); } # endif if (!BN_copy(r, b)) goto err; bn_check_top(r); ret = 1; err: # ifdef BN_DEBUG /* BN_CTX_end would complain about the * expanded form */ bn_correct_top(c); bn_correct_top(u); bn_correct_top(v); # endif BN_CTX_end(ctx); return ret; }
302,254,574,823,175,280,000,000,000,000,000,000,000
None
null
[ "CWE-399" ]
CVE-2015-1788
The BN_GF2m_mod_inv function in crypto/bn/bn_gf2m.c in OpenSSL before 0.9.8s, 1.0.0 before 1.0.0e, 1.0.1 before 1.0.1n, and 1.0.2 before 1.0.2b does not properly handle ECParameters structures in which the curve is over a malformed binary polynomial field, which allows remote attackers to cause a denial of service (infinite loop) via a session that uses an Elliptic Curve algorithm, as demonstrated by an attack against a server that supports client authentication.
https://nvd.nist.gov/vuln/detail/CVE-2015-1788
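In the word-level loop of the listing, the only exits are udp[0] == 1 (inverse found) or an explicit error, but with a malformed, reducible field polynomial u can collapse to zero, after which neither condition ever fires and the outer loop spins forever. A minimal sketch of the extra exit, written around the existing break test; treating the zero as an error (goto err) follows the commit's intent, though the exact upstream wording may differ.

    if (ubits <= BN_BITS2) {
        if (udp[0] == 0)   /* modulus was reducible: no inverse exists */
            goto err;
        if (udp[0] == 1)
            break;
    }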
9,306
pigz
fdad1406b3ec809f4954ff7cdf9e99eb18c2458f
https://github.com/madler/pigz
https://github.com/madler/pigz/commit/fdad1406b3ec809f4954ff7cdf9e99eb18c2458f
When decompressing with -N or -NT, strip any path from header name. This uses the path of the compressed file combined with the name from the header as the name of the decompressed output file. Any path information in the header name is stripped. This avoids a possible vulnerability where absolute or descending paths are put in the gzip header.
1
local void process(char *path) { int method = -1; /* get_header() return value */ size_t len; /* length of base name (minus suffix) */ struct stat st; /* to get file type and mod time */ /* all compressed suffixes for decoding search, in length order */ static char *sufs[] = {".z", "-z", "_z", ".Z", ".gz", "-gz", ".zz", "-zz", ".zip", ".ZIP", ".tgz", NULL}; /* open input file with name in, descriptor ind -- set name and mtime */ if (path == NULL) { strcpy(g.inf, "<stdin>"); g.ind = 0; g.name = NULL; g.mtime = g.headis & 2 ? (fstat(g.ind, &st) ? time(NULL) : st.st_mtime) : 0; len = 0; } else { /* set input file name (already set if recursed here) */ if (path != g.inf) { strncpy(g.inf, path, sizeof(g.inf)); if (g.inf[sizeof(g.inf) - 1]) bail("name too long: ", path); } len = strlen(g.inf); /* try to stat input file -- if not there and decoding, look for that name with compressed suffixes */ if (lstat(g.inf, &st)) { if (errno == ENOENT && (g.list || g.decode)) { char **try = sufs; do { if (*try == NULL || len + strlen(*try) >= sizeof(g.inf)) break; strcpy(g.inf + len, *try++); errno = 0; } while (lstat(g.inf, &st) && errno == ENOENT); } #ifdef EOVERFLOW if (errno == EOVERFLOW || errno == EFBIG) bail(g.inf, " too large -- not compiled with large file support"); #endif if (errno) { g.inf[len] = 0; complain("%s does not exist -- skipping", g.inf); return; } len = strlen(g.inf); } /* only process regular files, but allow symbolic links if -f, recurse into directory if -r */ if ((st.st_mode & S_IFMT) != S_IFREG && (st.st_mode & S_IFMT) != S_IFLNK && (st.st_mode & S_IFMT) != S_IFDIR) { complain("%s is a special file or device -- skipping", g.inf); return; } if ((st.st_mode & S_IFMT) == S_IFLNK && !g.force && !g.pipeout) { complain("%s is a symbolic link -- skipping", g.inf); return; } if ((st.st_mode & S_IFMT) == S_IFDIR && !g.recurse) { complain("%s is a directory -- skipping", g.inf); return; } /* recurse into directory (assumes Unix) */ if ((st.st_mode & S_IFMT) == S_IFDIR) { char *roll, *item, *cut, *base, *bigger; size_t len, hold; DIR *here; struct dirent *next; /* accumulate list of entries (need to do this, since readdir() behavior not defined if directory modified between calls) */ here = opendir(g.inf); if (here == NULL) return; hold = 512; roll = MALLOC(hold); if (roll == NULL) bail("not enough memory", ""); *roll = 0; item = roll; while ((next = readdir(here)) != NULL) { if (next->d_name[0] == 0 || (next->d_name[0] == '.' && (next->d_name[1] == 0 || (next->d_name[1] == '.' 
&& next->d_name[2] == 0)))) continue; len = strlen(next->d_name) + 1; if (item + len + 1 > roll + hold) { do { /* make roll bigger */ hold <<= 1; } while (item + len + 1 > roll + hold); bigger = REALLOC(roll, hold); if (bigger == NULL) { FREE(roll); bail("not enough memory", ""); } item = bigger + (item - roll); roll = bigger; } strcpy(item, next->d_name); item += len; *item = 0; } closedir(here); /* run process() for each entry in the directory */ cut = base = g.inf + strlen(g.inf); if (base > g.inf && base[-1] != (unsigned char)'/') { if ((size_t)(base - g.inf) >= sizeof(g.inf)) bail("path too long", g.inf); *base++ = '/'; } item = roll; while (*item) { strncpy(base, item, sizeof(g.inf) - (base - g.inf)); if (g.inf[sizeof(g.inf) - 1]) { strcpy(g.inf + (sizeof(g.inf) - 4), "..."); bail("path too long: ", g.inf); } process(g.inf); item += strlen(item) + 1; } *cut = 0; /* release list of entries */ FREE(roll); return; } /* don't compress .gz (or provided suffix) files, unless -f */ if (!(g.force || g.list || g.decode) && len >= strlen(g.sufx) && strcmp(g.inf + len - strlen(g.sufx), g.sufx) == 0) { complain("%s ends with %s -- skipping", g.inf, g.sufx); return; } /* create output file only if input file has compressed suffix */ if (g.decode == 1 && !g.pipeout && !g.list) { int suf = compressed_suffix(g.inf); if (suf == 0) { complain("%s does not have compressed suffix -- skipping", g.inf); return; } len -= suf; } /* open input file */ g.ind = open(g.inf, O_RDONLY, 0); if (g.ind < 0) bail("read error on ", g.inf); /* prepare gzip header information for compression */ g.name = g.headis & 1 ? justname(g.inf) : NULL; g.mtime = g.headis & 2 ? st.st_mtime : 0; } SET_BINARY_MODE(g.ind); /* if decoding or testing, try to read gzip header */ g.hname = NULL; if (g.decode) { in_init(); method = get_header(1); if (method != 8 && method != 257 && /* gzip -cdf acts like cat on uncompressed input */ !(method == -2 && g.force && g.pipeout && g.decode != 2 && !g.list)) { RELEASE(g.hname); if (g.ind != 0) close(g.ind); if (method != -1) complain(method < 0 ? "%s is not compressed -- skipping" : "%s has unknown compression method -- skipping", g.inf); return; } /* if requested, test input file (possibly a special list) */ if (g.decode == 2) { if (method == 8) infchk(); else { unlzw(); if (g.list) { g.in_tot -= 3; show_info(method, 0, g.out_tot, 0); } } RELEASE(g.hname); if (g.ind != 0) close(g.ind); return; } } /* if requested, just list information about input file */ if (g.list) { list_info(); RELEASE(g.hname); if (g.ind != 0) close(g.ind); return; } /* create output file out, descriptor outd */ if (path == NULL || g.pipeout) { /* write to stdout */ g.outf = MALLOC(strlen("<stdout>") + 1); if (g.outf == NULL) bail("not enough memory", ""); strcpy(g.outf, "<stdout>"); g.outd = 1; if (!g.decode && !g.force && isatty(g.outd)) bail("trying to write compressed data to a terminal", " (use -f to force)"); } else { char *to, *repl; /* use header name for output when decompressing with -N */ to = g.inf; if (g.decode && (g.headis & 1) != 0 && g.hname != NULL) { to = g.hname; len = strlen(g.hname); } /* replace .tgz with .tar when decoding */ repl = g.decode && strcmp(to + len, ".tgz") ? "" : ".tar"; /* create output file and open to write */ g.outf = MALLOC(len + (g.decode ? strlen(repl) : strlen(g.sufx)) + 1); if (g.outf == NULL) bail("not enough memory", ""); memcpy(g.outf, to, len); strcpy(g.outf + len, g.decode ? repl : g.sufx); g.outd = open(g.outf, O_CREAT | O_TRUNC | O_WRONLY | (g.force ? 
0 : O_EXCL), 0600); /* if exists and not -f, give user a chance to overwrite */ if (g.outd < 0 && errno == EEXIST && isatty(0) && g.verbosity) { int ch, reply; fprintf(stderr, "%s exists -- overwrite (y/n)? ", g.outf); fflush(stderr); reply = -1; do { ch = getchar(); if (reply < 0 && ch != ' ' && ch != '\t') reply = ch == 'y' || ch == 'Y' ? 1 : 0; } while (ch != EOF && ch != '\n' && ch != '\r'); if (reply == 1) g.outd = open(g.outf, O_CREAT | O_TRUNC | O_WRONLY, 0600); } /* if exists and no overwrite, report and go on to next */ if (g.outd < 0 && errno == EEXIST) { complain("%s exists -- skipping", g.outf); RELEASE(g.outf); RELEASE(g.hname); if (g.ind != 0) close(g.ind); return; } /* if some other error, give up */ if (g.outd < 0) bail("write error on ", g.outf); } SET_BINARY_MODE(g.outd); RELEASE(g.hname); /* process ind to outd */ if (g.verbosity > 1) fprintf(stderr, "%s to %s ", g.inf, g.outf); if (g.decode) { if (method == 8) infchk(); else if (method == 257) unlzw(); else cat(); } #ifndef NOTHREAD else if (g.procs > 1) parallel_compress(); #endif else single_compress(0); if (g.verbosity > 1) { putc('\n', stderr); fflush(stderr); } /* finish up, copy attributes, set times, delete original */ if (g.ind != 0) close(g.ind); if (g.outd != 1) { if (close(g.outd)) bail("write error on ", g.outf); g.outd = -1; /* now prevent deletion on interrupt */ if (g.ind != 0) { copymeta(g.inf, g.outf); if (!g.keep) unlink(g.inf); } if (g.decode && (g.headis & 2) != 0 && g.stamp) touch(g.outf, g.stamp); } RELEASE(g.outf); }
211,054,325,335,456,130,000,000,000,000,000,000,000
pigz.c
29,522,447,385,920,444,000,000,000,000,000,000,000
[ "CWE-22" ]
CVE-2015-1191
Multiple directory traversal vulnerabilities in pigz 2.3.1 allow remote attackers to write to arbitrary files via a (1) full pathname or (2) .. (dot dot) in an archive.
https://nvd.nist.gov/vuln/detail/CVE-2015-1191
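The CVE-2015-1191 entry above concerns names taken from a gzip header being used directly as output paths. A minimal standalone sketch of one common mitigation — keeping only the basename of the stored name and rejecting "." / ".." — follows; it is illustrative only, the helper name is hypothetical, and it is not the upstream pigz patch.

#include <stdio.h>
#include <string.h>

/* Hypothetical helper: keep only the basename of a name taken from an
 * archive header and refuse names that would escape the output directory. */
static const char *sanitize_stored_name(const char *name)
{
    const char *base = strrchr(name, '/');
    base = base ? base + 1 : name;            /* drop any directory part */
    if (base[0] == '\0' || strcmp(base, ".") == 0 || strcmp(base, "..") == 0)
        return NULL;                          /* nothing usable left */
    return base;
}

int main(void)
{
    const char *names[] = { "/etc/passwd", "a/..", "ok.txt" };
    for (int i = 0; i < 3; i++) {
        const char *s = sanitize_stored_name(names[i]);
        printf("%-12s -> %s\n", names[i], s ? s : "(rejected)");
    }
    return 0;
}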
9,307
linux
0f2af21aae11972fa924374ddcf52e88347cf5a8
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/0f2af21aae11972fa924374ddcf52e88347cf5a8
ext4: allocate entire range in zero range Currently there is a bug in zero range code which causes zero range calls to only allocate block aligned portion of the range, while ignoring the rest in some cases. In some cases, namely if the end of the range is past i_size, we do attempt to preallocate the last nonaligned block. However this might cause kernel to BUG() in some carefully designed zero range requests on setups where page size > block size. Fix this problem by first preallocating the entire range, including the nonaligned edges and converting the written extents to unwritten in the next step. This approach will also give us the advantage of having the range to be as linearly contiguous as possible. Signed-off-by: Lukas Czerner <lczerner@redhat.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
1
static long ext4_zero_range(struct file *file, loff_t offset, loff_t len, int mode) { struct inode *inode = file_inode(file); handle_t *handle = NULL; unsigned int max_blocks; loff_t new_size = 0; int ret = 0; int flags; int credits; int partial_begin, partial_end; loff_t start, end; ext4_lblk_t lblk; struct address_space *mapping = inode->i_mapping; unsigned int blkbits = inode->i_blkbits; trace_ext4_zero_range(inode, offset, len, mode); if (!S_ISREG(inode->i_mode)) return -EINVAL; /* Call ext4_force_commit to flush all data in case of data=journal. */ if (ext4_should_journal_data(inode)) { ret = ext4_force_commit(inode->i_sb); if (ret) return ret; } /* * Write out all dirty pages to avoid race conditions * Then release them. */ if (mapping->nrpages && mapping_tagged(mapping, PAGECACHE_TAG_DIRTY)) { ret = filemap_write_and_wait_range(mapping, offset, offset + len - 1); if (ret) return ret; } /* * Round up offset. This is not fallocate, we neet to zero out * blocks, so convert interior block aligned part of the range to * unwritten and possibly manually zero out unaligned parts of the * range. */ start = round_up(offset, 1 << blkbits); end = round_down((offset + len), 1 << blkbits); if (start < offset || end > offset + len) return -EINVAL; partial_begin = offset & ((1 << blkbits) - 1); partial_end = (offset + len) & ((1 << blkbits) - 1); lblk = start >> blkbits; max_blocks = (end >> blkbits); if (max_blocks < lblk) max_blocks = 0; else max_blocks -= lblk; flags = EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT | EXT4_GET_BLOCKS_CONVERT_UNWRITTEN | EXT4_EX_NOCACHE; if (mode & FALLOC_FL_KEEP_SIZE) flags |= EXT4_GET_BLOCKS_KEEP_SIZE; mutex_lock(&inode->i_mutex); /* * Indirect files do not support unwritten extnets */ if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))) { ret = -EOPNOTSUPP; goto out_mutex; } if (!(mode & FALLOC_FL_KEEP_SIZE) && offset + len > i_size_read(inode)) { new_size = offset + len; ret = inode_newsize_ok(inode, new_size); if (ret) goto out_mutex; /* * If we have a partial block after EOF we have to allocate * the entire block. */ if (partial_end) max_blocks += 1; } if (max_blocks > 0) { /* Now release the pages and zero block aligned part of pages*/ truncate_pagecache_range(inode, start, end - 1); inode->i_mtime = inode->i_ctime = ext4_current_time(inode); /* Wait all existing dio workers, newcomers will block on i_mutex */ ext4_inode_block_unlocked_dio(inode); inode_dio_wait(inode); ret = ext4_alloc_file_blocks(file, lblk, max_blocks, new_size, flags, mode); if (ret) goto out_dio; /* * Remove entire range from the extent status tree. * * ext4_es_remove_extent(inode, lblk, max_blocks) is * NOT sufficient. I'm not sure why this is the case, * but let's be conservative and remove the extent * status tree for the entire inode. There should be * no outstanding delalloc extents thanks to the * filemap_write_and_wait_range() call above. 
*/ ret = ext4_es_remove_extent(inode, 0, EXT_MAX_BLOCKS); if (ret) goto out_dio; } if (!partial_begin && !partial_end) goto out_dio; /* * In worst case we have to writeout two nonadjacent unwritten * blocks and update the inode */ credits = (2 * ext4_ext_index_trans_blocks(inode, 2)) + 1; if (ext4_should_journal_data(inode)) credits += 2; handle = ext4_journal_start(inode, EXT4_HT_MISC, credits); if (IS_ERR(handle)) { ret = PTR_ERR(handle); ext4_std_error(inode->i_sb, ret); goto out_dio; } inode->i_mtime = inode->i_ctime = ext4_current_time(inode); if (new_size) { ext4_update_inode_size(inode, new_size); } else { /* * Mark that we allocate beyond EOF so the subsequent truncate * can proceed even if the new size is the same as i_size. */ if ((offset + len) > i_size_read(inode)) ext4_set_inode_flag(inode, EXT4_INODE_EOFBLOCKS); } ext4_mark_inode_dirty(handle, inode); /* Zero out partial block at the edges of the range */ ret = ext4_zero_partial_blocks(handle, inode, offset, len); if (file->f_flags & O_SYNC) ext4_handle_sync(handle); ext4_journal_stop(handle); out_dio: ext4_inode_resume_unlocked_dio(inode); out_mutex: mutex_unlock(&inode->i_mutex); return ret; }
140,250,258,899,090,150,000,000,000,000,000,000,000
extents.c
297,653,654,506,470,950,000,000,000,000,000,000,000
[ "CWE-17" ]
CVE-2015-0275
The ext4_zero_range function in fs/ext4/extents.c in the Linux kernel before 4.1 allows local users to cause a denial of service (BUG) via a crafted fallocate zero-range request.
https://nvd.nist.gov/vuln/detail/CVE-2015-0275
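The zero-range entry above hinges on how a byte range splits into a block-aligned interior plus unaligned edges; the fix described in the commit message preallocates the whole range rather than only the aligned part. The standalone arithmetic model below (ordinary userspace C, not kernel code) only shows that split, using a hypothetical 1024-byte block size.

#include <stdio.h>

static unsigned long long round_up_ull(unsigned long long x, unsigned long long a)
{
    return (x + a - 1) / a * a;
}

static unsigned long long round_down_ull(unsigned long long x, unsigned long long a)
{
    return x / a * a;
}

int main(void)
{
    unsigned long long blocksize = 1024, offset = 100, len = 3000;
    unsigned long long start = round_up_ull(offset, blocksize);        /* aligned interior start */
    unsigned long long end = round_down_ull(offset + len, blocksize);  /* aligned interior end   */

    printf("aligned interior: [%llu, %llu)\n", start, end);
    printf("bytes before it: %llu, bytes after it: %llu\n",
           start - offset, offset + len - end);
    return 0;
}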
9,310
libsndfile
725c7dbb95bfaf8b4bb7b04820e3a00cceea9ce6
https://github.com/erikd/libsndfile
https://github.com/erikd/libsndfile/commit/725c7dbb95bfaf8b4bb7b04820e3a00cceea9ce6
src/file_io.c : Prevent potential divide-by-zero. Closes: https://github.com/erikd/libsndfile/issues/92
1
psf_fwrite (const void *ptr, sf_count_t bytes, sf_count_t items, SF_PRIVATE *psf) { sf_count_t total = 0 ; ssize_t count ; if (psf->virtual_io) return psf->vio.write (ptr, bytes*items, psf->vio_user_data) / bytes ; items *= bytes ; /* Do this check after the multiplication above. */ if (items <= 0) return 0 ; while (items > 0) { /* Break the writes down to a sensible size. */ count = (items > SENSIBLE_SIZE) ? SENSIBLE_SIZE : items ; count = write (psf->file.filedes, ((const char*) ptr) + total, count) ; if (count == -1) { if (errno == EINTR) continue ; psf_log_syserr (psf, errno) ; break ; } ; if (count == 0) break ; total += count ; items -= count ; } ; if (psf->is_pipe) psf->pipeoffset += total ; return total / bytes ; } /* psf_fwrite */
168,840,506,501,797,730,000,000,000,000,000,000,000
file_io.c
239,890,201,875,820,400,000,000,000,000,000,000,000
[ "CWE-189" ]
CVE-2014-9756
The psf_fwrite function in file_io.c in libsndfile allows attackers to cause a denial of service (divide-by-zero error and application crash) via unspecified vectors related to the headindex variable.
https://nvd.nist.gov/vuln/detail/CVE-2014-9756
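The psf_fwrite entry above divides by the caller-supplied item size before validating it. Below is a minimal sketch of the kind of guard the commit message describes — checked division with simplified types and a hypothetical helper name, not the actual libsndfile patch.

#include <stdio.h>

/* Return the number of whole items written, refusing to divide by a
 * zero or negative item size instead of crashing. */
static long safe_item_count(long total_bytes_written, long item_size)
{
    if (item_size <= 0)
        return 0;                         /* reject bogus sizes up front */
    return total_bytes_written / item_size;
}

int main(void)
{
    printf("%ld\n", safe_item_count(4096, 16)); /* 256 */
    printf("%ld\n", safe_item_count(4096, 0));  /* 0, no division by zero */
    return 0;
}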
9,312
linux
e159332b9af4b04d882dbcfe1bb0117f0a6d4b58
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/e159332b9af4b04d882dbcfe1bb0117f0a6d4b58
udf: Verify i_size when loading inode Verify that inode size is sane when loading inode with data stored in ICB. Otherwise we may get confused later when working with the inode and inode size is too big. CC: stable@vger.kernel.org Reported-by: Carl Henrik Lunde <chlunde@ping.uio.no> Signed-off-by: Jan Kara <jack@suse.cz>
1
static int udf_read_inode(struct inode *inode, bool hidden_inode) { struct buffer_head *bh = NULL; struct fileEntry *fe; struct extendedFileEntry *efe; uint16_t ident; struct udf_inode_info *iinfo = UDF_I(inode); struct udf_sb_info *sbi = UDF_SB(inode->i_sb); struct kernel_lb_addr *iloc = &iinfo->i_location; unsigned int link_count; unsigned int indirections = 0; int ret = -EIO; reread: if (iloc->logicalBlockNum >= sbi->s_partmaps[iloc->partitionReferenceNum].s_partition_len) { udf_debug("block=%d, partition=%d out of range\n", iloc->logicalBlockNum, iloc->partitionReferenceNum); return -EIO; } /* * Set defaults, but the inode is still incomplete! * Note: get_new_inode() sets the following on a new inode: * i_sb = sb * i_no = ino * i_flags = sb->s_flags * i_state = 0 * clean_inode(): zero fills and sets * i_count = 1 * i_nlink = 1 * i_op = NULL; */ bh = udf_read_ptagged(inode->i_sb, iloc, 0, &ident); if (!bh) { udf_err(inode->i_sb, "(ino %ld) failed !bh\n", inode->i_ino); return -EIO; } if (ident != TAG_IDENT_FE && ident != TAG_IDENT_EFE && ident != TAG_IDENT_USE) { udf_err(inode->i_sb, "(ino %ld) failed ident=%d\n", inode->i_ino, ident); goto out; } fe = (struct fileEntry *)bh->b_data; efe = (struct extendedFileEntry *)bh->b_data; if (fe->icbTag.strategyType == cpu_to_le16(4096)) { struct buffer_head *ibh; ibh = udf_read_ptagged(inode->i_sb, iloc, 1, &ident); if (ident == TAG_IDENT_IE && ibh) { struct kernel_lb_addr loc; struct indirectEntry *ie; ie = (struct indirectEntry *)ibh->b_data; loc = lelb_to_cpu(ie->indirectICB.extLocation); if (ie->indirectICB.extLength) { brelse(ibh); memcpy(&iinfo->i_location, &loc, sizeof(struct kernel_lb_addr)); if (++indirections > UDF_MAX_ICB_NESTING) { udf_err(inode->i_sb, "too many ICBs in ICB hierarchy" " (max %d supported)\n", UDF_MAX_ICB_NESTING); goto out; } brelse(bh); goto reread; } } brelse(ibh); } else if (fe->icbTag.strategyType != cpu_to_le16(4)) { udf_err(inode->i_sb, "unsupported strategy type: %d\n", le16_to_cpu(fe->icbTag.strategyType)); goto out; } if (fe->icbTag.strategyType == cpu_to_le16(4)) iinfo->i_strat4096 = 0; else /* if (fe->icbTag.strategyType == cpu_to_le16(4096)) */ iinfo->i_strat4096 = 1; iinfo->i_alloc_type = le16_to_cpu(fe->icbTag.flags) & ICBTAG_FLAG_AD_MASK; iinfo->i_unique = 0; iinfo->i_lenEAttr = 0; iinfo->i_lenExtents = 0; iinfo->i_lenAlloc = 0; iinfo->i_next_alloc_block = 0; iinfo->i_next_alloc_goal = 0; if (fe->descTag.tagIdent == cpu_to_le16(TAG_IDENT_EFE)) { iinfo->i_efe = 1; iinfo->i_use = 0; ret = udf_alloc_i_data(inode, inode->i_sb->s_blocksize - sizeof(struct extendedFileEntry)); if (ret) goto out; memcpy(iinfo->i_ext.i_data, bh->b_data + sizeof(struct extendedFileEntry), inode->i_sb->s_blocksize - sizeof(struct extendedFileEntry)); } else if (fe->descTag.tagIdent == cpu_to_le16(TAG_IDENT_FE)) { iinfo->i_efe = 0; iinfo->i_use = 0; ret = udf_alloc_i_data(inode, inode->i_sb->s_blocksize - sizeof(struct fileEntry)); if (ret) goto out; memcpy(iinfo->i_ext.i_data, bh->b_data + sizeof(struct fileEntry), inode->i_sb->s_blocksize - sizeof(struct fileEntry)); } else if (fe->descTag.tagIdent == cpu_to_le16(TAG_IDENT_USE)) { iinfo->i_efe = 0; iinfo->i_use = 1; iinfo->i_lenAlloc = le32_to_cpu( ((struct unallocSpaceEntry *)bh->b_data)-> lengthAllocDescs); ret = udf_alloc_i_data(inode, inode->i_sb->s_blocksize - sizeof(struct unallocSpaceEntry)); if (ret) goto out; memcpy(iinfo->i_ext.i_data, bh->b_data + sizeof(struct unallocSpaceEntry), inode->i_sb->s_blocksize - sizeof(struct unallocSpaceEntry)); return 0; } ret = -EIO; 
read_lock(&sbi->s_cred_lock); i_uid_write(inode, le32_to_cpu(fe->uid)); if (!uid_valid(inode->i_uid) || UDF_QUERY_FLAG(inode->i_sb, UDF_FLAG_UID_IGNORE) || UDF_QUERY_FLAG(inode->i_sb, UDF_FLAG_UID_SET)) inode->i_uid = UDF_SB(inode->i_sb)->s_uid; i_gid_write(inode, le32_to_cpu(fe->gid)); if (!gid_valid(inode->i_gid) || UDF_QUERY_FLAG(inode->i_sb, UDF_FLAG_GID_IGNORE) || UDF_QUERY_FLAG(inode->i_sb, UDF_FLAG_GID_SET)) inode->i_gid = UDF_SB(inode->i_sb)->s_gid; if (fe->icbTag.fileType != ICBTAG_FILE_TYPE_DIRECTORY && sbi->s_fmode != UDF_INVALID_MODE) inode->i_mode = sbi->s_fmode; else if (fe->icbTag.fileType == ICBTAG_FILE_TYPE_DIRECTORY && sbi->s_dmode != UDF_INVALID_MODE) inode->i_mode = sbi->s_dmode; else inode->i_mode = udf_convert_permissions(fe); inode->i_mode &= ~sbi->s_umask; read_unlock(&sbi->s_cred_lock); link_count = le16_to_cpu(fe->fileLinkCount); if (!link_count) { if (!hidden_inode) { ret = -ESTALE; goto out; } link_count = 1; } set_nlink(inode, link_count); inode->i_size = le64_to_cpu(fe->informationLength); iinfo->i_lenExtents = inode->i_size; if (iinfo->i_efe == 0) { inode->i_blocks = le64_to_cpu(fe->logicalBlocksRecorded) << (inode->i_sb->s_blocksize_bits - 9); if (!udf_disk_stamp_to_time(&inode->i_atime, fe->accessTime)) inode->i_atime = sbi->s_record_time; if (!udf_disk_stamp_to_time(&inode->i_mtime, fe->modificationTime)) inode->i_mtime = sbi->s_record_time; if (!udf_disk_stamp_to_time(&inode->i_ctime, fe->attrTime)) inode->i_ctime = sbi->s_record_time; iinfo->i_unique = le64_to_cpu(fe->uniqueID); iinfo->i_lenEAttr = le32_to_cpu(fe->lengthExtendedAttr); iinfo->i_lenAlloc = le32_to_cpu(fe->lengthAllocDescs); iinfo->i_checkpoint = le32_to_cpu(fe->checkpoint); } else { inode->i_blocks = le64_to_cpu(efe->logicalBlocksRecorded) << (inode->i_sb->s_blocksize_bits - 9); if (!udf_disk_stamp_to_time(&inode->i_atime, efe->accessTime)) inode->i_atime = sbi->s_record_time; if (!udf_disk_stamp_to_time(&inode->i_mtime, efe->modificationTime)) inode->i_mtime = sbi->s_record_time; if (!udf_disk_stamp_to_time(&iinfo->i_crtime, efe->createTime)) iinfo->i_crtime = sbi->s_record_time; if (!udf_disk_stamp_to_time(&inode->i_ctime, efe->attrTime)) inode->i_ctime = sbi->s_record_time; iinfo->i_unique = le64_to_cpu(efe->uniqueID); iinfo->i_lenEAttr = le32_to_cpu(efe->lengthExtendedAttr); iinfo->i_lenAlloc = le32_to_cpu(efe->lengthAllocDescs); iinfo->i_checkpoint = le32_to_cpu(efe->checkpoint); } inode->i_generation = iinfo->i_unique; switch (fe->icbTag.fileType) { case ICBTAG_FILE_TYPE_DIRECTORY: inode->i_op = &udf_dir_inode_operations; inode->i_fop = &udf_dir_operations; inode->i_mode |= S_IFDIR; inc_nlink(inode); break; case ICBTAG_FILE_TYPE_REALTIME: case ICBTAG_FILE_TYPE_REGULAR: case ICBTAG_FILE_TYPE_UNDEF: case ICBTAG_FILE_TYPE_VAT20: if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB) inode->i_data.a_ops = &udf_adinicb_aops; else inode->i_data.a_ops = &udf_aops; inode->i_op = &udf_file_inode_operations; inode->i_fop = &udf_file_operations; inode->i_mode |= S_IFREG; break; case ICBTAG_FILE_TYPE_BLOCK: inode->i_mode |= S_IFBLK; break; case ICBTAG_FILE_TYPE_CHAR: inode->i_mode |= S_IFCHR; break; case ICBTAG_FILE_TYPE_FIFO: init_special_inode(inode, inode->i_mode | S_IFIFO, 0); break; case ICBTAG_FILE_TYPE_SOCKET: init_special_inode(inode, inode->i_mode | S_IFSOCK, 0); break; case ICBTAG_FILE_TYPE_SYMLINK: inode->i_data.a_ops = &udf_symlink_aops; inode->i_op = &udf_symlink_inode_operations; inode->i_mode = S_IFLNK | S_IRWXUGO; break; case ICBTAG_FILE_TYPE_MAIN: udf_debug("METADATA FILE-----\n"); 
break; case ICBTAG_FILE_TYPE_MIRROR: udf_debug("METADATA MIRROR FILE-----\n"); break; case ICBTAG_FILE_TYPE_BITMAP: udf_debug("METADATA BITMAP FILE-----\n"); break; default: udf_err(inode->i_sb, "(ino %ld) failed unknown file type=%d\n", inode->i_ino, fe->icbTag.fileType); goto out; } if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode)) { struct deviceSpec *dsea = (struct deviceSpec *)udf_get_extendedattr(inode, 12, 1); if (dsea) { init_special_inode(inode, inode->i_mode, MKDEV(le32_to_cpu(dsea->majorDeviceIdent), le32_to_cpu(dsea->minorDeviceIdent))); /* Developer ID ??? */ } else goto out; } ret = 0; out: brelse(bh); return ret; }
164,719,948,166,104,950,000,000,000,000,000,000,000
inode.c
333,950,287,295,443,840,000,000,000,000,000,000,000
[ "CWE-703" ]
CVE-2014-9728
The UDF filesystem implementation in the Linux kernel before 3.18.2 does not validate certain lengths, which allows local users to cause a denial of service (buffer over-read and system crash) via a crafted filesystem image, related to fs/udf/inode.c and fs/udf/symlink.c.
https://nvd.nist.gov/vuln/detail/CVE-2014-9728
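For the UDF entry above, the commit message says the inode size must be verified when file data is stored inside the ICB. A rough standalone sketch of such a bound check follows; the exact bound used upstream may differ, and the parameter names and values are made up for illustration.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Data stored in the ICB cannot be larger than the space actually
 * present in the block after the descriptor and extended attributes. */
static bool icb_size_is_sane(uint64_t i_size, uint32_t blocksize,
                             uint32_t descriptor_len, uint32_t len_eattr)
{
    if ((uint64_t)descriptor_len + len_eattr >= blocksize)
        return false;                     /* header alone overflows the block */
    return i_size <= blocksize - descriptor_len - len_eattr;
}

int main(void)
{
    printf("fits:  %d\n", icb_size_is_sane(100, 2048, 216, 0));
    printf("bogus: %d\n", icb_size_is_sane(1u << 20, 2048, 216, 0));
    return 0;
}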
9,314
linux
5b6698b0e4a37053de35cc24ee695b98a7eb712b
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/5b6698b0e4a37053de35cc24ee695b98a7eb712b
batman-adv: Calculate extra tail size based on queued fragments The fragmentation code was replaced in 610bfc6bc99bc83680d190ebc69359a05fc7f605 ("batman-adv: Receive fragmented packets and merge"). The new code provided a mostly unused parameter skb for the merging function. It is used inside the function to calculate the additionally needed skb tailroom. But instead of increasing its own tailroom, it is only increasing the tailroom of the first queued skb. This is not correct in some situations because the first queued entry can be a different one than the parameter. An observed problem was: 1. packet with size 104, total_size 1464, fragno 1 was received - packet is queued 2. packet with size 1400, total_size 1464, fragno 0 was received - packet is queued at the end of the list 3. enough data was received and can be given to the merge function (1464 == (1400 - 20) + (104 - 20)) - merge functions gets 1400 byte large packet as skb argument 4. merge function gets first entry in queue (104 byte) - stored as skb_out 5. merge function calculates the required extra tail as total_size - skb->len - pskb_expand_head tail of skb_out with 64 bytes 6. merge function tries to squeeze the extra 1380 bytes from the second queued skb (1400 byte aka skb parameter) in the 64 extra tail bytes of skb_out Instead calculate the extra required tail bytes for skb_out also using skb_out instead of using the parameter skb. The skb parameter is only used to get the total_size from the last received packet. This is also the total_size used to decide that all fragments were received. Reported-by: Philipp Psurek <philipp.psurek@gmail.com> Signed-off-by: Sven Eckelmann <sven@narfation.org> Acked-by: Martin Hundebøll <martin@hundeboll.net> Signed-off-by: David S. Miller <davem@davemloft.net>
1
batadv_frag_merge_packets(struct hlist_head *chain, struct sk_buff *skb) { struct batadv_frag_packet *packet; struct batadv_frag_list_entry *entry; struct sk_buff *skb_out = NULL; int size, hdr_size = sizeof(struct batadv_frag_packet); /* Make sure incoming skb has non-bogus data. */ packet = (struct batadv_frag_packet *)skb->data; size = ntohs(packet->total_size); if (size > batadv_frag_size_limit()) goto free; /* Remove first entry, as this is the destination for the rest of the * fragments. */ entry = hlist_entry(chain->first, struct batadv_frag_list_entry, list); hlist_del(&entry->list); skb_out = entry->skb; kfree(entry); /* Make room for the rest of the fragments. */ if (pskb_expand_head(skb_out, 0, size - skb->len, GFP_ATOMIC) < 0) { kfree_skb(skb_out); skb_out = NULL; goto free; } /* Move the existing MAC header to just before the payload. (Override * the fragment header.) */ skb_pull_rcsum(skb_out, hdr_size); memmove(skb_out->data - ETH_HLEN, skb_mac_header(skb_out), ETH_HLEN); skb_set_mac_header(skb_out, -ETH_HLEN); skb_reset_network_header(skb_out); skb_reset_transport_header(skb_out); /* Copy the payload of the each fragment into the last skb */ hlist_for_each_entry(entry, chain, list) { size = entry->skb->len - hdr_size; memcpy(skb_put(skb_out, size), entry->skb->data + hdr_size, size); } free: /* Locking is not needed, because 'chain' is not part of any orig. */ batadv_frag_clear_chain(chain); return skb_out; }
327,120,903,356,038,720,000,000,000,000,000,000,000
fragmentation.c
207,228,498,943,816,330,000,000,000,000,000,000,000
[ "CWE-399" ]
CVE-2014-9428
The batadv_frag_merge_packets function in net/batman-adv/fragmentation.c in the B.A.T.M.A.N. implementation in the Linux kernel through 3.18.1 uses an incorrect length field during a calculation of an amount of memory, which allows remote attackers to cause a denial of service (mesh-node system crash) via fragmented packets.
https://nvd.nist.gov/vuln/detail/CVE-2014-9428
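The batman-adv entry above comes down to which buffer's length is used when sizing the merge destination's extra tailroom. The short model below reproduces the numbers from the commit message in plain userspace C (no skb structures) to show why size - skb->len is far too small while size - skb_out->len is sufficient.

#include <stdio.h>

int main(void)
{
    int total_size  = 1464;  /* total_size from the fragment header */
    int skb_out_len = 104;   /* first queued fragment, the merge destination */
    int skb_len     = 1400;  /* fragment that triggered the merge (function argument) */

    int buggy_extra   = total_size - skb_len;      /* 64 bytes: not enough room  */
    int correct_extra = total_size - skb_out_len;  /* 1360 bytes: enough room    */

    printf("buggy tailroom: %d, tailroom actually needed: %d\n",
           buggy_extra, correct_extra);
    return 0;
}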
9,315
krb5
5bb8a6b9c9eb8dd22bc9526751610aaa255ead9c
https://github.com/krb5/krb5
https://github.com/krb5/krb5/commit/5bb8a6b9c9eb8dd22bc9526751610aaa255ead9c
Fix gssrpc data leakage [CVE-2014-9423] [MITKRB5-SA-2015-001] In svcauth_gss_accept_sec_context(), do not copy bytes from the union context into the handle field we send to the client. We do not use this handle field, so just supply a fixed string of "xxxx". In gss_union_ctx_id_struct, remove the unused "interposer" field which was causing part of the union context to remain uninitialized. ticket: 8058 (new) target_version: 1.13.1 tags: pullup
1
svcauth_gss_accept_sec_context(struct svc_req *rqst, struct rpc_gss_init_res *gr) { struct svc_rpc_gss_data *gd; struct rpc_gss_cred *gc; gss_buffer_desc recv_tok, seqbuf; gss_OID mech; OM_uint32 maj_stat = 0, min_stat = 0, ret_flags, seq; log_debug("in svcauth_gss_accept_context()"); gd = SVCAUTH_PRIVATE(rqst->rq_xprt->xp_auth); gc = (struct rpc_gss_cred *)rqst->rq_clntcred; memset(gr, 0, sizeof(*gr)); /* Deserialize arguments. */ memset(&recv_tok, 0, sizeof(recv_tok)); if (!svc_getargs(rqst->rq_xprt, xdr_rpc_gss_init_args, (caddr_t)&recv_tok)) return (FALSE); gr->gr_major = gss_accept_sec_context(&gr->gr_minor, &gd->ctx, svcauth_gss_creds, &recv_tok, GSS_C_NO_CHANNEL_BINDINGS, &gd->client_name, &mech, &gr->gr_token, &ret_flags, NULL, NULL); svc_freeargs(rqst->rq_xprt, xdr_rpc_gss_init_args, (caddr_t)&recv_tok); log_status("accept_sec_context", gr->gr_major, gr->gr_minor); if (gr->gr_major != GSS_S_COMPLETE && gr->gr_major != GSS_S_CONTINUE_NEEDED) { badauth(gr->gr_major, gr->gr_minor, rqst->rq_xprt); gd->ctx = GSS_C_NO_CONTEXT; goto errout; } /* * ANDROS: krb5 mechglue returns ctx of size 8 - two pointers, * one to the mechanism oid, one to the internal_ctx_id */ if ((gr->gr_ctx.value = mem_alloc(sizeof(gss_union_ctx_id_desc))) == NULL) { fprintf(stderr, "svcauth_gss_accept_context: out of memory\n"); goto errout; } memcpy(gr->gr_ctx.value, gd->ctx, sizeof(gss_union_ctx_id_desc)); gr->gr_ctx.length = sizeof(gss_union_ctx_id_desc); /* gr->gr_win = 0x00000005; ANDROS: for debugging linux kernel version... */ gr->gr_win = sizeof(gd->seqmask) * 8; /* Save client info. */ gd->sec.mech = mech; gd->sec.qop = GSS_C_QOP_DEFAULT; gd->sec.svc = gc->gc_svc; gd->seq = gc->gc_seq; gd->win = gr->gr_win; if (gr->gr_major == GSS_S_COMPLETE) { #ifdef SPKM /* spkm3: no src_name (anonymous) */ if(!g_OID_equal(gss_mech_spkm3, mech)) { #endif maj_stat = gss_display_name(&min_stat, gd->client_name, &gd->cname, &gd->sec.mech); #ifdef SPKM } #endif if (maj_stat != GSS_S_COMPLETE) { log_status("display_name", maj_stat, min_stat); goto errout; } #ifdef DEBUG #ifdef HAVE_HEIMDAL log_debug("accepted context for %.*s with " "<mech {}, qop %d, svc %d>", gd->cname.length, (char *)gd->cname.value, gd->sec.qop, gd->sec.svc); #else { gss_buffer_desc mechname; gss_oid_to_str(&min_stat, mech, &mechname); log_debug("accepted context for %.*s with " "<mech %.*s, qop %d, svc %d>", gd->cname.length, (char *)gd->cname.value, mechname.length, (char *)mechname.value, gd->sec.qop, gd->sec.svc); gss_release_buffer(&min_stat, &mechname); } #endif #endif /* DEBUG */ seq = htonl(gr->gr_win); seqbuf.value = &seq; seqbuf.length = sizeof(seq); gss_release_buffer(&min_stat, &gd->checksum); maj_stat = gss_sign(&min_stat, gd->ctx, GSS_C_QOP_DEFAULT, &seqbuf, &gd->checksum); if (maj_stat != GSS_S_COMPLETE) { goto errout; } rqst->rq_xprt->xp_verf.oa_flavor = RPCSEC_GSS; rqst->rq_xprt->xp_verf.oa_base = gd->checksum.value; rqst->rq_xprt->xp_verf.oa_length = gd->checksum.length; } return (TRUE); errout: gss_release_buffer(&min_stat, &gr->gr_token); return (FALSE); }
160,165,331,551,633,580,000,000,000,000,000,000,000
None
null
[ "CWE-200" ]
CVE-2014-9423
The svcauth_gss_accept_sec_context function in lib/rpc/svc_auth_gss.c in MIT Kerberos 5 (aka krb5) 1.11.x through 1.11.5, 1.12.x through 1.12.2, and 1.13.x before 1.13.1 transmits uninitialized interposer data to clients, which allows remote attackers to obtain sensitive information from process heap memory by sniffing the network for data in a handle field.
https://nvd.nist.gov/vuln/detail/CVE-2014-9423
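The gssrpc entry above leaks library memory because an internal context structure is memcpy'd into the handle returned to the client. Below is a small sketch of the approach the commit message describes — sending a fixed opaque placeholder instead — using simplified stand-in types rather than the real rpc_gss_init_res layout.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct wire_buf { size_t length; void *value; };  /* stand-in for the wire handle */

/* Fill the client-visible handle with a fixed placeholder instead of
 * raw library memory, mirroring the "xxxx" approach from the commit message. */
static int set_opaque_handle(struct wire_buf *handle)
{
    static const char placeholder[] = "xxxx";
    handle->value = malloc(sizeof(placeholder) - 1);
    if (handle->value == NULL)
        return -1;
    memcpy(handle->value, placeholder, sizeof(placeholder) - 1);
    handle->length = sizeof(placeholder) - 1;
    return 0;
}

int main(void)
{
    struct wire_buf h = { 0, NULL };
    if (set_opaque_handle(&h) == 0)
        printf("handle length: %zu\n", h.length);
    free(h.value);
    return 0;
}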
9,316
krb5
6609658db0799053fbef0d7d0aa2f1fd68ef32d8
https://github.com/krb5/krb5
https://github.com/krb5/krb5/commit/6609658db0799053fbef0d7d0aa2f1fd68ef32d8
Fix kadmind server validation [CVE-2014-9422] [MITKRB5-SA-2015-001] In kadmind's check_rpcsec_auth(), use data_eq_string() instead of strncmp() to check components of the server principal, so that we don't erroneously match left substrings of "kadmin", "history", or the realm. ticket: 8057 (new) target_version: 1.13.1 tags: pullup
1
check_rpcsec_auth(struct svc_req *rqstp) { gss_ctx_id_t ctx; krb5_context kctx; OM_uint32 maj_stat, min_stat; gss_name_t name; krb5_principal princ; int ret, success; krb5_data *c1, *c2, *realm; gss_buffer_desc gss_str; kadm5_server_handle_t handle; size_t slen; char *sdots; success = 0; handle = (kadm5_server_handle_t)global_server_handle; if (rqstp->rq_cred.oa_flavor != RPCSEC_GSS) return 0; ctx = rqstp->rq_svccred; maj_stat = gss_inquire_context(&min_stat, ctx, NULL, &name, NULL, NULL, NULL, NULL, NULL); if (maj_stat != GSS_S_COMPLETE) { krb5_klog_syslog(LOG_ERR, _("check_rpcsec_auth: failed " "inquire_context, stat=%u"), maj_stat); log_badauth(maj_stat, min_stat, rqstp->rq_xprt, NULL); goto fail_name; } kctx = handle->context; ret = gss_to_krb5_name_1(rqstp, kctx, name, &princ, &gss_str); if (ret == 0) goto fail_name; slen = gss_str.length; trunc_name(&slen, &sdots); /* * Since we accept with GSS_C_NO_NAME, the client can authenticate * against the entire kdb. Therefore, ensure that the service * name is something reasonable. */ if (krb5_princ_size(kctx, princ) != 2) goto fail_princ; c1 = krb5_princ_component(kctx, princ, 0); c2 = krb5_princ_component(kctx, princ, 1); realm = krb5_princ_realm(kctx, princ); if (strncmp(handle->params.realm, realm->data, realm->length) == 0 && strncmp("kadmin", c1->data, c1->length) == 0) { if (strncmp("history", c2->data, c2->length) == 0) goto fail_princ; else success = 1; } fail_princ: if (!success) { krb5_klog_syslog(LOG_ERR, _("bad service principal %.*s%s"), (int) slen, (char *) gss_str.value, sdots); } gss_release_buffer(&min_stat, &gss_str); krb5_free_principal(kctx, princ); fail_name: gss_release_name(&min_stat, &name); return success; }
94,969,864,754,992,500,000,000,000,000,000,000,000
kadm_rpc_svc.c
185,540,932,340,222,600,000,000,000,000,000,000,000
[ "CWE-284" ]
CVE-2014-9422
The check_rpcsec_auth function in kadmin/server/kadm_rpc_svc.c in kadmind in MIT Kerberos 5 (aka krb5) through 1.11.5, 1.12.x through 1.12.2, and 1.13.x before 1.13.1 allows remote authenticated users to bypass a kadmin/* authorization check and obtain administrative access by leveraging access to a two-component principal with an initial "kadmind" substring, as demonstrated by a "ka/x" principal.
https://nvd.nist.gov/vuln/detail/CVE-2014-9422
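The kadmind entry above fails because strncmp bounded by the component's own length accepts any prefix of the expected string. Below is a minimal sketch of a length-first comparison in the spirit of data_eq_string(), using a stand-in type rather than krb5_data.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct krb_data_like { unsigned int length; char *data; };  /* stand-in type */

/* Equal only if the lengths match exactly and the bytes match; a bare
 * prefix such as "ka" no longer passes for "kadmin". */
static bool data_eq_string_like(const struct krb_data_like *d, const char *s)
{
    return d->length == strlen(s) && memcmp(d->data, s, d->length) == 0;
}

int main(void)
{
    struct krb_data_like c1 = { 2, "ka" };
    struct krb_data_like c2 = { 6, "kadmin" };
    printf("prefix only: %s\n", data_eq_string_like(&c1, "kadmin") ? "match" : "no match");
    printf("exact:       %s\n", data_eq_string_like(&c2, "kadmin") ? "match" : "no match");
    return 0;
}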
9,317
openssl
cb62ab4b17818fe66d2fed0a7fe71969131c811b
https://github.com/openssl/openssl
https://github.com/openssl/openssl/commit/cb62ab4b17818fe66d2fed0a7fe71969131c811b
use correct function name Reviewed-by: Rich Salz <rsalz@openssl.org> Reviewed-by: Matt Caswell <matt@openssl.org>
1
int ASN1_item_verify(const ASN1_ITEM *it, X509_ALGOR *a, ASN1_BIT_STRING *signature, void *asn, EVP_PKEY *pkey) { EVP_MD_CTX ctx; unsigned char *buf_in=NULL; int ret= -1,inl; int mdnid, pknid; if (!pkey) { ASN1err(ASN1_F_ASN1_ITEM_VERIFY, ERR_R_PASSED_NULL_PARAMETER); return -1; } if (signature->type == V_ASN1_BIT_STRING && signature->flags & 0x7) { ASN1err(ASN1_F_ASN1_VERIFY, ASN1_R_INVALID_BIT_STRING_BITS_LEFT); return -1; } EVP_MD_CTX_init(&ctx); /* Convert signature OID into digest and public key OIDs */ if (!OBJ_find_sigid_algs(OBJ_obj2nid(a->algorithm), &mdnid, &pknid)) { ASN1err(ASN1_F_ASN1_ITEM_VERIFY,ASN1_R_UNKNOWN_SIGNATURE_ALGORITHM); goto err; } if (mdnid == NID_undef) { if (!pkey->ameth || !pkey->ameth->item_verify) { ASN1err(ASN1_F_ASN1_ITEM_VERIFY,ASN1_R_UNKNOWN_SIGNATURE_ALGORITHM); goto err; } ret = pkey->ameth->item_verify(&ctx, it, asn, a, signature, pkey); /* Return value of 2 means carry on, anything else means we * exit straight away: either a fatal error of the underlying * verification routine handles all verification. */ if (ret != 2) goto err; ret = -1; } else { const EVP_MD *type; type=EVP_get_digestbynid(mdnid); if (type == NULL) { ASN1err(ASN1_F_ASN1_ITEM_VERIFY,ASN1_R_UNKNOWN_MESSAGE_DIGEST_ALGORITHM); goto err; } /* Check public key OID matches public key type */ if (EVP_PKEY_type(pknid) != pkey->ameth->pkey_id) { ASN1err(ASN1_F_ASN1_ITEM_VERIFY,ASN1_R_WRONG_PUBLIC_KEY_TYPE); goto err; } if (!EVP_DigestVerifyInit(&ctx, NULL, type, NULL, pkey)) { ASN1err(ASN1_F_ASN1_ITEM_VERIFY,ERR_R_EVP_LIB); ret=0; goto err; } } inl = ASN1_item_i2d(asn, &buf_in, it); if (buf_in == NULL) { ASN1err(ASN1_F_ASN1_ITEM_VERIFY,ERR_R_MALLOC_FAILURE); goto err; } ret = EVP_DigestVerifyUpdate(&ctx,buf_in,inl); OPENSSL_cleanse(buf_in,(unsigned int)inl); OPENSSL_free(buf_in); if (!ret) { ASN1err(ASN1_F_ASN1_ITEM_VERIFY,ERR_R_EVP_LIB); goto err; } ret = -1; if (EVP_DigestVerifyFinal(&ctx,signature->data, (size_t)signature->length) <= 0) { ASN1err(ASN1_F_ASN1_ITEM_VERIFY,ERR_R_EVP_LIB); ret=0; goto err; } /* we don't need to zero the 'ctx' because we just checked * public information */ /* memset(&ctx,0,sizeof(ctx)); */ ret=1; err: EVP_MD_CTX_cleanup(&ctx); return(ret); }
82,007,499,822,031,690,000,000,000,000,000,000,000
None
null
[ "CWE-310" ]
CVE-2014-8275
OpenSSL before 0.9.8zd, 1.0.0 before 1.0.0p, and 1.0.1 before 1.0.1k does not enforce certain constraints on certificate data, which allows remote attackers to defeat a fingerprint-based certificate-blacklist protection mechanism by including crafted data within a certificate's unsigned portion, related to crypto/asn1/a_verify.c, crypto/dsa/dsa_asn1.c, crypto/ecdsa/ecs_vrf.c, and crypto/x509/x_all.c.
https://nvd.nist.gov/vuln/detail/CVE-2014-8275
9,318
krb5
102bb6ebf20f9174130c85c3b052ae104e5073ec
https://github.com/krb5/krb5
https://github.com/krb5/krb5/commit/102bb6ebf20f9174130c85c3b052ae104e5073ec
Fix krb5_read_message handling [CVE-2014-5355] In recvauth_common, do not use strcmp against the data fields of krb5_data objects populated by krb5_read_message(), as there is no guarantee that they are C strings. Instead, create an expected krb5_data value and use data_eq(). In the sample user-to-user server application, check that the received client principal name is null-terminated before using it with printf and krb5_parse_name. CVE-2014-5355: In MIT krb5, when a server process uses the krb5_recvauth function, an unauthenticated remote attacker can cause a NULL dereference by sending a zero-byte version string, or a read beyond the end of allocated storage by sending a non-null-terminated version string. The example user-to-user server application (uuserver) is similarly vulnerable to a zero-length or non-null-terminated principal name string. The krb5_recvauth function reads two version strings from the client using krb5_read_message(), which produces a krb5_data structure containing a length and a pointer to an octet sequence. krb5_recvauth assumes that the data pointer is a valid C string and passes it to strcmp() to verify the versions. If the client sends an empty octet sequence, the data pointer will be NULL and strcmp() will dereference a NULL pointer, causing the process to crash. If the client sends a non-null-terminated octet sequence, strcmp() will read beyond the end of the allocated storage, possibly causing the process to crash. uuserver similarly uses krb5_read_message() to read a client principal name, and then passes it to printf() and krb5_parse_name() without verifying that it is a valid C string. The krb5_recvauth function is used by kpropd and the Kerberized versions of the BSD rlogin and rsh daemons. These daemons are usually run out of inetd or in a mode which forks before processing incoming connections, so a process crash will generally not result in a complete denial of service. Thanks to Tim Uglow for discovering this issue. CVSSv2: AV:N/AC:L/Au:N/C:N/I:N/A:P/E:POC/RL:OF/RC:C [tlyu@mit.edu: CVSS score] ticket: 8050 (new) target_version: 1.13.1 tags: pullup
1
recvauth_common(krb5_context context, krb5_auth_context * auth_context, /* IN */ krb5_pointer fd, char *appl_version, krb5_principal server, krb5_int32 flags, krb5_keytab keytab, /* OUT */ krb5_ticket ** ticket, krb5_data *version) { krb5_auth_context new_auth_context; krb5_flags ap_option = 0; krb5_error_code retval, problem; krb5_data inbuf; krb5_data outbuf; krb5_rcache rcache = 0; krb5_octet response; krb5_data null_server; int need_error_free = 0; int local_rcache = 0, local_authcon = 0; /* * Zero out problem variable. If problem is set at the end of * the intial version negotiation section, it means that we * need to send an error code back to the client application * and exit. */ problem = 0; response = 0; if (!(flags & KRB5_RECVAUTH_SKIP_VERSION)) { /* * First read the sendauth version string and check it. */ if ((retval = krb5_read_message(context, fd, &inbuf))) return(retval); if (strcmp(inbuf.data, sendauth_version)) { problem = KRB5_SENDAUTH_BADAUTHVERS; response = 1; } free(inbuf.data); } if (flags & KRB5_RECVAUTH_BADAUTHVERS) { problem = KRB5_SENDAUTH_BADAUTHVERS; response = 1; } /* * Do the same thing for the application version string. */ if ((retval = krb5_read_message(context, fd, &inbuf))) return(retval); if (appl_version && strcmp(inbuf.data, appl_version)) { if (!problem) { problem = KRB5_SENDAUTH_BADAPPLVERS; response = 2; } } if (version && !problem) *version = inbuf; else free(inbuf.data); /* * Now we actually write the response. If the response is non-zero, * exit with a return value of problem */ if ((krb5_net_write(context, *((int *)fd), (char *)&response, 1)) < 0) { return(problem); /* We'll return the top-level problem */ } if (problem) return(problem); /* We are clear of errors here */ /* * Now, let's read the AP_REQ message and decode it */ if ((retval = krb5_read_message(context, fd, &inbuf))) return retval; if (*auth_context == NULL) { problem = krb5_auth_con_init(context, &new_auth_context); *auth_context = new_auth_context; local_authcon = 1; } krb5_auth_con_getrcache(context, *auth_context, &rcache); if ((!problem) && rcache == NULL) { /* * Setup the replay cache. */ if (server != NULL && server->length > 0) { problem = krb5_get_server_rcache(context, &server->data[0], &rcache); } else { null_server.length = 7; null_server.data = "default"; problem = krb5_get_server_rcache(context, &null_server, &rcache); } if (!problem) problem = krb5_auth_con_setrcache(context, *auth_context, rcache); local_rcache = 1; } if (!problem) { problem = krb5_rd_req(context, auth_context, &inbuf, server, keytab, &ap_option, ticket); free(inbuf.data); } /* * If there was a problem, send back a krb5_error message, * preceeded by the length of the krb5_error message. If * everything's ok, send back 0 for the length. */ if (problem) { krb5_error error; const char *message; memset(&error, 0, sizeof(error)); krb5_us_timeofday(context, &error.stime, &error.susec); if(server) error.server = server; else { /* If this fails - ie. ENOMEM we are hosed we cannot even send the error if we wanted to... 
*/ (void) krb5_parse_name(context, "????", &error.server); need_error_free = 1; } error.error = problem - ERROR_TABLE_BASE_krb5; if (error.error > 127) error.error = KRB_ERR_GENERIC; message = error_message(problem); error.text.length = strlen(message) + 1; error.text.data = strdup(message); if (!error.text.data) { retval = ENOMEM; goto cleanup; } if ((retval = krb5_mk_error(context, &error, &outbuf))) { free(error.text.data); goto cleanup; } free(error.text.data); if(need_error_free) krb5_free_principal(context, error.server); } else { outbuf.length = 0; outbuf.data = 0; } retval = krb5_write_message(context, fd, &outbuf); if (outbuf.data) { free(outbuf.data); /* We sent back an error, we need cleanup then return */ retval = problem; goto cleanup; } if (retval) goto cleanup; /* Here lies the mutual authentication stuff... */ if ((ap_option & AP_OPTS_MUTUAL_REQUIRED)) { if ((retval = krb5_mk_rep(context, *auth_context, &outbuf))) { return(retval); } retval = krb5_write_message(context, fd, &outbuf); free(outbuf.data); } cleanup:; if (retval) { if (local_authcon) { krb5_auth_con_free(context, *auth_context); } else if (local_rcache && rcache != NULL) { krb5_rc_close(context, rcache); krb5_auth_con_setrcache(context, *auth_context, NULL); } } return retval; }
146,562,876,660,269,340,000,000,000,000,000,000,000
recvauth.c
238,692,121,563,403,830,000,000,000,000,000,000,000
[ "CWE-703" ]
CVE-2014-5355
MIT Kerberos 5 (aka krb5) through 1.13.1 incorrectly expects that a krb5_read_message data field is represented as a string ending with a '\0' character, which allows remote attackers to (1) cause a denial of service (NULL pointer dereference) via a zero-byte version string or (2) cause a denial of service (out-of-bounds read) by omitting the '\0' character, related to appl/user_user/server.c and lib/krb5/krb/recvauth.c.
https://nvd.nist.gov/vuln/detail/CVE-2014-5355
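The recvauth entry above treats a {length, pointer} message read from the network as a NUL-terminated C string. Here is a small sketch of the safer, length-checked comparison the commit message describes; the stand-in type, the placeholder version string, and the assumption that the trailing NUL is counted in the length are simplifications, not statements about the real protocol.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct msg_buf { unsigned int length; char *data; };  /* stand-in for krb5_data */

static bool msg_eq_version(const struct msg_buf *m, const char *version)
{
    size_t vlen = strlen(version) + 1;    /* assume the wire form includes the NUL */
    return m->data != NULL && m->length == vlen &&
           memcmp(m->data, version, vlen) == 0;
}

int main(void)
{
    struct msg_buf empty = { 0, NULL };   /* zero-byte message from a client */
    printf("zero-byte version accepted: %s\n",
           msg_eq_version(&empty, "EXAMPLE_SENDAUTH_V1") ? "yes" : "no");
    return 0;
}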
9,322
krb5
82dc33da50338ac84c7b4102dc6513d897d0506a
https://github.com/krb5/krb5
https://github.com/krb5/krb5/commit/82dc33da50338ac84c7b4102dc6513d897d0506a
Fix gss_process_context_token() [CVE-2014-5352] [MITKRB5-SA-2015-001] The krb5 gss_process_context_token() should not actually delete the context; that leaves the caller with a dangling pointer and no way to know that it is invalid. Instead, mark the context as terminated, and check for terminated contexts in the GSS functions which expect established contexts. Also add checks in export_sec_context and pseudo_random, and adjust t_prf.c for the pseudo_random check. ticket: 8055 (new) target_version: 1.13.1 tags: pullup
1
krb5_gss_process_context_token(minor_status, context_handle, token_buffer) OM_uint32 *minor_status; gss_ctx_id_t context_handle; gss_buffer_t token_buffer; { krb5_gss_ctx_id_rec *ctx; OM_uint32 majerr; ctx = (krb5_gss_ctx_id_t) context_handle; if (! ctx->established) { *minor_status = KG_CTX_INCOMPLETE; return(GSS_S_NO_CONTEXT); } /* "unseal" the token */ if (GSS_ERROR(majerr = kg_unseal(minor_status, context_handle, token_buffer, GSS_C_NO_BUFFER, NULL, NULL, KG_TOK_DEL_CTX))) return(majerr); /* that's it. delete the context */ return(krb5_gss_delete_sec_context(minor_status, &context_handle, GSS_C_NO_BUFFER)); }
158,561,950,850,829,050,000,000,000,000,000,000,000
process_context_token.c
268,638,132,235,201,940,000,000,000,000,000,000,000
[ "CWE-703" ]
CVE-2014-5352
The krb5_gss_process_context_token function in lib/gssapi/krb5/process_context_token.c in the libgssapi_krb5 library in MIT Kerberos 5 (aka krb5) through 1.11.5, 1.12.x through 1.12.2, and 1.13.x before 1.13.1 does not properly maintain security-context handles, which allows remote authenticated users to cause a denial of service (use-after-free and double free, and daemon crash) or possibly execute arbitrary code via crafted GSSAPI traffic, as demonstrated by traffic to kadmind.
https://nvd.nist.gov/vuln/detail/CVE-2014-5352
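For the entry above, the commit message replaces "delete the context" with "mark it terminated and have later entry points reject it", so the caller's handle never dangles. The toy model below captures that state-machine idea only; it does not mirror the real GSSAPI types or status codes.

#include <stdbool.h>
#include <stdio.h>

struct ctx { bool established; bool terminated; };

static int process_token(struct ctx *c)
{
    if (!c->established || c->terminated)
        return -1;                 /* GSS_S_NO_CONTEXT in the real API */
    c->terminated = true;          /* do not free; the caller still owns the handle */
    return 0;
}

static int wrap_message(struct ctx *c)
{
    if (!c->established || c->terminated)
        return -1;                 /* terminated contexts are no longer usable */
    return 0;
}

int main(void)
{
    struct ctx c = { true, false };
    printf("process token:        %d\n", process_token(&c));   /* 0  */
    printf("wrap after terminate: %d\n", wrap_message(&c));    /* -1 */
    return 0;
}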
9,325
linux
5d26a105b5a73e5635eae0629b42fa0a90e07b7b
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/5d26a105b5a73e5635eae0629b42fa0a90e07b7b
crypto: prefix module autoloading with "crypto-" This prefixes all crypto module loading with "crypto-" so we never run the risk of exposing module auto-loading to userspace via a crypto API, as demonstrated by Mathias Krause: https://lkml.org/lkml/2013/3/4/70 Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
1
struct crypto_alg *crypto_larval_lookup(const char *name, u32 type, u32 mask) { struct crypto_alg *alg; if (!name) return ERR_PTR(-ENOENT); mask &= ~(CRYPTO_ALG_LARVAL | CRYPTO_ALG_DEAD); type &= mask; alg = crypto_alg_lookup(name, type, mask); if (!alg) { request_module("%s", name); if (!((type ^ CRYPTO_ALG_NEED_FALLBACK) & mask & CRYPTO_ALG_NEED_FALLBACK)) request_module("%s-all", name); alg = crypto_alg_lookup(name, type, mask); } if (alg) return crypto_is_larval(alg) ? crypto_larval_wait(alg) : alg; return crypto_larval_add(name, type, mask); }
324,790,015,319,107,200,000,000,000,000,000,000,000
api.c
242,936,207,862,042,000,000,000,000,000,000,000,000
[ "CWE-264" ]
CVE-2013-7421
The Crypto API in the Linux kernel before 3.18.5 allows local users to load arbitrary kernel modules via a bind system call for an AF_ALG socket with a module name in the salg_name field, a different vulnerability than CVE-2014-9644.
https://nvd.nist.gov/vuln/detail/CVE-2013-7421
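The Crypto API entry above is fixed upstream by prefixing module requests with "crypto-", so a user-controlled algorithm name can only resolve to modules carrying a crypto-* alias. The sketch below merely mocks request_module() to show the string construction; it is not kernel code.

#include <stdio.h>

/* Stand-in for the kernel's request_module(); it only prints what
 * would be requested. */
static void mock_request_module(const char *fmt, const char *name)
{
    printf("would load: ");
    printf(fmt, name);
    printf("\n");
}

int main(void)
{
    const char *user_supplied = "some-alg";
    mock_request_module("crypto-%s", user_supplied);      /* prefixed lookup      */
    mock_request_module("crypto-%s-all", user_supplied);  /* prefixed fallback    */
    return 0;
}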
9,326
linux
bf911e985d6bbaa328c20c3e05f4eb03de11fdd6
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/bf911e985d6bbaa328c20c3e05f4eb03de11fdd6
sctp: validate chunk len before actually using it Andrey Konovalov reported that KASAN detected that SCTP was using a slab beyond the boundaries. It was caused because when handling out of the blue packets in function sctp_sf_ootb() it was checking the chunk len only after already processing the first chunk, validating only for the 2nd and subsequent ones. The fix is to just move the check upwards so it's also validated for the 1st chunk. Reported-by: Andrey Konovalov <andreyknvl@google.com> Tested-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Reviewed-by: Xin Long <lucien.xin@gmail.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
1
sctp_disposition_t sctp_sf_ootb(struct net *net, const struct sctp_endpoint *ep, const struct sctp_association *asoc, const sctp_subtype_t type, void *arg, sctp_cmd_seq_t *commands) { struct sctp_chunk *chunk = arg; struct sk_buff *skb = chunk->skb; sctp_chunkhdr_t *ch; sctp_errhdr_t *err; __u8 *ch_end; int ootb_shut_ack = 0; int ootb_cookie_ack = 0; SCTP_INC_STATS(net, SCTP_MIB_OUTOFBLUES); ch = (sctp_chunkhdr_t *) chunk->chunk_hdr; do { /* Report violation if the chunk is less then minimal */ if (ntohs(ch->length) < sizeof(sctp_chunkhdr_t)) return sctp_sf_violation_chunklen(net, ep, asoc, type, arg, commands); /* Now that we know we at least have a chunk header, * do things that are type appropriate. */ if (SCTP_CID_SHUTDOWN_ACK == ch->type) ootb_shut_ack = 1; /* RFC 2960, Section 3.3.7 * Moreover, under any circumstances, an endpoint that * receives an ABORT MUST NOT respond to that ABORT by * sending an ABORT of its own. */ if (SCTP_CID_ABORT == ch->type) return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands); /* RFC 8.4, 7) If the packet contains a "Stale cookie" ERROR * or a COOKIE ACK the SCTP Packet should be silently * discarded. */ if (SCTP_CID_COOKIE_ACK == ch->type) ootb_cookie_ack = 1; if (SCTP_CID_ERROR == ch->type) { sctp_walk_errors(err, ch) { if (SCTP_ERROR_STALE_COOKIE == err->cause) { ootb_cookie_ack = 1; break; } } } /* Report violation if chunk len overflows */ ch_end = ((__u8 *)ch) + SCTP_PAD4(ntohs(ch->length)); if (ch_end > skb_tail_pointer(skb)) return sctp_sf_violation_chunklen(net, ep, asoc, type, arg, commands); ch = (sctp_chunkhdr_t *) ch_end; } while (ch_end < skb_tail_pointer(skb)); if (ootb_shut_ack) return sctp_sf_shut_8_4_5(net, ep, asoc, type, arg, commands); else if (ootb_cookie_ack) return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands); else return sctp_sf_tabort_8_4_8(net, ep, asoc, type, arg, commands); }
324,777,000,170,628,950,000,000,000,000,000,000,000
sm_statefuns.c
57,717,278,456,982,200,000,000,000,000,000,000,000
[ "CWE-125" ]
CVE-2016-9555
The sctp_sf_ootb function in net/sctp/sm_statefuns.c in the Linux kernel before 4.8.8 lacks chunk-length checking for the first chunk, which allows remote attackers to cause a denial of service (out-of-bounds slab access) or possibly have unspecified other impact via crafted SCTP data.
https://nvd.nist.gov/vuln/detail/CVE-2016-9555
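The SCTP entry above acted on the first chunk before checking that its declared length fit inside the packet. The standalone model below walks a byte buffer and validates the declared length before touching any chunk contents, which is the ordering the commit message describes; layout, byte order, and padding are simplified.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct chunk_hdr { uint8_t type; uint8_t flags; uint16_t length; };  /* host order here */

static int walk_chunks(const uint8_t *pkt, size_t pkt_len)
{
    size_t off = 0;
    while (off + sizeof(struct chunk_hdr) <= pkt_len) {
        struct chunk_hdr ch;
        memcpy(&ch, pkt + off, sizeof(ch));
        if (ch.length < sizeof(struct chunk_hdr))
            return -1;                    /* shorter than the header itself        */
        if (off + ch.length > pkt_len)
            return -1;                    /* declared length overflows the packet  */
        /* ... only now is it safe to inspect the chunk contents ... */
        off += ch.length;                 /* (real SCTP also pads to 4 bytes)      */
    }
    return 0;
}

int main(void)
{
    struct chunk_hdr bad = { .type = 6, .flags = 0, .length = 200 };  /* claims 200 bytes */
    uint8_t pkt[sizeof(bad)];
    memcpy(pkt, &bad, sizeof(bad));
    printf("bogus first chunk rejected: %s\n",
           walk_chunks(pkt, sizeof(pkt)) < 0 ? "yes" : "no");
    return 0;
}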
9,328
libtiff
83a4b92815ea04969d494416eaae3d4c6b338e4a
https://github.com/vadz/libtiff
https://github.com/vadz/libtiff/commit/83a4b92815ea04969d494416eaae3d4c6b338e4a#diff-5be5ce02d0dea67050d5b2a10102d1ba
* tools/tiffcrop.c: fix various out-of-bounds write vulnerabilities in heap or stack allocated buffers. Reported as MSVR 35093, MSVR 35096 and MSVR 35097. Discovered by Axel Souchet and Vishal Chauhan from the MSRC Vulnerabilities & Mitigations team. * tools/tiff2pdf.c: fix out-of-bounds write vulnerabilities in heap allocate buffer in t2p_process_jpeg_strip(). Reported as MSVR 35098. Discovered by Axel Souchet and Vishal Chauhan from the MSRC Vulnerabilities & Mitigations team. * libtiff/tif_pixarlog.c: fix out-of-bounds write vulnerabilities in heap allocated buffers. Reported as MSVR 35094. Discovered by Axel Souchet and Vishal Chauhan from the MSRC Vulnerabilities & Mitigations team. * libtiff/tif_write.c: fix issue in error code path of TIFFFlushData1() that didn't reset the tif_rawcc and tif_rawcp members. I'm not completely sure if that could happen in practice outside of the odd behaviour of t2p_seekproc() of tiff2pdf). The report points that a better fix could be to check the return value of TIFFFlushData1() in places where it isn't done currently, but it seems this patch is enough. Reported as MSVR 35095. Discovered by Axel Souchet & Vishal Chauhan & Suha Can from the MSRC Vulnerabilities & Mitigations team.
1
TIFFFlushData1(TIFF* tif) { if (tif->tif_rawcc > 0 && tif->tif_flags & TIFF_BUF4WRITE ) { if (!isFillOrder(tif, tif->tif_dir.td_fillorder) && (tif->tif_flags & TIFF_NOBITREV) == 0) TIFFReverseBits((uint8*)tif->tif_rawdata, tif->tif_rawcc); if (!TIFFAppendToStrip(tif, isTiled(tif) ? tif->tif_curtile : tif->tif_curstrip, tif->tif_rawdata, tif->tif_rawcc)) return (0); tif->tif_rawcc = 0; tif->tif_rawcp = tif->tif_rawdata; } return (1); }
323,177,524,682,187,800,000,000,000,000,000,000,000
None
null
[ "CWE-787" ]
CVE-2016-9534
tif_write.c in libtiff 4.0.6 has an issue in the error code path of TIFFFlushData1() that didn't reset the tif_rawcc and tif_rawcp members. Reported as MSVR 35095, aka "TIFFFlushData1 heap-buffer-overflow."
https://nvd.nist.gov/vuln/detail/CVE-2016-9534
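For the TIFFFlushData1 entry above, the commit message notes that the error path must still reset the raw-buffer count and cursor so stale state is not reused later. A small sketch of that pattern with simplified stand-in types (not libtiff's) follows.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct out_state { uint8_t *buf; uint8_t *cursor; size_t count; };

/* Pretend the underlying write always fails, to exercise the error path. */
static bool append_to_strip(struct out_state *s) { (void)s; return false; }

static int flush_data(struct out_state *s)
{
    if (s->count > 0) {
        bool ok = append_to_strip(s);
        s->count = 0;          /* reset even on failure ...                */
        s->cursor = s->buf;    /* ... so stale counters are never reused   */
        if (!ok)
            return 0;
    }
    return 1;
}

int main(void)
{
    uint8_t raw[16];
    struct out_state s = { raw, raw + 7, 7 };
    printf("flush result: %d, count after: %zu\n", flush_data(&s), s.count);
    return 0;
}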
9,330
linux
05692d7005a364add85c6e25a6c4447ce08f913a
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/05692d7005a364add85c6e25a6c4447ce08f913a
vfio/pci: Fix integer overflows, bitmask check The VFIO_DEVICE_SET_IRQS ioctl did not sufficiently sanitize user-supplied integers, potentially allowing memory corruption. This patch adds appropriate integer overflow checks, checks the range bounds for VFIO_IRQ_SET_DATA_NONE, and also verifies that only single element in the VFIO_IRQ_SET_DATA_TYPE_MASK bitmask is set. VFIO_IRQ_SET_ACTION_TYPE_MASK is already correctly checked later in vfio_pci_set_irqs_ioctl(). Furthermore, a kzalloc is changed to a kcalloc because the use of a kzalloc with an integer multiplication allowed an integer overflow condition to be reached without this patch. kcalloc checks for overflow and should prevent a similar occurrence. Signed-off-by: Vlad Tsyrklevich <vlad@tsyrklevich.net> Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
1
static int vfio_msi_enable(struct vfio_pci_device *vdev, int nvec, bool msix) { struct pci_dev *pdev = vdev->pdev; unsigned int flag = msix ? PCI_IRQ_MSIX : PCI_IRQ_MSI; int ret; if (!is_irq_none(vdev)) return -EINVAL; vdev->ctx = kzalloc(nvec * sizeof(struct vfio_pci_irq_ctx), GFP_KERNEL); if (!vdev->ctx) return -ENOMEM; /* return the number of supported vectors if we can't get all: */ ret = pci_alloc_irq_vectors(pdev, 1, nvec, flag); if (ret < nvec) { if (ret > 0) pci_free_irq_vectors(pdev); kfree(vdev->ctx); return ret; } vdev->num_ctx = nvec; vdev->irq_type = msix ? VFIO_PCI_MSIX_IRQ_INDEX : VFIO_PCI_MSI_IRQ_INDEX; if (!msix) { /* * Compute the virtual hardware field for max msi vectors - * it is the log base 2 of the number of vectors. */ vdev->msi_qmax = fls(nvec * 2 - 1) - 1; } return 0; }
265,085,938,067,485,300,000,000,000,000,000,000,000
vfio_pci_intrs.c
317,782,573,120,508,840,000,000,000,000,000,000,000
[ "CWE-190" ]
CVE-2016-9083
drivers/vfio/pci/vfio_pci.c in the Linux kernel through 4.8.11 allows local users to bypass integer overflow checks, and cause a denial of service (memory corruption) or have unspecified other impact, by leveraging access to a vfio PCI device file for a VFIO_DEVICE_SET_IRQS ioctl call, aka a "state machine confusion bug."
https://nvd.nist.gov/vuln/detail/CVE-2016-9083
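The VFIO entry above allocates nvec * sizeof(...) without an overflow check; the commit message swaps the kzalloc for kcalloc, which rejects overflowing requests. A userspace analogue using calloc (plus an explicit SIZE_MAX check) follows; the helper name is hypothetical.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static void *alloc_array_checked(size_t count, size_t elem_size)
{
    if (elem_size != 0 && count > SIZE_MAX / elem_size)
        return NULL;                     /* multiplication would overflow     */
    return calloc(count, elem_size);     /* calloc performs the same check    */
}

int main(void)
{
    void *p = alloc_array_checked((size_t)-1, 64);
    printf("overflowing request -> %p\n", p);   /* expected: NULL */
    free(p);
    return 0;
}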
9,332
linux
f5527fffff3f002b0a6b376163613b82f69de073
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/f5527fffff3f002b0a6b376163613b82f69de073
mpi: Fix NULL ptr dereference in mpi_powm() [ver #3] This fixes CVE-2016-8650. If mpi_powm() is given a zero exponent, it wants to immediately return either 1 or 0, depending on the modulus. However, if the result was initalised with zero limb space, no limbs space is allocated and a NULL-pointer exception ensues. Fix this by allocating a minimal amount of limb space for the result when the 0-exponent case when the result is 1 and not touching the limb space when the result is 0. This affects the use of RSA keys and X.509 certificates that carry them. BUG: unable to handle kernel NULL pointer dereference at (null) IP: [<ffffffff8138ce5d>] mpi_powm+0x32/0x7e6 PGD 0 Oops: 0002 [#1] SMP Modules linked in: CPU: 3 PID: 3014 Comm: keyctl Not tainted 4.9.0-rc6-fscache+ #278 Hardware name: ASUS All Series/H97-PLUS, BIOS 2306 10/09/2014 task: ffff8804011944c0 task.stack: ffff880401294000 RIP: 0010:[<ffffffff8138ce5d>] [<ffffffff8138ce5d>] mpi_powm+0x32/0x7e6 RSP: 0018:ffff880401297ad8 EFLAGS: 00010212 RAX: 0000000000000000 RBX: ffff88040868bec0 RCX: ffff88040868bba0 RDX: ffff88040868b260 RSI: ffff88040868bec0 RDI: ffff88040868bee0 RBP: ffff880401297ba8 R08: 0000000000000000 R09: 0000000000000000 R10: 0000000000000047 R11: ffffffff8183b210 R12: 0000000000000000 R13: ffff8804087c7600 R14: 000000000000001f R15: ffff880401297c50 FS: 00007f7a7918c700(0000) GS:ffff88041fb80000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000000000000000 CR3: 0000000401250000 CR4: 00000000001406e0 Stack: ffff88040868bec0 0000000000000020 ffff880401297b00 ffffffff81376cd4 0000000000000100 ffff880401297b10 ffffffff81376d12 ffff880401297b30 ffffffff81376f37 0000000000000100 0000000000000000 ffff880401297ba8 Call Trace: [<ffffffff81376cd4>] ? __sg_page_iter_next+0x43/0x66 [<ffffffff81376d12>] ? sg_miter_get_next_page+0x1b/0x5d [<ffffffff81376f37>] ? sg_miter_next+0x17/0xbd [<ffffffff8138ba3a>] ? mpi_read_raw_from_sgl+0xf2/0x146 [<ffffffff8132a95c>] rsa_verify+0x9d/0xee [<ffffffff8132acca>] ? pkcs1pad_sg_set_buf+0x2e/0xbb [<ffffffff8132af40>] pkcs1pad_verify+0xc0/0xe1 [<ffffffff8133cb5e>] public_key_verify_signature+0x1b0/0x228 [<ffffffff8133d974>] x509_check_for_self_signed+0xa1/0xc4 [<ffffffff8133cdde>] x509_cert_parse+0x167/0x1a1 [<ffffffff8133d609>] x509_key_preparse+0x21/0x1a1 [<ffffffff8133c3d7>] asymmetric_key_preparse+0x34/0x61 [<ffffffff812fc9f3>] key_create_or_update+0x145/0x399 [<ffffffff812fe227>] SyS_add_key+0x154/0x19e [<ffffffff81001c2b>] do_syscall_64+0x80/0x191 [<ffffffff816825e4>] entry_SYSCALL64_slow_path+0x25/0x25 Code: 56 41 55 41 54 53 48 81 ec a8 00 00 00 44 8b 71 04 8b 42 04 4c 8b 67 18 45 85 f6 89 45 80 0f 84 b4 06 00 00 85 c0 75 2f 41 ff ce <49> c7 04 24 01 00 00 00 b0 01 75 0b 48 8b 41 18 48 83 38 01 0f RIP [<ffffffff8138ce5d>] mpi_powm+0x32/0x7e6 RSP <ffff880401297ad8> CR2: 0000000000000000 ---[ end trace d82015255d4a5d8d ]--- Basically, this is a backport of a libgcrypt patch: http://git.gnupg.org/cgi-bin/gitweb.cgi?p=libgcrypt.git;a=patch;h=6e1adb05d290aeeb1c230c763970695f4a538526 Fixes: cdec9cb5167a ("crypto: GnuPG based MPI lib - source files (part 1)") Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Signed-off-by: David Howells <dhowells@redhat.com> cc: Dmitry Kasatkin <dmitry.kasatkin@gmail.com> cc: linux-ima-devel@lists.sourceforge.net cc: stable@vger.kernel.org Signed-off-by: James Morris <james.l.morris@oracle.com>
1
int mpi_powm(MPI res, MPI base, MPI exp, MPI mod) { mpi_ptr_t mp_marker = NULL, bp_marker = NULL, ep_marker = NULL; mpi_ptr_t xp_marker = NULL; mpi_ptr_t tspace = NULL; mpi_ptr_t rp, ep, mp, bp; mpi_size_t esize, msize, bsize, rsize; int esign, msign, bsign, rsign; mpi_size_t size; int mod_shift_cnt; int negative_result; int assign_rp = 0; mpi_size_t tsize = 0; /* to avoid compiler warning */ /* fixme: we should check that the warning is void */ int rc = -ENOMEM; esize = exp->nlimbs; msize = mod->nlimbs; size = 2 * msize; esign = exp->sign; msign = mod->sign; rp = res->d; ep = exp->d; if (!msize) return -EINVAL; if (!esize) { /* Exponent is zero, result is 1 mod MOD, i.e., 1 or 0 * depending on if MOD equals 1. */ rp[0] = 1; res->nlimbs = (msize == 1 && mod->d[0] == 1) ? 0 : 1; res->sign = 0; goto leave; } /* Normalize MOD (i.e. make its most significant bit set) as required by * mpn_divrem. This will make the intermediate values in the calculation * slightly larger, but the correct result is obtained after a final * reduction using the original MOD value. */ mp = mp_marker = mpi_alloc_limb_space(msize); if (!mp) goto enomem; mod_shift_cnt = count_leading_zeros(mod->d[msize - 1]); if (mod_shift_cnt) mpihelp_lshift(mp, mod->d, msize, mod_shift_cnt); else MPN_COPY(mp, mod->d, msize); bsize = base->nlimbs; bsign = base->sign; if (bsize > msize) { /* The base is larger than the module. Reduce it. */ /* Allocate (BSIZE + 1) with space for remainder and quotient. * (The quotient is (bsize - msize + 1) limbs.) */ bp = bp_marker = mpi_alloc_limb_space(bsize + 1); if (!bp) goto enomem; MPN_COPY(bp, base->d, bsize); /* We don't care about the quotient, store it above the remainder, * at BP + MSIZE. */ mpihelp_divrem(bp + msize, 0, bp, bsize, mp, msize); bsize = msize; /* Canonicalize the base, since we are going to multiply with it * quite a few times. */ MPN_NORMALIZE(bp, bsize); } else bp = base->d; if (!bsize) { res->nlimbs = 0; res->sign = 0; goto leave; } if (res->alloced < size) { /* We have to allocate more space for RES. If any of the input * parameters are identical to RES, defer deallocation of the old * space. */ if (rp == ep || rp == mp || rp == bp) { rp = mpi_alloc_limb_space(size); if (!rp) goto enomem; assign_rp = 1; } else { if (mpi_resize(res, size) < 0) goto enomem; rp = res->d; } } else { /* Make BASE, EXP and MOD not overlap with RES. */ if (rp == bp) { /* RES and BASE are identical. Allocate temp. space for BASE. */ BUG_ON(bp_marker); bp = bp_marker = mpi_alloc_limb_space(bsize); if (!bp) goto enomem; MPN_COPY(bp, rp, bsize); } if (rp == ep) { /* RES and EXP are identical. Allocate temp. space for EXP. */ ep = ep_marker = mpi_alloc_limb_space(esize); if (!ep) goto enomem; MPN_COPY(ep, rp, esize); } if (rp == mp) { /* RES and MOD are identical. Allocate temporary space for MOD. */ BUG_ON(mp_marker); mp = mp_marker = mpi_alloc_limb_space(msize); if (!mp) goto enomem; MPN_COPY(mp, rp, msize); } } MPN_COPY(rp, bp, bsize); rsize = bsize; rsign = bsign; { mpi_size_t i; mpi_ptr_t xp; int c; mpi_limb_t e; mpi_limb_t carry_limb; struct karatsuba_ctx karactx; xp = xp_marker = mpi_alloc_limb_space(2 * (msize + 1)); if (!xp) goto enomem; memset(&karactx, 0, sizeof karactx); negative_result = (ep[0] & 1) && base->sign; i = esize - 1; e = ep[i]; c = count_leading_zeros(e); e = (e << c) << 1; /* shift the exp bits to the left, lose msb */ c = BITS_PER_MPI_LIMB - 1 - c; /* Main loop. * * Make the result be pointed to alternately by XP and RP. 
This * helps us avoid block copying, which would otherwise be necessary * with the overlap restrictions of mpihelp_divmod. With 50% probability * the result after this loop will be in the area originally pointed * by RP (==RES->d), and with 50% probability in the area originally * pointed to by XP. */ for (;;) { while (c) { mpi_ptr_t tp; mpi_size_t xsize; /*if (mpihelp_mul_n(xp, rp, rp, rsize) < 0) goto enomem */ if (rsize < KARATSUBA_THRESHOLD) mpih_sqr_n_basecase(xp, rp, rsize); else { if (!tspace) { tsize = 2 * rsize; tspace = mpi_alloc_limb_space(tsize); if (!tspace) goto enomem; } else if (tsize < (2 * rsize)) { mpi_free_limb_space(tspace); tsize = 2 * rsize; tspace = mpi_alloc_limb_space(tsize); if (!tspace) goto enomem; } mpih_sqr_n(xp, rp, rsize, tspace); } xsize = 2 * rsize; if (xsize > msize) { mpihelp_divrem(xp + msize, 0, xp, xsize, mp, msize); xsize = msize; } tp = rp; rp = xp; xp = tp; rsize = xsize; if ((mpi_limb_signed_t) e < 0) { /*mpihelp_mul( xp, rp, rsize, bp, bsize ); */ if (bsize < KARATSUBA_THRESHOLD) { mpi_limb_t tmp; if (mpihelp_mul (xp, rp, rsize, bp, bsize, &tmp) < 0) goto enomem; } else { if (mpihelp_mul_karatsuba_case (xp, rp, rsize, bp, bsize, &karactx) < 0) goto enomem; } xsize = rsize + bsize; if (xsize > msize) { mpihelp_divrem(xp + msize, 0, xp, xsize, mp, msize); xsize = msize; } tp = rp; rp = xp; xp = tp; rsize = xsize; } e <<= 1; c--; } i--; if (i < 0) break; e = ep[i]; c = BITS_PER_MPI_LIMB; } /* We shifted MOD, the modulo reduction argument, left MOD_SHIFT_CNT * steps. Adjust the result by reducing it with the original MOD. * * Also make sure the result is put in RES->d (where it already * might be, see above). */ if (mod_shift_cnt) { carry_limb = mpihelp_lshift(res->d, rp, rsize, mod_shift_cnt); rp = res->d; if (carry_limb) { rp[rsize] = carry_limb; rsize++; } } else { MPN_COPY(res->d, rp, rsize); rp = res->d; } if (rsize >= msize) { mpihelp_divrem(rp + msize, 0, rp, rsize, mp, msize); rsize = msize; } /* Remove any leading zero words from the result. */ if (mod_shift_cnt) mpihelp_rshift(rp, rp, rsize, mod_shift_cnt); MPN_NORMALIZE(rp, rsize); mpihelp_release_karatsuba_ctx(&karactx); } if (negative_result && rsize) { if (mod_shift_cnt) mpihelp_rshift(mp, mp, msize, mod_shift_cnt); mpihelp_sub(rp, mp, msize, rp, rsize); rsize = msize; rsign = msign; MPN_NORMALIZE(rp, rsize); } res->nlimbs = rsize; res->sign = rsign; leave: rc = 0; enomem: if (assign_rp) mpi_assign_limb_space(res, rp, size); if (mp_marker) mpi_free_limb_space(mp_marker); if (bp_marker) mpi_free_limb_space(bp_marker); if (ep_marker) mpi_free_limb_space(ep_marker); if (xp_marker) mpi_free_limb_space(xp_marker); if (tspace) mpi_free_limb_space(tspace); return rc; }
52,745,785,378,836,690,000,000,000,000,000,000,000
mpi-pow.c
265,062,024,780,737,600,000,000,000,000,000,000,000
[ "CWE-20" ]
CVE-2016-8650
The mpi_powm function in lib/mpi/mpi-pow.c in the Linux kernel through 4.8.11 does not ensure that memory is allocated for limb data, which allows local users to cause a denial of service (stack memory corruption and panic) via an add_key system call for an RSA key with a zero exponent.
https://nvd.nist.gov/vuln/detail/CVE-2016-8650
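Illustrative sketch (not the upstream diff) for the zero-exponent branch of mpi_powm() above: that branch writes rp[0] before anything guarantees that res owns even one limb, which is the allocation gap the CVE entry describes. One hedged way to close it, reusing mpi_resize() exactly as the function already does elsewhere:

    if (!esize) {
            /* Result is 1 mod MOD (or 0 if MOD == 1); make sure the result
             * MPI actually has a limb to write into before touching rp[0]. */
            if (mpi_resize(res, 1) < 0)
                    goto enomem;
            rp = res->d;
            rp[0] = 1;
            res->nlimbs = (msize == 1 && mod->d[0] == 1) ? 0 : 1;
            res->sign = 0;
            goto leave;
    }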
9,333
linux
8148a73c9901a8794a50f950083c00ccf97d43b3
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/8148a73c9901a8794a50f950083c00ccf97d43b3
proc: prevent accessing /proc/<PID>/environ until it's ready If /proc/<PID>/environ gets read before the envp[] array is fully set up in create_{aout,elf,elf_fdpic,flat}_tables(), we might end up trying to read more bytes than are actually written, as env_start will already be set but env_end will still be zero, making the range calculation underflow, allowing to read beyond the end of what has been written. Fix this as it is done for /proc/<PID>/cmdline by testing env_end for zero. It is, apparently, intentionally set last in create_*_tables(). This bug was found by the PaX size_overflow plugin that detected the arithmetic underflow of 'this_len = env_end - (env_start + src)' when env_end is still zero. The expected consequence is that userland trying to access /proc/<PID>/environ of a not yet fully set up process may get inconsistent data as we're in the middle of copying in the environment variables. Fixes: https://forums.grsecurity.net/viewtopic.php?f=3&t=4363 Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=116461 Signed-off-by: Mathias Krause <minipli@googlemail.com> Cc: Emese Revfy <re.emese@gmail.com> Cc: Pax Team <pageexec@freemail.hu> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Mateusz Guzik <mguzik@redhat.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Cyrill Gorcunov <gorcunov@openvz.org> Cc: Jarod Wilson <jarod@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1
static ssize_t environ_read(struct file *file, char __user *buf, size_t count, loff_t *ppos) { char *page; unsigned long src = *ppos; int ret = 0; struct mm_struct *mm = file->private_data; unsigned long env_start, env_end; if (!mm) return 0; page = (char *)__get_free_page(GFP_TEMPORARY); if (!page) return -ENOMEM; ret = 0; if (!atomic_inc_not_zero(&mm->mm_users)) goto free; down_read(&mm->mmap_sem); env_start = mm->env_start; env_end = mm->env_end; up_read(&mm->mmap_sem); while (count > 0) { size_t this_len, max_len; int retval; if (src >= (env_end - env_start)) break; this_len = env_end - (env_start + src); max_len = min_t(size_t, PAGE_SIZE, count); this_len = min(max_len, this_len); retval = access_remote_vm(mm, (env_start + src), page, this_len, 0); if (retval <= 0) { ret = retval; break; } if (copy_to_user(buf, page, retval)) { ret = -EFAULT; break; } ret += retval; src += retval; buf += retval; count -= retval; } *ppos = src; mmput(mm); free: free_page((unsigned long) page); return ret; }
154,024,528,585,163,020,000,000,000,000,000,000,000
base.c
174,177,174,530,935,130,000,000,000,000,000,000,000
[ "CWE-362" ]
CVE-2016-7916
Race condition in the environ_read function in fs/proc/base.c in the Linux kernel before 4.5.4 allows local users to obtain sensitive information from kernel memory by reading a /proc/*/environ file during a process-setup time interval in which environment-variable copying is incomplete.
https://nvd.nist.gov/vuln/detail/CVE-2016-7916
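A minimal sketch of the guard the commit message describes for environ_read(): treat a still-zero mm->env_end as "environment not set up yet" and return early, in the same spirit as the existing !mm check. Illustrative; the upstream hunk may be placed slightly differently.

    /* Before mapping the page and taking mmap_sem: the execve() setup path
     * sets env_end last, so a zero value means the range is not yet valid. */
    if (!mm || !mm->env_end)
            return 0;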
9,334
linux
7bc2b55a5c030685b399bb65b6baa9ccc3d1f167
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/7bc2b55a5c030685b399bb65b6baa9ccc3d1f167
scsi: arcmsr: Buffer overflow in arcmsr_iop_message_xfer() We need to put an upper bound on "user_len" so the memcpy() doesn't overflow. Cc: <stable@vger.kernel.org> Reported-by: Marco Grassi <marco.gra@gmail.com> Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Reviewed-by: Tomas Henzl <thenzl@redhat.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
1
static int arcmsr_iop_message_xfer(struct AdapterControlBlock *acb, struct scsi_cmnd *cmd) { char *buffer; unsigned short use_sg; int retvalue = 0, transfer_len = 0; unsigned long flags; struct CMD_MESSAGE_FIELD *pcmdmessagefld; uint32_t controlcode = (uint32_t)cmd->cmnd[5] << 24 | (uint32_t)cmd->cmnd[6] << 16 | (uint32_t)cmd->cmnd[7] << 8 | (uint32_t)cmd->cmnd[8]; struct scatterlist *sg; use_sg = scsi_sg_count(cmd); sg = scsi_sglist(cmd); buffer = kmap_atomic(sg_page(sg)) + sg->offset; if (use_sg > 1) { retvalue = ARCMSR_MESSAGE_FAIL; goto message_out; } transfer_len += sg->length; if (transfer_len > sizeof(struct CMD_MESSAGE_FIELD)) { retvalue = ARCMSR_MESSAGE_FAIL; pr_info("%s: ARCMSR_MESSAGE_FAIL!\n", __func__); goto message_out; } pcmdmessagefld = (struct CMD_MESSAGE_FIELD *)buffer; switch (controlcode) { case ARCMSR_MESSAGE_READ_RQBUFFER: { unsigned char *ver_addr; uint8_t *ptmpQbuffer; uint32_t allxfer_len = 0; ver_addr = kmalloc(ARCMSR_API_DATA_BUFLEN, GFP_ATOMIC); if (!ver_addr) { retvalue = ARCMSR_MESSAGE_FAIL; pr_info("%s: memory not enough!\n", __func__); goto message_out; } ptmpQbuffer = ver_addr; spin_lock_irqsave(&acb->rqbuffer_lock, flags); if (acb->rqbuf_getIndex != acb->rqbuf_putIndex) { unsigned int tail = acb->rqbuf_getIndex; unsigned int head = acb->rqbuf_putIndex; unsigned int cnt_to_end = CIRC_CNT_TO_END(head, tail, ARCMSR_MAX_QBUFFER); allxfer_len = CIRC_CNT(head, tail, ARCMSR_MAX_QBUFFER); if (allxfer_len > ARCMSR_API_DATA_BUFLEN) allxfer_len = ARCMSR_API_DATA_BUFLEN; if (allxfer_len <= cnt_to_end) memcpy(ptmpQbuffer, acb->rqbuffer + tail, allxfer_len); else { memcpy(ptmpQbuffer, acb->rqbuffer + tail, cnt_to_end); memcpy(ptmpQbuffer + cnt_to_end, acb->rqbuffer, allxfer_len - cnt_to_end); } acb->rqbuf_getIndex = (acb->rqbuf_getIndex + allxfer_len) % ARCMSR_MAX_QBUFFER; } memcpy(pcmdmessagefld->messagedatabuffer, ver_addr, allxfer_len); if (acb->acb_flags & ACB_F_IOPDATA_OVERFLOW) { struct QBUFFER __iomem *prbuffer; acb->acb_flags &= ~ACB_F_IOPDATA_OVERFLOW; prbuffer = arcmsr_get_iop_rqbuffer(acb); if (arcmsr_Read_iop_rqbuffer_data(acb, prbuffer) == 0) acb->acb_flags |= ACB_F_IOPDATA_OVERFLOW; } spin_unlock_irqrestore(&acb->rqbuffer_lock, flags); kfree(ver_addr); pcmdmessagefld->cmdmessage.Length = allxfer_len; if (acb->fw_flag == FW_DEADLOCK) pcmdmessagefld->cmdmessage.ReturnCode = ARCMSR_MESSAGE_RETURNCODE_BUS_HANG_ON; else pcmdmessagefld->cmdmessage.ReturnCode = ARCMSR_MESSAGE_RETURNCODE_OK; break; } case ARCMSR_MESSAGE_WRITE_WQBUFFER: { unsigned char *ver_addr; int32_t user_len, cnt2end; uint8_t *pQbuffer, *ptmpuserbuffer; ver_addr = kmalloc(ARCMSR_API_DATA_BUFLEN, GFP_ATOMIC); if (!ver_addr) { retvalue = ARCMSR_MESSAGE_FAIL; goto message_out; } ptmpuserbuffer = ver_addr; user_len = pcmdmessagefld->cmdmessage.Length; memcpy(ptmpuserbuffer, pcmdmessagefld->messagedatabuffer, user_len); spin_lock_irqsave(&acb->wqbuffer_lock, flags); if (acb->wqbuf_putIndex != acb->wqbuf_getIndex) { struct SENSE_DATA *sensebuffer = (struct SENSE_DATA *)cmd->sense_buffer; arcmsr_write_ioctldata2iop(acb); /* has error report sensedata */ sensebuffer->ErrorCode = SCSI_SENSE_CURRENT_ERRORS; sensebuffer->SenseKey = ILLEGAL_REQUEST; sensebuffer->AdditionalSenseLength = 0x0A; sensebuffer->AdditionalSenseCode = 0x20; sensebuffer->Valid = 1; retvalue = ARCMSR_MESSAGE_FAIL; } else { pQbuffer = &acb->wqbuffer[acb->wqbuf_putIndex]; cnt2end = ARCMSR_MAX_QBUFFER - acb->wqbuf_putIndex; if (user_len > cnt2end) { memcpy(pQbuffer, ptmpuserbuffer, cnt2end); ptmpuserbuffer += cnt2end; user_len -= 
cnt2end; acb->wqbuf_putIndex = 0; pQbuffer = acb->wqbuffer; } memcpy(pQbuffer, ptmpuserbuffer, user_len); acb->wqbuf_putIndex += user_len; acb->wqbuf_putIndex %= ARCMSR_MAX_QBUFFER; if (acb->acb_flags & ACB_F_MESSAGE_WQBUFFER_CLEARED) { acb->acb_flags &= ~ACB_F_MESSAGE_WQBUFFER_CLEARED; arcmsr_write_ioctldata2iop(acb); } } spin_unlock_irqrestore(&acb->wqbuffer_lock, flags); kfree(ver_addr); if (acb->fw_flag == FW_DEADLOCK) pcmdmessagefld->cmdmessage.ReturnCode = ARCMSR_MESSAGE_RETURNCODE_BUS_HANG_ON; else pcmdmessagefld->cmdmessage.ReturnCode = ARCMSR_MESSAGE_RETURNCODE_OK; break; } case ARCMSR_MESSAGE_CLEAR_RQBUFFER: { uint8_t *pQbuffer = acb->rqbuffer; arcmsr_clear_iop2drv_rqueue_buffer(acb); spin_lock_irqsave(&acb->rqbuffer_lock, flags); acb->acb_flags |= ACB_F_MESSAGE_RQBUFFER_CLEARED; acb->rqbuf_getIndex = 0; acb->rqbuf_putIndex = 0; memset(pQbuffer, 0, ARCMSR_MAX_QBUFFER); spin_unlock_irqrestore(&acb->rqbuffer_lock, flags); if (acb->fw_flag == FW_DEADLOCK) pcmdmessagefld->cmdmessage.ReturnCode = ARCMSR_MESSAGE_RETURNCODE_BUS_HANG_ON; else pcmdmessagefld->cmdmessage.ReturnCode = ARCMSR_MESSAGE_RETURNCODE_OK; break; } case ARCMSR_MESSAGE_CLEAR_WQBUFFER: { uint8_t *pQbuffer = acb->wqbuffer; spin_lock_irqsave(&acb->wqbuffer_lock, flags); acb->acb_flags |= (ACB_F_MESSAGE_WQBUFFER_CLEARED | ACB_F_MESSAGE_WQBUFFER_READED); acb->wqbuf_getIndex = 0; acb->wqbuf_putIndex = 0; memset(pQbuffer, 0, ARCMSR_MAX_QBUFFER); spin_unlock_irqrestore(&acb->wqbuffer_lock, flags); if (acb->fw_flag == FW_DEADLOCK) pcmdmessagefld->cmdmessage.ReturnCode = ARCMSR_MESSAGE_RETURNCODE_BUS_HANG_ON; else pcmdmessagefld->cmdmessage.ReturnCode = ARCMSR_MESSAGE_RETURNCODE_OK; break; } case ARCMSR_MESSAGE_CLEAR_ALLQBUFFER: { uint8_t *pQbuffer; arcmsr_clear_iop2drv_rqueue_buffer(acb); spin_lock_irqsave(&acb->rqbuffer_lock, flags); acb->acb_flags |= ACB_F_MESSAGE_RQBUFFER_CLEARED; acb->rqbuf_getIndex = 0; acb->rqbuf_putIndex = 0; pQbuffer = acb->rqbuffer; memset(pQbuffer, 0, sizeof(struct QBUFFER)); spin_unlock_irqrestore(&acb->rqbuffer_lock, flags); spin_lock_irqsave(&acb->wqbuffer_lock, flags); acb->acb_flags |= (ACB_F_MESSAGE_WQBUFFER_CLEARED | ACB_F_MESSAGE_WQBUFFER_READED); acb->wqbuf_getIndex = 0; acb->wqbuf_putIndex = 0; pQbuffer = acb->wqbuffer; memset(pQbuffer, 0, sizeof(struct QBUFFER)); spin_unlock_irqrestore(&acb->wqbuffer_lock, flags); if (acb->fw_flag == FW_DEADLOCK) pcmdmessagefld->cmdmessage.ReturnCode = ARCMSR_MESSAGE_RETURNCODE_BUS_HANG_ON; else pcmdmessagefld->cmdmessage.ReturnCode = ARCMSR_MESSAGE_RETURNCODE_OK; break; } case ARCMSR_MESSAGE_RETURN_CODE_3F: { if (acb->fw_flag == FW_DEADLOCK) pcmdmessagefld->cmdmessage.ReturnCode = ARCMSR_MESSAGE_RETURNCODE_BUS_HANG_ON; else pcmdmessagefld->cmdmessage.ReturnCode = ARCMSR_MESSAGE_RETURNCODE_3F; break; } case ARCMSR_MESSAGE_SAY_HELLO: { int8_t *hello_string = "Hello! 
I am ARCMSR"; if (acb->fw_flag == FW_DEADLOCK) pcmdmessagefld->cmdmessage.ReturnCode = ARCMSR_MESSAGE_RETURNCODE_BUS_HANG_ON; else pcmdmessagefld->cmdmessage.ReturnCode = ARCMSR_MESSAGE_RETURNCODE_OK; memcpy(pcmdmessagefld->messagedatabuffer, hello_string, (int16_t)strlen(hello_string)); break; } case ARCMSR_MESSAGE_SAY_GOODBYE: { if (acb->fw_flag == FW_DEADLOCK) pcmdmessagefld->cmdmessage.ReturnCode = ARCMSR_MESSAGE_RETURNCODE_BUS_HANG_ON; else pcmdmessagefld->cmdmessage.ReturnCode = ARCMSR_MESSAGE_RETURNCODE_OK; arcmsr_iop_parking(acb); break; } case ARCMSR_MESSAGE_FLUSH_ADAPTER_CACHE: { if (acb->fw_flag == FW_DEADLOCK) pcmdmessagefld->cmdmessage.ReturnCode = ARCMSR_MESSAGE_RETURNCODE_BUS_HANG_ON; else pcmdmessagefld->cmdmessage.ReturnCode = ARCMSR_MESSAGE_RETURNCODE_OK; arcmsr_flush_adapter_cache(acb); break; } default: retvalue = ARCMSR_MESSAGE_FAIL; pr_info("%s: unknown controlcode!\n", __func__); } message_out: if (use_sg) { struct scatterlist *sg = scsi_sglist(cmd); kunmap_atomic(buffer - sg->offset); } return retvalue; }
280,628,888,206,953,260,000,000,000,000,000,000,000
arcmsr_hba.c
263,709,549,106,019,150,000,000,000,000,000,000,000
[ "CWE-119" ]
CVE-2016-7425
The arcmsr_iop_message_xfer function in drivers/scsi/arcmsr/arcmsr_hba.c in the Linux kernel through 4.8.2 does not restrict a certain length field, which allows local users to gain privileges or cause a denial of service (heap-based buffer overflow) via an ARCMSR_MESSAGE_WRITE_WQBUFFER control code.
https://nvd.nist.gov/vuln/detail/CVE-2016-7425
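Hedged sketch of the bound the commit message asks for in the ARCMSR_MESSAGE_WRITE_WQBUFFER case above: cap the attacker-controlled user_len before it is memcpy()'d into the ARCMSR_API_DATA_BUFLEN-sized ver_addr buffer. Names are taken from the function above; the exact upstream check may differ.

    user_len = pcmdmessagefld->cmdmessage.Length;
    if (user_len > ARCMSR_API_DATA_BUFLEN) {
            /* Oversized request: fail instead of overflowing ver_addr. */
            retvalue = ARCMSR_MESSAGE_FAIL;
            kfree(ver_addr);
            goto message_out;
    }
    memcpy(ptmpuserbuffer, pcmdmessagefld->messagedatabuffer, user_len);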
9,336
php-src
b88393f08a558eec14964a55d3c680fe67407712
https://github.com/php/php-src
https://github.com/php/php-src/commit/b88393f08a558eec14964a55d3c680fe67407712?w=1
Fix bug #72860: wddx_deserialize use-after-free
1
static int wddx_stack_destroy(wddx_stack *stack) { register int i; if (stack->elements) { for (i = 0; i < stack->top; i++) { if (((st_entry *)stack->elements[i])->data) { zval_ptr_dtor(&((st_entry *)stack->elements[i])->data); } if (((st_entry *)stack->elements[i])->varname) { efree(((st_entry *)stack->elements[i])->varname); } efree(stack->elements[i]); } efree(stack->elements); } return SUCCESS; }
167,114,955,492,293,730,000,000,000,000,000,000,000
None
null
[ "CWE-416" ]
CVE-2016-7413
Use-after-free vulnerability in the wddx_stack_destroy function in ext/wddx/wddx.c in PHP before 5.6.26 and 7.x before 7.0.11 allows remote attackers to cause a denial of service or possibly have unspecified other impact via a wddxPacket XML document that lacks an end-tag for a recordset field element, leading to mishandling in a wddx_deserialize call.
https://nvd.nist.gov/vuln/detail/CVE-2016-7413
9,337
charybdis
818a3fda944b26d4814132cee14cfda4ea4aa824
https://github.com/charybdis-ircd/charybdis
https://github.com/charybdis-ircd/charybdis/commit/818a3fda944b26d4814132cee14cfda4ea4aa824
SASL: Disallow beginning : and space anywhere in AUTHENTICATE parameter This is a FIX FOR A SECURITY VULNERABILITY. All Charybdis users must apply this fix if you support SASL on your servers, or unload m_sasl.so in the meantime.
1
m_authenticate(struct Client *client_p, struct Client *source_p, int parc, const char *parv[]) { struct Client *agent_p = NULL; struct Client *saslserv_p = NULL; /* They really should use CAP for their own sake. */ if(!IsCapable(source_p, CLICAP_SASL)) return 0; if (strlen(client_p->id) == 3) { exit_client(client_p, client_p, client_p, "Mixing client and server protocol"); return 0; } saslserv_p = find_named_client(ConfigFileEntry.sasl_service); if (saslserv_p == NULL || !IsService(saslserv_p)) { sendto_one(source_p, form_str(ERR_SASLABORTED), me.name, EmptyString(source_p->name) ? "*" : source_p->name); return 0; } if(source_p->localClient->sasl_complete) { *source_p->localClient->sasl_agent = '\0'; source_p->localClient->sasl_complete = 0; } if(strlen(parv[1]) > 400) { sendto_one(source_p, form_str(ERR_SASLTOOLONG), me.name, EmptyString(source_p->name) ? "*" : source_p->name); return 0; } if(!*source_p->id) { /* Allocate a UID. */ strcpy(source_p->id, generate_uid()); add_to_id_hash(source_p->id, source_p); } if(*source_p->localClient->sasl_agent) agent_p = find_id(source_p->localClient->sasl_agent); if(agent_p == NULL) { sendto_one(saslserv_p, ":%s ENCAP %s SASL %s %s H %s %s", me.id, saslserv_p->servptr->name, source_p->id, saslserv_p->id, source_p->host, source_p->sockhost); if (!strcmp(parv[1], "EXTERNAL") && source_p->certfp != NULL) sendto_one(saslserv_p, ":%s ENCAP %s SASL %s %s S %s %s", me.id, saslserv_p->servptr->name, source_p->id, saslserv_p->id, parv[1], source_p->certfp); else sendto_one(saslserv_p, ":%s ENCAP %s SASL %s %s S %s", me.id, saslserv_p->servptr->name, source_p->id, saslserv_p->id, parv[1]); rb_strlcpy(source_p->localClient->sasl_agent, saslserv_p->id, IDLEN); } else sendto_one(agent_p, ":%s ENCAP %s SASL %s %s C %s", me.id, agent_p->servptr->name, source_p->id, agent_p->id, parv[1]); source_p->localClient->sasl_out++; return 0; }
245,147,725,102,129,400,000,000,000,000,000,000,000
m_sasl.c
250,746,651,209,914,400,000,000,000,000,000,000,000
[ "CWE-285" ]
CVE-2016-7143
The m_authenticate function in modules/m_sasl.c in Charybdis before 3.5.3 allows remote attackers to spoof certificate fingerprints and consequently log in as another user via a crafted AUTHENTICATE parameter.
https://nvd.nist.gov/vuln/detail/CVE-2016-7143
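Sketch of the validation implied by the commit title for m_authenticate(): reject an AUTHENTICATE argument that starts with ':' or contains a space before relaying it, since either would let a client splice extra fields into the ENCAP SASL message and spoof, for example, a certificate fingerprint. Illustrative; the upstream hunk may differ in wording.

    /* After the 400-byte length check on parv[1]: refuse data that could
     * split the relayed ":%s ENCAP ... SASL ... S/C %s" line into extra
     * IRC parameters. */
    if (parv[1][0] == ':' || strchr(parv[1], ' '))
    {
            exit_client(client_p, client_p, client_p, "Malformed AUTHENTICATE");
            return 0;
    }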
9,341
php-src
20ce2fe8e3c211a42fee05a461a5881be9a8790e
https://github.com/php/php-src
https://github.com/php/php-src/commit/20ce2fe8e3c211a42fee05a461a5881be9a8790e?w=1
Fix bug #72663 - destroy broken object when unserializing (cherry picked from commit 448c9be157f4147e121f1a2a524536c75c9c6059)
1
static int php_var_unserialize_internal(UNSERIALIZE_PARAMETER) { const unsigned char *cursor, *limit, *marker, *start; zval *rval_ref; limit = max; cursor = *p; if (YYCURSOR >= YYLIMIT) { return 0; } if (var_hash && (*p)[0] != 'R') { var_push(var_hash, rval); } start = cursor; #line 554 "ext/standard/var_unserializer.c" { YYCTYPE yych; static const unsigned char yybm[] = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, }; if ((YYLIMIT - YYCURSOR) < 7) YYFILL(7); yych = *YYCURSOR; switch (yych) { case 'C': case 'O': goto yy13; case 'N': goto yy5; case 'R': goto yy2; case 'S': goto yy10; case 'a': goto yy11; case 'b': goto yy6; case 'd': goto yy8; case 'i': goto yy7; case 'o': goto yy12; case 'r': goto yy4; case 's': goto yy9; case '}': goto yy14; default: goto yy16; } yy2: yych = *(YYMARKER = ++YYCURSOR); if (yych == ':') goto yy95; yy3: #line 884 "ext/standard/var_unserializer.re" { return 0; } #line 580 "ext/standard/var_unserializer.c" yy4: yych = *(YYMARKER = ++YYCURSOR); if (yych == ':') goto yy89; goto yy3; yy5: yych = *++YYCURSOR; if (yych == ';') goto yy87; goto yy3; yy6: yych = *(YYMARKER = ++YYCURSOR); if (yych == ':') goto yy83; goto yy3; yy7: yych = *(YYMARKER = ++YYCURSOR); if (yych == ':') goto yy77; goto yy3; yy8: yych = *(YYMARKER = ++YYCURSOR); if (yych == ':') goto yy53; goto yy3; yy9: yych = *(YYMARKER = ++YYCURSOR); if (yych == ':') goto yy46; goto yy3; yy10: yych = *(YYMARKER = ++YYCURSOR); if (yych == ':') goto yy39; goto yy3; yy11: yych = *(YYMARKER = ++YYCURSOR); if (yych == ':') goto yy32; goto yy3; yy12: yych = *(YYMARKER = ++YYCURSOR); if (yych == ':') goto yy25; goto yy3; yy13: yych = *(YYMARKER = ++YYCURSOR); if (yych == ':') goto yy17; goto yy3; yy14: ++YYCURSOR; #line 878 "ext/standard/var_unserializer.re" { /* this is the case where we have less data than planned */ php_error_docref(NULL, E_NOTICE, "Unexpected end of serialized data"); return 0; /* not sure if it should be 0 or 1 here? 
*/ } #line 629 "ext/standard/var_unserializer.c" yy16: yych = *++YYCURSOR; goto yy3; yy17: yych = *++YYCURSOR; if (yybm[0+yych] & 128) { goto yy20; } if (yych == '+') goto yy19; yy18: YYCURSOR = YYMARKER; goto yy3; yy19: yych = *++YYCURSOR; if (yybm[0+yych] & 128) { goto yy20; } goto yy18; yy20: ++YYCURSOR; if ((YYLIMIT - YYCURSOR) < 2) YYFILL(2); yych = *YYCURSOR; if (yybm[0+yych] & 128) { goto yy20; } if (yych != ':') goto yy18; yych = *++YYCURSOR; if (yych != '"') goto yy18; ++YYCURSOR; #line 733 "ext/standard/var_unserializer.re" { size_t len, len2, len3, maxlen; zend_long elements; char *str; zend_string *class_name; zend_class_entry *ce; int incomplete_class = 0; int custom_object = 0; zval user_func; zval retval; zval args[1]; if (!var_hash) return 0; if (*start == 'C') { custom_object = 1; } len2 = len = parse_uiv(start + 2); maxlen = max - YYCURSOR; if (maxlen < len || len == 0) { *p = start + 2; return 0; } str = (char*)YYCURSOR; YYCURSOR += len; if (*(YYCURSOR) != '"') { *p = YYCURSOR; return 0; } if (*(YYCURSOR+1) != ':') { *p = YYCURSOR+1; return 0; } len3 = strspn(str, "0123456789_abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ\177\200\201\202\203\204\205\206\207\210\211\212\213\214\215\216\217\220\221\222\223\224\225\226\227\230\231\232\233\234\235\236\237\240\241\242\243\244\245\246\247\250\251\252\253\254\255\256\257\260\261\262\263\264\265\266\267\270\271\272\273\274\275\276\277\300\301\302\303\304\305\306\307\310\311\312\313\314\315\316\317\320\321\322\323\324\325\326\327\330\331\332\333\334\335\336\337\340\341\342\343\344\345\346\347\350\351\352\353\354\355\356\357\360\361\362\363\364\365\366\367\370\371\372\373\374\375\376\377\\"); if (len3 != len) { *p = YYCURSOR + len3 - len; return 0; } class_name = zend_string_init(str, len, 0); do { if(!unserialize_allowed_class(class_name, classes)) { incomplete_class = 1; ce = PHP_IC_ENTRY; break; } /* Try to find class directly */ BG(serialize_lock)++; ce = zend_lookup_class(class_name); if (ce) { BG(serialize_lock)--; if (EG(exception)) { zend_string_release(class_name); return 0; } break; } BG(serialize_lock)--; if (EG(exception)) { zend_string_release(class_name); return 0; } /* Check for unserialize callback */ if ((PG(unserialize_callback_func) == NULL) || (PG(unserialize_callback_func)[0] == '\0')) { incomplete_class = 1; ce = PHP_IC_ENTRY; break; } /* Call unserialize callback */ ZVAL_STRING(&user_func, PG(unserialize_callback_func)); ZVAL_STR_COPY(&args[0], class_name); BG(serialize_lock)++; if (call_user_function_ex(CG(function_table), NULL, &user_func, &retval, 1, args, 0, NULL) != SUCCESS) { BG(serialize_lock)--; if (EG(exception)) { zend_string_release(class_name); zval_ptr_dtor(&user_func); zval_ptr_dtor(&args[0]); return 0; } php_error_docref(NULL, E_WARNING, "defined (%s) but not found", Z_STRVAL(user_func)); incomplete_class = 1; ce = PHP_IC_ENTRY; zval_ptr_dtor(&user_func); zval_ptr_dtor(&args[0]); break; } BG(serialize_lock)--; zval_ptr_dtor(&retval); if (EG(exception)) { zend_string_release(class_name); zval_ptr_dtor(&user_func); zval_ptr_dtor(&args[0]); return 0; } /* The callback function may have defined the class */ if ((ce = zend_lookup_class(class_name)) == NULL) { php_error_docref(NULL, E_WARNING, "Function %s() hasn't defined the class it was called for", Z_STRVAL(user_func)); incomplete_class = 1; ce = PHP_IC_ENTRY; } zval_ptr_dtor(&user_func); zval_ptr_dtor(&args[0]); break; } while (1); *p = YYCURSOR; if (custom_object) { int ret; ret = object_custom(UNSERIALIZE_PASSTHRU, ce); if (ret && 
incomplete_class) { php_store_class_name(rval, ZSTR_VAL(class_name), len2); } zend_string_release(class_name); return ret; } elements = object_common1(UNSERIALIZE_PASSTHRU, ce); if (incomplete_class) { php_store_class_name(rval, ZSTR_VAL(class_name), len2); } zend_string_release(class_name); return object_common2(UNSERIALIZE_PASSTHRU, elements); } #line 804 "ext/standard/var_unserializer.c" yy25: yych = *++YYCURSOR; if (yych <= ',') { if (yych != '+') goto yy18; } else { if (yych <= '-') goto yy26; if (yych <= '/') goto yy18; if (yych <= '9') goto yy27; goto yy18; } yy26: yych = *++YYCURSOR; if (yych <= '/') goto yy18; if (yych >= ':') goto yy18; yy27: ++YYCURSOR; if ((YYLIMIT - YYCURSOR) < 2) YYFILL(2); yych = *YYCURSOR; if (yych <= '/') goto yy18; if (yych <= '9') goto yy27; if (yych >= ';') goto yy18; yych = *++YYCURSOR; if (yych != '"') goto yy18; ++YYCURSOR; #line 726 "ext/standard/var_unserializer.re" { if (!var_hash) return 0; return object_common2(UNSERIALIZE_PASSTHRU, object_common1(UNSERIALIZE_PASSTHRU, ZEND_STANDARD_CLASS_DEF_PTR)); } #line 836 "ext/standard/var_unserializer.c" yy32: yych = *++YYCURSOR; if (yych == '+') goto yy33; if (yych <= '/') goto yy18; if (yych <= '9') goto yy34; goto yy18; yy33: yych = *++YYCURSOR; if (yych <= '/') goto yy18; if (yych >= ':') goto yy18; yy34: ++YYCURSOR; if ((YYLIMIT - YYCURSOR) < 2) YYFILL(2); yych = *YYCURSOR; if (yych <= '/') goto yy18; if (yych <= '9') goto yy34; if (yych >= ';') goto yy18; yych = *++YYCURSOR; if (yych != '{') goto yy18; ++YYCURSOR; #line 702 "ext/standard/var_unserializer.re" { zend_long elements = parse_iv(start + 2); /* use iv() not uiv() in order to check data range */ *p = YYCURSOR; if (!var_hash) return 0; if (elements < 0) { return 0; } array_init_size(rval, elements); if (elements) { /* we can't convert from packed to hash during unserialization, because reference to some zvals might be keept in var_hash (to support references) */ zend_hash_real_init(Z_ARRVAL_P(rval), 0); } if (!process_nested_data(UNSERIALIZE_PASSTHRU, Z_ARRVAL_P(rval), elements, 0)) { return 0; } return finish_nested_data(UNSERIALIZE_PASSTHRU); } #line 881 "ext/standard/var_unserializer.c" yy39: yych = *++YYCURSOR; if (yych == '+') goto yy40; if (yych <= '/') goto yy18; if (yych <= '9') goto yy41; goto yy18; yy40: yych = *++YYCURSOR; if (yych <= '/') goto yy18; if (yych >= ':') goto yy18; yy41: ++YYCURSOR; if ((YYLIMIT - YYCURSOR) < 2) YYFILL(2); yych = *YYCURSOR; if (yych <= '/') goto yy18; if (yych <= '9') goto yy41; if (yych >= ';') goto yy18; yych = *++YYCURSOR; if (yych != '"') goto yy18; ++YYCURSOR; #line 668 "ext/standard/var_unserializer.re" { size_t len, maxlen; zend_string *str; len = parse_uiv(start + 2); maxlen = max - YYCURSOR; if (maxlen < len) { *p = start + 2; return 0; } if ((str = unserialize_str(&YYCURSOR, len, maxlen)) == NULL) { return 0; } if (*(YYCURSOR) != '"') { zend_string_free(str); *p = YYCURSOR; return 0; } if (*(YYCURSOR + 1) != ';') { efree(str); *p = YYCURSOR + 1; return 0; } YYCURSOR += 2; *p = YYCURSOR; ZVAL_STR(rval, str); return 1; } #line 936 "ext/standard/var_unserializer.c" yy46: yych = *++YYCURSOR; if (yych == '+') goto yy47; if (yych <= '/') goto yy18; if (yych <= '9') goto yy48; goto yy18; yy47: yych = *++YYCURSOR; if (yych <= '/') goto yy18; if (yych >= ':') goto yy18; yy48: ++YYCURSOR; if ((YYLIMIT - YYCURSOR) < 2) YYFILL(2); yych = *YYCURSOR; if (yych <= '/') goto yy18; if (yych <= '9') goto yy48; if (yych >= ';') goto yy18; yych = *++YYCURSOR; if (yych != '"') goto yy18; ++YYCURSOR; #line 636 
"ext/standard/var_unserializer.re" { size_t len, maxlen; char *str; len = parse_uiv(start + 2); maxlen = max - YYCURSOR; if (maxlen < len) { *p = start + 2; return 0; } str = (char*)YYCURSOR; YYCURSOR += len; if (*(YYCURSOR) != '"') { *p = YYCURSOR; return 0; } if (*(YYCURSOR + 1) != ';') { *p = YYCURSOR + 1; return 0; } YYCURSOR += 2; *p = YYCURSOR; ZVAL_STRINGL(rval, str, len); return 1; } #line 989 "ext/standard/var_unserializer.c" yy53: yych = *++YYCURSOR; if (yych <= '/') { if (yych <= ',') { if (yych == '+') goto yy57; goto yy18; } else { if (yych <= '-') goto yy55; if (yych <= '.') goto yy60; goto yy18; } } else { if (yych <= 'I') { if (yych <= '9') goto yy58; if (yych <= 'H') goto yy18; goto yy56; } else { if (yych != 'N') goto yy18; } } yych = *++YYCURSOR; if (yych == 'A') goto yy76; goto yy18; yy55: yych = *++YYCURSOR; if (yych <= '/') { if (yych == '.') goto yy60; goto yy18; } else { if (yych <= '9') goto yy58; if (yych != 'I') goto yy18; } yy56: yych = *++YYCURSOR; if (yych == 'N') goto yy72; goto yy18; yy57: yych = *++YYCURSOR; if (yych == '.') goto yy60; if (yych <= '/') goto yy18; if (yych >= ':') goto yy18; yy58: ++YYCURSOR; if ((YYLIMIT - YYCURSOR) < 4) YYFILL(4); yych = *YYCURSOR; if (yych <= ':') { if (yych <= '.') { if (yych <= '-') goto yy18; goto yy70; } else { if (yych <= '/') goto yy18; if (yych <= '9') goto yy58; goto yy18; } } else { if (yych <= 'E') { if (yych <= ';') goto yy63; if (yych <= 'D') goto yy18; goto yy65; } else { if (yych == 'e') goto yy65; goto yy18; } } yy60: yych = *++YYCURSOR; if (yych <= '/') goto yy18; if (yych >= ':') goto yy18; yy61: ++YYCURSOR; if ((YYLIMIT - YYCURSOR) < 4) YYFILL(4); yych = *YYCURSOR; if (yych <= ';') { if (yych <= '/') goto yy18; if (yych <= '9') goto yy61; if (yych <= ':') goto yy18; } else { if (yych <= 'E') { if (yych <= 'D') goto yy18; goto yy65; } else { if (yych == 'e') goto yy65; goto yy18; } } yy63: ++YYCURSOR; #line 627 "ext/standard/var_unserializer.re" { #if SIZEOF_ZEND_LONG == 4 use_double: #endif *p = YYCURSOR; ZVAL_DOUBLE(rval, zend_strtod((const char *)start + 2, NULL)); return 1; } #line 1086 "ext/standard/var_unserializer.c" yy65: yych = *++YYCURSOR; if (yych <= ',') { if (yych != '+') goto yy18; } else { if (yych <= '-') goto yy66; if (yych <= '/') goto yy18; if (yych <= '9') goto yy67; goto yy18; } yy66: yych = *++YYCURSOR; if (yych <= ',') { if (yych == '+') goto yy69; goto yy18; } else { if (yych <= '-') goto yy69; if (yych <= '/') goto yy18; if (yych >= ':') goto yy18; } yy67: ++YYCURSOR; if (YYLIMIT <= YYCURSOR) YYFILL(1); yych = *YYCURSOR; if (yych <= '/') goto yy18; if (yych <= '9') goto yy67; if (yych == ';') goto yy63; goto yy18; yy69: yych = *++YYCURSOR; if (yych <= '/') goto yy18; if (yych <= '9') goto yy67; goto yy18; yy70: ++YYCURSOR; if ((YYLIMIT - YYCURSOR) < 4) YYFILL(4); yych = *YYCURSOR; if (yych <= ';') { if (yych <= '/') goto yy18; if (yych <= '9') goto yy70; if (yych <= ':') goto yy18; goto yy63; } else { if (yych <= 'E') { if (yych <= 'D') goto yy18; goto yy65; } else { if (yych == 'e') goto yy65; goto yy18; } } yy72: yych = *++YYCURSOR; if (yych != 'F') goto yy18; yy73: yych = *++YYCURSOR; if (yych != ';') goto yy18; ++YYCURSOR; #line 611 "ext/standard/var_unserializer.re" { *p = YYCURSOR; if (!strncmp((char*)start + 2, "NAN", 3)) { ZVAL_DOUBLE(rval, php_get_nan()); } else if (!strncmp((char*)start + 2, "INF", 3)) { ZVAL_DOUBLE(rval, php_get_inf()); } else if (!strncmp((char*)start + 2, "-INF", 4)) { ZVAL_DOUBLE(rval, -php_get_inf()); } else { ZVAL_NULL(rval); } return 1; } #line 
1161 "ext/standard/var_unserializer.c" yy76: yych = *++YYCURSOR; if (yych == 'N') goto yy73; goto yy18; yy77: yych = *++YYCURSOR; if (yych <= ',') { if (yych != '+') goto yy18; } else { if (yych <= '-') goto yy78; if (yych <= '/') goto yy18; if (yych <= '9') goto yy79; goto yy18; } yy78: yych = *++YYCURSOR; if (yych <= '/') goto yy18; if (yych >= ':') goto yy18; yy79: ++YYCURSOR; if (YYLIMIT <= YYCURSOR) YYFILL(1); yych = *YYCURSOR; if (yych <= '/') goto yy18; if (yych <= '9') goto yy79; if (yych != ';') goto yy18; ++YYCURSOR; #line 585 "ext/standard/var_unserializer.re" { #if SIZEOF_ZEND_LONG == 4 int digits = YYCURSOR - start - 3; if (start[2] == '-' || start[2] == '+') { digits--; } /* Use double for large zend_long values that were serialized on a 64-bit system */ if (digits >= MAX_LENGTH_OF_LONG - 1) { if (digits == MAX_LENGTH_OF_LONG - 1) { int cmp = strncmp((char*)YYCURSOR - MAX_LENGTH_OF_LONG, long_min_digits, MAX_LENGTH_OF_LONG - 1); if (!(cmp < 0 || (cmp == 0 && start[2] == '-'))) { goto use_double; } } else { goto use_double; } } #endif *p = YYCURSOR; ZVAL_LONG(rval, parse_iv(start + 2)); return 1; } #line 1214 "ext/standard/var_unserializer.c" yy83: yych = *++YYCURSOR; if (yych <= '/') goto yy18; if (yych >= '2') goto yy18; yych = *++YYCURSOR; if (yych != ';') goto yy18; ++YYCURSOR; #line 579 "ext/standard/var_unserializer.re" { *p = YYCURSOR; ZVAL_BOOL(rval, parse_iv(start + 2)); return 1; } #line 1228 "ext/standard/var_unserializer.c" yy87: ++YYCURSOR; #line 573 "ext/standard/var_unserializer.re" { *p = YYCURSOR; ZVAL_NULL(rval); return 1; } #line 1237 "ext/standard/var_unserializer.c" yy89: yych = *++YYCURSOR; if (yych <= ',') { if (yych != '+') goto yy18; } else { if (yych <= '-') goto yy90; if (yych <= '/') goto yy18; if (yych <= '9') goto yy91; goto yy18; } yy90: yych = *++YYCURSOR; if (yych <= '/') goto yy18; if (yych >= ':') goto yy18; yy91: ++YYCURSOR; if (YYLIMIT <= YYCURSOR) YYFILL(1); yych = *YYCURSOR; if (yych <= '/') goto yy18; if (yych <= '9') goto yy91; if (yych != ';') goto yy18; ++YYCURSOR; #line 548 "ext/standard/var_unserializer.re" { zend_long id; *p = YYCURSOR; if (!var_hash) return 0; id = parse_iv(start + 2) - 1; if (id == -1 || (rval_ref = var_access(var_hash, id)) == NULL) { return 0; } if (rval_ref == rval) { return 0; } if (Z_ISUNDEF_P(rval_ref) || (Z_ISREF_P(rval_ref) && Z_ISUNDEF_P(Z_REFVAL_P(rval_ref)))) { ZVAL_UNDEF(rval); return 1; } ZVAL_COPY(rval, rval_ref); return 1; } #line 1285 "ext/standard/var_unserializer.c" yy95: yych = *++YYCURSOR; if (yych <= ',') { if (yych != '+') goto yy18; } else { if (yych <= '-') goto yy96; if (yych <= '/') goto yy18; if (yych <= '9') goto yy97; goto yy18; } yy96: yych = *++YYCURSOR; if (yych <= '/') goto yy18; if (yych >= ':') goto yy18; yy97: ++YYCURSOR; if (YYLIMIT <= YYCURSOR) YYFILL(1); yych = *YYCURSOR; if (yych <= '/') goto yy18; if (yych <= '9') goto yy97; if (yych != ';') goto yy18; ++YYCURSOR; #line 522 "ext/standard/var_unserializer.re" { zend_long id; *p = YYCURSOR; if (!var_hash) return 0; id = parse_iv(start + 2) - 1; if (id == -1 || (rval_ref = var_access(var_hash, id)) == NULL) { return 0; } zval_ptr_dtor(rval); if (Z_ISUNDEF_P(rval_ref) || (Z_ISREF_P(rval_ref) && Z_ISUNDEF_P(Z_REFVAL_P(rval_ref)))) { ZVAL_UNDEF(rval); return 1; } if (Z_ISREF_P(rval_ref)) { ZVAL_COPY(rval, rval_ref); } else { ZVAL_NEW_REF(rval_ref, rval_ref); ZVAL_COPY(rval, rval_ref); } return 1; } #line 1334 "ext/standard/var_unserializer.c" } #line 886 "ext/standard/var_unserializer.re" return 0; }
270,468,889,219,152,030,000,000,000,000,000,000,000
var_unserializer.c
166,352,165,499,312,480,000,000,000,000,000,000,000
[ "CWE-502" ]
CVE-2016-7124
ext/standard/var_unserializer.c in PHP before 5.6.25 and 7.x before 7.0.10 mishandles certain invalid objects, which allows remote attackers to cause a denial of service or possibly have unspecified other impact via crafted serialized data that leads to a (1) __destruct call or (2) magic method call.
https://nvd.nist.gov/vuln/detail/CVE-2016-7124
9,343
libgd
01c61f8ab110a77ae64b5ca67c244c728c506f03
https://github.com/libgd/libgd
https://github.com/libgd/libgd/commit/01c61f8ab110a77ae64b5ca67c244c728c506f03
Proper fix for #248
1
int read_image_tga( gdIOCtx *ctx, oTga *tga ) { int pixel_block_size = (tga->bits / 8); int image_block_size = (tga->width * tga->height) * pixel_block_size; uint8_t* decompression_buffer = NULL; unsigned char* conversion_buffer = NULL; int buffer_caret = 0; int bitmap_caret = 0; int i = 0; int j = 0; uint8_t encoded_pixels; if(overflow2(tga->width, tga->height)) { return -1; } if(overflow2(tga->width * tga->height, pixel_block_size)) { return -1; } if(overflow2(image_block_size, sizeof(int))) { return -1; } /*! \todo Add more image type support. */ if (tga->imagetype != TGA_TYPE_RGB && tga->imagetype != TGA_TYPE_RGB_RLE) return -1; /*! \brief Allocate memmory for image block * Allocate a chunk of memory for the image block to be passed into. */ tga->bitmap = (int *) gdMalloc(image_block_size * sizeof(int)); if (tga->bitmap == NULL) return -1; switch (tga->imagetype) { case TGA_TYPE_RGB: /*! \brief Read in uncompressed RGB TGA * Chunk load the pixel data from an uncompressed RGB type TGA. */ conversion_buffer = (unsigned char *) gdMalloc(image_block_size * sizeof(unsigned char)); if (conversion_buffer == NULL) { return -1; } if (gdGetBuf(conversion_buffer, image_block_size, ctx) != image_block_size) { gd_error("gd-tga: premature end of image data\n"); gdFree(conversion_buffer); return -1; } while (buffer_caret < image_block_size) { tga->bitmap[buffer_caret] = (int) conversion_buffer[buffer_caret]; buffer_caret++; } gdFree(conversion_buffer); break; case TGA_TYPE_RGB_RLE: /*! \brief Read in RLE compressed RGB TGA * Chunk load the pixel data from an RLE compressed RGB type TGA. */ decompression_buffer = (uint8_t*) gdMalloc(image_block_size * sizeof(uint8_t)); if (decompression_buffer == NULL) { return -1; } conversion_buffer = (unsigned char *) gdMalloc(image_block_size * sizeof(unsigned char)); if (conversion_buffer == NULL) { gd_error("gd-tga: premature end of image data\n"); gdFree( decompression_buffer ); return -1; } if (gdGetBuf(conversion_buffer, image_block_size, ctx) != image_block_size) { gdFree(conversion_buffer); gdFree(decompression_buffer); return -1; } buffer_caret = 0; while( buffer_caret < image_block_size) { decompression_buffer[buffer_caret] = (int)conversion_buffer[buffer_caret]; buffer_caret++; } buffer_caret = 0; while( bitmap_caret < image_block_size ) { if ((decompression_buffer[buffer_caret] & TGA_RLE_FLAG) == TGA_RLE_FLAG) { encoded_pixels = ( ( decompression_buffer[ buffer_caret ] & 127 ) + 1 ); buffer_caret++; if (encoded_pixels != 0) { if (!((buffer_caret + (encoded_pixels * pixel_block_size)) < image_block_size)) { gdFree( decompression_buffer ); gdFree( conversion_buffer ); return -1; } for (i = 0; i < encoded_pixels; i++) { for (j = 0; j < pixel_block_size; j++, bitmap_caret++) { tga->bitmap[ bitmap_caret ] = decompression_buffer[ buffer_caret + j ]; } } } buffer_caret += pixel_block_size; } else { encoded_pixels = decompression_buffer[ buffer_caret ] + 1; buffer_caret++; if (encoded_pixels != 0) { if (!((buffer_caret + (encoded_pixels * pixel_block_size)) < image_block_size)) { gdFree( decompression_buffer ); gdFree( conversion_buffer ); return -1; } for (i = 0; i < encoded_pixels; i++) { for( j = 0; j < pixel_block_size; j++, bitmap_caret++ ) { tga->bitmap[ bitmap_caret ] = decompression_buffer[ buffer_caret + j ]; } buffer_caret += pixel_block_size; } } } } gdFree( decompression_buffer ); gdFree( conversion_buffer ); break; } return 1; }
28,079,061,121,387,740,000,000,000,000,000,000,000
gd_tga.c
123,375,562,412,498,340,000,000,000,000,000,000,000
[ "CWE-125" ]
CVE-2016-6905
The read_image_tga function in gd_tga.c in the GD Graphics Library (aka libgd) before 2.2.3 allows remote attackers to cause a denial of service (out-of-bounds read) via a crafted TGA image.
https://nvd.nist.gov/vuln/detail/CVE-2016-6905
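Hedged note on the RLE loop in read_image_tga() above: the existing guards compare buffer_caret (the compressed input cursor) against image_block_size, while the inner loops advance and write through bitmap_caret (the output cursor), so a crafted run count can still walk past the buffers. A sketch of a guard on the output side, using only names from the function above (not necessarily the literal upstream hunk):

    /* Before expanding a packet of encoded_pixels pixels: ensure the whole
     * packet fits in the remaining space of tga->bitmap. */
    if (bitmap_caret + (encoded_pixels * pixel_block_size) > image_block_size) {
            gdFree(decompression_buffer);
            gdFree(conversion_buffer);
            return -1;
    }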
9,348
linux
43761473c254b45883a64441dd0bc85a42f3645c
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/43761473c254b45883a64441dd0bc85a42f3645c
audit: fix a double fetch in audit_log_single_execve_arg() There is a double fetch problem in audit_log_single_execve_arg() where we first check the execve(2) argumnets for any "bad" characters which would require hex encoding and then re-fetch the arguments for logging in the audit record[1]. Of course this leaves a window of opportunity for an unsavory application to munge with the data. This patch reworks things by only fetching the argument data once[2] into a buffer where it is scanned and logged into the audit records(s). In addition to fixing the double fetch, this patch improves on the original code in a few other ways: better handling of large arguments which require encoding, stricter record length checking, and some performance improvements (completely unverified, but we got rid of some strlen() calls, that's got to be a good thing). As part of the development of this patch, I've also created a basic regression test for the audit-testsuite, the test can be tracked on GitHub at the following link: * https://github.com/linux-audit/audit-testsuite/issues/25 [1] If you pay careful attention, there is actually a triple fetch problem due to a strnlen_user() call at the top of the function. [2] This is a tiny white lie, we do make a call to strnlen_user() prior to fetching the argument data. I don't like it, but due to the way the audit record is structured we really have no choice unless we copy the entire argument at once (which would require a rather wasteful allocation). The good news is that with this patch the kernel no longer relies on this strnlen_user() value for anything beyond recording it in the log, we also update it with a trustworthy value whenever possible. Reported-by: Pengfei Wang <wpengfeinudt@gmail.com> Cc: <stable@vger.kernel.org> Signed-off-by: Paul Moore <paul@paul-moore.com>
1
static int audit_log_single_execve_arg(struct audit_context *context, struct audit_buffer **ab, int arg_num, size_t *len_sent, const char __user *p, char *buf) { char arg_num_len_buf[12]; const char __user *tmp_p = p; /* how many digits are in arg_num? 5 is the length of ' a=""' */ size_t arg_num_len = snprintf(arg_num_len_buf, 12, "%d", arg_num) + 5; size_t len, len_left, to_send; size_t max_execve_audit_len = MAX_EXECVE_AUDIT_LEN; unsigned int i, has_cntl = 0, too_long = 0; int ret; /* strnlen_user includes the null we don't want to send */ len_left = len = strnlen_user(p, MAX_ARG_STRLEN) - 1; /* * We just created this mm, if we can't find the strings * we just copied into it something is _very_ wrong. Similar * for strings that are too long, we should not have created * any. */ if (WARN_ON_ONCE(len < 0 || len > MAX_ARG_STRLEN - 1)) { send_sig(SIGKILL, current, 0); return -1; } /* walk the whole argument looking for non-ascii chars */ do { if (len_left > MAX_EXECVE_AUDIT_LEN) to_send = MAX_EXECVE_AUDIT_LEN; else to_send = len_left; ret = copy_from_user(buf, tmp_p, to_send); /* * There is no reason for this copy to be short. We just * copied them here, and the mm hasn't been exposed to user- * space yet. */ if (ret) { WARN_ON(1); send_sig(SIGKILL, current, 0); return -1; } buf[to_send] = '\0'; has_cntl = audit_string_contains_control(buf, to_send); if (has_cntl) { /* * hex messages get logged as 2 bytes, so we can only * send half as much in each message */ max_execve_audit_len = MAX_EXECVE_AUDIT_LEN / 2; break; } len_left -= to_send; tmp_p += to_send; } while (len_left > 0); len_left = len; if (len > max_execve_audit_len) too_long = 1; /* rewalk the argument actually logging the message */ for (i = 0; len_left > 0; i++) { int room_left; if (len_left > max_execve_audit_len) to_send = max_execve_audit_len; else to_send = len_left; /* do we have space left to send this argument in this ab? */ room_left = MAX_EXECVE_AUDIT_LEN - arg_num_len - *len_sent; if (has_cntl) room_left -= (to_send * 2); else room_left -= to_send; if (room_left < 0) { *len_sent = 0; audit_log_end(*ab); *ab = audit_log_start(context, GFP_KERNEL, AUDIT_EXECVE); if (!*ab) return 0; } /* * first record needs to say how long the original string was * so we can be sure nothing was lost. */ if ((i == 0) && (too_long)) audit_log_format(*ab, " a%d_len=%zu", arg_num, has_cntl ? 2*len : len); /* * normally arguments are small enough to fit and we already * filled buf above when we checked for control characters * so don't bother with another copy_from_user */ if (len >= max_execve_audit_len) ret = copy_from_user(buf, p, to_send); else ret = 0; if (ret) { WARN_ON(1); send_sig(SIGKILL, current, 0); return -1; } buf[to_send] = '\0'; /* actually log it */ audit_log_format(*ab, " a%d", arg_num); if (too_long) audit_log_format(*ab, "[%d]", i); audit_log_format(*ab, "="); if (has_cntl) audit_log_n_hex(*ab, buf, to_send); else audit_log_string(*ab, buf); p += to_send; len_left -= to_send; *len_sent += arg_num_len; if (has_cntl) *len_sent += to_send * 2; else *len_sent += to_send; } /* include the null we didn't log */ return len + 1; }
315,514,706,975,674,840,000,000,000,000,000,000,000
auditsc.c
30,820,339,966,201,292,000,000,000,000,000,000,000
[ "CWE-362" ]
CVE-2016-6136
Race condition in the audit_log_single_execve_arg function in kernel/auditsc.c in the Linux kernel through 4.7 allows local users to bypass intended character-set restrictions or disrupt system-call auditing by changing a certain string, aka a "double fetch" vulnerability.
https://nvd.nist.gov/vuln/detail/CVE-2016-6136
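The commit message above already spells out the shape of the rework; as a rough fragment-level sketch (same locals as the function above, heavily simplified, ignoring record splitting and the halved limit for hex encoding), the "fetch once" idea looks like this:

    /* Copy each chunk from user space exactly once, then both scan it for
     * control characters and log it from that same kernel copy, so user
     * space cannot change the bytes between the check and the audit record. */
    while (len_left > 0) {
            size_t to_send = min_t(size_t, len_left, MAX_EXECVE_AUDIT_LEN);

            if (copy_from_user(buf, p, to_send))
                    return -1;
            buf[to_send] = '\0';

            if (audit_string_contains_control(buf, to_send))
                    audit_log_n_hex(*ab, buf, to_send);
            else
                    audit_log_string(*ab, buf);

            p += to_send;
            len_left -= to_send;
    }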
9,349
php-src
f6aef68089221c5ea047d4a74224ee3deead99a6
https://github.com/php/php-src
http://github.com/php/php-src/commit/f6aef68089221c5ea047d4a74224ee3deead99a6?w=1
Fix bug #72434: ZipArchive class Use After Free Vulnerability in PHP's GC algorithm and unserialize
1
static PHP_MINIT_FUNCTION(zip) { #ifdef PHP_ZIP_USE_OO zend_class_entry ce; memcpy(&zip_object_handlers, zend_get_std_object_handlers(), sizeof(zend_object_handlers)); zip_object_handlers.clone_obj = NULL; zip_object_handlers.get_property_ptr_ptr = php_zip_get_property_ptr_ptr; zip_object_handlers.get_properties = php_zip_get_properties; zip_object_handlers.read_property = php_zip_read_property; zip_object_handlers.has_property = php_zip_has_property; INIT_CLASS_ENTRY(ce, "ZipArchive", zip_class_functions); ce.create_object = php_zip_object_new; zip_class_entry = zend_register_internal_class(&ce TSRMLS_CC); zend_hash_init(&zip_prop_handlers, 0, NULL, NULL, 1); php_zip_register_prop_handler(&zip_prop_handlers, "status", php_zip_status, NULL, NULL, IS_LONG TSRMLS_CC); php_zip_register_prop_handler(&zip_prop_handlers, "statusSys", php_zip_status_sys, NULL, NULL, IS_LONG TSRMLS_CC); php_zip_register_prop_handler(&zip_prop_handlers, "numFiles", php_zip_get_num_files, NULL, NULL, IS_LONG TSRMLS_CC); php_zip_register_prop_handler(&zip_prop_handlers, "filename", NULL, NULL, php_zipobj_get_filename, IS_STRING TSRMLS_CC); php_zip_register_prop_handler(&zip_prop_handlers, "comment", NULL, php_zipobj_get_zip_comment, NULL, IS_STRING TSRMLS_CC); REGISTER_ZIP_CLASS_CONST_LONG("CREATE", ZIP_CREATE); REGISTER_ZIP_CLASS_CONST_LONG("EXCL", ZIP_EXCL); REGISTER_ZIP_CLASS_CONST_LONG("CHECKCONS", ZIP_CHECKCONS); REGISTER_ZIP_CLASS_CONST_LONG("OVERWRITE", ZIP_OVERWRITE); REGISTER_ZIP_CLASS_CONST_LONG("FL_NOCASE", ZIP_FL_NOCASE); REGISTER_ZIP_CLASS_CONST_LONG("FL_NODIR", ZIP_FL_NODIR); REGISTER_ZIP_CLASS_CONST_LONG("FL_COMPRESSED", ZIP_FL_COMPRESSED); REGISTER_ZIP_CLASS_CONST_LONG("FL_UNCHANGED", ZIP_FL_UNCHANGED); REGISTER_ZIP_CLASS_CONST_LONG("CM_DEFAULT", ZIP_CM_DEFAULT); REGISTER_ZIP_CLASS_CONST_LONG("CM_STORE", ZIP_CM_STORE); REGISTER_ZIP_CLASS_CONST_LONG("CM_SHRINK", ZIP_CM_SHRINK); REGISTER_ZIP_CLASS_CONST_LONG("CM_REDUCE_1", ZIP_CM_REDUCE_1); REGISTER_ZIP_CLASS_CONST_LONG("CM_REDUCE_2", ZIP_CM_REDUCE_2); REGISTER_ZIP_CLASS_CONST_LONG("CM_REDUCE_3", ZIP_CM_REDUCE_3); REGISTER_ZIP_CLASS_CONST_LONG("CM_REDUCE_4", ZIP_CM_REDUCE_4); REGISTER_ZIP_CLASS_CONST_LONG("CM_IMPLODE", ZIP_CM_IMPLODE); REGISTER_ZIP_CLASS_CONST_LONG("CM_DEFLATE", ZIP_CM_DEFLATE); REGISTER_ZIP_CLASS_CONST_LONG("CM_DEFLATE64", ZIP_CM_DEFLATE64); REGISTER_ZIP_CLASS_CONST_LONG("CM_PKWARE_IMPLODE", ZIP_CM_PKWARE_IMPLODE); REGISTER_ZIP_CLASS_CONST_LONG("CM_BZIP2", ZIP_CM_BZIP2); REGISTER_ZIP_CLASS_CONST_LONG("CM_LZMA", ZIP_CM_LZMA); REGISTER_ZIP_CLASS_CONST_LONG("CM_TERSE", ZIP_CM_TERSE); REGISTER_ZIP_CLASS_CONST_LONG("CM_LZ77", ZIP_CM_LZ77); REGISTER_ZIP_CLASS_CONST_LONG("CM_WAVPACK", ZIP_CM_WAVPACK); REGISTER_ZIP_CLASS_CONST_LONG("CM_PPMD", ZIP_CM_PPMD); /* Error code */ REGISTER_ZIP_CLASS_CONST_LONG("ER_OK", ZIP_ER_OK); /* N No error */ REGISTER_ZIP_CLASS_CONST_LONG("ER_MULTIDISK", ZIP_ER_MULTIDISK); /* N Multi-disk zip archives not supported */ REGISTER_ZIP_CLASS_CONST_LONG("ER_RENAME", ZIP_ER_RENAME); /* S Renaming temporary file failed */ REGISTER_ZIP_CLASS_CONST_LONG("ER_CLOSE", ZIP_ER_CLOSE); /* S Closing zip archive failed */ REGISTER_ZIP_CLASS_CONST_LONG("ER_SEEK", ZIP_ER_SEEK); /* S Seek error */ REGISTER_ZIP_CLASS_CONST_LONG("ER_READ", ZIP_ER_READ); /* S Read error */ REGISTER_ZIP_CLASS_CONST_LONG("ER_WRITE", ZIP_ER_WRITE); /* S Write error */ REGISTER_ZIP_CLASS_CONST_LONG("ER_CRC", ZIP_ER_CRC); /* N CRC error */ REGISTER_ZIP_CLASS_CONST_LONG("ER_ZIPCLOSED", ZIP_ER_ZIPCLOSED); /* N Containing zip archive was closed */ 
REGISTER_ZIP_CLASS_CONST_LONG("ER_NOENT", ZIP_ER_NOENT); /* N No such file */ REGISTER_ZIP_CLASS_CONST_LONG("ER_EXISTS", ZIP_ER_EXISTS); /* N File already exists */ REGISTER_ZIP_CLASS_CONST_LONG("ER_OPEN", ZIP_ER_OPEN); /* S Can't open file */ REGISTER_ZIP_CLASS_CONST_LONG("ER_TMPOPEN", ZIP_ER_TMPOPEN); /* S Failure to create temporary file */ REGISTER_ZIP_CLASS_CONST_LONG("ER_ZLIB", ZIP_ER_ZLIB); /* Z Zlib error */ REGISTER_ZIP_CLASS_CONST_LONG("ER_MEMORY", ZIP_ER_MEMORY); /* N Malloc failure */ REGISTER_ZIP_CLASS_CONST_LONG("ER_CHANGED", ZIP_ER_CHANGED); /* N Entry has been changed */ REGISTER_ZIP_CLASS_CONST_LONG("ER_COMPNOTSUPP", ZIP_ER_COMPNOTSUPP);/* N Compression method not supported */ REGISTER_ZIP_CLASS_CONST_LONG("ER_EOF", ZIP_ER_EOF); /* N Premature EOF */ REGISTER_ZIP_CLASS_CONST_LONG("ER_INVAL", ZIP_ER_INVAL); /* N Invalid argument */ REGISTER_ZIP_CLASS_CONST_LONG("ER_NOZIP", ZIP_ER_NOZIP); /* N Not a zip archive */ REGISTER_ZIP_CLASS_CONST_LONG("ER_INTERNAL", ZIP_ER_INTERNAL); /* N Internal error */ REGISTER_ZIP_CLASS_CONST_LONG("ER_INCONS", ZIP_ER_INCONS); /* N Zip archive inconsistent */ REGISTER_ZIP_CLASS_CONST_LONG("ER_REMOVE", ZIP_ER_REMOVE); /* S Can't remove file */ REGISTER_ZIP_CLASS_CONST_LONG("ER_DELETED", ZIP_ER_DELETED); /* N Entry has been deleted */ php_register_url_stream_wrapper("zip", &php_stream_zip_wrapper TSRMLS_CC); #endif le_zip_dir = zend_register_list_destructors_ex(php_zip_free_dir, NULL, le_zip_dir_name, module_number); le_zip_entry = zend_register_list_destructors_ex(php_zip_free_entry, NULL, le_zip_entry_name, module_number); return SUCCESS; }
132,104,834,997,612,040,000,000,000,000,000,000,000
php_zip.c
89,169,305,548,046,970,000,000,000,000,000,000,000
[ "CWE-416" ]
CVE-2016-5773
php_zip.c in the zip extension in PHP before 5.5.37, 5.6.x before 5.6.23, and 7.x before 7.0.8 improperly interacts with the unserialize implementation and garbage collection, which allows remote attackers to execute arbitrary code or cause a denial of service (use-after-free and application crash) via crafted serialized data containing a ZipArchive object.
https://nvd.nist.gov/vuln/detail/CVE-2016-5773
9,372
linux
1f461dcdd296eecedaffffc6bae2bfa90bd7eb89
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/1f461dcdd296eecedaffffc6bae2bfa90bd7eb89
ppp: take reference on channels netns Let channels hold a reference on their network namespace. Some channel types, like ppp_async and ppp_synctty, can have their userspace controller running in a different namespace. Therefore they can't rely on them to preclude their netns from being removed from under them. ================================================================== BUG: KASAN: use-after-free in ppp_unregister_channel+0x372/0x3a0 at addr ffff880064e217e0 Read of size 8 by task syz-executor/11581 ============================================================================= BUG net_namespace (Not tainted): kasan: bad access detected ----------------------------------------------------------------------------- Disabling lock debugging due to kernel taint INFO: Allocated in copy_net_ns+0x6b/0x1a0 age=92569 cpu=3 pid=6906 [< none >] ___slab_alloc+0x4c7/0x500 kernel/mm/slub.c:2440 [< none >] __slab_alloc+0x4c/0x90 kernel/mm/slub.c:2469 [< inline >] slab_alloc_node kernel/mm/slub.c:2532 [< inline >] slab_alloc kernel/mm/slub.c:2574 [< none >] kmem_cache_alloc+0x23a/0x2b0 kernel/mm/slub.c:2579 [< inline >] kmem_cache_zalloc kernel/include/linux/slab.h:597 [< inline >] net_alloc kernel/net/core/net_namespace.c:325 [< none >] copy_net_ns+0x6b/0x1a0 kernel/net/core/net_namespace.c:360 [< none >] create_new_namespaces+0x2f6/0x610 kernel/kernel/nsproxy.c:95 [< none >] copy_namespaces+0x297/0x320 kernel/kernel/nsproxy.c:150 [< none >] copy_process.part.35+0x1bf4/0x5760 kernel/kernel/fork.c:1451 [< inline >] copy_process kernel/kernel/fork.c:1274 [< none >] _do_fork+0x1bc/0xcb0 kernel/kernel/fork.c:1723 [< inline >] SYSC_clone kernel/kernel/fork.c:1832 [< none >] SyS_clone+0x37/0x50 kernel/kernel/fork.c:1826 [< none >] entry_SYSCALL_64_fastpath+0x16/0x7a kernel/arch/x86/entry/entry_64.S:185 INFO: Freed in net_drop_ns+0x67/0x80 age=575 cpu=2 pid=2631 [< none >] __slab_free+0x1fc/0x320 kernel/mm/slub.c:2650 [< inline >] slab_free kernel/mm/slub.c:2805 [< none >] kmem_cache_free+0x2a0/0x330 kernel/mm/slub.c:2814 [< inline >] net_free kernel/net/core/net_namespace.c:341 [< none >] net_drop_ns+0x67/0x80 kernel/net/core/net_namespace.c:348 [< none >] cleanup_net+0x4e5/0x600 kernel/net/core/net_namespace.c:448 [< none >] process_one_work+0x794/0x1440 kernel/kernel/workqueue.c:2036 [< none >] worker_thread+0xdb/0xfc0 kernel/kernel/workqueue.c:2170 [< none >] kthread+0x23f/0x2d0 kernel/drivers/block/aoe/aoecmd.c:1303 [< none >] ret_from_fork+0x3f/0x70 kernel/arch/x86/entry/entry_64.S:468 INFO: Slab 0xffffea0001938800 objects=3 used=0 fp=0xffff880064e20000 flags=0x5fffc0000004080 INFO: Object 0xffff880064e20000 @offset=0 fp=0xffff880064e24200 CPU: 1 PID: 11581 Comm: syz-executor Tainted: G B 4.4.0+ Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.2-0-g33fbe13 by qemu-project.org 04/01/2014 00000000ffffffff ffff8800662c7790 ffffffff8292049d ffff88003e36a300 ffff880064e20000 ffff880064e20000 ffff8800662c77c0 ffffffff816f2054 ffff88003e36a300 ffffea0001938800 ffff880064e20000 0000000000000000 Call Trace: [< inline >] __dump_stack kernel/lib/dump_stack.c:15 [<ffffffff8292049d>] dump_stack+0x6f/0xa2 kernel/lib/dump_stack.c:50 [<ffffffff816f2054>] print_trailer+0xf4/0x150 kernel/mm/slub.c:654 [<ffffffff816f875f>] object_err+0x2f/0x40 kernel/mm/slub.c:661 [< inline >] print_address_description kernel/mm/kasan/report.c:138 [<ffffffff816fb0c5>] kasan_report_error+0x215/0x530 kernel/mm/kasan/report.c:236 [< inline >] kasan_report kernel/mm/kasan/report.c:259 [<ffffffff816fb4de>] 
__asan_report_load8_noabort+0x3e/0x40 kernel/mm/kasan/report.c:280 [< inline >] ? ppp_pernet kernel/include/linux/compiler.h:218 [<ffffffff83ad71b2>] ? ppp_unregister_channel+0x372/0x3a0 kernel/drivers/net/ppp/ppp_generic.c:2392 [< inline >] ppp_pernet kernel/include/linux/compiler.h:218 [<ffffffff83ad71b2>] ppp_unregister_channel+0x372/0x3a0 kernel/drivers/net/ppp/ppp_generic.c:2392 [< inline >] ? ppp_pernet kernel/drivers/net/ppp/ppp_generic.c:293 [<ffffffff83ad6f26>] ? ppp_unregister_channel+0xe6/0x3a0 kernel/drivers/net/ppp/ppp_generic.c:2392 [<ffffffff83ae18f3>] ppp_asynctty_close+0xa3/0x130 kernel/drivers/net/ppp/ppp_async.c:241 [<ffffffff83ae1850>] ? async_lcp_peek+0x5b0/0x5b0 kernel/drivers/net/ppp/ppp_async.c:1000 [<ffffffff82c33239>] tty_ldisc_close.isra.1+0x99/0xe0 kernel/drivers/tty/tty_ldisc.c:478 [<ffffffff82c332c0>] tty_ldisc_kill+0x40/0x170 kernel/drivers/tty/tty_ldisc.c:744 [<ffffffff82c34943>] tty_ldisc_release+0x1b3/0x260 kernel/drivers/tty/tty_ldisc.c:772 [<ffffffff82c1ef21>] tty_release+0xac1/0x13e0 kernel/drivers/tty/tty_io.c:1901 [<ffffffff82c1e460>] ? release_tty+0x320/0x320 kernel/drivers/tty/tty_io.c:1688 [<ffffffff8174de36>] __fput+0x236/0x780 kernel/fs/file_table.c:208 [<ffffffff8174e405>] ____fput+0x15/0x20 kernel/fs/file_table.c:244 [<ffffffff813595ab>] task_work_run+0x16b/0x200 kernel/kernel/task_work.c:115 [< inline >] exit_task_work kernel/include/linux/task_work.h:21 [<ffffffff81307105>] do_exit+0x8b5/0x2c60 kernel/kernel/exit.c:750 [<ffffffff813fdd20>] ? debug_check_no_locks_freed+0x290/0x290 kernel/kernel/locking/lockdep.c:4123 [<ffffffff81306850>] ? mm_update_next_owner+0x6f0/0x6f0 kernel/kernel/exit.c:357 [<ffffffff813215e6>] ? __dequeue_signal+0x136/0x470 kernel/kernel/signal.c:550 [<ffffffff8132067b>] ? recalc_sigpending_tsk+0x13b/0x180 kernel/kernel/signal.c:145 [<ffffffff81309628>] do_group_exit+0x108/0x330 kernel/kernel/exit.c:880 [<ffffffff8132b9d4>] get_signal+0x5e4/0x14f0 kernel/kernel/signal.c:2307 [< inline >] ? kretprobe_table_lock kernel/kernel/kprobes.c:1113 [<ffffffff8151d355>] ? kprobe_flush_task+0xb5/0x450 kernel/kernel/kprobes.c:1158 [<ffffffff8115f7d3>] do_signal+0x83/0x1c90 kernel/arch/x86/kernel/signal.c:712 [<ffffffff8151d2a0>] ? recycle_rp_inst+0x310/0x310 kernel/include/linux/list.h:655 [<ffffffff8115f750>] ? setup_sigcontext+0x780/0x780 kernel/arch/x86/kernel/signal.c:165 [<ffffffff81380864>] ? finish_task_switch+0x424/0x5f0 kernel/kernel/sched/core.c:2692 [< inline >] ? finish_lock_switch kernel/kernel/sched/sched.h:1099 [<ffffffff81380560>] ? finish_task_switch+0x120/0x5f0 kernel/kernel/sched/core.c:2678 [< inline >] ? context_switch kernel/kernel/sched/core.c:2807 [<ffffffff85d794e9>] ? 
__schedule+0x919/0x1bd0 kernel/kernel/sched/core.c:3283 [<ffffffff81003901>] exit_to_usermode_loop+0xf1/0x1a0 kernel/arch/x86/entry/common.c:247 [< inline >] prepare_exit_to_usermode kernel/arch/x86/entry/common.c:282 [<ffffffff810062ef>] syscall_return_slowpath+0x19f/0x210 kernel/arch/x86/entry/common.c:344 [<ffffffff85d88022>] int_ret_from_sys_call+0x25/0x9f kernel/arch/x86/entry/entry_64.S:281 Memory state around the buggy address: ffff880064e21680: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ffff880064e21700: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb >ffff880064e21780: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ^ ffff880064e21800: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ffff880064e21880: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ================================================================== Fixes: 273ec51dd7ce ("net: ppp_generic - introduce net-namespace functionality v2") Reported-by: Baozeng Ding <sploving1@gmail.com> Signed-off-by: Guillaume Nault <g.nault@alphalink.fr> Reviewed-by: Cyrill Gorcunov <gorcunov@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
1
ppp_unregister_channel(struct ppp_channel *chan) { struct channel *pch = chan->ppp; struct ppp_net *pn; if (!pch) return; /* should never happen */ chan->ppp = NULL; /* * This ensures that we have returned from any calls into the * the channel's start_xmit or ioctl routine before we proceed. */ down_write(&pch->chan_sem); spin_lock_bh(&pch->downl); pch->chan = NULL; spin_unlock_bh(&pch->downl); up_write(&pch->chan_sem); ppp_disconnect_channel(pch); pn = ppp_pernet(pch->chan_net); spin_lock_bh(&pn->all_channels_lock); list_del(&pch->list); spin_unlock_bh(&pn->all_channels_lock); pch->file.dead = 1; wake_up_interruptible(&pch->file.rwait); if (atomic_dec_and_test(&pch->file.refcnt)) ppp_destroy_channel(pch); }
219,099,927,735,173,400,000,000,000,000,000,000,000
ppp_generic.c
33,074,513,998,217,310,000,000,000,000,000,000,000
[ "CWE-416" ]
CVE-2016-4805
Use-after-free vulnerability in drivers/net/ppp/ppp_generic.c in the Linux kernel before 4.5.2 allows local users to cause a denial of service (memory corruption and system crash, or spinlock) or possibly have unspecified other impact by removing a network namespace, related to the ppp_register_net_channel and ppp_unregister_channel functions.
https://nvd.nist.gov/vuln/detail/CVE-2016-4805
9,389
openssl
0ed26acce328ec16a3aa635f1ca37365e8c7403a
https://github.com/openssl/openssl
https://github.com/openssl/openssl/commit/0ed26acce328ec16a3aa635f1ca37365e8c7403a
Fix OOB read in TS_OBJ_print_bio(). TS_OBJ_print_bio() misuses OBJ_obj2txt: it should print the result as a null-terminated buffer. The length value returned is the total length the complete text representation would need, not the amount of data written. CVE-2016-2180 Thanks to Shi Lei for reporting this bug. Reviewed-by: Matt Caswell <matt@openssl.org>
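A minimal sketch of a safe version, assuming the intent is simply to print the (possibly truncated) textual form: OBJ_obj2txt() always NUL-terminates obj_txt, so printing it as a string sidesteps the over-long length. This is close to what the commit message implies, but it is not quoted from the patch.

int TS_OBJ_print_bio(BIO *bio, const ASN1_OBJECT *obj)
{
    char obj_txt[128];

    /* The return value may exceed sizeof(obj_txt); ignore it and rely
     * on the NUL terminator instead of passing it to BIO_write(). */
    OBJ_obj2txt(obj_txt, sizeof(obj_txt), obj, 0);
    BIO_printf(bio, "%s\n", obj_txt);

    return 1;
}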
1
int TS_OBJ_print_bio(BIO *bio, const ASN1_OBJECT *obj) { char obj_txt[128]; int len = OBJ_obj2txt(obj_txt, sizeof(obj_txt), obj, 0); BIO_write(bio, obj_txt, len); BIO_write(bio, "\n", 1); return 1; }
23,563,415,281,384,434,000,000,000,000,000,000,000
None
null
[ "CWE-125" ]
CVE-2016-2180
The TS_OBJ_print_bio function in crypto/ts/ts_lib.c in the X.509 Public Key Infrastructure Time-Stamp Protocol (TSP) implementation in OpenSSL through 1.0.2h allows remote attackers to cause a denial of service (out-of-bounds read and application crash) via a crafted time-stamp file that is mishandled by the "openssl ts" command.
https://nvd.nist.gov/vuln/detail/CVE-2016-2180
9,390
linux
23c8a812dc3c621009e4f0e5342aa4e2ede1ceaa
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/23c8a812dc3c621009e4f0e5342aa4e2ede1ceaa
KEYS: Fix ASN.1 indefinite length object parsing This fixes CVE-2016-0758. In the ASN.1 decoder, when the length field of an ASN.1 value is extracted, it isn't validated against the remaining amount of data before being added to the cursor. With a sufficiently large size indicated, the check: datalen - dp < 2 may then fail due to integer overflow. Fix this by checking the length indicated against the amount of remaining data in both places a definite length is determined. Whilst we're at it, make the following changes: (1) Check the maximum size of extended length does not exceed the capacity of the variable it's being stored in (len) rather than the type that variable is assumed to be (size_t). (2) Compare the EOC tag to the symbolic constant ASN1_EOC rather than the integer 0. (3) To reduce confusion, move the initialisation of len outside of: for (len = 0; n > 0; n--) { since it doesn't have anything to do with the loop counter n. Signed-off-by: David Howells <dhowells@redhat.com> Reviewed-by: Mimi Zohar <zohar@linux.vnet.ibm.com> Acked-by: David Woodhouse <David.Woodhouse@intel.com> Acked-by: Peter Jones <pjones@redhat.com>
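A sketch of the bounds checks the message describes, in the shape of the decoder below; only the definite-length paths are shown, the elided indefinite-length handling is unchanged, and variable names follow the vulnerable function.

        /* short form: len fits in one byte */
        len = data[dp++];
        if (len <= 0x7f) {
                if (len > datalen - dp)
                        goto data_overrun_error;   /* added check */
                dp += len;
                goto next_tag;
        }
        /* ... indefinite-length handling as before ... */

        /* long form: n length bytes follow */
        n = len - 0x80;
        if (unlikely(n > sizeof(len) - 1))
                goto length_too_long;      /* capacity of len, not size_t */
        if (unlikely(n > datalen - dp))
                goto data_overrun_error;
        for (len = 0; n > 0; n--) {
                len <<= 8;
                len |= data[dp++];
        }
        if (len > datalen - dp)
                goto data_overrun_error;   /* second added check */
        dp += len;
        goto next_tag;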
1
static int asn1_find_indefinite_length(const unsigned char *data, size_t datalen, size_t *_dp, size_t *_len, const char **_errmsg) { unsigned char tag, tmp; size_t dp = *_dp, len, n; int indef_level = 1; next_tag: if (unlikely(datalen - dp < 2)) { if (datalen == dp) goto missing_eoc; goto data_overrun_error; } /* Extract a tag from the data */ tag = data[dp++]; if (tag == 0) { /* It appears to be an EOC. */ if (data[dp++] != 0) goto invalid_eoc; if (--indef_level <= 0) { *_len = dp - *_dp; *_dp = dp; return 0; } goto next_tag; } if (unlikely((tag & 0x1f) == ASN1_LONG_TAG)) { do { if (unlikely(datalen - dp < 2)) goto data_overrun_error; tmp = data[dp++]; } while (tmp & 0x80); } /* Extract the length */ len = data[dp++]; if (len <= 0x7f) { dp += len; goto next_tag; } if (unlikely(len == ASN1_INDEFINITE_LENGTH)) { /* Indefinite length */ if (unlikely((tag & ASN1_CONS_BIT) == ASN1_PRIM << 5)) goto indefinite_len_primitive; indef_level++; goto next_tag; } n = len - 0x80; if (unlikely(n > sizeof(size_t) - 1)) goto length_too_long; if (unlikely(n > datalen - dp)) goto data_overrun_error; for (len = 0; n > 0; n--) { len <<= 8; len |= data[dp++]; } dp += len; goto next_tag; length_too_long: *_errmsg = "Unsupported length"; goto error; indefinite_len_primitive: *_errmsg = "Indefinite len primitive not permitted"; goto error; invalid_eoc: *_errmsg = "Invalid length EOC"; goto error; data_overrun_error: *_errmsg = "Data overrun error"; goto error; missing_eoc: *_errmsg = "Missing EOC in indefinite len cons"; error: *_dp = dp; return -1; }
207,849,252,963,623,300,000,000,000,000,000,000,000
asn1_decoder.c
194,942,407,892,131,830,000,000,000,000,000,000,000
[ "CWE-787" ]
CVE-2016-0758
Integer overflow in lib/asn1_decoder.c in the Linux kernel before 4.6 allows local users to gain privileges via crafted ASN.1 data.
https://nvd.nist.gov/vuln/detail/CVE-2016-0758
9,391
linux
5c17c861a357e9458001f021a7afa7aab9937439
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/5c17c861a357e9458001f021a7afa7aab9937439
tty: Fix unsafe ldisc reference via ioctl(TIOCGETD) ioctl(TIOCGETD) retrieves the line discipline id directly from the ldisc because the line discipline id (c_line) in termios is untrustworthy; userspace may have set termios via ioctl(TCSETS*) without actually changing the line discipline via ioctl(TIOCSETD). However, directly accessing the current ldisc via tty->ldisc is unsafe; the ldisc ptr dereferenced may be stale if the line discipline is changing via ioctl(TIOCSETD) or hangup. Wait for the line discipline reference (just like read() or write()) to retrieve the "current" line discipline id. Cc: <stable@vger.kernel.org> Signed-off-by: Peter Hurley <peter@hurleysoftware.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
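The description maps onto a small helper: take the same waiting line-discipline reference that read() and write() take, read the number from it, and drop the reference. A sketch under that assumption (the helper name is illustrative):

static int tiocgetd(struct tty_struct *tty, int __user *p)
{
        struct tty_ldisc *ld;
        int ret;

        /* wait for any in-flight TIOCSETD or hangup to settle, then pin */
        ld = tty_ldisc_ref_wait(tty);
        ret = put_user(ld->ops->num, p);
        tty_ldisc_deref(ld);

        return ret;
}

        /* and in tty_ioctl(): */
        case TIOCGETD:
                return tiocgetd(tty, p);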
1
long tty_ioctl(struct file *file, unsigned int cmd, unsigned long arg) { struct tty_struct *tty = file_tty(file); struct tty_struct *real_tty; void __user *p = (void __user *)arg; int retval; struct tty_ldisc *ld; if (tty_paranoia_check(tty, file_inode(file), "tty_ioctl")) return -EINVAL; real_tty = tty_pair_get_tty(tty); /* * Factor out some common prep work */ switch (cmd) { case TIOCSETD: case TIOCSBRK: case TIOCCBRK: case TCSBRK: case TCSBRKP: retval = tty_check_change(tty); if (retval) return retval; if (cmd != TIOCCBRK) { tty_wait_until_sent(tty, 0); if (signal_pending(current)) return -EINTR; } break; } /* * Now do the stuff. */ switch (cmd) { case TIOCSTI: return tiocsti(tty, p); case TIOCGWINSZ: return tiocgwinsz(real_tty, p); case TIOCSWINSZ: return tiocswinsz(real_tty, p); case TIOCCONS: return real_tty != tty ? -EINVAL : tioccons(file); case FIONBIO: return fionbio(file, p); case TIOCEXCL: set_bit(TTY_EXCLUSIVE, &tty->flags); return 0; case TIOCNXCL: clear_bit(TTY_EXCLUSIVE, &tty->flags); return 0; case TIOCGEXCL: { int excl = test_bit(TTY_EXCLUSIVE, &tty->flags); return put_user(excl, (int __user *)p); } case TIOCNOTTY: if (current->signal->tty != tty) return -ENOTTY; no_tty(); return 0; case TIOCSCTTY: return tiocsctty(real_tty, file, arg); case TIOCGPGRP: return tiocgpgrp(tty, real_tty, p); case TIOCSPGRP: return tiocspgrp(tty, real_tty, p); case TIOCGSID: return tiocgsid(tty, real_tty, p); case TIOCGETD: return put_user(tty->ldisc->ops->num, (int __user *)p); case TIOCSETD: return tiocsetd(tty, p); case TIOCVHANGUP: if (!capable(CAP_SYS_ADMIN)) return -EPERM; tty_vhangup(tty); return 0; case TIOCGDEV: { unsigned int ret = new_encode_dev(tty_devnum(real_tty)); return put_user(ret, (unsigned int __user *)p); } /* * Break handling */ case TIOCSBRK: /* Turn break on, unconditionally */ if (tty->ops->break_ctl) return tty->ops->break_ctl(tty, -1); return 0; case TIOCCBRK: /* Turn break off, unconditionally */ if (tty->ops->break_ctl) return tty->ops->break_ctl(tty, 0); return 0; case TCSBRK: /* SVID version: non-zero arg --> no break */ /* non-zero arg means wait for all output data * to be sent (performed above) but don't send break. * This is used by the tcdrain() termios function. */ if (!arg) return send_break(tty, 250); return 0; case TCSBRKP: /* support for POSIX tcsendbreak() */ return send_break(tty, arg ? arg*100 : 250); case TIOCMGET: return tty_tiocmget(tty, p); case TIOCMSET: case TIOCMBIC: case TIOCMBIS: return tty_tiocmset(tty, cmd, p); case TIOCGICOUNT: retval = tty_tiocgicount(tty, p); /* For the moment allow fall through to the old method */ if (retval != -EINVAL) return retval; break; case TCFLSH: switch (arg) { case TCIFLUSH: case TCIOFLUSH: /* flush tty buffer and allow ldisc to process ioctl */ tty_buffer_flush(tty, NULL); break; } break; case TIOCSSERIAL: tty_warn_deprecated_flags(p); break; } if (tty->ops->ioctl) { retval = tty->ops->ioctl(tty, cmd, arg); if (retval != -ENOIOCTLCMD) return retval; } ld = tty_ldisc_ref_wait(tty); retval = -EINVAL; if (ld->ops->ioctl) { retval = ld->ops->ioctl(tty, file, cmd, arg); if (retval == -ENOIOCTLCMD) retval = -ENOTTY; } tty_ldisc_deref(ld); return retval; }
250,376,623,282,860,260,000,000,000,000,000,000,000
tty_io.c
271,836,844,159,911,300,000,000,000,000,000,000,000
[ "CWE-362" ]
CVE-2016-0723
Race condition in the tty_ioctl function in drivers/tty/tty_io.c in the Linux kernel through 4.4.1 allows local users to obtain sensitive information from kernel memory or cause a denial of service (use-after-free and system crash) by making a TIOCGETD ioctl call during processing of a TIOCSETD ioctl call.
https://nvd.nist.gov/vuln/detail/CVE-2016-0723
9,392
dosfstools
07908124838afcc99c577d1d3e84cef2dbd39cb7
https://github.com/dosfstools/dosfstools
https://github.com/dosfstools/dosfstools/commit/07908124838afcc99c577d1d3e84cef2dbd39cb7
set_fat(): Fix off-by-2 error leading to corruption in FAT12 In FAT12 two 12 bit entries are combined to a 24 bit value (three bytes). Therefore, when an even numbered FAT entry is set in FAT12, it must be combined with the following entry. To prevent accessing beyond the end of the FAT array, it must be checked that the cluster is not the last one. Previously, the check tested that the requested cluster was equal to fs->clusters - 1. However, fs->clusters is the number of data clusters, not including the two reserved FAT entries at the start, so the test triggered two clusters early. If the third to last entry was written on a FAT12 filesystem with an odd number of clusters, the second to last entry would be corrupted. This corruption may also lead to invalid memory accesses when the corrupted entry becomes out of bounds and is used later. Change the test to fs->clusters + 1 to fix. Reported-by: Hanno Böck Signed-off-by: Andreas Bombe <aeb@debian.org>
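Applied to the even-cluster branch of set_fat() below, the corrected guard looks roughly like this (the FAT has fs->clusters + 2 entries because of the two reserved slots, so cluster number fs->clusters + 1 is the last valid one):

                FAT_ENTRY subseqEntry;
                /* only peek at the next entry if this is not the last one */
                if (cluster != fs->clusters + 1)
                        get_fat(&subseqEntry, fs->fat, cluster + 1, fs);
                else
                        subseqEntry.value = 0;
                data[0] = new & 0xff;
                data[1] = (new >> 8) | ((0xff & subseqEntry.value) << 4);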
1
void set_fat(DOS_FS * fs, uint32_t cluster, int32_t new) { unsigned char *data = NULL; int size; loff_t offs; if (new == -1) new = FAT_EOF(fs); else if ((long)new == -2) new = FAT_BAD(fs); switch (fs->fat_bits) { case 12: data = fs->fat + cluster * 3 / 2; offs = fs->fat_start + cluster * 3 / 2; if (cluster & 1) { FAT_ENTRY prevEntry; get_fat(&prevEntry, fs->fat, cluster - 1, fs); data[0] = ((new & 0xf) << 4) | (prevEntry.value >> 8); data[1] = new >> 4; } else { FAT_ENTRY subseqEntry; if (cluster != fs->clusters - 1) get_fat(&subseqEntry, fs->fat, cluster + 1, fs); else subseqEntry.value = 0; data[0] = new & 0xff; data[1] = (new >> 8) | ((0xff & subseqEntry.value) << 4); } size = 2; break; case 16: data = fs->fat + cluster * 2; offs = fs->fat_start + cluster * 2; *(unsigned short *)data = htole16(new); size = 2; break; case 32: { FAT_ENTRY curEntry; get_fat(&curEntry, fs->fat, cluster, fs); data = fs->fat + cluster * 4; offs = fs->fat_start + cluster * 4; /* According to M$, the high 4 bits of a FAT32 entry are reserved and * are not part of the cluster number. So we never touch them. */ *(uint32_t *)data = htole32((new & 0xfffffff) | (curEntry.reserved << 28)); size = 4; } break; default: die("Bad FAT entry size: %d bits.", fs->fat_bits); } fs_write(offs, size, data); if (fs->nfats > 1) { fs_write(offs + fs->fat_size, size, data); } }
55,173,971,612,937,990,000,000,000,000,000,000,000
fat.c
15,574,431,569,440,080,000,000,000,000,000,000,000
[ "CWE-189" ]
CVE-2015-8872
The set_fat function in fat.c in dosfstools before 4.0 might allow attackers to corrupt a FAT12 filesystem or cause a denial of service (invalid memory read and crash) by writing an odd number of clusters to the third to last entry on a FAT12 filesystem, which triggers an "off-by-two error."
https://nvd.nist.gov/vuln/detail/CVE-2015-8872
9,394
linux
e50293ef9775c5f1cf3fcc093037dd6a8c5684ea
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/e50293ef9775c5f1cf3fcc093037dd6a8c5684ea
USB: fix invalid memory access in hub_activate() Commit 8520f38099cc ("USB: change hub initialization sleeps to delayed_work") changed the hub_activate() routine to make part of it run in a workqueue. However, the commit failed to take a reference to the usb_hub structure or to lock the hub interface while doing so. As a result, if a hub is plugged in and quickly unplugged before the work routine can run, the routine will try to access memory that has been deallocated. Or, if the hub is unplugged while the routine is running, the memory may be deallocated while it is in active use. This patch fixes the problem by taking a reference to the usb_hub at the start of hub_activate() and releasing it at the end (when the work is finished), and by locking the hub interface while the work routine is running. It also adds a check at the start of the routine to see if the hub has already been disconnected, in which case nothing should be done. Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Reported-by: Alexandru Cornea <alexandru.cornea@intel.com> Tested-by: Alexandru Cornea <alexandru.cornea@intel.com> Fixes: 8520f38099cc ("USB: change hub initialization sleeps to delayed_work") CC: <stable@vger.kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
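A sketch of the shape that description suggests: pin the usb_hub before queuing the delayed work, serialize the continuation against disconnect, and bail out early if the hub is already gone. The field and helper names (kref, disconnected, hub_release) follow hub.c conventions and are assumptions here, not a quote of the patch.

        /* at the top of hub_activate() */
        if (type == HUB_INIT2 || type == HUB_INIT3) {
                device_lock(&hdev->dev);
                /* was the hub disconnected while the work was queued? */
                if (hub->disconnected) {
                        device_unlock(&hdev->dev);
                        kref_put(&hub->kref, hub_release);
                        return;
                }
                if (type == HUB_INIT2)
                        goto init2;
                goto init3;
        }
        kref_get(&hub->kref);   /* dropped again once initialization ends */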
1
static void hub_activate(struct usb_hub *hub, enum hub_activation_type type) { struct usb_device *hdev = hub->hdev; struct usb_hcd *hcd; int ret; int port1; int status; bool need_debounce_delay = false; unsigned delay; /* Continue a partial initialization */ if (type == HUB_INIT2) goto init2; if (type == HUB_INIT3) goto init3; /* The superspeed hub except for root hub has to use Hub Depth * value as an offset into the route string to locate the bits * it uses to determine the downstream port number. So hub driver * should send a set hub depth request to superspeed hub after * the superspeed hub is set configuration in initialization or * reset procedure. * * After a resume, port power should still be on. * For any other type of activation, turn it on. */ if (type != HUB_RESUME) { if (hdev->parent && hub_is_superspeed(hdev)) { ret = usb_control_msg(hdev, usb_sndctrlpipe(hdev, 0), HUB_SET_DEPTH, USB_RT_HUB, hdev->level - 1, 0, NULL, 0, USB_CTRL_SET_TIMEOUT); if (ret < 0) dev_err(hub->intfdev, "set hub depth failed\n"); } /* Speed up system boot by using a delayed_work for the * hub's initial power-up delays. This is pretty awkward * and the implementation looks like a home-brewed sort of * setjmp/longjmp, but it saves at least 100 ms for each * root hub (assuming usbcore is compiled into the kernel * rather than as a module). It adds up. * * This can't be done for HUB_RESUME or HUB_RESET_RESUME * because for those activation types the ports have to be * operational when we return. In theory this could be done * for HUB_POST_RESET, but it's easier not to. */ if (type == HUB_INIT) { delay = hub_power_on_good_delay(hub); hub_power_on(hub, false); INIT_DELAYED_WORK(&hub->init_work, hub_init_func2); queue_delayed_work(system_power_efficient_wq, &hub->init_work, msecs_to_jiffies(delay)); /* Suppress autosuspend until init is done */ usb_autopm_get_interface_no_resume( to_usb_interface(hub->intfdev)); return; /* Continues at init2: below */ } else if (type == HUB_RESET_RESUME) { /* The internal host controller state for the hub device * may be gone after a host power loss on system resume. * Update the device's info so the HW knows it's a hub. */ hcd = bus_to_hcd(hdev->bus); if (hcd->driver->update_hub_device) { ret = hcd->driver->update_hub_device(hcd, hdev, &hub->tt, GFP_NOIO); if (ret < 0) { dev_err(hub->intfdev, "Host not " "accepting hub info " "update.\n"); dev_err(hub->intfdev, "LS/FS devices " "and hubs may not work " "under this hub\n."); } } hub_power_on(hub, true); } else { hub_power_on(hub, true); } } init2: /* * Check each port and set hub->change_bits to let hub_wq know * which ports need attention. */ for (port1 = 1; port1 <= hdev->maxchild; ++port1) { struct usb_port *port_dev = hub->ports[port1 - 1]; struct usb_device *udev = port_dev->child; u16 portstatus, portchange; portstatus = portchange = 0; status = hub_port_status(hub, port1, &portstatus, &portchange); if (udev || (portstatus & USB_PORT_STAT_CONNECTION)) dev_dbg(&port_dev->dev, "status %04x change %04x\n", portstatus, portchange); /* * After anything other than HUB_RESUME (i.e., initialization * or any sort of reset), every port should be disabled. * Unconnected ports should likewise be disabled (paranoia), * and so should ports for which we have no usb_device. 
*/ if ((portstatus & USB_PORT_STAT_ENABLE) && ( type != HUB_RESUME || !(portstatus & USB_PORT_STAT_CONNECTION) || !udev || udev->state == USB_STATE_NOTATTACHED)) { /* * USB3 protocol ports will automatically transition * to Enabled state when detect an USB3.0 device attach. * Do not disable USB3 protocol ports, just pretend * power was lost */ portstatus &= ~USB_PORT_STAT_ENABLE; if (!hub_is_superspeed(hdev)) usb_clear_port_feature(hdev, port1, USB_PORT_FEAT_ENABLE); } /* Clear status-change flags; we'll debounce later */ if (portchange & USB_PORT_STAT_C_CONNECTION) { need_debounce_delay = true; usb_clear_port_feature(hub->hdev, port1, USB_PORT_FEAT_C_CONNECTION); } if (portchange & USB_PORT_STAT_C_ENABLE) { need_debounce_delay = true; usb_clear_port_feature(hub->hdev, port1, USB_PORT_FEAT_C_ENABLE); } if (portchange & USB_PORT_STAT_C_RESET) { need_debounce_delay = true; usb_clear_port_feature(hub->hdev, port1, USB_PORT_FEAT_C_RESET); } if ((portchange & USB_PORT_STAT_C_BH_RESET) && hub_is_superspeed(hub->hdev)) { need_debounce_delay = true; usb_clear_port_feature(hub->hdev, port1, USB_PORT_FEAT_C_BH_PORT_RESET); } /* We can forget about a "removed" device when there's a * physical disconnect or the connect status changes. */ if (!(portstatus & USB_PORT_STAT_CONNECTION) || (portchange & USB_PORT_STAT_C_CONNECTION)) clear_bit(port1, hub->removed_bits); if (!udev || udev->state == USB_STATE_NOTATTACHED) { /* Tell hub_wq to disconnect the device or * check for a new connection */ if (udev || (portstatus & USB_PORT_STAT_CONNECTION) || (portstatus & USB_PORT_STAT_OVERCURRENT)) set_bit(port1, hub->change_bits); } else if (portstatus & USB_PORT_STAT_ENABLE) { bool port_resumed = (portstatus & USB_PORT_STAT_LINK_STATE) == USB_SS_PORT_LS_U0; /* The power session apparently survived the resume. * If there was an overcurrent or suspend change * (i.e., remote wakeup request), have hub_wq * take care of it. Look at the port link state * for USB 3.0 hubs, since they don't have a suspend * change bit, and they don't set the port link change * bit on device-initiated resume. */ if (portchange || (hub_is_superspeed(hub->hdev) && port_resumed)) set_bit(port1, hub->change_bits); } else if (udev->persist_enabled) { #ifdef CONFIG_PM udev->reset_resume = 1; #endif /* Don't set the change_bits when the device * was powered off. */ if (test_bit(port1, hub->power_bits)) set_bit(port1, hub->change_bits); } else { /* The power session is gone; tell hub_wq */ usb_set_device_state(udev, USB_STATE_NOTATTACHED); set_bit(port1, hub->change_bits); } } /* If no port-status-change flags were set, we don't need any * debouncing. If flags were set we can try to debounce the * ports all at once right now, instead of letting hub_wq do them * one at a time later on. * * If any port-status changes do occur during this delay, hub_wq * will see them later and handle them normally. 
*/ if (need_debounce_delay) { delay = HUB_DEBOUNCE_STABLE; /* Don't do a long sleep inside a workqueue routine */ if (type == HUB_INIT2) { INIT_DELAYED_WORK(&hub->init_work, hub_init_func3); queue_delayed_work(system_power_efficient_wq, &hub->init_work, msecs_to_jiffies(delay)); return; /* Continues at init3: below */ } else { msleep(delay); } } init3: hub->quiescing = 0; status = usb_submit_urb(hub->urb, GFP_NOIO); if (status < 0) dev_err(hub->intfdev, "activate --> %d\n", status); if (hub->has_indicators && blinkenlights) queue_delayed_work(system_power_efficient_wq, &hub->leds, LED_CYCLE_PERIOD); /* Scan all ports that need attention */ kick_hub_wq(hub); /* Allow autosuspend if it was suppressed */ if (type <= HUB_INIT3) usb_autopm_put_interface_async(to_usb_interface(hub->intfdev)); }
70,755,082,858,795,600,000,000,000,000,000,000,000
hub.c
290,566,164,472,724,600,000,000,000,000,000,000,000
[ "CWE-703" ]
CVE-2015-8816
The hub_activate function in drivers/usb/core/hub.c in the Linux kernel before 4.3.5 does not properly maintain a hub-interface data structure, which allows physically proximate attackers to cause a denial of service (invalid memory access and system crash) or possibly have unspecified other impact by unplugging a USB hub device.
https://nvd.nist.gov/vuln/detail/CVE-2015-8816
9,401
hexchat
c9b63f7f9be01692b03fa15275135a4910a7e02d
https://github.com/hexchat/hexchat
https://github.com/hexchat/hexchat/commit/c9b63f7f9be01692b03fa15275135a4910a7e02d
ssl: Validate hostnames Closes #524
1
ssl_do_connect (server * serv) { char buf[128]; g_sess = serv->server_session; if (SSL_connect (serv->ssl) <= 0) { char err_buf[128]; int err; g_sess = NULL; if ((err = ERR_get_error ()) > 0) { ERR_error_string (err, err_buf); snprintf (buf, sizeof (buf), "(%d) %s", err, err_buf); EMIT_SIGNAL (XP_TE_CONNFAIL, serv->server_session, buf, NULL, NULL, NULL, 0); if (ERR_GET_REASON (err) == SSL_R_WRONG_VERSION_NUMBER) PrintText (serv->server_session, _("Are you sure this is a SSL capable server and port?\n")); server_cleanup (serv); if (prefs.hex_net_auto_reconnectonfail) auto_reconnect (serv, FALSE, -1); return (0); /* remove it (0) */ } } g_sess = NULL; if (SSL_is_init_finished (serv->ssl)) { struct cert_info cert_info; struct chiper_info *chiper_info; int verify_error; int i; if (!_SSL_get_cert_info (&cert_info, serv->ssl)) { snprintf (buf, sizeof (buf), "* Certification info:"); EMIT_SIGNAL (XP_TE_SSLMESSAGE, serv->server_session, buf, NULL, NULL, NULL, 0); snprintf (buf, sizeof (buf), " Subject:"); EMIT_SIGNAL (XP_TE_SSLMESSAGE, serv->server_session, buf, NULL, NULL, NULL, 0); for (i = 0; cert_info.subject_word[i]; i++) { snprintf (buf, sizeof (buf), " %s", cert_info.subject_word[i]); EMIT_SIGNAL (XP_TE_SSLMESSAGE, serv->server_session, buf, NULL, NULL, NULL, 0); } snprintf (buf, sizeof (buf), " Issuer:"); EMIT_SIGNAL (XP_TE_SSLMESSAGE, serv->server_session, buf, NULL, NULL, NULL, 0); for (i = 0; cert_info.issuer_word[i]; i++) { snprintf (buf, sizeof (buf), " %s", cert_info.issuer_word[i]); EMIT_SIGNAL (XP_TE_SSLMESSAGE, serv->server_session, buf, NULL, NULL, NULL, 0); } snprintf (buf, sizeof (buf), " Public key algorithm: %s (%d bits)", cert_info.algorithm, cert_info.algorithm_bits); EMIT_SIGNAL (XP_TE_SSLMESSAGE, serv->server_session, buf, NULL, NULL, NULL, 0); /*if (cert_info.rsa_tmp_bits) { snprintf (buf, sizeof (buf), " Public key algorithm uses ephemeral key with %d bits", cert_info.rsa_tmp_bits); EMIT_SIGNAL (XP_TE_SSLMESSAGE, serv->server_session, buf, NULL, NULL, NULL, 0); }*/ snprintf (buf, sizeof (buf), " Sign algorithm %s", cert_info.sign_algorithm/*, cert_info.sign_algorithm_bits*/); EMIT_SIGNAL (XP_TE_SSLMESSAGE, serv->server_session, buf, NULL, NULL, NULL, 0); snprintf (buf, sizeof (buf), " Valid since %s to %s", cert_info.notbefore, cert_info.notafter); EMIT_SIGNAL (XP_TE_SSLMESSAGE, serv->server_session, buf, NULL, NULL, NULL, 0); } else { snprintf (buf, sizeof (buf), " * No Certificate"); EMIT_SIGNAL (XP_TE_SSLMESSAGE, serv->server_session, buf, NULL, NULL, NULL, 0); } chiper_info = _SSL_get_cipher_info (serv->ssl); /* static buffer */ snprintf (buf, sizeof (buf), "* Cipher info:"); EMIT_SIGNAL (XP_TE_SSLMESSAGE, serv->server_session, buf, NULL, NULL, NULL, 0); snprintf (buf, sizeof (buf), " Version: %s, cipher %s (%u bits)", chiper_info->version, chiper_info->chiper, chiper_info->chiper_bits); EMIT_SIGNAL (XP_TE_SSLMESSAGE, serv->server_session, buf, NULL, NULL, NULL, 0); verify_error = SSL_get_verify_result (serv->ssl); switch (verify_error) { case X509_V_OK: /* snprintf (buf, sizeof (buf), "* Verify OK (?)"); */ /* EMIT_SIGNAL (XP_TE_SSLMESSAGE, serv->server_session, buf, NULL, NULL, NULL, 0); */ break; case X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY: case X509_V_ERR_UNABLE_TO_VERIFY_LEAF_SIGNATURE: case X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT: case X509_V_ERR_SELF_SIGNED_CERT_IN_CHAIN: case X509_V_ERR_CERT_HAS_EXPIRED: if (serv->accept_invalid_cert) { snprintf (buf, sizeof (buf), "* Verify E: %s.? 
(%d) -- Ignored", X509_verify_cert_error_string (verify_error), verify_error); EMIT_SIGNAL (XP_TE_SSLMESSAGE, serv->server_session, buf, NULL, NULL, NULL, 0); break; } default: snprintf (buf, sizeof (buf), "%s.? (%d)", X509_verify_cert_error_string (verify_error), verify_error); EMIT_SIGNAL (XP_TE_CONNFAIL, serv->server_session, buf, NULL, NULL, NULL, 0); server_cleanup (serv); return (0); } server_stopconnecting (serv); /* activate gtk poll */ server_connected (serv); return (0); /* remove it (0) */ } else { if (serv->ssl->session && serv->ssl->session->time + SSLTMOUT < time (NULL)) { snprintf (buf, sizeof (buf), "SSL handshake timed out"); EMIT_SIGNAL (XP_TE_CONNFAIL, serv->server_session, buf, NULL, NULL, NULL, 0); server_cleanup (serv); /* ->connecting = FALSE */ if (prefs.hex_net_auto_reconnectonfail) auto_reconnect (serv, FALSE, -1); return (0); /* remove it (0) */ } return (1); /* call it more (1) */ } }
283,876,863,796,405,900,000,000,000,000,000,000,000
server.c
59,660,602,516,359,420,000,000,000,000,000,000,000
[ "CWE-310" ]
CVE-2013-7449
The ssl_do_connect function in common/server.c in HexChat before 2.10.2, XChat, and XChat-GNOME does not verify that the server hostname matches a domain name in the X.509 certificate, which allows man-in-the-middle attackers to spoof SSL servers via an arbitrary valid certificate.
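For reference, with OpenSSL 1.0.2 or later the missing check can be expressed directly with X509_check_host() once the chain itself verifies; HexChat's actual 2.10.2 patch predates that API and does its own matching, and serv->hostname is assumed here to hold the name the user connected to, so treat this purely as an illustration of the idea.

        X509 *cert = SSL_get_peer_certificate (serv->ssl);

        /* 1 = hostname matches a SAN or CN in the presented certificate */
        if (cert == NULL || X509_check_host (cert, serv->hostname,
                                             0, 0, NULL) != 1)
        {
                /* emit XP_TE_CONNFAIL and abort the connection unless the
                 * user explicitly accepted an invalid certificate */
        }
        if (cert != NULL)
                X509_free (cert);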
https://nvd.nist.gov/vuln/detail/CVE-2013-7449
9,402
linux
712f4aad406bb1ed67f3f98d04c044191f0ff593
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/712f4aad406bb1ed67f3f98d04c044191f0ff593
unix: properly account for FDs passed over unix sockets It is possible for a process to allocate and accumulate far more FDs than the process' limit by sending them over a unix socket then closing them to keep the process' fd count low. This change addresses this problem by keeping track of the number of FDs in flight per user and preventing non-privileged processes from having more FDs in flight than their configured FD limit. Reported-by: socketpair@gmail.com Reported-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Mitigates: CVE-2013-4312 (Linux 2.0+) Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: Willy Tarreau <w@1wt.eu> Signed-off-by: David S. Miller <davem@davemloft.net>
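The per-user accounting described above reduces to a small predicate plus one early return in unix_attach_fds(); this mirrors the widely known upstream approach, but take the details as a sketch rather than the exact hunk.

static inline bool too_many_unix_fds(struct task_struct *p)
{
        struct user_struct *user = current_user();

        /* user->unix_inflight is bumped in unix_inflight() for every fd
         * put in flight; compare it against the sender's fd limit */
        if (unlikely(user->unix_inflight > task_rlimit(p, RLIMIT_NOFILE)))
                return !capable(CAP_SYS_RESOURCE) && !capable(CAP_SYS_ADMIN);
        return false;
}

        /* first thing in unix_attach_fds(): */
        if (too_many_unix_fds(current))
                return -ETOOMANYREFS;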
1
static int unix_attach_fds(struct scm_cookie *scm, struct sk_buff *skb) { int i; unsigned char max_level = 0; int unix_sock_count = 0; for (i = scm->fp->count - 1; i >= 0; i--) { struct sock *sk = unix_get_socket(scm->fp->fp[i]); if (sk) { unix_sock_count++; max_level = max(max_level, unix_sk(sk)->recursion_level); } } if (unlikely(max_level > MAX_RECURSION_LEVEL)) return -ETOOMANYREFS; /* * Need to duplicate file references for the sake of garbage * collection. Otherwise a socket in the fps might become a * candidate for GC while the skb is not yet queued. */ UNIXCB(skb).fp = scm_fp_dup(scm->fp); if (!UNIXCB(skb).fp) return -ENOMEM; if (unix_sock_count) { for (i = scm->fp->count - 1; i >= 0; i--) unix_inflight(scm->fp->fp[i]); } return max_level; }
129,468,171,766,962,700,000,000,000,000,000,000,000
af_unix.c
94,301,712,041,177,040,000,000,000,000,000,000,000
[ "CWE-119" ]
CVE-2016-2550
The Linux kernel before 4.5 allows local users to bypass file-descriptor limits and cause a denial of service (memory consumption) by leveraging incorrect tracking of descriptor ownership and sending each descriptor over a UNIX socket before closing it. NOTE: this vulnerability exists because of an incorrect fix for CVE-2013-4312.
https://nvd.nist.gov/vuln/detail/CVE-2016-2550
9,407
linux
2b7e8665b4ff51c034c55df3cff76518d1a9ee3a
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/2b7e8665b4ff51c034c55df3cff76518d1a9ee3a
fork: fix incorrect fput of ->exe_file causing use-after-free Commit 7c051267931a ("mm, fork: make dup_mmap wait for mmap_sem for write killable") made it possible to kill a forking task while it is waiting to acquire its ->mmap_sem for write, in dup_mmap(). However, it was overlooked that this introduced an new error path before a reference is taken on the mm_struct's ->exe_file. Since the ->exe_file of the new mm_struct was already set to the old ->exe_file by the memcpy() in dup_mm(), it was possible for the mmput() in the error path of dup_mm() to drop a reference to ->exe_file which was never taken. This caused the struct file to later be freed prematurely. Fix it by updating mm_init() to NULL out the ->exe_file, in the same place it clears other things like the list of mmaps. This bug was found by syzkaller. It can be reproduced using the following C program: #define _GNU_SOURCE #include <pthread.h> #include <stdlib.h> #include <sys/mman.h> #include <sys/syscall.h> #include <sys/wait.h> #include <unistd.h> static void *mmap_thread(void *_arg) { for (;;) { mmap(NULL, 0x1000000, PROT_READ, MAP_POPULATE|MAP_ANONYMOUS|MAP_PRIVATE, -1, 0); } } static void *fork_thread(void *_arg) { usleep(rand() % 10000); fork(); } int main(void) { fork(); fork(); fork(); for (;;) { if (fork() == 0) { pthread_t t; pthread_create(&t, NULL, mmap_thread, NULL); pthread_create(&t, NULL, fork_thread, NULL); usleep(rand() % 10000); syscall(__NR_exit_group, 0); } wait(NULL); } } No special kernel config options are needed. It usually causes a NULL pointer dereference in __remove_shared_vm_struct() during exit, or in dup_mmap() (which is usually inlined into copy_process()) during fork. Both are due to a vm_area_struct's ->vm_file being used after it's already been freed. Google Bug Id: 64772007 Link: http://lkml.kernel.org/r/20170823211408.31198-1-ebiggers3@gmail.com Fixes: 7c051267931a ("mm, fork: make dup_mmap wait for mmap_sem for write killable") Signed-off-by: Eric Biggers <ebiggers@google.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Konstantin Khlebnikov <koct9i@gmail.com> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: <stable@vger.kernel.org> [v4.7+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
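The whole fix reduces to clearing the pointer that dup_mm()'s memcpy() copied, in the same initialisation block mm_init() already uses for mmap and core_state; whether the store is plain or goes through RCU_INIT_POINTER depends on the tree, so the simplest form is shown.

        /* in mm_init(), next to the other "start from a clean slate" stores */
        mm->core_state = NULL;
        mm->exe_file = NULL;    /* dup_mm()'s memcpy() copied the parent's */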
1
static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p, struct user_namespace *user_ns) { mm->mmap = NULL; mm->mm_rb = RB_ROOT; mm->vmacache_seqnum = 0; atomic_set(&mm->mm_users, 1); atomic_set(&mm->mm_count, 1); init_rwsem(&mm->mmap_sem); INIT_LIST_HEAD(&mm->mmlist); mm->core_state = NULL; atomic_long_set(&mm->nr_ptes, 0); mm_nr_pmds_init(mm); mm->map_count = 0; mm->locked_vm = 0; mm->pinned_vm = 0; memset(&mm->rss_stat, 0, sizeof(mm->rss_stat)); spin_lock_init(&mm->page_table_lock); mm_init_cpumask(mm); mm_init_aio(mm); mm_init_owner(mm, p); mmu_notifier_mm_init(mm); init_tlb_flush_pending(mm); #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && !USE_SPLIT_PMD_PTLOCKS mm->pmd_huge_pte = NULL; #endif if (current->mm) { mm->flags = current->mm->flags & MMF_INIT_MASK; mm->def_flags = current->mm->def_flags & VM_INIT_DEF_MASK; } else { mm->flags = default_dump_filter; mm->def_flags = 0; } if (mm_alloc_pgd(mm)) goto fail_nopgd; if (init_new_context(p, mm)) goto fail_nocontext; mm->user_ns = get_user_ns(user_ns); return mm; fail_nocontext: mm_free_pgd(mm); fail_nopgd: free_mm(mm); return NULL; }
266,885,714,175,674,640,000,000,000,000,000,000,000
None
null
[ "CWE-416" ]
CVE-2017-17052
The mm_init function in kernel/fork.c in the Linux kernel before 4.12.10 does not clear the ->exe_file member of a new process's mm_struct, allowing a local attacker to achieve a use-after-free or possibly have unspecified other impact by running a specially crafted program.
https://nvd.nist.gov/vuln/detail/CVE-2017-17052
9,408
linux
7c80f9e4a588f1925b07134bb2e3689335f6c6d8
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/7c80f9e4a588f1925b07134bb2e3689335f6c6d8
usb: usbtest: fix NULL pointer dereference If the usbtest driver encounters a device with an IN bulk endpoint but no OUT bulk endpoint, it will try to dereference a NULL pointer (out->desc.bEndpointAddress). The problem can be solved by adding a missing test. Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Reported-by: Andrey Konovalov <andreyknvl@google.com> Tested-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
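The missing test amounts to not trusting that out is set just because in is: an altsetting can reach the found label through iso or interrupt endpoints alone, leaving out == NULL. A sketch of the guarded bulk-pipe setup, matching the code below rather than necessarily the exact upstream hunk:

        if (in && out) {        /* both bulk endpoints actually found */
                dev->in_pipe = usb_rcvbulkpipe(udev,
                        in->desc.bEndpointAddress & USB_ENDPOINT_NUMBER_MASK);
                dev->out_pipe = usb_sndbulkpipe(udev,
                        out->desc.bEndpointAddress & USB_ENDPOINT_NUMBER_MASK);
        }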
1
get_endpoints(struct usbtest_dev *dev, struct usb_interface *intf) { int tmp; struct usb_host_interface *alt; struct usb_host_endpoint *in, *out; struct usb_host_endpoint *iso_in, *iso_out; struct usb_host_endpoint *int_in, *int_out; struct usb_device *udev; for (tmp = 0; tmp < intf->num_altsetting; tmp++) { unsigned ep; in = out = NULL; iso_in = iso_out = NULL; int_in = int_out = NULL; alt = intf->altsetting + tmp; if (override_alt >= 0 && override_alt != alt->desc.bAlternateSetting) continue; /* take the first altsetting with in-bulk + out-bulk; * ignore other endpoints and altsettings. */ for (ep = 0; ep < alt->desc.bNumEndpoints; ep++) { struct usb_host_endpoint *e; int edi; e = alt->endpoint + ep; edi = usb_endpoint_dir_in(&e->desc); switch (usb_endpoint_type(&e->desc)) { case USB_ENDPOINT_XFER_BULK: endpoint_update(edi, &in, &out, e); continue; case USB_ENDPOINT_XFER_INT: if (dev->info->intr) endpoint_update(edi, &int_in, &int_out, e); continue; case USB_ENDPOINT_XFER_ISOC: if (dev->info->iso) endpoint_update(edi, &iso_in, &iso_out, e); /* FALLTHROUGH */ default: continue; } } if ((in && out) || iso_in || iso_out || int_in || int_out) goto found; } return -EINVAL; found: udev = testdev_to_usbdev(dev); dev->info->alt = alt->desc.bAlternateSetting; if (alt->desc.bAlternateSetting != 0) { tmp = usb_set_interface(udev, alt->desc.bInterfaceNumber, alt->desc.bAlternateSetting); if (tmp < 0) return tmp; } if (in) { dev->in_pipe = usb_rcvbulkpipe(udev, in->desc.bEndpointAddress & USB_ENDPOINT_NUMBER_MASK); dev->out_pipe = usb_sndbulkpipe(udev, out->desc.bEndpointAddress & USB_ENDPOINT_NUMBER_MASK); } if (iso_in) { dev->iso_in = &iso_in->desc; dev->in_iso_pipe = usb_rcvisocpipe(udev, iso_in->desc.bEndpointAddress & USB_ENDPOINT_NUMBER_MASK); } if (iso_out) { dev->iso_out = &iso_out->desc; dev->out_iso_pipe = usb_sndisocpipe(udev, iso_out->desc.bEndpointAddress & USB_ENDPOINT_NUMBER_MASK); } if (int_in) { dev->int_in = &int_in->desc; dev->in_int_pipe = usb_rcvintpipe(udev, int_in->desc.bEndpointAddress & USB_ENDPOINT_NUMBER_MASK); } if (int_out) { dev->int_out = &int_out->desc; dev->out_int_pipe = usb_sndintpipe(udev, int_out->desc.bEndpointAddress & USB_ENDPOINT_NUMBER_MASK); } return 0; }
283,507,030,675,456,930,000,000,000,000,000,000,000
usbtest.c
52,688,706,411,922,670,000,000,000,000,000,000,000
[ "CWE-476" ]
CVE-2017-16532
The get_endpoints function in drivers/usb/misc/usbtest.c in the Linux kernel through 4.13.11 allows local users to cause a denial of service (NULL pointer dereference and system crash) or possibly have unspecified other impact via a crafted USB device.
https://nvd.nist.gov/vuln/detail/CVE-2017-16532
9,409
linux
786de92b3cb26012d3d0f00ee37adf14527f35c4
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/786de92b3cb26012d3d0f00ee37adf14527f35c4
USB: uas: fix bug in handling of alternate settings The uas driver has a subtle bug in the way it handles alternate settings. The uas_find_uas_alt_setting() routine returns an altsetting value (the bAlternateSetting number in the descriptor), but uas_use_uas_driver() then treats that value as an index to the intf->altsetting array, which it isn't. Normally this doesn't cause any problems because the various alternate settings have bAlternateSetting values 0, 1, 2, ..., so the value is equal to the index in the array. But this is not guaranteed, and Andrey Konovalov used the syzkaller fuzzer with KASAN to get a slab-out-of-bounds error by violating this assumption. This patch fixes the bug by making uas_find_uas_alt_setting() return a pointer to the altsetting entry rather than either the value or the index. Pointers are less subject to misinterpretation. Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Reported-by: Andrey Konovalov <andreyknvl@google.com> Tested-by: Andrey Konovalov <andreyknvl@google.com> CC: Oliver Neukum <oneukum@suse.com> CC: <stable@vger.kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
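With uas_find_uas_alt_setting() changed to hand back the altsetting entry itself, the caller below stops reusing the bAlternateSetting number as an array index; a sketch of that pointer-returning shape (error code chosen for illustration):

static int uas_switch_interface(struct usb_device *udev,
                                struct usb_interface *intf)
{
        struct usb_host_interface *alt;

        alt = uas_find_uas_alt_setting(intf);
        if (!alt)
                return -ENODEV;

        return usb_set_interface(udev, alt->desc.bInterfaceNumber,
                        alt->desc.bAlternateSetting);
}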
1
static int uas_switch_interface(struct usb_device *udev, struct usb_interface *intf) { int alt; alt = uas_find_uas_alt_setting(intf); if (alt < 0) return alt; return usb_set_interface(udev, intf->altsetting[0].desc.bInterfaceNumber, alt); }
140,637,843,410,934,800,000,000,000,000,000,000,000
uas.c
20,737,748,293,101,703,000,000,000,000,000,000,000
[ "CWE-125" ]
CVE-2017-16530
The uas driver in the Linux kernel before 4.13.6 allows local users to cause a denial of service (out-of-bounds read and system crash) or possibly have unspecified other impact via a crafted USB device, related to drivers/usb/storage/uas-detect.h and drivers/usb/storage/uas.c.
https://nvd.nist.gov/vuln/detail/CVE-2017-16530
9,415
FFmpeg
c42a1388a6d1bfd8001bf6a4241d8ca27e49326d
https://github.com/FFmpeg/FFmpeg
https://github.com/FFmpeg/FFmpeg/commit/c42a1388a6d1bfd8001bf6a4241d8ca27e49326d
avformat/rtpdec_h264: Fix heap-buffer-overflow Fixes: rtp_sdp/poc.sdp Found-by: Bingchang <l.bing.chang.bc@gmail.com> Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
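The overflow is a one-byte read before the buffer when sprop-parameter-sets is empty: value[strlen(value) - 1] then indexes value[-1]. A sketch of the guard, in the shape of the function below; the rest of the branch continues unchanged.

    } else if (!strcmp(attr, "sprop-parameter-sets")) {
        int ret;

        /* reject an empty value before indexing strlen(value) - 1 */
        if (*value == '\0' || value[strlen(value) - 1] == ',') {
            av_log(s, AV_LOG_WARNING,
                   "Missing PPS in sprop-parameter-sets, ignoring\n");
            return 0;
        }
        /* ... parse the parameter sets as before ... */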
1
static int sdp_parse_fmtp_config_h264(AVFormatContext *s, AVStream *stream, PayloadContext *h264_data, const char *attr, const char *value) { AVCodecParameters *par = stream->codecpar; if (!strcmp(attr, "packetization-mode")) { av_log(s, AV_LOG_DEBUG, "RTP Packetization Mode: %d\n", atoi(value)); h264_data->packetization_mode = atoi(value); /* * Packetization Mode: * 0 or not present: Single NAL mode (Only nals from 1-23 are allowed) * 1: Non-interleaved Mode: 1-23, 24 (STAP-A), 28 (FU-A) are allowed. * 2: Interleaved Mode: 25 (STAP-B), 26 (MTAP16), 27 (MTAP24), 28 (FU-A), * and 29 (FU-B) are allowed. */ if (h264_data->packetization_mode > 1) av_log(s, AV_LOG_ERROR, "Interleaved RTP mode is not supported yet.\n"); } else if (!strcmp(attr, "profile-level-id")) { if (strlen(value) == 6) parse_profile_level_id(s, h264_data, value); } else if (!strcmp(attr, "sprop-parameter-sets")) { int ret; if (value[strlen(value) - 1] == ',') { av_log(s, AV_LOG_WARNING, "Missing PPS in sprop-parameter-sets, ignoring\n"); return 0; } par->extradata_size = 0; av_freep(&par->extradata); ret = ff_h264_parse_sprop_parameter_sets(s, &par->extradata, &par->extradata_size, value); av_log(s, AV_LOG_DEBUG, "Extradata set to %p (size: %d)\n", par->extradata, par->extradata_size); return ret; } return 0; }
3,803,066,890,696,449,000,000,000,000,000,000,000
rtpdec_h264.c
222,521,992,068,081,200,000,000,000,000,000,000,000
[ "CWE-119" ]
CVE-2017-14767
The sdp_parse_fmtp_config_h264 function in libavformat/rtpdec_h264.c in FFmpeg before 3.3.4 mishandles empty sprop-parameter-sets values, which allows remote attackers to cause a denial of service (heap buffer overflow) or possibly have unspecified other impact via a crafted sdp file.
https://nvd.nist.gov/vuln/detail/CVE-2017-14767
9,417
ImageMagick
4eae304e773bad8a876c3c26fdffac24d4253ae4
https://github.com/ImageMagick/ImageMagick
https://github.com/ImageMagick/ImageMagick/commit/4eae304e773bad8a876c3c26fdffac24d4253ae4
None
1
static Image *ReadWPGImage(const ImageInfo *image_info, ExceptionInfo *exception) { typedef struct { size_t FileId; MagickOffsetType DataOffset; unsigned int ProductType; unsigned int FileType; unsigned char MajorVersion; unsigned char MinorVersion; unsigned int EncryptKey; unsigned int Reserved; } WPGHeader; typedef struct { unsigned char RecType; size_t RecordLength; } WPGRecord; typedef struct { unsigned char Class; unsigned char RecType; size_t Extension; size_t RecordLength; } WPG2Record; typedef struct { unsigned HorizontalUnits; unsigned VerticalUnits; unsigned char PosSizePrecision; } WPG2Start; typedef struct { unsigned int Width; unsigned int Height; unsigned int Depth; unsigned int HorzRes; unsigned int VertRes; } WPGBitmapType1; typedef struct { unsigned int Width; unsigned int Height; unsigned char Depth; unsigned char Compression; } WPG2BitmapType1; typedef struct { unsigned int RotAngle; unsigned int LowLeftX; unsigned int LowLeftY; unsigned int UpRightX; unsigned int UpRightY; unsigned int Width; unsigned int Height; unsigned int Depth; unsigned int HorzRes; unsigned int VertRes; } WPGBitmapType2; typedef struct { unsigned int StartIndex; unsigned int NumOfEntries; } WPGColorMapRec; /* typedef struct { size_t PS_unknown1; unsigned int PS_unknown2; unsigned int PS_unknown3; } WPGPSl1Record; */ Image *image; unsigned int status; WPGHeader Header; WPGRecord Rec; WPG2Record Rec2; WPG2Start StartWPG; WPGBitmapType1 BitmapHeader1; WPG2BitmapType1 Bitmap2Header1; WPGBitmapType2 BitmapHeader2; WPGColorMapRec WPG_Palette; int i, bpp, WPG2Flags; ssize_t ldblk; size_t one; unsigned char *BImgBuff; tCTM CTM; /*current transform matrix*/ /* Open image file. */ assert(image_info != (const ImageInfo *) NULL); assert(image_info->signature == MagickSignature); assert(exception != (ExceptionInfo *) NULL); assert(exception->signature == MagickSignature); one=1; image=AcquireImage(image_info); image->depth=8; status=OpenBlob(image_info,image,ReadBinaryBlobMode,exception); if (status == MagickFalse) { image=DestroyImageList(image); return((Image *) NULL); } /* Read WPG image. 
*/ Header.FileId=ReadBlobLSBLong(image); Header.DataOffset=(MagickOffsetType) ReadBlobLSBLong(image); Header.ProductType=ReadBlobLSBShort(image); Header.FileType=ReadBlobLSBShort(image); Header.MajorVersion=ReadBlobByte(image); Header.MinorVersion=ReadBlobByte(image); Header.EncryptKey=ReadBlobLSBShort(image); Header.Reserved=ReadBlobLSBShort(image); if (Header.FileId!=0x435057FF || (Header.ProductType>>8)!=0x16) ThrowReaderException(CorruptImageError,"ImproperImageHeader"); if (Header.EncryptKey!=0) ThrowReaderException(CoderError,"EncryptedWPGImageFileNotSupported"); image->columns = 1; image->rows = 1; image->colors = 0; bpp=0; BitmapHeader2.RotAngle=0; Rec2.RecordLength = 0; switch(Header.FileType) { case 1: /* WPG level 1 */ while(!EOFBlob(image)) /* object parser loop */ { (void) SeekBlob(image,Header.DataOffset,SEEK_SET); if(EOFBlob(image)) break; Rec.RecType=(i=ReadBlobByte(image)); if(i==EOF) break; Rd_WP_DWORD(image,&Rec.RecordLength); if(EOFBlob(image)) break; Header.DataOffset=TellBlob(image)+Rec.RecordLength; switch(Rec.RecType) { case 0x0B: /* bitmap type 1 */ BitmapHeader1.Width=ReadBlobLSBShort(image); BitmapHeader1.Height=ReadBlobLSBShort(image); if ((BitmapHeader1.Width == 0) || (BitmapHeader1.Height == 0)) ThrowReaderException(CorruptImageError,"ImproperImageHeader"); BitmapHeader1.Depth=ReadBlobLSBShort(image); BitmapHeader1.HorzRes=ReadBlobLSBShort(image); BitmapHeader1.VertRes=ReadBlobLSBShort(image); if(BitmapHeader1.HorzRes && BitmapHeader1.VertRes) { image->units=PixelsPerCentimeterResolution; image->x_resolution=BitmapHeader1.HorzRes/470.0; image->y_resolution=BitmapHeader1.VertRes/470.0; } image->columns=BitmapHeader1.Width; image->rows=BitmapHeader1.Height; bpp=BitmapHeader1.Depth; goto UnpackRaster; case 0x0E: /*Color palette */ WPG_Palette.StartIndex=ReadBlobLSBShort(image); WPG_Palette.NumOfEntries=ReadBlobLSBShort(image); if ((WPG_Palette.NumOfEntries-WPG_Palette.StartIndex) > (Rec2.RecordLength-2-2) / 3) ThrowReaderException(CorruptImageError,"InvalidColormapIndex"); image->colors=WPG_Palette.NumOfEntries; if (!AcquireImageColormap(image,image->colors)) goto NoMemory; for (i=WPG_Palette.StartIndex; i < (int)WPG_Palette.NumOfEntries; i++) { image->colormap[i].red=ScaleCharToQuantum((unsigned char) ReadBlobByte(image)); image->colormap[i].green=ScaleCharToQuantum((unsigned char) ReadBlobByte(image)); image->colormap[i].blue=ScaleCharToQuantum((unsigned char) ReadBlobByte(image)); } break; case 0x11: /* Start PS l1 */ if(Rec.RecordLength > 8) image=ExtractPostscript(image,image_info, TellBlob(image)+8, /* skip PS header in the wpg */ (ssize_t) Rec.RecordLength-8,exception); break; case 0x14: /* bitmap type 2 */ BitmapHeader2.RotAngle=ReadBlobLSBShort(image); BitmapHeader2.LowLeftX=ReadBlobLSBShort(image); BitmapHeader2.LowLeftY=ReadBlobLSBShort(image); BitmapHeader2.UpRightX=ReadBlobLSBShort(image); BitmapHeader2.UpRightY=ReadBlobLSBShort(image); BitmapHeader2.Width=ReadBlobLSBShort(image); BitmapHeader2.Height=ReadBlobLSBShort(image); if ((BitmapHeader2.Width == 0) || (BitmapHeader2.Height == 0)) ThrowReaderException(CorruptImageError,"ImproperImageHeader"); BitmapHeader2.Depth=ReadBlobLSBShort(image); BitmapHeader2.HorzRes=ReadBlobLSBShort(image); BitmapHeader2.VertRes=ReadBlobLSBShort(image); image->units=PixelsPerCentimeterResolution; image->page.width=(unsigned int) ((BitmapHeader2.LowLeftX-BitmapHeader2.UpRightX)/470.0); image->page.height=(unsigned int) ((BitmapHeader2.LowLeftX-BitmapHeader2.UpRightY)/470.0); image->page.x=(int) 
(BitmapHeader2.LowLeftX/470.0); image->page.y=(int) (BitmapHeader2.LowLeftX/470.0); if(BitmapHeader2.HorzRes && BitmapHeader2.VertRes) { image->x_resolution=BitmapHeader2.HorzRes/470.0; image->y_resolution=BitmapHeader2.VertRes/470.0; } image->columns=BitmapHeader2.Width; image->rows=BitmapHeader2.Height; bpp=BitmapHeader2.Depth; UnpackRaster: status=SetImageExtent(image,image->columns,image->rows); if (status == MagickFalse) break; if ((image->colors == 0) && (bpp != 24)) { image->colors=one << bpp; if (!AcquireImageColormap(image,image->colors)) { NoMemory: ThrowReaderException(ResourceLimitError, "MemoryAllocationFailed"); } /* printf("Load default colormap \n"); */ for (i=0; (i < (int) image->colors) && (i < 256); i++) { image->colormap[i].red=ScaleCharToQuantum(WPG1_Palette[i].Red); image->colormap[i].green=ScaleCharToQuantum(WPG1_Palette[i].Green); image->colormap[i].blue=ScaleCharToQuantum(WPG1_Palette[i].Blue); } } else { if (bpp < 24) if ( (image->colors < (one << bpp)) && (bpp != 24) ) image->colormap=(PixelPacket *) ResizeQuantumMemory( image->colormap,(size_t) (one << bpp), sizeof(*image->colormap)); } if (bpp == 1) { if(image->colormap[0].red==0 && image->colormap[0].green==0 && image->colormap[0].blue==0 && image->colormap[1].red==0 && image->colormap[1].green==0 && image->colormap[1].blue==0) { /* fix crippled monochrome palette */ image->colormap[1].red = image->colormap[1].green = image->colormap[1].blue = QuantumRange; } } if(UnpackWPGRaster(image,bpp) < 0) /* The raster cannot be unpacked */ { DecompressionFailed: ThrowReaderException(CoderError,"UnableToDecompressImage"); } if(Rec.RecType==0x14 && BitmapHeader2.RotAngle!=0 && !image_info->ping) { /* flop command */ if(BitmapHeader2.RotAngle & 0x8000) { Image *flop_image; flop_image = FlopImage(image, exception); if (flop_image != (Image *) NULL) { DuplicateBlob(flop_image,image); ReplaceImageInList(&image,flop_image); } } /* flip command */ if(BitmapHeader2.RotAngle & 0x2000) { Image *flip_image; flip_image = FlipImage(image, exception); if (flip_image != (Image *) NULL) { DuplicateBlob(flip_image,image); ReplaceImageInList(&image,flip_image); } } /* rotate command */ if(BitmapHeader2.RotAngle & 0x0FFF) { Image *rotate_image; rotate_image=RotateImage(image,(BitmapHeader2.RotAngle & 0x0FFF), exception); if (rotate_image != (Image *) NULL) { DuplicateBlob(rotate_image,image); ReplaceImageInList(&image,rotate_image); } } } /* Allocate next image structure. 
*/ AcquireNextImage(image_info,image); image->depth=8; if (image->next == (Image *) NULL) goto Finish; image=SyncNextImageInList(image); image->columns=image->rows=1; image->colors=0; break; case 0x1B: /* Postscript l2 */ if(Rec.RecordLength>0x3C) image=ExtractPostscript(image,image_info, TellBlob(image)+0x3C, /* skip PS l2 header in the wpg */ (ssize_t) Rec.RecordLength-0x3C,exception); break; } } break; case 2: /* WPG level 2 */ (void) memset(CTM,0,sizeof(CTM)); StartWPG.PosSizePrecision = 0; while(!EOFBlob(image)) /* object parser loop */ { (void) SeekBlob(image,Header.DataOffset,SEEK_SET); if(EOFBlob(image)) break; Rec2.Class=(i=ReadBlobByte(image)); if(i==EOF) break; Rec2.RecType=(i=ReadBlobByte(image)); if(i==EOF) break; Rd_WP_DWORD(image,&Rec2.Extension); Rd_WP_DWORD(image,&Rec2.RecordLength); if(EOFBlob(image)) break; Header.DataOffset=TellBlob(image)+Rec2.RecordLength; switch(Rec2.RecType) { case 1: StartWPG.HorizontalUnits=ReadBlobLSBShort(image); StartWPG.VerticalUnits=ReadBlobLSBShort(image); StartWPG.PosSizePrecision=ReadBlobByte(image); break; case 0x0C: /* Color palette */ WPG_Palette.StartIndex=ReadBlobLSBShort(image); WPG_Palette.NumOfEntries=ReadBlobLSBShort(image); if ((WPG_Palette.NumOfEntries-WPG_Palette.StartIndex) > (Rec2.RecordLength-2-2) / 3) ThrowReaderException(CorruptImageError,"InvalidColormapIndex"); image->colors=WPG_Palette.NumOfEntries; if (AcquireImageColormap(image,image->colors) == MagickFalse) ThrowReaderException(ResourceLimitError, "MemoryAllocationFailed"); for (i=WPG_Palette.StartIndex; i < (int)WPG_Palette.NumOfEntries; i++) { image->colormap[i].red=ScaleCharToQuantum((char) ReadBlobByte(image)); image->colormap[i].green=ScaleCharToQuantum((char) ReadBlobByte(image)); image->colormap[i].blue=ScaleCharToQuantum((char) ReadBlobByte(image)); (void) ReadBlobByte(image); /*Opacity??*/ } break; case 0x0E: Bitmap2Header1.Width=ReadBlobLSBShort(image); Bitmap2Header1.Height=ReadBlobLSBShort(image); if ((Bitmap2Header1.Width == 0) || (Bitmap2Header1.Height == 0)) ThrowReaderException(CorruptImageError,"ImproperImageHeader"); Bitmap2Header1.Depth=ReadBlobByte(image); Bitmap2Header1.Compression=ReadBlobByte(image); if(Bitmap2Header1.Compression > 1) continue; /*Unknown compression method */ switch(Bitmap2Header1.Depth) { case 1: bpp=1; break; case 2: bpp=2; break; case 3: bpp=4; break; case 4: bpp=8; break; case 8: bpp=24; break; default: continue; /*Ignore raster with unknown depth*/ } image->columns=Bitmap2Header1.Width; image->rows=Bitmap2Header1.Height; status=SetImageExtent(image,image->columns,image->rows); if (status == MagickFalse) break; if ((image->colors == 0) && (bpp != 24)) { size_t one; one=1; image->colors=one << bpp; if (!AcquireImageColormap(image,image->colors)) goto NoMemory; } else { if(bpp < 24) if( image->colors<(one << bpp) && bpp!=24 ) image->colormap=(PixelPacket *) ResizeQuantumMemory( image->colormap,(size_t) (one << bpp), sizeof(*image->colormap)); } switch(Bitmap2Header1.Compression) { case 0: /*Uncompressed raster*/ { ldblk=(ssize_t) ((bpp*image->columns+7)/8); BImgBuff=(unsigned char *) AcquireQuantumMemory((size_t) ldblk+1,sizeof(*BImgBuff)); if (BImgBuff == (unsigned char *) NULL) goto NoMemory; for(i=0; i< (ssize_t) image->rows; i++) { (void) ReadBlob(image,ldblk,BImgBuff); InsertRow(BImgBuff,i,image,bpp); } if(BImgBuff) BImgBuff=(unsigned char *) RelinquishMagickMemory(BImgBuff); break; } case 1: /*RLE for WPG2 */ { if( UnpackWPG2Raster(image,bpp) < 0) goto DecompressionFailed; break; } } if(CTM[0][0]<0 && !image_info->ping) 
{ /*?? RotAngle=360-RotAngle;*/ Image *flop_image; flop_image = FlopImage(image, exception); if (flop_image != (Image *) NULL) { DuplicateBlob(flop_image,image); ReplaceImageInList(&image,flop_image); } /* Try to change CTM according to Flip - I am not sure, must be checked. Tx(0,0)=-1; Tx(1,0)=0; Tx(2,0)=0; Tx(0,1)= 0; Tx(1,1)=1; Tx(2,1)=0; Tx(0,2)=(WPG._2Rect.X_ur+WPG._2Rect.X_ll); Tx(1,2)=0; Tx(2,2)=1; */ } if(CTM[1][1]<0 && !image_info->ping) { /*?? RotAngle=360-RotAngle;*/ Image *flip_image; flip_image = FlipImage(image, exception); if (flip_image != (Image *) NULL) { DuplicateBlob(flip_image,image); ReplaceImageInList(&image,flip_image); } /* Try to change CTM according to Flip - I am not sure, must be checked. float_matrix Tx(3,3); Tx(0,0)= 1; Tx(1,0)= 0; Tx(2,0)=0; Tx(0,1)= 0; Tx(1,1)=-1; Tx(2,1)=0; Tx(0,2)= 0; Tx(1,2)=(WPG._2Rect.Y_ur+WPG._2Rect.Y_ll); Tx(2,2)=1; */ } /* Allocate next image structure. */ AcquireNextImage(image_info,image); image->depth=8; if (image->next == (Image *) NULL) goto Finish; image=SyncNextImageInList(image); image->columns=image->rows=1; image->colors=0; break; case 0x12: /* Postscript WPG2*/ i=ReadBlobLSBShort(image); if(Rec2.RecordLength > (unsigned int) i) image=ExtractPostscript(image,image_info, TellBlob(image)+i, /*skip PS header in the wpg2*/ (ssize_t) (Rec2.RecordLength-i-2),exception); break; case 0x1B: /*bitmap rectangle*/ WPG2Flags = LoadWPG2Flags(image,StartWPG.PosSizePrecision,NULL,&CTM); (void) WPG2Flags; break; } } break; default: { ThrowReaderException(CoderError,"DataEncodingSchemeIsNotSupported"); } } Finish: (void) CloseBlob(image); { Image *p; ssize_t scene=0; /* Rewind list, removing any empty images while rewinding. */ p=image; image=NULL; while (p != (Image *) NULL) { Image *tmp=p; if ((p->rows == 0) || (p->columns == 0)) { p=p->previous; DeleteImageFromList(&tmp); } else { image=p; p=p->previous; } } /* Fix scene numbers. */ for (p=image; p != (Image *) NULL; p=p->next) p->scene=(size_t) scene++; } if (image == (Image *) NULL) ThrowReaderException(CorruptImageError, "ImageFileDoesNotContainAnyImageData"); return(image); }
148,171,285,010,940,660,000,000,000,000,000,000,000
None
null
[ "CWE-400" ]
CVE-2017-14341
ImageMagick 7.0.6-6 has a large loop vulnerability in ReadWPGImage in coders/wpg.c, causing CPU exhaustion via a crafted wpg image file.
https://nvd.nist.gov/vuln/detail/CVE-2017-14341
9,418
ImageMagick
8598a497e2d1f556a34458cf54b40ba40674734c
https://github.com/ImageMagick/ImageMagick
https://github.com/ImageMagick/ImageMagick/commit/8598a497e2d1f556a34458cf54b40ba40674734c
None
1
static Image *ReadPSImage(const ImageInfo *image_info,ExceptionInfo *exception) { #define BoundingBox "BoundingBox:" #define BeginDocument "BeginDocument:" #define BeginXMPPacket "<?xpacket begin=" #define EndXMPPacket "<?xpacket end=" #define ICCProfile "BeginICCProfile:" #define CMYKCustomColor "CMYKCustomColor:" #define CMYKProcessColor "CMYKProcessColor:" #define DocumentMedia "DocumentMedia:" #define DocumentCustomColors "DocumentCustomColors:" #define DocumentProcessColors "DocumentProcessColors:" #define EndDocument "EndDocument:" #define HiResBoundingBox "HiResBoundingBox:" #define ImageData "ImageData:" #define PageBoundingBox "PageBoundingBox:" #define LanguageLevel "LanguageLevel:" #define PageMedia "PageMedia:" #define Pages "Pages:" #define PhotoshopProfile "BeginPhotoshop:" #define PostscriptLevel "!PS-" #define RenderPostscriptText " Rendering Postscript... " #define SpotColor "+ " char command[MaxTextExtent], *density, filename[MaxTextExtent], geometry[MaxTextExtent], input_filename[MaxTextExtent], message[MaxTextExtent], *options, postscript_filename[MaxTextExtent]; const char *option; const DelegateInfo *delegate_info; GeometryInfo geometry_info; Image *image, *next, *postscript_image; ImageInfo *read_info; int c, file; MagickBooleanType cmyk, fitPage, skip, status; MagickStatusType flags; PointInfo delta, resolution; RectangleInfo page; register char *p; register ssize_t i; SegmentInfo bounds, hires_bounds; short int hex_digits[256]; size_t length, priority; ssize_t count; StringInfo *profile; unsigned long columns, extent, language_level, pages, rows, scene, spotcolor; /* Open image file. */ assert(image_info != (const ImageInfo *) NULL); assert(image_info->signature == MagickSignature); if (image_info->debug != MagickFalse) (void) LogMagickEvent(TraceEvent,GetMagickModule(),"%s", image_info->filename); assert(exception != (ExceptionInfo *) NULL); assert(exception->signature == MagickSignature); image=AcquireImage(image_info); status=OpenBlob(image_info,image,ReadBinaryBlobMode,exception); if (status == MagickFalse) { image=DestroyImageList(image); return((Image *) NULL); } status=AcquireUniqueSymbolicLink(image_info->filename,input_filename); if (status == MagickFalse) { ThrowFileException(exception,FileOpenError,"UnableToCreateTemporaryFile", image_info->filename); image=DestroyImageList(image); return((Image *) NULL); } /* Initialize hex values. */ (void) ResetMagickMemory(hex_digits,0,sizeof(hex_digits)); hex_digits[(int) '0']=0; hex_digits[(int) '1']=1; hex_digits[(int) '2']=2; hex_digits[(int) '3']=3; hex_digits[(int) '4']=4; hex_digits[(int) '5']=5; hex_digits[(int) '6']=6; hex_digits[(int) '7']=7; hex_digits[(int) '8']=8; hex_digits[(int) '9']=9; hex_digits[(int) 'a']=10; hex_digits[(int) 'b']=11; hex_digits[(int) 'c']=12; hex_digits[(int) 'd']=13; hex_digits[(int) 'e']=14; hex_digits[(int) 'f']=15; hex_digits[(int) 'A']=10; hex_digits[(int) 'B']=11; hex_digits[(int) 'C']=12; hex_digits[(int) 'D']=13; hex_digits[(int) 'E']=14; hex_digits[(int) 'F']=15; /* Set the page density. 
*/ delta.x=DefaultResolution; delta.y=DefaultResolution; if ((image->x_resolution == 0.0) || (image->y_resolution == 0.0)) { flags=ParseGeometry(PSDensityGeometry,&geometry_info); image->x_resolution=geometry_info.rho; image->y_resolution=geometry_info.sigma; if ((flags & SigmaValue) == 0) image->y_resolution=image->x_resolution; } if (image_info->density != (char *) NULL) { flags=ParseGeometry(image_info->density,&geometry_info); image->x_resolution=geometry_info.rho; image->y_resolution=geometry_info.sigma; if ((flags & SigmaValue) == 0) image->y_resolution=image->x_resolution; } (void) ParseAbsoluteGeometry(PSPageGeometry,&page); if (image_info->page != (char *) NULL) (void) ParseAbsoluteGeometry(image_info->page,&page); resolution.x=image->x_resolution; resolution.y=image->y_resolution; page.width=(size_t) ceil((double) (page.width*resolution.x/delta.x)-0.5); page.height=(size_t) ceil((double) (page.height*resolution.y/delta.y)-0.5); /* Determine page geometry from the Postscript bounding box. */ (void) ResetMagickMemory(&bounds,0,sizeof(bounds)); (void) ResetMagickMemory(command,0,sizeof(command)); cmyk=image_info->colorspace == CMYKColorspace ? MagickTrue : MagickFalse; (void) ResetMagickMemory(&hires_bounds,0,sizeof(hires_bounds)); priority=0; columns=0; rows=0; extent=0; spotcolor=0; language_level=1; skip=MagickFalse; pages=(~0UL); p=command; for (c=ReadBlobByte(image); c != EOF; c=ReadBlobByte(image)) { /* Note document structuring comments. */ *p++=(char) c; if ((strchr("\n\r%",c) == (char *) NULL) && ((size_t) (p-command) < (MaxTextExtent-1))) continue; *p='\0'; p=command; /* Skip %%BeginDocument thru %%EndDocument. */ if (LocaleNCompare(BeginDocument,command,strlen(BeginDocument)) == 0) skip=MagickTrue; if (LocaleNCompare(EndDocument,command,strlen(EndDocument)) == 0) skip=MagickFalse; if (skip != MagickFalse) continue; if (LocaleNCompare(PostscriptLevel,command,strlen(PostscriptLevel)) == 0) { (void) SetImageProperty(image,"ps:Level",command+4); if (GlobExpression(command,"*EPSF-*",MagickTrue) != MagickFalse) pages=1; } if (LocaleNCompare(LanguageLevel,command,strlen(LanguageLevel)) == 0) (void) sscanf(command,LanguageLevel " %lu",&language_level); if (LocaleNCompare(Pages,command,strlen(Pages)) == 0) (void) sscanf(command,Pages " %lu",&pages); if (LocaleNCompare(ImageData,command,strlen(ImageData)) == 0) (void) sscanf(command,ImageData " %lu %lu",&columns,&rows); if (LocaleNCompare(ICCProfile,command,strlen(ICCProfile)) == 0) { unsigned char *datum; /* Read ICC profile. */ profile=AcquireStringInfo(MaxTextExtent); datum=GetStringInfoDatum(profile); for (i=0; (c=ProfileInteger(image,hex_digits)) != EOF; i++) { if (i >= (ssize_t) GetStringInfoLength(profile)) { SetStringInfoLength(profile,(size_t) i << 1); datum=GetStringInfoDatum(profile); } datum[i]=(unsigned char) c; } SetStringInfoLength(profile,(size_t) i+1); (void) SetImageProfile(image,"icc",profile); profile=DestroyStringInfo(profile); continue; } if (LocaleNCompare(PhotoshopProfile,command,strlen(PhotoshopProfile)) == 0) { unsigned char *p; /* Read Photoshop profile. 
*/ count=(ssize_t) sscanf(command,PhotoshopProfile " %lu",&extent); if (count != 1) continue; length=extent; profile=BlobToStringInfo((const void *) NULL,length); if (profile != (StringInfo *) NULL) { p=GetStringInfoDatum(profile); for (i=0; i < (ssize_t) length; i++) *p++=(unsigned char) ProfileInteger(image,hex_digits); (void) SetImageProfile(image,"8bim",profile); profile=DestroyStringInfo(profile); } continue; } if (LocaleNCompare(BeginXMPPacket,command,strlen(BeginXMPPacket)) == 0) { register size_t i; /* Read XMP profile. */ p=command; profile=StringToStringInfo(command); for (i=GetStringInfoLength(profile)-1; c != EOF; i++) { SetStringInfoLength(profile,i+1); c=ReadBlobByte(image); GetStringInfoDatum(profile)[i]=(unsigned char) c; *p++=(char) c; if ((strchr("\n\r%",c) == (char *) NULL) && ((size_t) (p-command) < (MaxTextExtent-1))) continue; *p='\0'; p=command; if (LocaleNCompare(EndXMPPacket,command,strlen(EndXMPPacket)) == 0) break; } SetStringInfoLength(profile,i); (void) SetImageProfile(image,"xmp",profile); profile=DestroyStringInfo(profile); continue; } /* Is this a CMYK document? */ length=strlen(DocumentProcessColors); if (LocaleNCompare(DocumentProcessColors,command,length) == 0) { if ((GlobExpression(command,"*Cyan*",MagickTrue) != MagickFalse) || (GlobExpression(command,"*Magenta*",MagickTrue) != MagickFalse) || (GlobExpression(command,"*Yellow*",MagickTrue) != MagickFalse)) cmyk=MagickTrue; } if (LocaleNCompare(CMYKCustomColor,command,strlen(CMYKCustomColor)) == 0) cmyk=MagickTrue; if (LocaleNCompare(CMYKProcessColor,command,strlen(CMYKProcessColor)) == 0) cmyk=MagickTrue; length=strlen(DocumentCustomColors); if ((LocaleNCompare(DocumentCustomColors,command,length) == 0) || (LocaleNCompare(CMYKCustomColor,command,strlen(CMYKCustomColor)) == 0) || (LocaleNCompare(SpotColor,command,strlen(SpotColor)) == 0)) { char property[MaxTextExtent], *value; register char *p; /* Note spot names. */ (void) FormatLocaleString(property,MaxTextExtent,"ps:SpotColor-%.20g", (double) (spotcolor++)); for (p=command; *p != '\0'; p++) if (isspace((int) (unsigned char) *p) != 0) break; value=AcquireString(p); (void) SubstituteString(&value,"(",""); (void) SubstituteString(&value,")",""); (void) StripString(value); (void) SetImageProperty(image,property,value); value=DestroyString(value); continue; } if (image_info->page != (char *) NULL) continue; /* Note region defined by bounding box. 
*/ count=0; i=0; if (LocaleNCompare(BoundingBox,command,strlen(BoundingBox)) == 0) { count=(ssize_t) sscanf(command,BoundingBox " %lf %lf %lf %lf", &bounds.x1,&bounds.y1,&bounds.x2,&bounds.y2); i=2; } if (LocaleNCompare(DocumentMedia,command,strlen(DocumentMedia)) == 0) { count=(ssize_t) sscanf(command,DocumentMedia " %lf %lf %lf %lf", &bounds.x1,&bounds.y1,&bounds.x2,&bounds.y2); i=1; } if (LocaleNCompare(HiResBoundingBox,command,strlen(HiResBoundingBox)) == 0) { count=(ssize_t) sscanf(command,HiResBoundingBox " %lf %lf %lf %lf", &bounds.x1,&bounds.y1,&bounds.x2,&bounds.y2); i=3; } if (LocaleNCompare(PageBoundingBox,command,strlen(PageBoundingBox)) == 0) { count=(ssize_t) sscanf(command,PageBoundingBox " %lf %lf %lf %lf", &bounds.x1,&bounds.y1,&bounds.x2,&bounds.y2); i=1; } if (LocaleNCompare(PageMedia,command,strlen(PageMedia)) == 0) { count=(ssize_t) sscanf(command,PageMedia " %lf %lf %lf %lf", &bounds.x1,&bounds.y1,&bounds.x2,&bounds.y2); i=1; } if ((count != 4) || (i < (ssize_t) priority)) continue; if ((fabs(bounds.x2-bounds.x1) <= fabs(hires_bounds.x2-hires_bounds.x1)) || (fabs(bounds.y2-bounds.y1) <= fabs(hires_bounds.y2-hires_bounds.y1))) if (i == (ssize_t) priority) continue; hires_bounds=bounds; priority=i; } if ((fabs(hires_bounds.x2-hires_bounds.x1) >= MagickEpsilon) && (fabs(hires_bounds.y2-hires_bounds.y1) >= MagickEpsilon)) { /* Set Postscript render geometry. */ (void) FormatLocaleString(geometry,MaxTextExtent,"%gx%g%+.15g%+.15g", hires_bounds.x2-hires_bounds.x1,hires_bounds.y2-hires_bounds.y1, hires_bounds.x1,hires_bounds.y1); (void) SetImageProperty(image,"ps:HiResBoundingBox",geometry); page.width=(size_t) ceil((double) ((hires_bounds.x2-hires_bounds.x1)* resolution.x/delta.x)-0.5); page.height=(size_t) ceil((double) ((hires_bounds.y2-hires_bounds.y1)* resolution.y/delta.y)-0.5); } fitPage=MagickFalse; option=GetImageOption(image_info,"eps:fit-page"); if (option != (char *) NULL) { char *geometry; MagickStatusType flags; geometry=GetPageGeometry(option); flags=ParseMetaGeometry(geometry,&page.x,&page.y,&page.width,&page.height); if (flags == NoValue) { (void) ThrowMagickException(exception,GetMagickModule(),OptionError, "InvalidGeometry","`%s'",option); image=DestroyImage(image); return((Image *) NULL); } page.width=(size_t) ceil((double) (page.width*image->x_resolution/delta.x) -0.5); page.height=(size_t) ceil((double) (page.height*image->y_resolution/ delta.y) -0.5); geometry=DestroyString(geometry); fitPage=MagickTrue; } (void) CloseBlob(image); if (IssRGBCompatibleColorspace(image_info->colorspace) != MagickFalse) cmyk=MagickFalse; /* Create Ghostscript control file. */ file=AcquireUniqueFileResource(postscript_filename); if (file == -1) { ThrowFileException(&image->exception,FileOpenError,"UnableToOpenFile", image_info->filename); image=DestroyImageList(image); return((Image *) NULL); } (void) CopyMagickString(command,"/setpagedevice {pop} bind 1 index where {" "dup wcheck {3 1 roll put} {pop def} ifelse} {def} ifelse\n" "<</UseCIEColor true>>setpagedevice\n",MaxTextExtent); count=write(file,command,(unsigned int) strlen(command)); if (image_info->page == (char *) NULL) { char translate_geometry[MaxTextExtent]; (void) FormatLocaleString(translate_geometry,MaxTextExtent, "%g %g translate\n",-hires_bounds.x1,-hires_bounds.y1); count=write(file,translate_geometry,(unsigned int) strlen(translate_geometry)); } file=close(file)-1; /* Render Postscript with the Ghostscript delegate. 
*/ if (image_info->monochrome != MagickFalse) delegate_info=GetDelegateInfo("ps:mono",(char *) NULL,exception); else if (cmyk != MagickFalse) delegate_info=GetDelegateInfo("ps:cmyk",(char *) NULL,exception); else delegate_info=GetDelegateInfo("ps:alpha",(char *) NULL,exception); if (delegate_info == (const DelegateInfo *) NULL) { (void) RelinquishUniqueFileResource(postscript_filename); image=DestroyImageList(image); return((Image *) NULL); } density=AcquireString(""); options=AcquireString(""); (void) FormatLocaleString(density,MaxTextExtent,"%gx%g",resolution.x, resolution.y); (void) FormatLocaleString(options,MaxTextExtent,"-g%.20gx%.20g ",(double) page.width,(double) page.height); read_info=CloneImageInfo(image_info); *read_info->magick='\0'; if (read_info->number_scenes != 0) { char pages[MaxTextExtent]; (void) FormatLocaleString(pages,MaxTextExtent,"-dFirstPage=%.20g " "-dLastPage=%.20g ",(double) read_info->scene+1,(double) (read_info->scene+read_info->number_scenes)); (void) ConcatenateMagickString(options,pages,MaxTextExtent); read_info->number_scenes=0; if (read_info->scenes != (char *) NULL) *read_info->scenes='\0'; } if (*image_info->magick == 'E') { option=GetImageOption(image_info,"eps:use-cropbox"); if ((option == (const char *) NULL) || (IsStringTrue(option) != MagickFalse)) (void) ConcatenateMagickString(options,"-dEPSCrop ",MaxTextExtent); if (fitPage != MagickFalse) (void) ConcatenateMagickString(options,"-dEPSFitPage ",MaxTextExtent); } (void) CopyMagickString(filename,read_info->filename,MaxTextExtent); (void) AcquireUniqueFilename(filename); (void) RelinquishUniqueFileResource(filename); (void) ConcatenateMagickString(filename,"%d",MaxTextExtent); (void) FormatLocaleString(command,MaxTextExtent, GetDelegateCommands(delegate_info), read_info->antialias != MagickFalse ? 4 : 1, read_info->antialias != MagickFalse ? 
4 : 1,density,options,filename, postscript_filename,input_filename); options=DestroyString(options); density=DestroyString(density); *message='\0'; status=InvokePostscriptDelegate(read_info->verbose,command,message,exception); (void) InterpretImageFilename(image_info,image,filename,1, read_info->filename); if ((status == MagickFalse) || (IsPostscriptRendered(read_info->filename) == MagickFalse)) { (void) ConcatenateMagickString(command," -c showpage",MaxTextExtent); status=InvokePostscriptDelegate(read_info->verbose,command,message, exception); } (void) RelinquishUniqueFileResource(postscript_filename); (void) RelinquishUniqueFileResource(input_filename); postscript_image=(Image *) NULL; if (status == MagickFalse) for (i=1; ; i++) { (void) InterpretImageFilename(image_info,image,filename,(int) i, read_info->filename); if (IsPostscriptRendered(read_info->filename) == MagickFalse) break; (void) RelinquishUniqueFileResource(read_info->filename); } else for (i=1; ; i++) { (void) InterpretImageFilename(image_info,image,filename,(int) i, read_info->filename); if (IsPostscriptRendered(read_info->filename) == MagickFalse) break; read_info->blob=NULL; read_info->length=0; next=ReadImage(read_info,exception); (void) RelinquishUniqueFileResource(read_info->filename); if (next == (Image *) NULL) break; AppendImageToList(&postscript_image,next); } (void) RelinquishUniqueFileResource(read_info->filename); read_info=DestroyImageInfo(read_info); if (postscript_image == (Image *) NULL) { if (*message != '\0') (void) ThrowMagickException(exception,GetMagickModule(),DelegateError, "PostscriptDelegateFailed","`%s'",message); image=DestroyImageList(image); return((Image *) NULL); } if (LocaleCompare(postscript_image->magick,"BMP") == 0) { Image *cmyk_image; cmyk_image=ConsolidateCMYKImages(postscript_image,exception); if (cmyk_image != (Image *) NULL) { postscript_image=DestroyImageList(postscript_image); postscript_image=cmyk_image; } } if (image_info->number_scenes != 0) { Image *clone_image; register ssize_t i; /* Add place holder images to meet the subimage specification requirement. */ for (i=0; i < (ssize_t) image_info->scene; i++) { clone_image=CloneImage(postscript_image,1,1,MagickTrue,exception); if (clone_image != (Image *) NULL) PrependImageToList(&postscript_image,clone_image); } } do { (void) CopyMagickString(postscript_image->filename,filename,MaxTextExtent); (void) CopyMagickString(postscript_image->magick,image->magick, MaxTextExtent); if (columns != 0) postscript_image->magick_columns=columns; if (rows != 0) postscript_image->magick_rows=rows; postscript_image->page=page; (void) CloneImageProfiles(postscript_image,image); (void) CloneImageProperties(postscript_image,image); next=SyncNextImageInList(postscript_image); if (next != (Image *) NULL) postscript_image=next; } while (next != (Image *) NULL); image=DestroyImageList(image); scene=0; for (next=GetFirstImageInList(postscript_image); next != (Image *) NULL; ) { next->scene=scene++; next=GetNextImageInList(next); } return(GetFirstImageInList(postscript_image)); }
136,311,021,343,540,440,000,000,000,000,000,000,000
None
null
[ "CWE-834" ]
CVE-2017-14172
In coders/ps.c in ImageMagick 7.0.7-0 Q16, a DoS in ReadPSImage() due to lack of an EOF (End of File) check might cause huge CPU consumption. When a crafted PSD file, which claims a large "extent" field in the header but does not contain sufficient backing data, is provided, the loop over "length" would consume huge CPU resources, since there is no EOF check inside the loop.
https://nvd.nist.gov/vuln/detail/CVE-2017-14172
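The CVE-2017-14172 description above pins the problem on a copy loop in ReadPSImage() that trusts the header-declared profile extent and never checks for end of file, so a truncated input still forces the full number of iterations (CWE-834). The fragment below is a minimal standalone sketch of the missing check; fgetc() stands in for the ProfileInteger() helper seen in the recorded function, and the exact shape of the upstream fix (commit 22e03103) may differ.

#include <stdio.h>
#include <stdlib.h>

/* Read a profile whose length is declared in the file header.  The EOF
   check inside the loop is the point of the sketch: without it, a file
   claiming a multi-gigabyte extent but carrying almost no data keeps the
   loop (and the CPU) busy until the counter runs out. */
static unsigned char *read_profile(FILE *fp, size_t declared_length)
{
  unsigned char *profile = malloc(declared_length ? declared_length : 1);

  if (profile == NULL)
    return NULL;
  for (size_t i = 0; i < declared_length; i++)
    {
      int c = fgetc(fp);

      if (c == EOF)             /* data ran out: reject the profile */
        {
          free(profile);
          return NULL;
        }
      profile[i] = (unsigned char) c;
    }
  return profile;
}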
9,422
ImageMagick
22e0310345499ffe906c604428f2a3a668942b05
https://github.com/ImageMagick/ImageMagick
https://github.com/ImageMagick/ImageMagick/commit/22e0310345499ffe906c604428f2a3a668942b05
None
1
static Image *ReadOneMNGImage(MngInfo* mng_info, const ImageInfo *image_info, ExceptionInfo *exception) { char page_geometry[MaxTextExtent]; Image *image; MagickBooleanType logging; volatile int first_mng_object, object_id, term_chunk_found, skip_to_iend; volatile ssize_t image_count=0; MagickBooleanType status; MagickOffsetType offset; MngBox default_fb, fb, previous_fb; #if defined(MNG_INSERT_LAYERS) PixelPacket mng_background_color; #endif register unsigned char *p; register ssize_t i; size_t count; ssize_t loop_level; volatile short skipping_loop; #if defined(MNG_INSERT_LAYERS) unsigned int mandatory_back=0; #endif volatile unsigned int #ifdef MNG_OBJECT_BUFFERS mng_background_object=0, #endif mng_type=0; /* 0: PNG or JNG; 1: MNG; 2: MNG-LC; 3: MNG-VLC */ size_t default_frame_timeout, frame_timeout, #if defined(MNG_INSERT_LAYERS) image_height, image_width, #endif length; /* These delays are all measured in image ticks_per_second, * not in MNG ticks_per_second */ volatile size_t default_frame_delay, final_delay, final_image_delay, frame_delay, #if defined(MNG_INSERT_LAYERS) insert_layers, #endif mng_iterations=1, simplicity=0, subframe_height=0, subframe_width=0; previous_fb.top=0; previous_fb.bottom=0; previous_fb.left=0; previous_fb.right=0; default_fb.top=0; default_fb.bottom=0; default_fb.left=0; default_fb.right=0; logging=LogMagickEvent(CoderEvent,GetMagickModule(), " Enter ReadOneMNGImage()"); image=mng_info->image; if (LocaleCompare(image_info->magick,"MNG") == 0) { char magic_number[MaxTextExtent]; /* Verify MNG signature. */ count=(size_t) ReadBlob(image,8,(unsigned char *) magic_number); if (memcmp(magic_number,"\212MNG\r\n\032\n",8) != 0) ThrowReaderException(CorruptImageError,"ImproperImageHeader"); /* Initialize some nonzero members of the MngInfo structure. */ for (i=0; i < MNG_MAX_OBJECTS; i++) { mng_info->object_clip[i].right=(ssize_t) PNG_UINT_31_MAX; mng_info->object_clip[i].bottom=(ssize_t) PNG_UINT_31_MAX; } mng_info->exists[0]=MagickTrue; } skipping_loop=(-1); first_mng_object=MagickTrue; mng_type=0; #if defined(MNG_INSERT_LAYERS) insert_layers=MagickFalse; /* should be False when converting or mogrifying */ #endif default_frame_delay=0; default_frame_timeout=0; frame_delay=0; final_delay=1; mng_info->ticks_per_second=1UL*image->ticks_per_second; object_id=0; skip_to_iend=MagickFalse; term_chunk_found=MagickFalse; mng_info->framing_mode=1; #if defined(MNG_INSERT_LAYERS) mandatory_back=MagickFalse; #endif #if defined(MNG_INSERT_LAYERS) mng_background_color=image->background_color; #endif default_fb=mng_info->frame; previous_fb=mng_info->frame; do { char type[MaxTextExtent]; if (LocaleCompare(image_info->magick,"MNG") == 0) { unsigned char *chunk; /* Read a new chunk. 
*/ type[0]='\0'; (void) ConcatenateMagickString(type,"errr",MaxTextExtent); length=ReadBlobMSBLong(image); count=(size_t) ReadBlob(image,4,(unsigned char *) type); if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " Reading MNG chunk type %c%c%c%c, length: %.20g", type[0],type[1],type[2],type[3],(double) length); if (length > PNG_UINT_31_MAX) { status=MagickFalse; break; } if (count == 0) ThrowReaderException(CorruptImageError,"CorruptImage"); p=NULL; chunk=(unsigned char *) NULL; if (length != 0) { if (length > GetBlobSize(image)) ThrowReaderException(CorruptImageError, "InsufficientImageDataInFile"); chunk=(unsigned char *) AcquireQuantumMemory(length+ MagickPathExtent,sizeof(*chunk)); if (chunk == (unsigned char *) NULL) ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed"); for (i=0; i < (ssize_t) length; i++) { int c; c=ReadBlobByte(image); if (c == EOF) break; chunk[i]=(unsigned char) c; } p=chunk; } (void) ReadBlobMSBLong(image); /* read crc word */ #if !defined(JNG_SUPPORTED) if (memcmp(type,mng_JHDR,4) == 0) { skip_to_iend=MagickTrue; if (mng_info->jhdr_warning == 0) (void) ThrowMagickException(&image->exception,GetMagickModule(), CoderError,"JNGCompressNotSupported","`%s'",image->filename); mng_info->jhdr_warning++; } #endif if (memcmp(type,mng_DHDR,4) == 0) { skip_to_iend=MagickTrue; if (mng_info->dhdr_warning == 0) (void) ThrowMagickException(&image->exception,GetMagickModule(), CoderError,"DeltaPNGNotSupported","`%s'",image->filename); mng_info->dhdr_warning++; } if (memcmp(type,mng_MEND,4) == 0) break; if (skip_to_iend) { if (memcmp(type,mng_IEND,4) == 0) skip_to_iend=MagickFalse; if (length != 0) chunk=(unsigned char *) RelinquishMagickMemory(chunk); if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " Skip to IEND."); continue; } if (memcmp(type,mng_MHDR,4) == 0) { if (length != 28) { chunk=(unsigned char *) RelinquishMagickMemory(chunk); ThrowReaderException(CorruptImageError,"CorruptImage"); } mng_info->mng_width=(size_t) ((p[0] << 24) | (p[1] << 16) | (p[2] << 8) | p[3]); mng_info->mng_height=(size_t) ((p[4] << 24) | (p[5] << 16) | (p[6] << 8) | p[7]); if (logging != MagickFalse) { (void) LogMagickEvent(CoderEvent,GetMagickModule(), " MNG width: %.20g",(double) mng_info->mng_width); (void) LogMagickEvent(CoderEvent,GetMagickModule(), " MNG height: %.20g",(double) mng_info->mng_height); } p+=8; mng_info->ticks_per_second=(size_t) mng_get_long(p); if (mng_info->ticks_per_second == 0) default_frame_delay=0; else default_frame_delay=1UL*image->ticks_per_second/ mng_info->ticks_per_second; frame_delay=default_frame_delay; simplicity=0; /* Skip nominal layer count, frame count, and play time */ p+=16; simplicity=(size_t) mng_get_long(p); mng_type=1; /* Full MNG */ if ((simplicity != 0) && ((simplicity | 11) == 11)) mng_type=2; /* LC */ if ((simplicity != 0) && ((simplicity | 9) == 9)) mng_type=3; /* VLC */ #if defined(MNG_INSERT_LAYERS) if (mng_type != 3) insert_layers=MagickTrue; #endif if (GetAuthenticPixelQueue(image) != (PixelPacket *) NULL) { /* Allocate next image structure. 
*/ AcquireNextImage(image_info,image); if (GetNextImageInList(image) == (Image *) NULL) return(DestroyImageList(image)); image=SyncNextImageInList(image); mng_info->image=image; } if ((mng_info->mng_width > 65535L) || (mng_info->mng_height > 65535L)) { chunk=(unsigned char *) RelinquishMagickMemory(chunk); ThrowReaderException(ImageError,"WidthOrHeightExceedsLimit"); } (void) FormatLocaleString(page_geometry,MaxTextExtent, "%.20gx%.20g+0+0",(double) mng_info->mng_width,(double) mng_info->mng_height); mng_info->frame.left=0; mng_info->frame.right=(ssize_t) mng_info->mng_width; mng_info->frame.top=0; mng_info->frame.bottom=(ssize_t) mng_info->mng_height; mng_info->clip=default_fb=previous_fb=mng_info->frame; for (i=0; i < MNG_MAX_OBJECTS; i++) mng_info->object_clip[i]=mng_info->frame; chunk=(unsigned char *) RelinquishMagickMemory(chunk); continue; } if (memcmp(type,mng_TERM,4) == 0) { int repeat=0; if (length != 0) repeat=p[0]; if (repeat == 3 && length > 8) { final_delay=(png_uint_32) mng_get_long(&p[2]); mng_iterations=(png_uint_32) mng_get_long(&p[6]); if (mng_iterations == PNG_UINT_31_MAX) mng_iterations=0; image->iterations=mng_iterations; term_chunk_found=MagickTrue; } if (logging != MagickFalse) { (void) LogMagickEvent(CoderEvent,GetMagickModule(), " repeat=%d, final_delay=%.20g, iterations=%.20g", repeat,(double) final_delay, (double) image->iterations); } chunk=(unsigned char *) RelinquishMagickMemory(chunk); continue; } if (memcmp(type,mng_DEFI,4) == 0) { if (mng_type == 3) (void) ThrowMagickException(&image->exception,GetMagickModule(), CoderError,"DEFI chunk found in MNG-VLC datastream","`%s'", image->filename); if (length > 1) { object_id=(p[0] << 8) | p[1]; if (mng_type == 2 && object_id != 0) (void) ThrowMagickException(&image->exception, GetMagickModule(), CoderError,"Nonzero object_id in MNG-LC datastream", "`%s'", image->filename); if (object_id > MNG_MAX_OBJECTS) { /* Instead of using a warning we should allocate a larger MngInfo structure and continue. */ (void) ThrowMagickException(&image->exception, GetMagickModule(), CoderError, "object id too large","`%s'",image->filename); object_id=MNG_MAX_OBJECTS; } if (mng_info->exists[object_id]) if (mng_info->frozen[object_id]) { chunk=(unsigned char *) RelinquishMagickMemory(chunk); (void) ThrowMagickException(&image->exception, GetMagickModule(),CoderError, "DEFI cannot redefine a frozen MNG object","`%s'", image->filename); continue; } mng_info->exists[object_id]=MagickTrue; if (length > 2) mng_info->invisible[object_id]=p[2]; /* Extract object offset info. */ if (length > 11) { mng_info->x_off[object_id]=(ssize_t) ((p[4] << 24) | (p[5] << 16) | (p[6] << 8) | p[7]); mng_info->y_off[object_id]=(ssize_t) ((p[8] << 24) | (p[9] << 16) | (p[10] << 8) | p[11]); if (logging != MagickFalse) { (void) LogMagickEvent(CoderEvent,GetMagickModule(), " x_off[%d]: %.20g, y_off[%d]: %.20g", object_id,(double) mng_info->x_off[object_id], object_id,(double) mng_info->y_off[object_id]); } } /* Extract object clipping info. 
*/ if (length > 27) mng_info->object_clip[object_id]= mng_read_box(mng_info->frame,0, &p[12]); } chunk=(unsigned char *) RelinquishMagickMemory(chunk); continue; } if (memcmp(type,mng_bKGD,4) == 0) { mng_info->have_global_bkgd=MagickFalse; if (length > 5) { mng_info->mng_global_bkgd.red= ScaleShortToQuantum((unsigned short) ((p[0] << 8) | p[1])); mng_info->mng_global_bkgd.green= ScaleShortToQuantum((unsigned short) ((p[2] << 8) | p[3])); mng_info->mng_global_bkgd.blue= ScaleShortToQuantum((unsigned short) ((p[4] << 8) | p[5])); mng_info->have_global_bkgd=MagickTrue; } chunk=(unsigned char *) RelinquishMagickMemory(chunk); continue; } if (memcmp(type,mng_BACK,4) == 0) { #if defined(MNG_INSERT_LAYERS) if (length > 6) mandatory_back=p[6]; else mandatory_back=0; if (mandatory_back && length > 5) { mng_background_color.red= ScaleShortToQuantum((unsigned short) ((p[0] << 8) | p[1])); mng_background_color.green= ScaleShortToQuantum((unsigned short) ((p[2] << 8) | p[3])); mng_background_color.blue= ScaleShortToQuantum((unsigned short) ((p[4] << 8) | p[5])); mng_background_color.opacity=OpaqueOpacity; } #ifdef MNG_OBJECT_BUFFERS if (length > 8) mng_background_object=(p[7] << 8) | p[8]; #endif #endif chunk=(unsigned char *) RelinquishMagickMemory(chunk); continue; } if (memcmp(type,mng_PLTE,4) == 0) { /* Read global PLTE. */ if (length && (length < 769)) { if (mng_info->global_plte == (png_colorp) NULL) mng_info->global_plte=(png_colorp) AcquireQuantumMemory(256, sizeof(*mng_info->global_plte)); for (i=0; i < (ssize_t) (length/3); i++) { mng_info->global_plte[i].red=p[3*i]; mng_info->global_plte[i].green=p[3*i+1]; mng_info->global_plte[i].blue=p[3*i+2]; } mng_info->global_plte_length=(unsigned int) (length/3); } #ifdef MNG_LOOSE for ( ; i < 256; i++) { mng_info->global_plte[i].red=i; mng_info->global_plte[i].green=i; mng_info->global_plte[i].blue=i; } if (length != 0) mng_info->global_plte_length=256; #endif else mng_info->global_plte_length=0; chunk=(unsigned char *) RelinquishMagickMemory(chunk); continue; } if (memcmp(type,mng_tRNS,4) == 0) { /* read global tRNS */ if (length > 0 && length < 257) for (i=0; i < (ssize_t) length; i++) mng_info->global_trns[i]=p[i]; #ifdef MNG_LOOSE for ( ; i < 256; i++) mng_info->global_trns[i]=255; #endif mng_info->global_trns_length=(unsigned int) length; chunk=(unsigned char *) RelinquishMagickMemory(chunk); continue; } if (memcmp(type,mng_gAMA,4) == 0) { if (length == 4) { ssize_t igamma; igamma=mng_get_long(p); mng_info->global_gamma=((float) igamma)*0.00001; mng_info->have_global_gama=MagickTrue; } else mng_info->have_global_gama=MagickFalse; chunk=(unsigned char *) RelinquishMagickMemory(chunk); continue; } if (memcmp(type,mng_cHRM,4) == 0) { /* Read global cHRM */ if (length == 32) { mng_info->global_chrm.white_point.x=0.00001*mng_get_long(p); mng_info->global_chrm.white_point.y=0.00001*mng_get_long(&p[4]); mng_info->global_chrm.red_primary.x=0.00001*mng_get_long(&p[8]); mng_info->global_chrm.red_primary.y=0.00001* mng_get_long(&p[12]); mng_info->global_chrm.green_primary.x=0.00001* mng_get_long(&p[16]); mng_info->global_chrm.green_primary.y=0.00001* mng_get_long(&p[20]); mng_info->global_chrm.blue_primary.x=0.00001* mng_get_long(&p[24]); mng_info->global_chrm.blue_primary.y=0.00001* mng_get_long(&p[28]); mng_info->have_global_chrm=MagickTrue; } else mng_info->have_global_chrm=MagickFalse; chunk=(unsigned char *) RelinquishMagickMemory(chunk); continue; } if (memcmp(type,mng_sRGB,4) == 0) { /* Read global sRGB. 
*/ if (length != 0) { mng_info->global_srgb_intent= Magick_RenderingIntent_from_PNG_RenderingIntent(p[0]); mng_info->have_global_srgb=MagickTrue; } else mng_info->have_global_srgb=MagickFalse; chunk=(unsigned char *) RelinquishMagickMemory(chunk); continue; } if (memcmp(type,mng_iCCP,4) == 0) { /* To do: */ /* Read global iCCP. */ if (length != 0) chunk=(unsigned char *) RelinquishMagickMemory(chunk); continue; } if (memcmp(type,mng_FRAM,4) == 0) { if (mng_type == 3) (void) ThrowMagickException(&image->exception,GetMagickModule(), CoderError,"FRAM chunk found in MNG-VLC datastream","`%s'", image->filename); if ((mng_info->framing_mode == 2) || (mng_info->framing_mode == 4)) image->delay=frame_delay; frame_delay=default_frame_delay; frame_timeout=default_frame_timeout; fb=default_fb; if (length > 0) if (p[0]) mng_info->framing_mode=p[0]; if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " Framing_mode=%d",mng_info->framing_mode); if (length > 6) { /* Note the delay and frame clipping boundaries. */ p++; /* framing mode */ while (*p && ((p-chunk) < (ssize_t) length)) p++; /* frame name */ p++; /* frame name terminator */ if ((p-chunk) < (ssize_t) (length-4)) { int change_delay, change_timeout, change_clipping; change_delay=(*p++); change_timeout=(*p++); change_clipping=(*p++); p++; /* change_sync */ if (change_delay && (p-chunk) < (ssize_t) (length-4)) { frame_delay=1UL*image->ticks_per_second* mng_get_long(p); if (mng_info->ticks_per_second != 0) frame_delay/=mng_info->ticks_per_second; else frame_delay=PNG_UINT_31_MAX; if (change_delay == 2) default_frame_delay=frame_delay; p+=4; if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " Framing_delay=%.20g",(double) frame_delay); } if (change_timeout && (p-chunk) < (ssize_t) (length-4)) { frame_timeout=1UL*image->ticks_per_second* mng_get_long(p); if (mng_info->ticks_per_second != 0) frame_timeout/=mng_info->ticks_per_second; else frame_timeout=PNG_UINT_31_MAX; if (change_timeout == 2) default_frame_timeout=frame_timeout; p+=4; if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " Framing_timeout=%.20g",(double) frame_timeout); } if (change_clipping && (p-chunk) < (ssize_t) (length-17)) { fb=mng_read_box(previous_fb,(char) p[0],&p[1]); p+=17; previous_fb=fb; if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " Frame_clip: L=%.20g R=%.20g T=%.20g B=%.20g", (double) fb.left,(double) fb.right,(double) fb.top, (double) fb.bottom); if (change_clipping == 2) default_fb=fb; } } } mng_info->clip=fb; mng_info->clip=mng_minimum_box(fb,mng_info->frame); subframe_width=(size_t) (mng_info->clip.right -mng_info->clip.left); subframe_height=(size_t) (mng_info->clip.bottom -mng_info->clip.top); /* Insert a background layer behind the frame if framing_mode is 4. */ #if defined(MNG_INSERT_LAYERS) if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " subframe_width=%.20g, subframe_height=%.20g",(double) subframe_width,(double) subframe_height); if (insert_layers && (mng_info->framing_mode == 4) && (subframe_width) && (subframe_height)) { /* Allocate next image structure. 
*/ if (GetAuthenticPixelQueue(image) != (PixelPacket *) NULL) { AcquireNextImage(image_info,image); if (GetNextImageInList(image) == (Image *) NULL) return(DestroyImageList(image)); image=SyncNextImageInList(image); } mng_info->image=image; if (term_chunk_found) { image->start_loop=MagickTrue; image->iterations=mng_iterations; term_chunk_found=MagickFalse; } else image->start_loop=MagickFalse; image->columns=subframe_width; image->rows=subframe_height; image->page.width=subframe_width; image->page.height=subframe_height; image->page.x=mng_info->clip.left; image->page.y=mng_info->clip.top; image->background_color=mng_background_color; image->matte=MagickFalse; image->delay=0; (void) SetImageBackgroundColor(image); if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " Insert backgd layer, L=%.20g, R=%.20g T=%.20g, B=%.20g", (double) mng_info->clip.left,(double) mng_info->clip.right, (double) mng_info->clip.top,(double) mng_info->clip.bottom); } #endif chunk=(unsigned char *) RelinquishMagickMemory(chunk); continue; } if (memcmp(type,mng_CLIP,4) == 0) { unsigned int first_object, last_object; /* Read CLIP. */ if (length > 3) { first_object=(p[0] << 8) | p[1]; last_object=(p[2] << 8) | p[3]; p+=4; for (i=(int) first_object; i <= (int) last_object; i++) { if (mng_info->exists[i] && !mng_info->frozen[i]) { MngBox box; box=mng_info->object_clip[i]; if ((p-chunk) < (ssize_t) (length-17)) mng_info->object_clip[i]= mng_read_box(box,(char) p[0],&p[1]); } } } chunk=(unsigned char *) RelinquishMagickMemory(chunk); continue; } if (memcmp(type,mng_SAVE,4) == 0) { for (i=1; i < MNG_MAX_OBJECTS; i++) if (mng_info->exists[i]) { mng_info->frozen[i]=MagickTrue; #ifdef MNG_OBJECT_BUFFERS if (mng_info->ob[i] != (MngBuffer *) NULL) mng_info->ob[i]->frozen=MagickTrue; #endif } if (length != 0) chunk=(unsigned char *) RelinquishMagickMemory(chunk); continue; } if ((memcmp(type,mng_DISC,4) == 0) || (memcmp(type,mng_SEEK,4) == 0)) { /* Read DISC or SEEK. */ if ((length == 0) || !memcmp(type,mng_SEEK,4)) { for (i=1; i < MNG_MAX_OBJECTS; i++) MngInfoDiscardObject(mng_info,i); } else { register ssize_t j; for (j=1; j < (ssize_t) length; j+=2) { i=p[j-1] << 8 | p[j]; MngInfoDiscardObject(mng_info,i); } } if (length != 0) chunk=(unsigned char *) RelinquishMagickMemory(chunk); continue; } if (memcmp(type,mng_MOVE,4) == 0) { size_t first_object, last_object; /* read MOVE */ if (length > 3) { first_object=(p[0] << 8) | p[1]; last_object=(p[2] << 8) | p[3]; p+=4; for (i=(ssize_t) first_object; i <= (ssize_t) last_object; i++) { if ((i < 0) || (i >= MNG_MAX_OBJECTS)) continue; if (mng_info->exists[i] && !mng_info->frozen[i] && (p-chunk) < (ssize_t) (length-8)) { MngPair new_pair; MngPair old_pair; old_pair.a=mng_info->x_off[i]; old_pair.b=mng_info->y_off[i]; new_pair=mng_read_pair(old_pair,(int) p[0],&p[1]); mng_info->x_off[i]=new_pair.a; mng_info->y_off[i]=new_pair.b; } } } chunk=(unsigned char *) RelinquishMagickMemory(chunk); continue; } if (memcmp(type,mng_LOOP,4) == 0) { ssize_t loop_iters=1; if (length > 4) { loop_level=chunk[0]; mng_info->loop_active[loop_level]=1; /* mark loop active */ /* Record starting point. 
*/ loop_iters=mng_get_long(&chunk[1]); if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " LOOP level %.20g has %.20g iterations ", (double) loop_level, (double) loop_iters); if (loop_iters == 0) skipping_loop=loop_level; else { mng_info->loop_jump[loop_level]=TellBlob(image); mng_info->loop_count[loop_level]=loop_iters; } mng_info->loop_iteration[loop_level]=0; } chunk=(unsigned char *) RelinquishMagickMemory(chunk); continue; } if (memcmp(type,mng_ENDL,4) == 0) { if (length > 0) { loop_level=chunk[0]; if (skipping_loop > 0) { if (skipping_loop == loop_level) { /* Found end of zero-iteration loop. */ skipping_loop=(-1); mng_info->loop_active[loop_level]=0; } } else { if (mng_info->loop_active[loop_level] == 1) { mng_info->loop_count[loop_level]--; mng_info->loop_iteration[loop_level]++; if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " ENDL: LOOP level %.20g has %.20g remaining iters ", (double) loop_level,(double) mng_info->loop_count[loop_level]); if (mng_info->loop_count[loop_level] != 0) { offset=SeekBlob(image, mng_info->loop_jump[loop_level], SEEK_SET); if (offset < 0) { chunk=(unsigned char *) RelinquishMagickMemory( chunk); ThrowReaderException(CorruptImageError, "ImproperImageHeader"); } } else { short last_level; /* Finished loop. */ mng_info->loop_active[loop_level]=0; last_level=(-1); for (i=0; i < loop_level; i++) if (mng_info->loop_active[i] == 1) last_level=(short) i; loop_level=last_level; } } } } chunk=(unsigned char *) RelinquishMagickMemory(chunk); continue; } if (memcmp(type,mng_CLON,4) == 0) { if (mng_info->clon_warning == 0) (void) ThrowMagickException(&image->exception,GetMagickModule(), CoderError,"CLON is not implemented yet","`%s'", image->filename); mng_info->clon_warning++; } if (memcmp(type,mng_MAGN,4) == 0) { png_uint_16 magn_first, magn_last, magn_mb, magn_ml, magn_mr, magn_mt, magn_mx, magn_my, magn_methx, magn_methy; if (length > 1) magn_first=(p[0] << 8) | p[1]; else magn_first=0; if (length > 3) magn_last=(p[2] << 8) | p[3]; else magn_last=magn_first; #ifndef MNG_OBJECT_BUFFERS if (magn_first || magn_last) if (mng_info->magn_warning == 0) { (void) ThrowMagickException(&image->exception, GetMagickModule(),CoderError, "MAGN is not implemented yet for nonzero objects", "`%s'",image->filename); mng_info->magn_warning++; } #endif if (length > 4) magn_methx=p[4]; else magn_methx=0; if (length > 6) magn_mx=(p[5] << 8) | p[6]; else magn_mx=1; if (magn_mx == 0) magn_mx=1; if (length > 8) magn_my=(p[7] << 8) | p[8]; else magn_my=magn_mx; if (magn_my == 0) magn_my=1; if (length > 10) magn_ml=(p[9] << 8) | p[10]; else magn_ml=magn_mx; if (magn_ml == 0) magn_ml=1; if (length > 12) magn_mr=(p[11] << 8) | p[12]; else magn_mr=magn_mx; if (magn_mr == 0) magn_mr=1; if (length > 14) magn_mt=(p[13] << 8) | p[14]; else magn_mt=magn_my; if (magn_mt == 0) magn_mt=1; if (length > 16) magn_mb=(p[15] << 8) | p[16]; else magn_mb=magn_my; if (magn_mb == 0) magn_mb=1; if (length > 17) magn_methy=p[17]; else magn_methy=magn_methx; if (magn_methx > 5 || magn_methy > 5) if (mng_info->magn_warning == 0) { (void) ThrowMagickException(&image->exception, GetMagickModule(),CoderError, "Unknown MAGN method in MNG datastream","`%s'", image->filename); mng_info->magn_warning++; } #ifdef MNG_OBJECT_BUFFERS /* Magnify existing objects in the range magn_first to magn_last */ #endif if (magn_first == 0 || magn_last == 0) { /* Save the magnification factors for object 0 */ mng_info->magn_mb=magn_mb; mng_info->magn_ml=magn_ml; 
mng_info->magn_mr=magn_mr; mng_info->magn_mt=magn_mt; mng_info->magn_mx=magn_mx; mng_info->magn_my=magn_my; mng_info->magn_methx=magn_methx; mng_info->magn_methy=magn_methy; } } if (memcmp(type,mng_PAST,4) == 0) { if (mng_info->past_warning == 0) (void) ThrowMagickException(&image->exception,GetMagickModule(), CoderError,"PAST is not implemented yet","`%s'", image->filename); mng_info->past_warning++; } if (memcmp(type,mng_SHOW,4) == 0) { if (mng_info->show_warning == 0) (void) ThrowMagickException(&image->exception,GetMagickModule(), CoderError,"SHOW is not implemented yet","`%s'", image->filename); mng_info->show_warning++; } if (memcmp(type,mng_sBIT,4) == 0) { if (length < 4) mng_info->have_global_sbit=MagickFalse; else { mng_info->global_sbit.gray=p[0]; mng_info->global_sbit.red=p[0]; mng_info->global_sbit.green=p[1]; mng_info->global_sbit.blue=p[2]; mng_info->global_sbit.alpha=p[3]; mng_info->have_global_sbit=MagickTrue; } } if (memcmp(type,mng_pHYs,4) == 0) { if (length > 8) { mng_info->global_x_pixels_per_unit= (size_t) mng_get_long(p); mng_info->global_y_pixels_per_unit= (size_t) mng_get_long(&p[4]); mng_info->global_phys_unit_type=p[8]; mng_info->have_global_phys=MagickTrue; } else mng_info->have_global_phys=MagickFalse; } if (memcmp(type,mng_pHYg,4) == 0) { if (mng_info->phyg_warning == 0) (void) ThrowMagickException(&image->exception,GetMagickModule(), CoderError,"pHYg is not implemented.","`%s'",image->filename); mng_info->phyg_warning++; } if (memcmp(type,mng_BASI,4) == 0) { skip_to_iend=MagickTrue; if (mng_info->basi_warning == 0) (void) ThrowMagickException(&image->exception,GetMagickModule(), CoderError,"BASI is not implemented yet","`%s'", image->filename); mng_info->basi_warning++; #ifdef MNG_BASI_SUPPORTED if (length > 11) { basi_width=(size_t) ((p[0] << 24) | (p[1] << 16) | (p[2] << 8) | p[3]); basi_height=(size_t) ((p[4] << 24) | (p[5] << 16) | (p[6] << 8) | p[7]); basi_color_type=p[8]; basi_compression_method=p[9]; basi_filter_type=p[10]; basi_interlace_method=p[11]; } if (length > 13) basi_red=(p[12] << 8) & p[13]; else basi_red=0; if (length > 15) basi_green=(p[14] << 8) & p[15]; else basi_green=0; if (length > 17) basi_blue=(p[16] << 8) & p[17]; else basi_blue=0; if (length > 19) basi_alpha=(p[18] << 8) & p[19]; else { if (basi_sample_depth == 16) basi_alpha=65535L; else basi_alpha=255; } if (length > 20) basi_viewable=p[20]; else basi_viewable=0; #endif chunk=(unsigned char *) RelinquishMagickMemory(chunk); continue; } if (memcmp(type,mng_IHDR,4) #if defined(JNG_SUPPORTED) && memcmp(type,mng_JHDR,4) #endif ) { /* Not an IHDR or JHDR chunk */ if (length != 0) chunk=(unsigned char *) RelinquishMagickMemory(chunk); continue; } /* Process IHDR */ if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " Processing %c%c%c%c chunk",type[0],type[1],type[2],type[3]); mng_info->exists[object_id]=MagickTrue; mng_info->viewable[object_id]=MagickTrue; if (mng_info->invisible[object_id]) { if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " Skipping invisible object"); skip_to_iend=MagickTrue; chunk=(unsigned char *) RelinquishMagickMemory(chunk); continue; } #if defined(MNG_INSERT_LAYERS) if (length < 8) { chunk=(unsigned char *) RelinquishMagickMemory(chunk); ThrowReaderException(CorruptImageError,"ImproperImageHeader"); } image_width=(size_t) mng_get_long(p); image_height=(size_t) mng_get_long(&p[4]); #endif chunk=(unsigned char *) RelinquishMagickMemory(chunk); /* Insert a transparent background layer behind the entire 
animation if it is not full screen. */ #if defined(MNG_INSERT_LAYERS) if (insert_layers && mng_type && first_mng_object) { if ((mng_info->clip.left > 0) || (mng_info->clip.top > 0) || (image_width < mng_info->mng_width) || (mng_info->clip.right < (ssize_t) mng_info->mng_width) || (image_height < mng_info->mng_height) || (mng_info->clip.bottom < (ssize_t) mng_info->mng_height)) { if (GetAuthenticPixelQueue(image) != (PixelPacket *) NULL) { /* Allocate next image structure. */ AcquireNextImage(image_info,image); if (GetNextImageInList(image) == (Image *) NULL) return(DestroyImageList(image)); image=SyncNextImageInList(image); } mng_info->image=image; if (term_chunk_found) { image->start_loop=MagickTrue; image->iterations=mng_iterations; term_chunk_found=MagickFalse; } else image->start_loop=MagickFalse; /* Make a background rectangle. */ image->delay=0; image->columns=mng_info->mng_width; image->rows=mng_info->mng_height; image->page.width=mng_info->mng_width; image->page.height=mng_info->mng_height; image->page.x=0; image->page.y=0; image->background_color=mng_background_color; (void) SetImageBackgroundColor(image); if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " Inserted transparent background layer, W=%.20g, H=%.20g", (double) mng_info->mng_width,(double) mng_info->mng_height); } } /* Insert a background layer behind the upcoming image if framing_mode is 3, and we haven't already inserted one. */ if (insert_layers && (mng_info->framing_mode == 3) && (subframe_width) && (subframe_height) && (simplicity == 0 || (simplicity & 0x08))) { if (GetAuthenticPixelQueue(image) != (PixelPacket *) NULL) { /* Allocate next image structure. */ AcquireNextImage(image_info,image); if (GetNextImageInList(image) == (Image *) NULL) return(DestroyImageList(image)); image=SyncNextImageInList(image); } mng_info->image=image; if (term_chunk_found) { image->start_loop=MagickTrue; image->iterations=mng_iterations; term_chunk_found=MagickFalse; } else image->start_loop=MagickFalse; image->delay=0; image->columns=subframe_width; image->rows=subframe_height; image->page.width=subframe_width; image->page.height=subframe_height; image->page.x=mng_info->clip.left; image->page.y=mng_info->clip.top; image->background_color=mng_background_color; image->matte=MagickFalse; (void) SetImageBackgroundColor(image); if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " Insert background layer, L=%.20g, R=%.20g T=%.20g, B=%.20g", (double) mng_info->clip.left,(double) mng_info->clip.right, (double) mng_info->clip.top,(double) mng_info->clip.bottom); } #endif /* MNG_INSERT_LAYERS */ first_mng_object=MagickFalse; if (GetAuthenticPixelQueue(image) != (PixelPacket *) NULL) { /* Allocate next image structure. 
*/ AcquireNextImage(image_info,image); if (GetNextImageInList(image) == (Image *) NULL) return(DestroyImageList(image)); image=SyncNextImageInList(image); } mng_info->image=image; status=SetImageProgress(image,LoadImagesTag,TellBlob(image), GetBlobSize(image)); if (status == MagickFalse) break; if (term_chunk_found) { image->start_loop=MagickTrue; term_chunk_found=MagickFalse; } else image->start_loop=MagickFalse; if (mng_info->framing_mode == 1 || mng_info->framing_mode == 3) { image->delay=frame_delay; frame_delay=default_frame_delay; } else image->delay=0; image->page.width=mng_info->mng_width; image->page.height=mng_info->mng_height; image->page.x=mng_info->x_off[object_id]; image->page.y=mng_info->y_off[object_id]; image->iterations=mng_iterations; /* Seek back to the beginning of the IHDR or JHDR chunk's length field. */ if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " Seeking back to beginning of %c%c%c%c chunk",type[0],type[1], type[2],type[3]); offset=SeekBlob(image,-((ssize_t) length+12),SEEK_CUR); if (offset < 0) ThrowReaderException(CorruptImageError,"ImproperImageHeader"); } mng_info->image=image; mng_info->mng_type=mng_type; mng_info->object_id=object_id; if (memcmp(type,mng_IHDR,4) == 0) image=ReadOnePNGImage(mng_info,image_info,exception); #if defined(JNG_SUPPORTED) else image=ReadOneJNGImage(mng_info,image_info,exception); #endif if (image == (Image *) NULL) { if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), "exit ReadJNGImage() with error"); return((Image *) NULL); } if (image->columns == 0 || image->rows == 0) { (void) CloseBlob(image); return(DestroyImageList(image)); } mng_info->image=image; if (mng_type) { MngBox crop_box; if (mng_info->magn_methx || mng_info->magn_methy) { png_uint_32 magnified_height, magnified_width; if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " Processing MNG MAGN chunk"); if (mng_info->magn_methx == 1) { magnified_width=mng_info->magn_ml; if (image->columns > 1) magnified_width += mng_info->magn_mr; if (image->columns > 2) magnified_width += (png_uint_32) ((image->columns-2)*(mng_info->magn_mx)); } else { magnified_width=(png_uint_32) image->columns; if (image->columns > 1) magnified_width += mng_info->magn_ml-1; if (image->columns > 2) magnified_width += mng_info->magn_mr-1; if (image->columns > 3) magnified_width += (png_uint_32) ((image->columns-3)*(mng_info->magn_mx-1)); } if (mng_info->magn_methy == 1) { magnified_height=mng_info->magn_mt; if (image->rows > 1) magnified_height += mng_info->magn_mb; if (image->rows > 2) magnified_height += (png_uint_32) ((image->rows-2)*(mng_info->magn_my)); } else { magnified_height=(png_uint_32) image->rows; if (image->rows > 1) magnified_height += mng_info->magn_mt-1; if (image->rows > 2) magnified_height += mng_info->magn_mb-1; if (image->rows > 3) magnified_height += (png_uint_32) ((image->rows-3)*(mng_info->magn_my-1)); } if (magnified_height > image->rows || magnified_width > image->columns) { Image *large_image; int yy; ssize_t m, y; register ssize_t x; register PixelPacket *n, *q; PixelPacket *next, *prev; png_uint_16 magn_methx, magn_methy; /* Allocate next image structure. 
*/ if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " Allocate magnified image"); AcquireNextImage(image_info,image); if (GetNextImageInList(image) == (Image *) NULL) return(DestroyImageList(image)); large_image=SyncNextImageInList(image); large_image->columns=magnified_width; large_image->rows=magnified_height; magn_methx=mng_info->magn_methx; magn_methy=mng_info->magn_methy; #if (MAGICKCORE_QUANTUM_DEPTH > 16) #define QM unsigned short if (magn_methx != 1 || magn_methy != 1) { /* Scale pixels to unsigned shorts to prevent overflow of intermediate values of interpolations */ for (y=0; y < (ssize_t) image->rows; y++) { q=GetAuthenticPixels(image,0,y,image->columns,1, exception); for (x=(ssize_t) image->columns-1; x >= 0; x--) { SetPixelRed(q,ScaleQuantumToShort( GetPixelRed(q))); SetPixelGreen(q,ScaleQuantumToShort( GetPixelGreen(q))); SetPixelBlue(q,ScaleQuantumToShort( GetPixelBlue(q))); SetPixelOpacity(q,ScaleQuantumToShort( GetPixelOpacity(q))); q++; } if (SyncAuthenticPixels(image,exception) == MagickFalse) break; } } #else #define QM Quantum #endif if (image->matte != MagickFalse) (void) SetImageBackgroundColor(large_image); else { large_image->background_color.opacity=OpaqueOpacity; (void) SetImageBackgroundColor(large_image); if (magn_methx == 4) magn_methx=2; if (magn_methx == 5) magn_methx=3; if (magn_methy == 4) magn_methy=2; if (magn_methy == 5) magn_methy=3; } /* magnify the rows into the right side of the large image */ if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " Magnify the rows to %.20g",(double) large_image->rows); m=(ssize_t) mng_info->magn_mt; yy=0; length=(size_t) image->columns; next=(PixelPacket *) AcquireQuantumMemory(length,sizeof(*next)); prev=(PixelPacket *) AcquireQuantumMemory(length,sizeof(*prev)); if ((prev == (PixelPacket *) NULL) || (next == (PixelPacket *) NULL)) { image=DestroyImageList(image); ThrowReaderException(ResourceLimitError, "MemoryAllocationFailed"); } n=GetAuthenticPixels(image,0,0,image->columns,1,exception); (void) CopyMagickMemory(next,n,length); for (y=0; y < (ssize_t) image->rows; y++) { if (y == 0) m=(ssize_t) mng_info->magn_mt; else if (magn_methy > 1 && y == (ssize_t) image->rows-2) m=(ssize_t) mng_info->magn_mb; else if (magn_methy <= 1 && y == (ssize_t) image->rows-1) m=(ssize_t) mng_info->magn_mb; else if (magn_methy > 1 && y == (ssize_t) image->rows-1) m=1; else m=(ssize_t) mng_info->magn_my; n=prev; prev=next; next=n; if (y < (ssize_t) image->rows-1) { n=GetAuthenticPixels(image,0,y+1,image->columns,1, exception); (void) CopyMagickMemory(next,n,length); } for (i=0; i < m; i++, yy++) { register PixelPacket *pixels; assert(yy < (ssize_t) large_image->rows); pixels=prev; n=next; q=GetAuthenticPixels(large_image,0,yy,large_image->columns, 1,exception); q+=(large_image->columns-image->columns); for (x=(ssize_t) image->columns-1; x >= 0; x--) { /* To do: get color as function of indexes[x] */ /* if (image->storage_class == PseudoClass) { } */ if (magn_methy <= 1) { /* replicate previous */ SetPixelRGBO(q,(pixels)); } else if (magn_methy == 2 || magn_methy == 4) { if (i == 0) { SetPixelRGBO(q,(pixels)); } else { /* Interpolate */ SetPixelRed(q, ((QM) (((ssize_t) (2*i*(GetPixelRed(n) -GetPixelRed(pixels)+m))/ ((ssize_t) (m*2)) +GetPixelRed(pixels))))); SetPixelGreen(q, ((QM) (((ssize_t) (2*i*(GetPixelGreen(n) -GetPixelGreen(pixels)+m))/ ((ssize_t) (m*2)) +GetPixelGreen(pixels))))); SetPixelBlue(q, ((QM) (((ssize_t) (2*i*(GetPixelBlue(n) -GetPixelBlue(pixels)+m))/ 
((ssize_t) (m*2)) +GetPixelBlue(pixels))))); if (image->matte != MagickFalse) SetPixelOpacity(q, ((QM) (((ssize_t) (2*i*(GetPixelOpacity(n) -GetPixelOpacity(pixels)+m)) /((ssize_t) (m*2))+ GetPixelOpacity(pixels))))); } if (magn_methy == 4) { /* Replicate nearest */ if (i <= ((m+1) << 1)) SetPixelOpacity(q, (*pixels).opacity+0); else SetPixelOpacity(q, (*n).opacity+0); } } else /* if (magn_methy == 3 || magn_methy == 5) */ { /* Replicate nearest */ if (i <= ((m+1) << 1)) { SetPixelRGBO(q,(pixels)); } else { SetPixelRGBO(q,(n)); } if (magn_methy == 5) { SetPixelOpacity(q, (QM) (((ssize_t) (2*i* (GetPixelOpacity(n) -GetPixelOpacity(pixels)) +m))/((ssize_t) (m*2)) +GetPixelOpacity(pixels))); } } n++; q++; pixels++; } /* x */ if (SyncAuthenticPixels(large_image,exception) == 0) break; } /* i */ } /* y */ prev=(PixelPacket *) RelinquishMagickMemory(prev); next=(PixelPacket *) RelinquishMagickMemory(next); length=image->columns; if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " Delete original image"); DeleteImageFromList(&image); image=large_image; mng_info->image=image; /* magnify the columns */ if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " Magnify the columns to %.20g",(double) image->columns); for (y=0; y < (ssize_t) image->rows; y++) { register PixelPacket *pixels; q=GetAuthenticPixels(image,0,y,image->columns,1,exception); pixels=q+(image->columns-length); n=pixels+1; for (x=(ssize_t) (image->columns-length); x < (ssize_t) image->columns; x++) { /* To do: Rewrite using Get/Set***PixelComponent() */ if (x == (ssize_t) (image->columns-length)) m=(ssize_t) mng_info->magn_ml; else if (magn_methx > 1 && x == (ssize_t) image->columns-2) m=(ssize_t) mng_info->magn_mr; else if (magn_methx <= 1 && x == (ssize_t) image->columns-1) m=(ssize_t) mng_info->magn_mr; else if (magn_methx > 1 && x == (ssize_t) image->columns-1) m=1; else m=(ssize_t) mng_info->magn_mx; for (i=0; i < m; i++) { if (magn_methx <= 1) { /* replicate previous */ SetPixelRGBO(q,(pixels)); } else if (magn_methx == 2 || magn_methx == 4) { if (i == 0) { SetPixelRGBO(q,(pixels)); } /* To do: Rewrite using Get/Set***PixelComponent() */ else { /* Interpolate */ SetPixelRed(q, (QM) ((2*i*( GetPixelRed(n) -GetPixelRed(pixels))+m) /((ssize_t) (m*2))+ GetPixelRed(pixels))); SetPixelGreen(q, (QM) ((2*i*( GetPixelGreen(n) -GetPixelGreen(pixels))+m) /((ssize_t) (m*2))+ GetPixelGreen(pixels))); SetPixelBlue(q, (QM) ((2*i*( GetPixelBlue(n) -GetPixelBlue(pixels))+m) /((ssize_t) (m*2))+ GetPixelBlue(pixels))); if (image->matte != MagickFalse) SetPixelOpacity(q, (QM) ((2*i*( GetPixelOpacity(n) -GetPixelOpacity(pixels))+m) /((ssize_t) (m*2))+ GetPixelOpacity(pixels))); } if (magn_methx == 4) { /* Replicate nearest */ if (i <= ((m+1) << 1)) { SetPixelOpacity(q, GetPixelOpacity(pixels)+0); } else { SetPixelOpacity(q, GetPixelOpacity(n)+0); } } } else /* if (magn_methx == 3 || magn_methx == 5) */ { /* Replicate nearest */ if (i <= ((m+1) << 1)) { SetPixelRGBO(q,(pixels)); } else { SetPixelRGBO(q,(n)); } if (magn_methx == 5) { /* Interpolate */ SetPixelOpacity(q, (QM) ((2*i*( GetPixelOpacity(n) -GetPixelOpacity(pixels))+m)/ ((ssize_t) (m*2)) +GetPixelOpacity(pixels))); } } q++; } n++; } if (SyncAuthenticPixels(image,exception) == MagickFalse) break; } #if (MAGICKCORE_QUANTUM_DEPTH > 16) if (magn_methx != 1 || magn_methy != 1) { /* Rescale pixels to Quantum */ for (y=0; y < (ssize_t) image->rows; y++) { q=GetAuthenticPixels(image,0,y,image->columns,1,exception); for (x=(ssize_t) 
image->columns-1; x >= 0; x--) { SetPixelRed(q,ScaleShortToQuantum( GetPixelRed(q))); SetPixelGreen(q,ScaleShortToQuantum( GetPixelGreen(q))); SetPixelBlue(q,ScaleShortToQuantum( GetPixelBlue(q))); SetPixelOpacity(q,ScaleShortToQuantum( GetPixelOpacity(q))); q++; } if (SyncAuthenticPixels(image,exception) == MagickFalse) break; } } #endif if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " Finished MAGN processing"); } } /* Crop_box is with respect to the upper left corner of the MNG. */ crop_box.left=mng_info->image_box.left+mng_info->x_off[object_id]; crop_box.right=mng_info->image_box.right+mng_info->x_off[object_id]; crop_box.top=mng_info->image_box.top+mng_info->y_off[object_id]; crop_box.bottom=mng_info->image_box.bottom+mng_info->y_off[object_id]; crop_box=mng_minimum_box(crop_box,mng_info->clip); crop_box=mng_minimum_box(crop_box,mng_info->frame); crop_box=mng_minimum_box(crop_box,mng_info->object_clip[object_id]); if ((crop_box.left != (mng_info->image_box.left +mng_info->x_off[object_id])) || (crop_box.right != (mng_info->image_box.right +mng_info->x_off[object_id])) || (crop_box.top != (mng_info->image_box.top +mng_info->y_off[object_id])) || (crop_box.bottom != (mng_info->image_box.bottom +mng_info->y_off[object_id]))) { if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " Crop the PNG image"); if ((crop_box.left < crop_box.right) && (crop_box.top < crop_box.bottom)) { Image *im; RectangleInfo crop_info; /* Crop_info is with respect to the upper left corner of the image. */ crop_info.x=(crop_box.left-mng_info->x_off[object_id]); crop_info.y=(crop_box.top-mng_info->y_off[object_id]); crop_info.width=(size_t) (crop_box.right-crop_box.left); crop_info.height=(size_t) (crop_box.bottom-crop_box.top); image->page.width=image->columns; image->page.height=image->rows; image->page.x=0; image->page.y=0; im=CropImage(image,&crop_info,exception); if (im != (Image *) NULL) { image->columns=im->columns; image->rows=im->rows; im=DestroyImage(im); image->page.width=image->columns; image->page.height=image->rows; image->page.x=crop_box.left; image->page.y=crop_box.top; } } else { /* No pixels in crop area. The MNG spec still requires a layer, though, so make a single transparent pixel in the top left corner. */ image->columns=1; image->rows=1; image->colors=2; (void) SetImageBackgroundColor(image); image->page.width=1; image->page.height=1; image->page.x=0; image->page.y=0; } } #ifndef PNG_READ_EMPTY_PLTE_SUPPORTED image=mng_info->image; #endif } #if (MAGICKCORE_QUANTUM_DEPTH > 16) /* PNG does not handle depths greater than 16 so reduce it even * if lossy, and promote any depths > 8 to 16. 
*/ if (image->depth > 16) image->depth=16; #endif #if (MAGICKCORE_QUANTUM_DEPTH > 8) if (image->depth > 8) { /* To do: fill low byte properly */ image->depth=16; } if (LosslessReduceDepthOK(image) != MagickFalse) image->depth = 8; #endif GetImageException(image,exception); if (image_info->number_scenes != 0) { if (mng_info->scenes_found > (ssize_t) (image_info->first_scene+image_info->number_scenes)) break; } if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " Finished reading image datastream."); } while (LocaleCompare(image_info->magick,"MNG") == 0); (void) CloseBlob(image); if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " Finished reading all image datastreams."); #if defined(MNG_INSERT_LAYERS) if (insert_layers && !mng_info->image_found && (mng_info->mng_width) && (mng_info->mng_height)) { /* Insert a background layer if nothing else was found. */ if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " No images found. Inserting a background layer."); if (GetAuthenticPixelQueue(image) != (PixelPacket *) NULL) { /* Allocate next image structure. */ AcquireNextImage(image_info,image); if (GetNextImageInList(image) == (Image *) NULL) { if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " Allocation failed, returning NULL."); return(DestroyImageList(image)); } image=SyncNextImageInList(image); } image->columns=mng_info->mng_width; image->rows=mng_info->mng_height; image->page.width=mng_info->mng_width; image->page.height=mng_info->mng_height; image->page.x=0; image->page.y=0; image->background_color=mng_background_color; image->matte=MagickFalse; if (image_info->ping == MagickFalse) (void) SetImageBackgroundColor(image); mng_info->image_found++; } #endif image->iterations=mng_iterations; if (mng_iterations == 1) image->start_loop=MagickTrue; while (GetPreviousImageInList(image) != (Image *) NULL) { image_count++; if (image_count > 10*mng_info->image_found) { if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule()," No beginning"); (void) ThrowMagickException(&image->exception,GetMagickModule(), CoderError,"Linked list is corrupted, beginning of list not found", "`%s'",image_info->filename); return(DestroyImageList(image)); } image=GetPreviousImageInList(image); if (GetNextImageInList(image) == (Image *) NULL) { if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule()," Corrupt list"); (void) ThrowMagickException(&image->exception,GetMagickModule(), CoderError,"Linked list is corrupted; next_image is NULL","`%s'", image_info->filename); } } if (mng_info->ticks_per_second && mng_info->image_found > 1 && GetNextImageInList(image) == (Image *) NULL) { if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " First image null"); (void) ThrowMagickException(&image->exception,GetMagickModule(), CoderError,"image->next for first image is NULL but shouldn't be.", "`%s'",image_info->filename); } if (mng_info->image_found == 0) { if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " No visible images found."); (void) ThrowMagickException(&image->exception,GetMagickModule(), CoderError,"No visible images in file","`%s'",image_info->filename); return(DestroyImageList(image)); } if (mng_info->ticks_per_second) final_delay=1UL*MagickMax(image->ticks_per_second,1L)* final_delay/mng_info->ticks_per_second; else image->start_loop=MagickTrue; /* Find final nonzero image delay */ 
final_image_delay=0; while (GetNextImageInList(image) != (Image *) NULL) { if (image->delay) final_image_delay=image->delay; image=GetNextImageInList(image); } if (final_delay < final_image_delay) final_delay=final_image_delay; image->delay=final_delay; if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " image->delay=%.20g, final_delay=%.20g",(double) image->delay, (double) final_delay); if (logging != MagickFalse) { int scene; scene=0; image=GetFirstImageInList(image); (void) LogMagickEvent(CoderEvent,GetMagickModule(), " Before coalesce:"); (void) LogMagickEvent(CoderEvent,GetMagickModule(), " scene 0 delay=%.20g",(double) image->delay); while (GetNextImageInList(image) != (Image *) NULL) { image=GetNextImageInList(image); (void) LogMagickEvent(CoderEvent,GetMagickModule(), " scene %.20g delay=%.20g",(double) scene++,(double) image->delay); } } image=GetFirstImageInList(image); #ifdef MNG_COALESCE_LAYERS if (insert_layers) { Image *next_image, *next; size_t scene; if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule()," Coalesce Images"); scene=image->scene; next_image=CoalesceImages(image,&image->exception); if (next_image == (Image *) NULL) ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed"); image=DestroyImageList(image); image=next_image; for (next=image; next != (Image *) NULL; next=next_image) { next->page.width=mng_info->mng_width; next->page.height=mng_info->mng_height; next->page.x=0; next->page.y=0; next->scene=scene++; next_image=GetNextImageInList(next); if (next_image == (Image *) NULL) break; if (next->delay == 0) { scene--; next_image->previous=GetPreviousImageInList(next); if (GetPreviousImageInList(next) == (Image *) NULL) image=next_image; else next->previous->next=next_image; next=DestroyImage(next); } } } #endif while (GetNextImageInList(image) != (Image *) NULL) image=GetNextImageInList(image); image->dispose=BackgroundDispose; if (logging != MagickFalse) { int scene; scene=0; image=GetFirstImageInList(image); (void) LogMagickEvent(CoderEvent,GetMagickModule(), " After coalesce:"); (void) LogMagickEvent(CoderEvent,GetMagickModule(), " scene 0 delay=%.20g dispose=%.20g",(double) image->delay, (double) image->dispose); while (GetNextImageInList(image) != (Image *) NULL) { image=GetNextImageInList(image); (void) LogMagickEvent(CoderEvent,GetMagickModule(), " scene %.20g delay=%.20g dispose=%.20g",(double) scene++, (double) image->delay,(double) image->dispose); } } if (logging != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(), " exit ReadOneJNGImage();"); return(image); }
81,322,809,210,957,050,000,000,000,000,000,000,000
None
null
[ "CWE-125" ]
CVE-2017-13139
In ImageMagick before 6.9.9-0 and 7.x before 7.0.6-1, the ReadOneMNGImage function in coders/png.c has an out-of-bounds read with the MNG CLIP chunk.
https://nvd.nist.gov/vuln/detail/CVE-2017-13139
9,425
tcpdump
da6f1a677bfa4476abaeaf9b1afe1c4390f51b41
https://github.com/the-tcpdump-group/tcpdump
https://github.com/the-tcpdump-group/tcpdump/commit/da6f1a677bfa4476abaeaf9b1afe1c4390f51b41
CVE-2017-13034/PGM: Add a bounds check. This fixes a buffer over-read discovered by Bhargava Shastry, SecT/TU Berlin. Add a test using the capture file supplied by the reporter(s), modified so the capture file won't be rejected as an invalid capture. Move a return to make the code a bit cleaner (i.e., make it more obvious that if we don't have enough of the PGM header, we just print the source and destination IP addresses, mark it as incomplete PGM, and don't try to look at the PGM header).
1
pgm_print(netdissect_options *ndo, register const u_char *bp, register u_int length, register const u_char *bp2) { register const struct pgm_header *pgm; register const struct ip *ip; register char ch; uint16_t sport, dport; u_int nla_afnum; char nla_buf[INET6_ADDRSTRLEN]; register const struct ip6_hdr *ip6; uint8_t opt_type, opt_len; uint32_t seq, opts_len, len, offset; pgm = (const struct pgm_header *)bp; ip = (const struct ip *)bp2; if (IP_V(ip) == 6) ip6 = (const struct ip6_hdr *)bp2; else ip6 = NULL; ch = '\0'; if (!ND_TTEST(pgm->pgm_dport)) { if (ip6) { ND_PRINT((ndo, "%s > %s: [|pgm]", ip6addr_string(ndo, &ip6->ip6_src), ip6addr_string(ndo, &ip6->ip6_dst))); return; } else { ND_PRINT((ndo, "%s > %s: [|pgm]", ipaddr_string(ndo, &ip->ip_src), ipaddr_string(ndo, &ip->ip_dst))); return; } } sport = EXTRACT_16BITS(&pgm->pgm_sport); dport = EXTRACT_16BITS(&pgm->pgm_dport); if (ip6) { if (ip6->ip6_nxt == IPPROTO_PGM) { ND_PRINT((ndo, "%s.%s > %s.%s: ", ip6addr_string(ndo, &ip6->ip6_src), tcpport_string(ndo, sport), ip6addr_string(ndo, &ip6->ip6_dst), tcpport_string(ndo, dport))); } else { ND_PRINT((ndo, "%s > %s: ", tcpport_string(ndo, sport), tcpport_string(ndo, dport))); } } else { if (ip->ip_p == IPPROTO_PGM) { ND_PRINT((ndo, "%s.%s > %s.%s: ", ipaddr_string(ndo, &ip->ip_src), tcpport_string(ndo, sport), ipaddr_string(ndo, &ip->ip_dst), tcpport_string(ndo, dport))); } else { ND_PRINT((ndo, "%s > %s: ", tcpport_string(ndo, sport), tcpport_string(ndo, dport))); } } ND_TCHECK(*pgm); ND_PRINT((ndo, "PGM, length %u", EXTRACT_16BITS(&pgm->pgm_length))); if (!ndo->ndo_vflag) return; ND_PRINT((ndo, " 0x%02x%02x%02x%02x%02x%02x ", pgm->pgm_gsid[0], pgm->pgm_gsid[1], pgm->pgm_gsid[2], pgm->pgm_gsid[3], pgm->pgm_gsid[4], pgm->pgm_gsid[5])); switch (pgm->pgm_type) { case PGM_SPM: { const struct pgm_spm *spm; spm = (const struct pgm_spm *)(pgm + 1); ND_TCHECK(*spm); bp = (const u_char *) (spm + 1); switch (EXTRACT_16BITS(&spm->pgms_nla_afi)) { case AFNUM_INET: ND_TCHECK2(*bp, sizeof(struct in_addr)); addrtostr(bp, nla_buf, sizeof(nla_buf)); bp += sizeof(struct in_addr); break; case AFNUM_INET6: ND_TCHECK2(*bp, sizeof(struct in6_addr)); addrtostr6(bp, nla_buf, sizeof(nla_buf)); bp += sizeof(struct in6_addr); break; default: goto trunc; break; } ND_PRINT((ndo, "SPM seq %u trail %u lead %u nla %s", EXTRACT_32BITS(&spm->pgms_seq), EXTRACT_32BITS(&spm->pgms_trailseq), EXTRACT_32BITS(&spm->pgms_leadseq), nla_buf)); break; } case PGM_POLL: { const struct pgm_poll *poll_msg; poll_msg = (const struct pgm_poll *)(pgm + 1); ND_TCHECK(*poll_msg); ND_PRINT((ndo, "POLL seq %u round %u", EXTRACT_32BITS(&poll_msg->pgmp_seq), EXTRACT_16BITS(&poll_msg->pgmp_round))); bp = (const u_char *) (poll_msg + 1); break; } case PGM_POLR: { const struct pgm_polr *polr; uint32_t ivl, rnd, mask; polr = (const struct pgm_polr *)(pgm + 1); ND_TCHECK(*polr); bp = (const u_char *) (polr + 1); switch (EXTRACT_16BITS(&polr->pgmp_nla_afi)) { case AFNUM_INET: ND_TCHECK2(*bp, sizeof(struct in_addr)); addrtostr(bp, nla_buf, sizeof(nla_buf)); bp += sizeof(struct in_addr); break; case AFNUM_INET6: ND_TCHECK2(*bp, sizeof(struct in6_addr)); addrtostr6(bp, nla_buf, sizeof(nla_buf)); bp += sizeof(struct in6_addr); break; default: goto trunc; break; } ND_TCHECK2(*bp, sizeof(uint32_t)); ivl = EXTRACT_32BITS(bp); bp += sizeof(uint32_t); ND_TCHECK2(*bp, sizeof(uint32_t)); rnd = EXTRACT_32BITS(bp); bp += sizeof(uint32_t); ND_TCHECK2(*bp, sizeof(uint32_t)); mask = EXTRACT_32BITS(bp); bp += sizeof(uint32_t); ND_PRINT((ndo, "POLR seq %u round %u nla %s 
ivl %u rnd 0x%08x " "mask 0x%08x", EXTRACT_32BITS(&polr->pgmp_seq), EXTRACT_16BITS(&polr->pgmp_round), nla_buf, ivl, rnd, mask)); break; } case PGM_ODATA: { const struct pgm_data *odata; odata = (const struct pgm_data *)(pgm + 1); ND_TCHECK(*odata); ND_PRINT((ndo, "ODATA trail %u seq %u", EXTRACT_32BITS(&odata->pgmd_trailseq), EXTRACT_32BITS(&odata->pgmd_seq))); bp = (const u_char *) (odata + 1); break; } case PGM_RDATA: { const struct pgm_data *rdata; rdata = (const struct pgm_data *)(pgm + 1); ND_TCHECK(*rdata); ND_PRINT((ndo, "RDATA trail %u seq %u", EXTRACT_32BITS(&rdata->pgmd_trailseq), EXTRACT_32BITS(&rdata->pgmd_seq))); bp = (const u_char *) (rdata + 1); break; } case PGM_NAK: case PGM_NULLNAK: case PGM_NCF: { const struct pgm_nak *nak; char source_buf[INET6_ADDRSTRLEN], group_buf[INET6_ADDRSTRLEN]; nak = (const struct pgm_nak *)(pgm + 1); ND_TCHECK(*nak); bp = (const u_char *) (nak + 1); /* * Skip past the source, saving info along the way * and stopping if we don't have enough. */ switch (EXTRACT_16BITS(&nak->pgmn_source_afi)) { case AFNUM_INET: ND_TCHECK2(*bp, sizeof(struct in_addr)); addrtostr(bp, source_buf, sizeof(source_buf)); bp += sizeof(struct in_addr); break; case AFNUM_INET6: ND_TCHECK2(*bp, sizeof(struct in6_addr)); addrtostr6(bp, source_buf, sizeof(source_buf)); bp += sizeof(struct in6_addr); break; default: goto trunc; break; } /* * Skip past the group, saving info along the way * and stopping if we don't have enough. */ bp += (2 * sizeof(uint16_t)); switch (EXTRACT_16BITS(bp)) { case AFNUM_INET: ND_TCHECK2(*bp, sizeof(struct in_addr)); addrtostr(bp, group_buf, sizeof(group_buf)); bp += sizeof(struct in_addr); break; case AFNUM_INET6: ND_TCHECK2(*bp, sizeof(struct in6_addr)); addrtostr6(bp, group_buf, sizeof(group_buf)); bp += sizeof(struct in6_addr); break; default: goto trunc; break; } /* * Options decoding can go here. */ switch (pgm->pgm_type) { case PGM_NAK: ND_PRINT((ndo, "NAK ")); break; case PGM_NULLNAK: ND_PRINT((ndo, "NNAK ")); break; case PGM_NCF: ND_PRINT((ndo, "NCF ")); break; default: break; } ND_PRINT((ndo, "(%s -> %s), seq %u", source_buf, group_buf, EXTRACT_32BITS(&nak->pgmn_seq))); break; } case PGM_ACK: { const struct pgm_ack *ack; ack = (const struct pgm_ack *)(pgm + 1); ND_TCHECK(*ack); ND_PRINT((ndo, "ACK seq %u", EXTRACT_32BITS(&ack->pgma_rx_max_seq))); bp = (const u_char *) (ack + 1); break; } case PGM_SPMR: ND_PRINT((ndo, "SPMR")); break; default: ND_PRINT((ndo, "UNKNOWN type 0x%02x", pgm->pgm_type)); break; } if (pgm->pgm_options & PGM_OPT_BIT_PRESENT) { /* * make sure there's enough for the first option header */ if (!ND_TTEST2(*bp, PGM_MIN_OPT_LEN)) { ND_PRINT((ndo, "[|OPT]")); return; } /* * That option header MUST be an OPT_LENGTH option * (see the first paragraph of section 9.1 in RFC 3208). 
*/ opt_type = *bp++; if ((opt_type & PGM_OPT_MASK) != PGM_OPT_LENGTH) { ND_PRINT((ndo, "[First option bad, should be PGM_OPT_LENGTH, is %u]", opt_type & PGM_OPT_MASK)); return; } opt_len = *bp++; if (opt_len != 4) { ND_PRINT((ndo, "[Bad OPT_LENGTH option, length %u != 4]", opt_len)); return; } opts_len = EXTRACT_16BITS(bp); if (opts_len < 4) { ND_PRINT((ndo, "[Bad total option length %u < 4]", opts_len)); return; } bp += sizeof(uint16_t); ND_PRINT((ndo, " OPTS LEN %d", opts_len)); opts_len -= 4; while (opts_len) { if (opts_len < PGM_MIN_OPT_LEN) { ND_PRINT((ndo, "[Total option length leaves no room for final option]")); return; } if (!ND_TTEST2(*bp, 2)) { ND_PRINT((ndo, " [|OPT]")); return; } opt_type = *bp++; opt_len = *bp++; if (opt_len < PGM_MIN_OPT_LEN) { ND_PRINT((ndo, "[Bad option, length %u < %u]", opt_len, PGM_MIN_OPT_LEN)); break; } if (opts_len < opt_len) { ND_PRINT((ndo, "[Total option length leaves no room for final option]")); return; } if (!ND_TTEST2(*bp, opt_len - 2)) { ND_PRINT((ndo, " [|OPT]")); return; } switch (opt_type & PGM_OPT_MASK) { case PGM_OPT_LENGTH: #define PGM_OPT_LENGTH_LEN (2+2) if (opt_len != PGM_OPT_LENGTH_LEN) { ND_PRINT((ndo, "[Bad OPT_LENGTH option, length %u != %u]", opt_len, PGM_OPT_LENGTH_LEN)); return; } ND_PRINT((ndo, " OPTS LEN (extra?) %d", EXTRACT_16BITS(bp))); bp += 2; opts_len -= PGM_OPT_LENGTH_LEN; break; case PGM_OPT_FRAGMENT: #define PGM_OPT_FRAGMENT_LEN (2+2+4+4+4) if (opt_len != PGM_OPT_FRAGMENT_LEN) { ND_PRINT((ndo, "[Bad OPT_FRAGMENT option, length %u != %u]", opt_len, PGM_OPT_FRAGMENT_LEN)); return; } bp += 2; seq = EXTRACT_32BITS(bp); bp += 4; offset = EXTRACT_32BITS(bp); bp += 4; len = EXTRACT_32BITS(bp); bp += 4; ND_PRINT((ndo, " FRAG seq %u off %u len %u", seq, offset, len)); opts_len -= PGM_OPT_FRAGMENT_LEN; break; case PGM_OPT_NAK_LIST: bp += 2; opt_len -= 4; /* option header */ ND_PRINT((ndo, " NAK LIST")); while (opt_len) { if (opt_len < 4) { ND_PRINT((ndo, "[Option length not a multiple of 4]")); return; } ND_TCHECK2(*bp, 4); ND_PRINT((ndo, " %u", EXTRACT_32BITS(bp))); bp += 4; opt_len -= 4; opts_len -= 4; } break; case PGM_OPT_JOIN: #define PGM_OPT_JOIN_LEN (2+2+4) if (opt_len != PGM_OPT_JOIN_LEN) { ND_PRINT((ndo, "[Bad OPT_JOIN option, length %u != %u]", opt_len, PGM_OPT_JOIN_LEN)); return; } bp += 2; seq = EXTRACT_32BITS(bp); bp += 4; ND_PRINT((ndo, " JOIN %u", seq)); opts_len -= PGM_OPT_JOIN_LEN; break; case PGM_OPT_NAK_BO_IVL: #define PGM_OPT_NAK_BO_IVL_LEN (2+2+4+4) if (opt_len != PGM_OPT_NAK_BO_IVL_LEN) { ND_PRINT((ndo, "[Bad OPT_NAK_BO_IVL option, length %u != %u]", opt_len, PGM_OPT_NAK_BO_IVL_LEN)); return; } bp += 2; offset = EXTRACT_32BITS(bp); bp += 4; seq = EXTRACT_32BITS(bp); bp += 4; ND_PRINT((ndo, " BACKOFF ivl %u ivlseq %u", offset, seq)); opts_len -= PGM_OPT_NAK_BO_IVL_LEN; break; case PGM_OPT_NAK_BO_RNG: #define PGM_OPT_NAK_BO_RNG_LEN (2+2+4+4) if (opt_len != PGM_OPT_NAK_BO_RNG_LEN) { ND_PRINT((ndo, "[Bad OPT_NAK_BO_RNG option, length %u != %u]", opt_len, PGM_OPT_NAK_BO_RNG_LEN)); return; } bp += 2; offset = EXTRACT_32BITS(bp); bp += 4; seq = EXTRACT_32BITS(bp); bp += 4; ND_PRINT((ndo, " BACKOFF max %u min %u", offset, seq)); opts_len -= PGM_OPT_NAK_BO_RNG_LEN; break; case PGM_OPT_REDIRECT: #define PGM_OPT_REDIRECT_FIXED_LEN (2+2+2+2) if (opt_len < PGM_OPT_REDIRECT_FIXED_LEN) { ND_PRINT((ndo, "[Bad OPT_REDIRECT option, length %u < %u]", opt_len, PGM_OPT_REDIRECT_FIXED_LEN)); return; } bp += 2; nla_afnum = EXTRACT_16BITS(bp); bp += 2+2; switch (nla_afnum) { case AFNUM_INET: if (opt_len != 
PGM_OPT_REDIRECT_FIXED_LEN + sizeof(struct in_addr)) { ND_PRINT((ndo, "[Bad OPT_REDIRECT option, length %u != %u + address size]", opt_len, PGM_OPT_REDIRECT_FIXED_LEN)); return; } ND_TCHECK2(*bp, sizeof(struct in_addr)); addrtostr(bp, nla_buf, sizeof(nla_buf)); bp += sizeof(struct in_addr); opts_len -= PGM_OPT_REDIRECT_FIXED_LEN + sizeof(struct in_addr); break; case AFNUM_INET6: if (opt_len != PGM_OPT_REDIRECT_FIXED_LEN + sizeof(struct in6_addr)) { ND_PRINT((ndo, "[Bad OPT_REDIRECT option, length %u != %u + address size]", PGM_OPT_REDIRECT_FIXED_LEN, opt_len)); return; } ND_TCHECK2(*bp, sizeof(struct in6_addr)); addrtostr6(bp, nla_buf, sizeof(nla_buf)); bp += sizeof(struct in6_addr); opts_len -= PGM_OPT_REDIRECT_FIXED_LEN + sizeof(struct in6_addr); break; default: goto trunc; break; } ND_PRINT((ndo, " REDIRECT %s", nla_buf)); break; case PGM_OPT_PARITY_PRM: #define PGM_OPT_PARITY_PRM_LEN (2+2+4) if (opt_len != PGM_OPT_PARITY_PRM_LEN) { ND_PRINT((ndo, "[Bad OPT_PARITY_PRM option, length %u != %u]", opt_len, PGM_OPT_PARITY_PRM_LEN)); return; } bp += 2; len = EXTRACT_32BITS(bp); bp += 4; ND_PRINT((ndo, " PARITY MAXTGS %u", len)); opts_len -= PGM_OPT_PARITY_PRM_LEN; break; case PGM_OPT_PARITY_GRP: #define PGM_OPT_PARITY_GRP_LEN (2+2+4) if (opt_len != PGM_OPT_PARITY_GRP_LEN) { ND_PRINT((ndo, "[Bad OPT_PARITY_GRP option, length %u != %u]", opt_len, PGM_OPT_PARITY_GRP_LEN)); return; } bp += 2; seq = EXTRACT_32BITS(bp); bp += 4; ND_PRINT((ndo, " PARITY GROUP %u", seq)); opts_len -= PGM_OPT_PARITY_GRP_LEN; break; case PGM_OPT_CURR_TGSIZE: #define PGM_OPT_CURR_TGSIZE_LEN (2+2+4) if (opt_len != PGM_OPT_CURR_TGSIZE_LEN) { ND_PRINT((ndo, "[Bad OPT_CURR_TGSIZE option, length %u != %u]", opt_len, PGM_OPT_CURR_TGSIZE_LEN)); return; } bp += 2; len = EXTRACT_32BITS(bp); bp += 4; ND_PRINT((ndo, " PARITY ATGS %u", len)); opts_len -= PGM_OPT_CURR_TGSIZE_LEN; break; case PGM_OPT_NBR_UNREACH: #define PGM_OPT_NBR_UNREACH_LEN (2+2) if (opt_len != PGM_OPT_NBR_UNREACH_LEN) { ND_PRINT((ndo, "[Bad OPT_NBR_UNREACH option, length %u != %u]", opt_len, PGM_OPT_NBR_UNREACH_LEN)); return; } bp += 2; ND_PRINT((ndo, " NBR_UNREACH")); opts_len -= PGM_OPT_NBR_UNREACH_LEN; break; case PGM_OPT_PATH_NLA: ND_PRINT((ndo, " PATH_NLA [%d]", opt_len)); bp += opt_len; opts_len -= opt_len; break; case PGM_OPT_SYN: #define PGM_OPT_SYN_LEN (2+2) if (opt_len != PGM_OPT_SYN_LEN) { ND_PRINT((ndo, "[Bad OPT_SYN option, length %u != %u]", opt_len, PGM_OPT_SYN_LEN)); return; } bp += 2; ND_PRINT((ndo, " SYN")); opts_len -= PGM_OPT_SYN_LEN; break; case PGM_OPT_FIN: #define PGM_OPT_FIN_LEN (2+2) if (opt_len != PGM_OPT_FIN_LEN) { ND_PRINT((ndo, "[Bad OPT_FIN option, length %u != %u]", opt_len, PGM_OPT_FIN_LEN)); return; } bp += 2; ND_PRINT((ndo, " FIN")); opts_len -= PGM_OPT_FIN_LEN; break; case PGM_OPT_RST: #define PGM_OPT_RST_LEN (2+2) if (opt_len != PGM_OPT_RST_LEN) { ND_PRINT((ndo, "[Bad OPT_RST option, length %u != %u]", opt_len, PGM_OPT_RST_LEN)); return; } bp += 2; ND_PRINT((ndo, " RST")); opts_len -= PGM_OPT_RST_LEN; break; case PGM_OPT_CR: ND_PRINT((ndo, " CR")); bp += opt_len; opts_len -= opt_len; break; case PGM_OPT_CRQST: #define PGM_OPT_CRQST_LEN (2+2) if (opt_len != PGM_OPT_CRQST_LEN) { ND_PRINT((ndo, "[Bad OPT_CRQST option, length %u != %u]", opt_len, PGM_OPT_CRQST_LEN)); return; } bp += 2; ND_PRINT((ndo, " CRQST")); opts_len -= PGM_OPT_CRQST_LEN; break; case PGM_OPT_PGMCC_DATA: #define PGM_OPT_PGMCC_DATA_FIXED_LEN (2+2+4+2+2) if (opt_len < PGM_OPT_PGMCC_DATA_FIXED_LEN) { ND_PRINT((ndo, "[Bad OPT_PGMCC_DATA option, length %u < %u]", 
opt_len, PGM_OPT_PGMCC_DATA_FIXED_LEN)); return; } bp += 2; offset = EXTRACT_32BITS(bp); bp += 4; nla_afnum = EXTRACT_16BITS(bp); bp += 2+2; switch (nla_afnum) { case AFNUM_INET: if (opt_len != PGM_OPT_PGMCC_DATA_FIXED_LEN + sizeof(struct in_addr)) { ND_PRINT((ndo, "[Bad OPT_PGMCC_DATA option, length %u != %u + address size]", opt_len, PGM_OPT_PGMCC_DATA_FIXED_LEN)); return; } ND_TCHECK2(*bp, sizeof(struct in_addr)); addrtostr(bp, nla_buf, sizeof(nla_buf)); bp += sizeof(struct in_addr); opts_len -= PGM_OPT_PGMCC_DATA_FIXED_LEN + sizeof(struct in_addr); break; case AFNUM_INET6: if (opt_len != PGM_OPT_PGMCC_DATA_FIXED_LEN + sizeof(struct in6_addr)) { ND_PRINT((ndo, "[Bad OPT_PGMCC_DATA option, length %u != %u + address size]", opt_len, PGM_OPT_PGMCC_DATA_FIXED_LEN)); return; } ND_TCHECK2(*bp, sizeof(struct in6_addr)); addrtostr6(bp, nla_buf, sizeof(nla_buf)); bp += sizeof(struct in6_addr); opts_len -= PGM_OPT_PGMCC_DATA_FIXED_LEN + sizeof(struct in6_addr); break; default: goto trunc; break; } ND_PRINT((ndo, " PGMCC DATA %u %s", offset, nla_buf)); break; case PGM_OPT_PGMCC_FEEDBACK: #define PGM_OPT_PGMCC_FEEDBACK_FIXED_LEN (2+2+4+2+2) if (opt_len < PGM_OPT_PGMCC_FEEDBACK_FIXED_LEN) { ND_PRINT((ndo, "[Bad PGM_OPT_PGMCC_FEEDBACK option, length %u < %u]", opt_len, PGM_OPT_PGMCC_FEEDBACK_FIXED_LEN)); return; } bp += 2; offset = EXTRACT_32BITS(bp); bp += 4; nla_afnum = EXTRACT_16BITS(bp); bp += 2+2; switch (nla_afnum) { case AFNUM_INET: if (opt_len != PGM_OPT_PGMCC_FEEDBACK_FIXED_LEN + sizeof(struct in_addr)) { ND_PRINT((ndo, "[Bad OPT_PGMCC_FEEDBACK option, length %u != %u + address size]", opt_len, PGM_OPT_PGMCC_FEEDBACK_FIXED_LEN)); return; } ND_TCHECK2(*bp, sizeof(struct in_addr)); addrtostr(bp, nla_buf, sizeof(nla_buf)); bp += sizeof(struct in_addr); opts_len -= PGM_OPT_PGMCC_FEEDBACK_FIXED_LEN + sizeof(struct in_addr); break; case AFNUM_INET6: if (opt_len != PGM_OPT_PGMCC_FEEDBACK_FIXED_LEN + sizeof(struct in6_addr)) { ND_PRINT((ndo, "[Bad OPT_PGMCC_FEEDBACK option, length %u != %u + address size]", opt_len, PGM_OPT_PGMCC_FEEDBACK_FIXED_LEN)); return; } ND_TCHECK2(*bp, sizeof(struct in6_addr)); addrtostr6(bp, nla_buf, sizeof(nla_buf)); bp += sizeof(struct in6_addr); opts_len -= PGM_OPT_PGMCC_FEEDBACK_FIXED_LEN + sizeof(struct in6_addr); break; default: goto trunc; break; } ND_PRINT((ndo, " PGMCC FEEDBACK %u %s", offset, nla_buf)); break; default: ND_PRINT((ndo, " OPT_%02X [%d] ", opt_type, opt_len)); bp += opt_len; opts_len -= opt_len; break; } if (opt_type & PGM_OPT_END) break; } } ND_PRINT((ndo, " [%u]", length)); if (ndo->ndo_packettype == PT_PGM_ZMTP1 && (pgm->pgm_type == PGM_ODATA || pgm->pgm_type == PGM_RDATA)) zmtp1_print_datagram(ndo, bp, EXTRACT_16BITS(&pgm->pgm_length)); return; trunc: ND_PRINT((ndo, "[|pgm]")); if (ch != '\0') ND_PRINT((ndo, ">")); }
149,333,147,897,387,360,000,000,000,000,000,000,000
print-pgm.c
340,037,039,173,930,900,000,000,000,000,000,000,000
[ "CWE-125" ]
CVE-2017-13034
The PGM parser in tcpdump before 4.9.2 has a buffer over-read in print-pgm.c:pgm_print().
https://nvd.nist.gov/vuln/detail/CVE-2017-13034
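Editor's sketch for the PGM record above (CVE-2017-13034): the commit message says the fix is to verify the whole fixed PGM header before looking at any of its fields, and otherwise print only the addresses and mark the packet as incomplete. The fragment below is a minimal illustration of that shape inside pgm_print(), reusing the locals and ND_TTEST/ND_PRINT idioms already visible in the record's code; it is not the exact upstream patch.

        /* Sketch only: check the entire fixed header up front, before
         * EXTRACT_16BITS() is applied to any of its members. */
        pgm = (const struct pgm_header *)bp;
        if (!ND_TTEST(*pgm)) {
                /* Not all of the PGM header was captured: print just the
                 * addresses and flag the packet as incomplete, as the
                 * commit message describes. */
                if (ip6)
                        ND_PRINT((ndo, "%s > %s: [|pgm]",
                            ip6addr_string(ndo, &ip6->ip6_src),
                            ip6addr_string(ndo, &ip6->ip6_dst)));
                else
                        ND_PRINT((ndo, "%s > %s: [|pgm]",
                            ipaddr_string(ndo, &ip->ip_src),
                            ipaddr_string(ndo, &ip->ip_dst)));
                return;
        }
        sport = EXTRACT_16BITS(&pgm->pgm_sport);  /* safe: header fully checked */
        dport = EXTRACT_16BITS(&pgm->pgm_dport);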
9,426
tcpdump
1bc78d795cd5cad5525498658f414a11ea0a7e9c
https://github.com/the-tcpdump-group/tcpdump
https://github.com/the-tcpdump-group/tcpdump/commit/1bc78d795cd5cad5525498658f414a11ea0a7e9c
CVE-2017-13032/RADIUS: Check whether a byte exists before testing its value. Reverse the test in a for loop to test the length before testing whether we have a null byte. This fixes a buffer over-read discovered by Bhargava Shastry. Add a test using the capture file supplied by the reporter(s), modified so the capture file won't be rejected as an invalid capture. Clean up other length tests while we're at it.
1
print_attr_string(netdissect_options *ndo, register const u_char *data, u_int length, u_short attr_code) { register u_int i; ND_TCHECK2(data[0],length); switch(attr_code) { case TUNNEL_PASS: if (length < 3) { ND_PRINT((ndo, "%s", tstr)); return; } if (*data && (*data <=0x1F) ) ND_PRINT((ndo, "Tag[%u] ", *data)); else ND_PRINT((ndo, "Tag[Unused] ")); data++; length--; ND_PRINT((ndo, "Salt %u ", EXTRACT_16BITS(data))); data+=2; length-=2; break; case TUNNEL_CLIENT_END: case TUNNEL_SERVER_END: case TUNNEL_PRIV_GROUP: case TUNNEL_ASSIGN_ID: case TUNNEL_CLIENT_AUTH: case TUNNEL_SERVER_AUTH: if (*data <= 0x1F) { if (length < 1) { ND_PRINT((ndo, "%s", tstr)); return; } if (*data) ND_PRINT((ndo, "Tag[%u] ", *data)); else ND_PRINT((ndo, "Tag[Unused] ")); data++; length--; } break; case EGRESS_VLAN_NAME: ND_PRINT((ndo, "%s (0x%02x) ", tok2str(rfc4675_tagged,"Unknown tag",*data), *data)); data++; length--; break; } for (i=0; *data && i < length ; i++, data++) ND_PRINT((ndo, "%c", (*data < 32 || *data > 126) ? '.' : *data)); return; trunc: ND_PRINT((ndo, "%s", tstr)); }
248,093,794,910,011,600,000,000,000,000,000,000,000
print-radius.c
205,335,897,441,003,700,000,000,000,000,000,000,000
[ "CWE-125" ]
CVE-2017-13032
The RADIUS parser in tcpdump before 4.9.2 has a buffer over-read in print-radius.c:print_attr_string().
https://nvd.nist.gov/vuln/detail/CVE-2017-13032
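Editor's sketch for the RADIUS record above (CVE-2017-13032): the commit message states the fix exactly, reverse the loop test so the length is checked before the byte is dereferenced. The fragment below shows the vulnerable loop from print_attr_string() and the reordered version side by side; only the order of the two conditions changes.

        /* Vulnerable order: *data is dereferenced before i < length is
         * checked, so the byte just past the attribute can be read. */
        for (i = 0; *data && i < length; i++, data++)
                ND_PRINT((ndo, "%c", (*data < 32 || *data > 126) ? '.' : *data));

        /* Fixed order, as the commit message describes: test the length
         * first, then look at the byte. */
        for (i = 0; i < length && *data; i++, data++)
                ND_PRINT((ndo, "%c", (*data < 32 || *data > 126) ? '.' : *data));

Because && short-circuits, putting i < length first guarantees *data is never evaluated once the counted bytes are exhausted.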
9,428
tcpdump
7029d15f148ef24bb7c6668bc640f5470d085e5a
https://github.com/the-tcpdump-group/tcpdump
https://github.com/the-tcpdump-group/tcpdump/commit/7029d15f148ef24bb7c6668bc640f5470d085e5a
CVE-2017-13029/PPP: Fix a bounds check, and clean up other bounds checks. For configuration protocol options, use ND_TCHECK() and ND_TCHECK_nBITS() macros, passing them the appropriate pointer argument. This fixes one case where the ND_TCHECK2() call they replace was not checking enough bytes. This fixes a buffer over-read discovered by Bhargava Shastry, SecT/TU Berlin. Add a test using the capture file supplied by the reporter(s), modified so the capture file won't be rejected as an invalid capture.
1
print_ccp_config_options(netdissect_options *ndo, const u_char *p, int length) { int len, opt; if (length < 2) return 0; ND_TCHECK2(*p, 2); len = p[1]; opt = p[0]; if (length < len) return 0; if (len < 2) { ND_PRINT((ndo, "\n\t %s Option (0x%02x), length %u (length bogus, should be >= 2)", tok2str(ccpconfopts_values, "Unknown", opt), opt, len)); return 0; } ND_PRINT((ndo, "\n\t %s Option (0x%02x), length %u", tok2str(ccpconfopts_values, "Unknown", opt), opt, len)); switch (opt) { case CCPOPT_BSDCOMP: if (len < 3) { ND_PRINT((ndo, " (length bogus, should be >= 3)")); return len; } ND_TCHECK2(*(p + 2), 1); ND_PRINT((ndo, ": Version: %u, Dictionary Bits: %u", p[2] >> 5, p[2] & 0x1f)); break; case CCPOPT_MVRCA: if (len < 4) { ND_PRINT((ndo, " (length bogus, should be >= 4)")); return len; } ND_TCHECK2(*(p + 2), 1); ND_PRINT((ndo, ": Features: %u, PxP: %s, History: %u, #CTX-ID: %u", (p[2] & 0xc0) >> 6, (p[2] & 0x20) ? "Enabled" : "Disabled", p[2] & 0x1f, p[3])); break; case CCPOPT_DEFLATE: if (len < 4) { ND_PRINT((ndo, " (length bogus, should be >= 4)")); return len; } ND_TCHECK2(*(p + 2), 1); ND_PRINT((ndo, ": Window: %uK, Method: %s (0x%x), MBZ: %u, CHK: %u", (p[2] & 0xf0) >> 4, ((p[2] & 0x0f) == 8) ? "zlib" : "unknown", p[2] & 0x0f, (p[3] & 0xfc) >> 2, p[3] & 0x03)); break; /* XXX: to be supported */ #if 0 case CCPOPT_OUI: case CCPOPT_PRED1: case CCPOPT_PRED2: case CCPOPT_PJUMP: case CCPOPT_HPPPC: case CCPOPT_STACLZS: case CCPOPT_MPPC: case CCPOPT_GFZA: case CCPOPT_V42BIS: case CCPOPT_LZSDCP: case CCPOPT_DEC: case CCPOPT_RESV: break; #endif default: /* * Unknown option; dump it as raw bytes now if we're * not going to do so below. */ if (ndo->ndo_vflag < 2) print_unknown_data(ndo, &p[2], "\n\t ", len - 2); break; } if (ndo->ndo_vflag > 1) print_unknown_data(ndo, &p[2], "\n\t ", len - 2); /* exclude TLV header */ return len; trunc: ND_PRINT((ndo, "[|ccp]")); return 0; }
201,333,923,056,790,370,000,000,000,000,000,000,000
print-ppp.c
86,948,591,370,272,700,000,000,000,000,000,000,000
[ "CWE-125" ]
CVE-2017-13029
The PPP parser in tcpdump before 4.9.2 has a buffer over-read in print-ppp.c:print_ccp_config_options().
https://nvd.nist.gov/vuln/detail/CVE-2017-13029
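Editor's sketch for the PPP record above (CVE-2017-13029): in the record's print_ccp_config_options(), the CCPOPT_MVRCA and CCPOPT_DEFLATE cases check only one byte with ND_TCHECK2(*(p + 2), 1) but then read both p[2] and p[3]. The commit message says the fix moves to ND_TCHECK()/ND_TCHECK_nBITS() checks that cover the bytes actually read; which option case was the under-checked one is not spelled out in the record, so the fragment below is an illustration of the pattern, shown on the MVRCA case.

        case CCPOPT_MVRCA:
                if (len < 4) {
                        ND_PRINT((ndo, " (length bogus, should be >= 4)"));
                        return len;
                }
                /* The original checked a single byte here but then read
                 * both p[2] and p[3]; check both before dereferencing. */
                ND_TCHECK_16BITS(p + 2);
                ND_PRINT((ndo, ": Features: %u, PxP: %s, History: %u, #CTX-ID: %u",
                          (p[2] & 0xc0) >> 6,
                          (p[2] & 0x20) ? "Enabled" : "Disabled",
                          p[2] & 0x1f, p[3]));
                break;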
9,429
tcpdump
5edf405d7ed9fc92f4f43e8a3d44baa4c6387562
https://github.com/the-tcpdump-group/tcpdump
https://github.com/the-tcpdump-group/tcpdump/commit/5edf405d7ed9fc92f4f43e8a3d44baa4c6387562
CVE-2017-13008/IEEE 802.11: Fix TIM bitmap copy to copy from p + offset. offset has already been advanced to point to the bitmap; we shouldn't add the amount to advance again. This fixes a buffer over-read discovered by Brian 'geeknik' Carpenter. Add a test using the capture file supplied by the reporter(s). While we're at it, remove some redundant tests - we've already checked, before the case statement, whether we have captured the entire information element and whether the entire information element is present in the on-the-wire packet; in the cases for particular IEs, we only need to make sure we don't go past the end of the IE.
1
parse_elements(netdissect_options *ndo, struct mgmt_body_t *pbody, const u_char *p, int offset, u_int length) { u_int elementlen; struct ssid_t ssid; struct challenge_t challenge; struct rates_t rates; struct ds_t ds; struct cf_t cf; struct tim_t tim; /* * We haven't seen any elements yet. */ pbody->challenge_present = 0; pbody->ssid_present = 0; pbody->rates_present = 0; pbody->ds_present = 0; pbody->cf_present = 0; pbody->tim_present = 0; while (length != 0) { /* Make sure we at least have the element ID and length. */ if (!ND_TTEST2(*(p + offset), 2)) return 0; if (length < 2) return 0; elementlen = *(p + offset + 1); /* Make sure we have the entire element. */ if (!ND_TTEST2(*(p + offset + 2), elementlen)) return 0; if (length < elementlen + 2) return 0; switch (*(p + offset)) { case E_SSID: memcpy(&ssid, p + offset, 2); offset += 2; length -= 2; if (ssid.length != 0) { if (ssid.length > sizeof(ssid.ssid) - 1) return 0; if (!ND_TTEST2(*(p + offset), ssid.length)) return 0; if (length < ssid.length) return 0; memcpy(&ssid.ssid, p + offset, ssid.length); offset += ssid.length; length -= ssid.length; } ssid.ssid[ssid.length] = '\0'; /* * Present and not truncated. * * If we haven't already seen an SSID IE, * copy this one, otherwise ignore this one, * so we later report the first one we saw. */ if (!pbody->ssid_present) { pbody->ssid = ssid; pbody->ssid_present = 1; } break; case E_CHALLENGE: memcpy(&challenge, p + offset, 2); offset += 2; length -= 2; if (challenge.length != 0) { if (challenge.length > sizeof(challenge.text) - 1) return 0; if (!ND_TTEST2(*(p + offset), challenge.length)) return 0; if (length < challenge.length) return 0; memcpy(&challenge.text, p + offset, challenge.length); offset += challenge.length; length -= challenge.length; } challenge.text[challenge.length] = '\0'; /* * Present and not truncated. * * If we haven't already seen a challenge IE, * copy this one, otherwise ignore this one, * so we later report the first one we saw. */ if (!pbody->challenge_present) { pbody->challenge = challenge; pbody->challenge_present = 1; } break; case E_RATES: memcpy(&rates, p + offset, 2); offset += 2; length -= 2; if (rates.length != 0) { if (rates.length > sizeof rates.rate) return 0; if (!ND_TTEST2(*(p + offset), rates.length)) return 0; if (length < rates.length) return 0; memcpy(&rates.rate, p + offset, rates.length); offset += rates.length; length -= rates.length; } /* * Present and not truncated. * * If we haven't already seen a rates IE, * copy this one if it's not zero-length, * otherwise ignore this one, so we later * report the first one we saw. * * We ignore zero-length rates IEs as some * devices seem to put a zero-length rates * IE, followed by an SSID IE, followed by * a non-zero-length rates IE into frames, * even though IEEE Std 802.11-2007 doesn't * seem to indicate that a zero-length rates * IE is valid. */ if (!pbody->rates_present && rates.length != 0) { pbody->rates = rates; pbody->rates_present = 1; } break; case E_DS: memcpy(&ds, p + offset, 2); offset += 2; length -= 2; if (ds.length != 1) { offset += ds.length; length -= ds.length; break; } ds.channel = *(p + offset); offset += 1; length -= 1; /* * Present and not truncated. * * If we haven't already seen a DS IE, * copy this one, otherwise ignore this one, * so we later report the first one we saw. 
*/ if (!pbody->ds_present) { pbody->ds = ds; pbody->ds_present = 1; } break; case E_CF: memcpy(&cf, p + offset, 2); offset += 2; length -= 2; if (cf.length != 6) { offset += cf.length; length -= cf.length; break; } memcpy(&cf.count, p + offset, 6); offset += 6; length -= 6; /* * Present and not truncated. * * If we haven't already seen a CF IE, * copy this one, otherwise ignore this one, * so we later report the first one we saw. */ if (!pbody->cf_present) { pbody->cf = cf; pbody->cf_present = 1; } break; case E_TIM: memcpy(&tim, p + offset, 2); offset += 2; length -= 2; if (tim.length <= 3) { offset += tim.length; length -= tim.length; break; } if (tim.length - 3 > (int)sizeof tim.bitmap) return 0; memcpy(&tim.count, p + offset, 3); offset += 3; length -= 3; memcpy(tim.bitmap, p + offset + 3, tim.length - 3); offset += tim.length - 3; length -= tim.length - 3; /* * Present and not truncated. * * If we haven't already seen a TIM IE, * copy this one, otherwise ignore this one, * so we later report the first one we saw. */ if (!pbody->tim_present) { pbody->tim = tim; pbody->tim_present = 1; } break; default: #if 0 ND_PRINT((ndo, "(1) unhandled element_id (%d) ", *(p + offset))); #endif offset += 2 + elementlen; length -= 2 + elementlen; break; } } /* No problems found. */ return 1; }
92,553,270,014,305,060,000,000,000,000,000,000,000
print-802_11.c
126,620,455,302,702,500,000,000,000,000,000,000,000
[ "CWE-125" ]
CVE-2017-13008
The IEEE 802.11 parser in tcpdump before 4.9.2 has a buffer over-read in print-802_11.c:parse_elements().
https://nvd.nist.gov/vuln/detail/CVE-2017-13008
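Editor's sketch for the IEEE 802.11 record above (CVE-2017-13008): the commit message pinpoints the bug, offset has already been advanced past the 3-byte count/period/bitmap-control prefix, so adding 3 again when copying the TIM bitmap reads past it. The fragment below shows the E_TIM copy from parse_elements() with the corrected source pointer.

        memcpy(&tim.count, p + offset, 3);
        offset += 3;                 /* offset now points at the bitmap */
        length -= 3;
        /* Over-read: the original added 3 a second time here:
         *     memcpy(tim.bitmap, p + offset + 3, tim.length - 3);   */
        memcpy(tim.bitmap, p + offset, tim.length - 3);  /* corrected source */
        offset += tim.length - 3;
        length -= tim.length - 3;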
9,431
tcpdump
3b32029db354cbc875127869d9b12a9addc75b50
https://github.com/the-tcpdump-group/tcpdump
https://github.com/the-tcpdump-group/tcpdump/commit/3b32029db354cbc875127869d9b12a9addc75b50
CVE-2017-12999/IS-IS: Add a missing length check. This fixes a buffer over-read discovered by Forcepoint's security researchers Otto Airamo & Antti Levomäki. Add tests using the capture files supplied by the reporter(s).
1
isis_print(netdissect_options *ndo, const uint8_t *p, u_int length) { const struct isis_common_header *isis_header; const struct isis_iih_lan_header *header_iih_lan; const struct isis_iih_ptp_header *header_iih_ptp; const struct isis_lsp_header *header_lsp; const struct isis_csnp_header *header_csnp; const struct isis_psnp_header *header_psnp; const struct isis_tlv_lsp *tlv_lsp; const struct isis_tlv_ptp_adj *tlv_ptp_adj; const struct isis_tlv_is_reach *tlv_is_reach; const struct isis_tlv_es_reach *tlv_es_reach; uint8_t pdu_type, max_area, id_length, tlv_type, tlv_len, tmp, alen, lan_alen, prefix_len; uint8_t ext_is_len, ext_ip_len, mt_len; const uint8_t *optr, *pptr, *tptr; u_short packet_len,pdu_len, key_id; u_int i,vendor_id; int sigcheck; packet_len=length; optr = p; /* initialize the _o_riginal pointer to the packet start - need it for parsing the checksum TLV and authentication TLV verification */ isis_header = (const struct isis_common_header *)p; ND_TCHECK(*isis_header); if (length < ISIS_COMMON_HEADER_SIZE) goto trunc; pptr = p+(ISIS_COMMON_HEADER_SIZE); header_iih_lan = (const struct isis_iih_lan_header *)pptr; header_iih_ptp = (const struct isis_iih_ptp_header *)pptr; header_lsp = (const struct isis_lsp_header *)pptr; header_csnp = (const struct isis_csnp_header *)pptr; header_psnp = (const struct isis_psnp_header *)pptr; if (!ndo->ndo_eflag) ND_PRINT((ndo, "IS-IS")); /* * Sanity checking of the header. */ if (isis_header->version != ISIS_VERSION) { ND_PRINT((ndo, "version %d packet not supported", isis_header->version)); return (0); } if ((isis_header->id_length != SYSTEM_ID_LEN) && (isis_header->id_length != 0)) { ND_PRINT((ndo, "system ID length of %d is not supported", isis_header->id_length)); return (0); } if (isis_header->pdu_version != ISIS_VERSION) { ND_PRINT((ndo, "version %d packet not supported", isis_header->pdu_version)); return (0); } if (length < isis_header->fixed_len) { ND_PRINT((ndo, "fixed header length %u > packet length %u", isis_header->fixed_len, length)); return (0); } if (isis_header->fixed_len < ISIS_COMMON_HEADER_SIZE) { ND_PRINT((ndo, "fixed header length %u < minimum header size %u", isis_header->fixed_len, (u_int)ISIS_COMMON_HEADER_SIZE)); return (0); } max_area = isis_header->max_area; switch(max_area) { case 0: max_area = 3; /* silly shit */ break; case 255: ND_PRINT((ndo, "bad packet -- 255 areas")); return (0); default: break; } id_length = isis_header->id_length; switch(id_length) { case 0: id_length = 6; /* silly shit again */ break; case 1: /* 1-8 are valid sys-ID lenghts */ case 2: case 3: case 4: case 5: case 6: case 7: case 8: break; case 255: id_length = 0; /* entirely useless */ break; default: break; } /* toss any non 6-byte sys-ID len PDUs */ if (id_length != 6 ) { ND_PRINT((ndo, "bad packet -- illegal sys-ID length (%u)", id_length)); return (0); } pdu_type=isis_header->pdu_type; /* in non-verbose mode print the basic PDU Type plus PDU specific brief information*/ if (ndo->ndo_vflag == 0) { ND_PRINT((ndo, "%s%s", ndo->ndo_eflag ? "" : ", ", tok2str(isis_pdu_values, "unknown PDU-Type %u", pdu_type))); } else { /* ok they seem to want to know everything - lets fully decode it */ ND_PRINT((ndo, "%slength %u", ndo->ndo_eflag ? 
"" : ", ", length)); ND_PRINT((ndo, "\n\t%s, hlen: %u, v: %u, pdu-v: %u, sys-id-len: %u (%u), max-area: %u (%u)", tok2str(isis_pdu_values, "unknown, type %u", pdu_type), isis_header->fixed_len, isis_header->version, isis_header->pdu_version, id_length, isis_header->id_length, max_area, isis_header->max_area)); if (ndo->ndo_vflag > 1) { if (!print_unknown_data(ndo, optr, "\n\t", 8)) /* provide the _o_riginal pointer */ return (0); /* for optionally debugging the common header */ } } switch (pdu_type) { case ISIS_PDU_L1_LAN_IIH: case ISIS_PDU_L2_LAN_IIH: if (isis_header->fixed_len != (ISIS_COMMON_HEADER_SIZE+ISIS_IIH_LAN_HEADER_SIZE)) { ND_PRINT((ndo, ", bogus fixed header length %u should be %lu", isis_header->fixed_len, (unsigned long)(ISIS_COMMON_HEADER_SIZE+ISIS_IIH_LAN_HEADER_SIZE))); return (0); } ND_TCHECK(*header_iih_lan); if (length < ISIS_COMMON_HEADER_SIZE+ISIS_IIH_LAN_HEADER_SIZE) goto trunc; if (ndo->ndo_vflag == 0) { ND_PRINT((ndo, ", src-id %s", isis_print_id(header_iih_lan->source_id, SYSTEM_ID_LEN))); ND_PRINT((ndo, ", lan-id %s, prio %u", isis_print_id(header_iih_lan->lan_id,NODE_ID_LEN), header_iih_lan->priority)); ND_PRINT((ndo, ", length %u", length)); return (1); } pdu_len=EXTRACT_16BITS(header_iih_lan->pdu_len); if (packet_len>pdu_len) { packet_len=pdu_len; /* do TLV decoding as long as it makes sense */ length=pdu_len; } ND_PRINT((ndo, "\n\t source-id: %s, holding time: %us, Flags: [%s]", isis_print_id(header_iih_lan->source_id,SYSTEM_ID_LEN), EXTRACT_16BITS(header_iih_lan->holding_time), tok2str(isis_iih_circuit_type_values, "unknown circuit type 0x%02x", header_iih_lan->circuit_type))); ND_PRINT((ndo, "\n\t lan-id: %s, Priority: %u, PDU length: %u", isis_print_id(header_iih_lan->lan_id, NODE_ID_LEN), (header_iih_lan->priority) & ISIS_LAN_PRIORITY_MASK, pdu_len)); if (ndo->ndo_vflag > 1) { if (!print_unknown_data(ndo, pptr, "\n\t ", ISIS_IIH_LAN_HEADER_SIZE)) return (0); } packet_len -= (ISIS_COMMON_HEADER_SIZE+ISIS_IIH_LAN_HEADER_SIZE); pptr = p + (ISIS_COMMON_HEADER_SIZE+ISIS_IIH_LAN_HEADER_SIZE); break; case ISIS_PDU_PTP_IIH: if (isis_header->fixed_len != (ISIS_COMMON_HEADER_SIZE+ISIS_IIH_PTP_HEADER_SIZE)) { ND_PRINT((ndo, ", bogus fixed header length %u should be %lu", isis_header->fixed_len, (unsigned long)(ISIS_COMMON_HEADER_SIZE+ISIS_IIH_PTP_HEADER_SIZE))); return (0); } ND_TCHECK(*header_iih_ptp); if (length < ISIS_COMMON_HEADER_SIZE+ISIS_IIH_PTP_HEADER_SIZE) goto trunc; if (ndo->ndo_vflag == 0) { ND_PRINT((ndo, ", src-id %s", isis_print_id(header_iih_ptp->source_id, SYSTEM_ID_LEN))); ND_PRINT((ndo, ", length %u", length)); return (1); } pdu_len=EXTRACT_16BITS(header_iih_ptp->pdu_len); if (packet_len>pdu_len) { packet_len=pdu_len; /* do TLV decoding as long as it makes sense */ length=pdu_len; } ND_PRINT((ndo, "\n\t source-id: %s, holding time: %us, Flags: [%s]", isis_print_id(header_iih_ptp->source_id,SYSTEM_ID_LEN), EXTRACT_16BITS(header_iih_ptp->holding_time), tok2str(isis_iih_circuit_type_values, "unknown circuit type 0x%02x", header_iih_ptp->circuit_type))); ND_PRINT((ndo, "\n\t circuit-id: 0x%02x, PDU length: %u", header_iih_ptp->circuit_id, pdu_len)); if (ndo->ndo_vflag > 1) { if (!print_unknown_data(ndo, pptr, "\n\t ", ISIS_IIH_PTP_HEADER_SIZE)) return (0); } packet_len -= (ISIS_COMMON_HEADER_SIZE+ISIS_IIH_PTP_HEADER_SIZE); pptr = p + (ISIS_COMMON_HEADER_SIZE+ISIS_IIH_PTP_HEADER_SIZE); break; case ISIS_PDU_L1_LSP: case ISIS_PDU_L2_LSP: if (isis_header->fixed_len != (ISIS_COMMON_HEADER_SIZE+ISIS_LSP_HEADER_SIZE)) { ND_PRINT((ndo, ", bogus fixed header 
length %u should be %lu", isis_header->fixed_len, (unsigned long)ISIS_LSP_HEADER_SIZE)); return (0); } ND_TCHECK(*header_lsp); if (length < ISIS_COMMON_HEADER_SIZE+ISIS_LSP_HEADER_SIZE) goto trunc; if (ndo->ndo_vflag == 0) { ND_PRINT((ndo, ", lsp-id %s, seq 0x%08x, lifetime %5us", isis_print_id(header_lsp->lsp_id, LSP_ID_LEN), EXTRACT_32BITS(header_lsp->sequence_number), EXTRACT_16BITS(header_lsp->remaining_lifetime))); ND_PRINT((ndo, ", length %u", length)); return (1); } pdu_len=EXTRACT_16BITS(header_lsp->pdu_len); if (packet_len>pdu_len) { packet_len=pdu_len; /* do TLV decoding as long as it makes sense */ length=pdu_len; } ND_PRINT((ndo, "\n\t lsp-id: %s, seq: 0x%08x, lifetime: %5us\n\t chksum: 0x%04x", isis_print_id(header_lsp->lsp_id, LSP_ID_LEN), EXTRACT_32BITS(header_lsp->sequence_number), EXTRACT_16BITS(header_lsp->remaining_lifetime), EXTRACT_16BITS(header_lsp->checksum))); osi_print_cksum(ndo, (const uint8_t *)header_lsp->lsp_id, EXTRACT_16BITS(header_lsp->checksum), 12, length-12); ND_PRINT((ndo, ", PDU length: %u, Flags: [ %s", pdu_len, ISIS_MASK_LSP_OL_BIT(header_lsp->typeblock) ? "Overload bit set, " : "")); if (ISIS_MASK_LSP_ATT_BITS(header_lsp->typeblock)) { ND_PRINT((ndo, "%s", ISIS_MASK_LSP_ATT_DEFAULT_BIT(header_lsp->typeblock) ? "default " : "")); ND_PRINT((ndo, "%s", ISIS_MASK_LSP_ATT_DELAY_BIT(header_lsp->typeblock) ? "delay " : "")); ND_PRINT((ndo, "%s", ISIS_MASK_LSP_ATT_EXPENSE_BIT(header_lsp->typeblock) ? "expense " : "")); ND_PRINT((ndo, "%s", ISIS_MASK_LSP_ATT_ERROR_BIT(header_lsp->typeblock) ? "error " : "")); ND_PRINT((ndo, "ATT bit set, ")); } ND_PRINT((ndo, "%s", ISIS_MASK_LSP_PARTITION_BIT(header_lsp->typeblock) ? "P bit set, " : "")); ND_PRINT((ndo, "%s ]", tok2str(isis_lsp_istype_values, "Unknown(0x%x)", ISIS_MASK_LSP_ISTYPE_BITS(header_lsp->typeblock)))); if (ndo->ndo_vflag > 1) { if (!print_unknown_data(ndo, pptr, "\n\t ", ISIS_LSP_HEADER_SIZE)) return (0); } packet_len -= (ISIS_COMMON_HEADER_SIZE+ISIS_LSP_HEADER_SIZE); pptr = p + (ISIS_COMMON_HEADER_SIZE+ISIS_LSP_HEADER_SIZE); break; case ISIS_PDU_L1_CSNP: case ISIS_PDU_L2_CSNP: if (isis_header->fixed_len != (ISIS_COMMON_HEADER_SIZE+ISIS_CSNP_HEADER_SIZE)) { ND_PRINT((ndo, ", bogus fixed header length %u should be %lu", isis_header->fixed_len, (unsigned long)(ISIS_COMMON_HEADER_SIZE+ISIS_CSNP_HEADER_SIZE))); return (0); } ND_TCHECK(*header_csnp); if (length < ISIS_COMMON_HEADER_SIZE+ISIS_CSNP_HEADER_SIZE) goto trunc; if (ndo->ndo_vflag == 0) { ND_PRINT((ndo, ", src-id %s", isis_print_id(header_csnp->source_id, NODE_ID_LEN))); ND_PRINT((ndo, ", length %u", length)); return (1); } pdu_len=EXTRACT_16BITS(header_csnp->pdu_len); if (packet_len>pdu_len) { packet_len=pdu_len; /* do TLV decoding as long as it makes sense */ length=pdu_len; } ND_PRINT((ndo, "\n\t source-id: %s, PDU length: %u", isis_print_id(header_csnp->source_id, NODE_ID_LEN), pdu_len)); ND_PRINT((ndo, "\n\t start lsp-id: %s", isis_print_id(header_csnp->start_lsp_id, LSP_ID_LEN))); ND_PRINT((ndo, "\n\t end lsp-id: %s", isis_print_id(header_csnp->end_lsp_id, LSP_ID_LEN))); if (ndo->ndo_vflag > 1) { if (!print_unknown_data(ndo, pptr, "\n\t ", ISIS_CSNP_HEADER_SIZE)) return (0); } packet_len -= (ISIS_COMMON_HEADER_SIZE+ISIS_CSNP_HEADER_SIZE); pptr = p + (ISIS_COMMON_HEADER_SIZE+ISIS_CSNP_HEADER_SIZE); break; case ISIS_PDU_L1_PSNP: case ISIS_PDU_L2_PSNP: if (isis_header->fixed_len != (ISIS_COMMON_HEADER_SIZE+ISIS_PSNP_HEADER_SIZE)) { ND_PRINT((ndo, "- bogus fixed header length %u should be %lu", isis_header->fixed_len, (unsigned 
long)(ISIS_COMMON_HEADER_SIZE+ISIS_PSNP_HEADER_SIZE))); return (0); } ND_TCHECK(*header_psnp); if (length < ISIS_COMMON_HEADER_SIZE+ISIS_PSNP_HEADER_SIZE) goto trunc; if (ndo->ndo_vflag == 0) { ND_PRINT((ndo, ", src-id %s", isis_print_id(header_psnp->source_id, NODE_ID_LEN))); ND_PRINT((ndo, ", length %u", length)); return (1); } pdu_len=EXTRACT_16BITS(header_psnp->pdu_len); if (packet_len>pdu_len) { packet_len=pdu_len; /* do TLV decoding as long as it makes sense */ length=pdu_len; } ND_PRINT((ndo, "\n\t source-id: %s, PDU length: %u", isis_print_id(header_psnp->source_id, NODE_ID_LEN), pdu_len)); if (ndo->ndo_vflag > 1) { if (!print_unknown_data(ndo, pptr, "\n\t ", ISIS_PSNP_HEADER_SIZE)) return (0); } packet_len -= (ISIS_COMMON_HEADER_SIZE+ISIS_PSNP_HEADER_SIZE); pptr = p + (ISIS_COMMON_HEADER_SIZE+ISIS_PSNP_HEADER_SIZE); break; default: if (ndo->ndo_vflag == 0) { ND_PRINT((ndo, ", length %u", length)); return (1); } (void)print_unknown_data(ndo, pptr, "\n\t ", length); return (0); } /* * Now print the TLV's. */ while (packet_len > 0) { ND_TCHECK2(*pptr, 2); if (packet_len < 2) goto trunc; tlv_type = *pptr++; tlv_len = *pptr++; tmp =tlv_len; /* copy temporary len & pointer to packet data */ tptr = pptr; packet_len -= 2; /* first lets see if we know the TLVs name*/ ND_PRINT((ndo, "\n\t %s TLV #%u, length: %u", tok2str(isis_tlv_values, "unknown", tlv_type), tlv_type, tlv_len)); if (tlv_len == 0) /* something is invalid */ continue; if (packet_len < tlv_len) goto trunc; /* now check if we have a decoder otherwise do a hexdump at the end*/ switch (tlv_type) { case ISIS_TLV_AREA_ADDR: ND_TCHECK2(*tptr, 1); alen = *tptr++; while (tmp && alen < tmp) { ND_PRINT((ndo, "\n\t Area address (length: %u): %s", alen, isonsap_string(ndo, tptr, alen))); tptr += alen; tmp -= alen + 1; if (tmp==0) /* if this is the last area address do not attemt a boundary check */ break; ND_TCHECK2(*tptr, 1); alen = *tptr++; } break; case ISIS_TLV_ISNEIGH: while (tmp >= ETHER_ADDR_LEN) { ND_TCHECK2(*tptr, ETHER_ADDR_LEN); ND_PRINT((ndo, "\n\t SNPA: %s", isis_print_id(tptr, ETHER_ADDR_LEN))); tmp -= ETHER_ADDR_LEN; tptr += ETHER_ADDR_LEN; } break; case ISIS_TLV_ISNEIGH_VARLEN: if (!ND_TTEST2(*tptr, 1) || tmp < 3) /* min. TLV length */ goto trunctlv; lan_alen = *tptr++; /* LAN address length */ if (lan_alen == 0) { ND_PRINT((ndo, "\n\t LAN address length 0 bytes (invalid)")); break; } tmp --; ND_PRINT((ndo, "\n\t LAN address length %u bytes ", lan_alen)); while (tmp >= lan_alen) { ND_TCHECK2(*tptr, lan_alen); ND_PRINT((ndo, "\n\t\tIS Neighbor: %s", isis_print_id(tptr, lan_alen))); tmp -= lan_alen; tptr +=lan_alen; } break; case ISIS_TLV_PADDING: break; case ISIS_TLV_MT_IS_REACH: mt_len = isis_print_mtid(ndo, tptr, "\n\t "); if (mt_len == 0) /* did something go wrong ? */ goto trunctlv; tptr+=mt_len; tmp-=mt_len; while (tmp >= 2+NODE_ID_LEN+3+1) { ext_is_len = isis_print_ext_is_reach(ndo, tptr, "\n\t ", tlv_type); if (ext_is_len == 0) /* did something go wrong ? */ goto trunctlv; tmp-=ext_is_len; tptr+=ext_is_len; } break; case ISIS_TLV_IS_ALIAS_ID: while (tmp >= NODE_ID_LEN+1) { /* is it worth attempting a decode ? */ ext_is_len = isis_print_ext_is_reach(ndo, tptr, "\n\t ", tlv_type); if (ext_is_len == 0) /* did something go wrong ? */ goto trunctlv; tmp-=ext_is_len; tptr+=ext_is_len; } break; case ISIS_TLV_EXT_IS_REACH: while (tmp >= NODE_ID_LEN+3+1) { /* is it worth attempting a decode ? */ ext_is_len = isis_print_ext_is_reach(ndo, tptr, "\n\t ", tlv_type); if (ext_is_len == 0) /* did something go wrong ? 
*/ goto trunctlv; tmp-=ext_is_len; tptr+=ext_is_len; } break; case ISIS_TLV_IS_REACH: ND_TCHECK2(*tptr,1); /* check if there is one byte left to read out the virtual flag */ ND_PRINT((ndo, "\n\t %s", tok2str(isis_is_reach_virtual_values, "bogus virtual flag 0x%02x", *tptr++))); tlv_is_reach = (const struct isis_tlv_is_reach *)tptr; while (tmp >= sizeof(struct isis_tlv_is_reach)) { ND_TCHECK(*tlv_is_reach); ND_PRINT((ndo, "\n\t IS Neighbor: %s", isis_print_id(tlv_is_reach->neighbor_nodeid, NODE_ID_LEN))); isis_print_metric_block(ndo, &tlv_is_reach->isis_metric_block); tmp -= sizeof(struct isis_tlv_is_reach); tlv_is_reach++; } break; case ISIS_TLV_ESNEIGH: tlv_es_reach = (const struct isis_tlv_es_reach *)tptr; while (tmp >= sizeof(struct isis_tlv_es_reach)) { ND_TCHECK(*tlv_es_reach); ND_PRINT((ndo, "\n\t ES Neighbor: %s", isis_print_id(tlv_es_reach->neighbor_sysid, SYSTEM_ID_LEN))); isis_print_metric_block(ndo, &tlv_es_reach->isis_metric_block); tmp -= sizeof(struct isis_tlv_es_reach); tlv_es_reach++; } break; /* those two TLVs share the same format */ case ISIS_TLV_INT_IP_REACH: case ISIS_TLV_EXT_IP_REACH: if (!isis_print_tlv_ip_reach(ndo, pptr, "\n\t ", tlv_len)) return (1); break; case ISIS_TLV_EXTD_IP_REACH: while (tmp>0) { ext_ip_len = isis_print_extd_ip_reach(ndo, tptr, "\n\t ", AF_INET); if (ext_ip_len == 0) /* did something go wrong ? */ goto trunctlv; tptr+=ext_ip_len; tmp-=ext_ip_len; } break; case ISIS_TLV_MT_IP_REACH: mt_len = isis_print_mtid(ndo, tptr, "\n\t "); if (mt_len == 0) { /* did something go wrong ? */ goto trunctlv; } tptr+=mt_len; tmp-=mt_len; while (tmp>0) { ext_ip_len = isis_print_extd_ip_reach(ndo, tptr, "\n\t ", AF_INET); if (ext_ip_len == 0) /* did something go wrong ? */ goto trunctlv; tptr+=ext_ip_len; tmp-=ext_ip_len; } break; case ISIS_TLV_IP6_REACH: while (tmp>0) { ext_ip_len = isis_print_extd_ip_reach(ndo, tptr, "\n\t ", AF_INET6); if (ext_ip_len == 0) /* did something go wrong ? */ goto trunctlv; tptr+=ext_ip_len; tmp-=ext_ip_len; } break; case ISIS_TLV_MT_IP6_REACH: mt_len = isis_print_mtid(ndo, tptr, "\n\t "); if (mt_len == 0) { /* did something go wrong ? */ goto trunctlv; } tptr+=mt_len; tmp-=mt_len; while (tmp>0) { ext_ip_len = isis_print_extd_ip_reach(ndo, tptr, "\n\t ", AF_INET6); if (ext_ip_len == 0) /* did something go wrong ? 
*/ goto trunctlv; tptr+=ext_ip_len; tmp-=ext_ip_len; } break; case ISIS_TLV_IP6ADDR: while (tmp>=sizeof(struct in6_addr)) { ND_TCHECK2(*tptr, sizeof(struct in6_addr)); ND_PRINT((ndo, "\n\t IPv6 interface address: %s", ip6addr_string(ndo, tptr))); tptr += sizeof(struct in6_addr); tmp -= sizeof(struct in6_addr); } break; case ISIS_TLV_AUTH: ND_TCHECK2(*tptr, 1); ND_PRINT((ndo, "\n\t %s: ", tok2str(isis_subtlv_auth_values, "unknown Authentication type 0x%02x", *tptr))); switch (*tptr) { case ISIS_SUBTLV_AUTH_SIMPLE: if (fn_printzp(ndo, tptr + 1, tlv_len - 1, ndo->ndo_snapend)) goto trunctlv; break; case ISIS_SUBTLV_AUTH_MD5: for(i=1;i<tlv_len;i++) { ND_TCHECK2(*(tptr + i), 1); ND_PRINT((ndo, "%02x", *(tptr + i))); } if (tlv_len != ISIS_SUBTLV_AUTH_MD5_LEN+1) ND_PRINT((ndo, ", (invalid subTLV) ")); sigcheck = signature_verify(ndo, optr, length, tptr + 1, isis_clear_checksum_lifetime, header_lsp); ND_PRINT((ndo, " (%s)", tok2str(signature_check_values, "Unknown", sigcheck))); break; case ISIS_SUBTLV_AUTH_GENERIC: ND_TCHECK2(*(tptr + 1), 2); key_id = EXTRACT_16BITS((tptr+1)); ND_PRINT((ndo, "%u, password: ", key_id)); for(i=1 + sizeof(uint16_t);i<tlv_len;i++) { ND_TCHECK2(*(tptr + i), 1); ND_PRINT((ndo, "%02x", *(tptr + i))); } break; case ISIS_SUBTLV_AUTH_PRIVATE: default: if (!print_unknown_data(ndo, tptr + 1, "\n\t\t ", tlv_len - 1)) return(0); break; } break; case ISIS_TLV_PTP_ADJ: tlv_ptp_adj = (const struct isis_tlv_ptp_adj *)tptr; if(tmp>=1) { ND_TCHECK2(*tptr, 1); ND_PRINT((ndo, "\n\t Adjacency State: %s (%u)", tok2str(isis_ptp_adjancey_values, "unknown", *tptr), *tptr)); tmp--; } if(tmp>sizeof(tlv_ptp_adj->extd_local_circuit_id)) { ND_TCHECK(tlv_ptp_adj->extd_local_circuit_id); ND_PRINT((ndo, "\n\t Extended Local circuit-ID: 0x%08x", EXTRACT_32BITS(tlv_ptp_adj->extd_local_circuit_id))); tmp-=sizeof(tlv_ptp_adj->extd_local_circuit_id); } if(tmp>=SYSTEM_ID_LEN) { ND_TCHECK2(tlv_ptp_adj->neighbor_sysid, SYSTEM_ID_LEN); ND_PRINT((ndo, "\n\t Neighbor System-ID: %s", isis_print_id(tlv_ptp_adj->neighbor_sysid, SYSTEM_ID_LEN))); tmp-=SYSTEM_ID_LEN; } if(tmp>=sizeof(tlv_ptp_adj->neighbor_extd_local_circuit_id)) { ND_TCHECK(tlv_ptp_adj->neighbor_extd_local_circuit_id); ND_PRINT((ndo, "\n\t Neighbor Extended Local circuit-ID: 0x%08x", EXTRACT_32BITS(tlv_ptp_adj->neighbor_extd_local_circuit_id))); } break; case ISIS_TLV_PROTOCOLS: ND_PRINT((ndo, "\n\t NLPID(s): ")); while (tmp>0) { ND_TCHECK2(*(tptr), 1); ND_PRINT((ndo, "%s (0x%02x)", tok2str(nlpid_values, "unknown", *tptr), *tptr)); if (tmp>1) /* further NPLIDs ? 
- put comma */ ND_PRINT((ndo, ", ")); tptr++; tmp--; } break; case ISIS_TLV_MT_PORT_CAP: { ND_TCHECK2(*(tptr), 2); ND_PRINT((ndo, "\n\t RES: %d, MTID(s): %d", (EXTRACT_16BITS (tptr) >> 12), (EXTRACT_16BITS (tptr) & 0x0fff))); tmp = tmp-2; tptr = tptr+2; if (tmp) isis_print_mt_port_cap_subtlv(ndo, tptr, tmp); break; } case ISIS_TLV_MT_CAPABILITY: ND_TCHECK2(*(tptr), 2); ND_PRINT((ndo, "\n\t O: %d, RES: %d, MTID(s): %d", (EXTRACT_16BITS(tptr) >> 15) & 0x01, (EXTRACT_16BITS(tptr) >> 12) & 0x07, EXTRACT_16BITS(tptr) & 0x0fff)); tmp = tmp-2; tptr = tptr+2; if (tmp) isis_print_mt_capability_subtlv(ndo, tptr, tmp); break; case ISIS_TLV_TE_ROUTER_ID: ND_TCHECK2(*pptr, sizeof(struct in_addr)); ND_PRINT((ndo, "\n\t Traffic Engineering Router ID: %s", ipaddr_string(ndo, pptr))); break; case ISIS_TLV_IPADDR: while (tmp>=sizeof(struct in_addr)) { ND_TCHECK2(*tptr, sizeof(struct in_addr)); ND_PRINT((ndo, "\n\t IPv4 interface address: %s", ipaddr_string(ndo, tptr))); tptr += sizeof(struct in_addr); tmp -= sizeof(struct in_addr); } break; case ISIS_TLV_HOSTNAME: ND_PRINT((ndo, "\n\t Hostname: ")); if (fn_printzp(ndo, tptr, tmp, ndo->ndo_snapend)) goto trunctlv; break; case ISIS_TLV_SHARED_RISK_GROUP: if (tmp < NODE_ID_LEN) break; ND_TCHECK2(*tptr, NODE_ID_LEN); ND_PRINT((ndo, "\n\t IS Neighbor: %s", isis_print_id(tptr, NODE_ID_LEN))); tptr+=(NODE_ID_LEN); tmp-=(NODE_ID_LEN); if (tmp < 1) break; ND_TCHECK2(*tptr, 1); ND_PRINT((ndo, ", Flags: [%s]", ISIS_MASK_TLV_SHARED_RISK_GROUP(*tptr++) ? "numbered" : "unnumbered")); tmp--; if (tmp < sizeof(struct in_addr)) break; ND_TCHECK2(*tptr, sizeof(struct in_addr)); ND_PRINT((ndo, "\n\t IPv4 interface address: %s", ipaddr_string(ndo, tptr))); tptr+=sizeof(struct in_addr); tmp-=sizeof(struct in_addr); if (tmp < sizeof(struct in_addr)) break; ND_TCHECK2(*tptr, sizeof(struct in_addr)); ND_PRINT((ndo, "\n\t IPv4 neighbor address: %s", ipaddr_string(ndo, tptr))); tptr+=sizeof(struct in_addr); tmp-=sizeof(struct in_addr); while (tmp>=4) { ND_TCHECK2(*tptr, 4); ND_PRINT((ndo, "\n\t Link-ID: 0x%08x", EXTRACT_32BITS(tptr))); tptr+=4; tmp-=4; } break; case ISIS_TLV_LSP: tlv_lsp = (const struct isis_tlv_lsp *)tptr; while(tmp>=sizeof(struct isis_tlv_lsp)) { ND_TCHECK((tlv_lsp->lsp_id)[LSP_ID_LEN-1]); ND_PRINT((ndo, "\n\t lsp-id: %s", isis_print_id(tlv_lsp->lsp_id, LSP_ID_LEN))); ND_TCHECK2(tlv_lsp->sequence_number, 4); ND_PRINT((ndo, ", seq: 0x%08x", EXTRACT_32BITS(tlv_lsp->sequence_number))); ND_TCHECK2(tlv_lsp->remaining_lifetime, 2); ND_PRINT((ndo, ", lifetime: %5ds", EXTRACT_16BITS(tlv_lsp->remaining_lifetime))); ND_TCHECK2(tlv_lsp->checksum, 2); ND_PRINT((ndo, ", chksum: 0x%04x", EXTRACT_16BITS(tlv_lsp->checksum))); tmp-=sizeof(struct isis_tlv_lsp); tlv_lsp++; } break; case ISIS_TLV_CHECKSUM: if (tmp < ISIS_TLV_CHECKSUM_MINLEN) break; ND_TCHECK2(*tptr, ISIS_TLV_CHECKSUM_MINLEN); ND_PRINT((ndo, "\n\t checksum: 0x%04x ", EXTRACT_16BITS(tptr))); /* do not attempt to verify the checksum if it is zero * most likely a HMAC-MD5 TLV is also present and * to avoid conflicts the checksum TLV is zeroed. 
* see rfc3358 for details */ osi_print_cksum(ndo, optr, EXTRACT_16BITS(tptr), tptr-optr, length); break; case ISIS_TLV_POI: if (tlv_len >= SYSTEM_ID_LEN + 1) { ND_TCHECK2(*tptr, SYSTEM_ID_LEN + 1); ND_PRINT((ndo, "\n\t Purge Originator System-ID: %s", isis_print_id(tptr + 1, SYSTEM_ID_LEN))); } if (tlv_len == 2 * SYSTEM_ID_LEN + 1) { ND_TCHECK2(*tptr, 2 * SYSTEM_ID_LEN + 1); ND_PRINT((ndo, "\n\t Received from System-ID: %s", isis_print_id(tptr + SYSTEM_ID_LEN + 1, SYSTEM_ID_LEN))); } break; case ISIS_TLV_MT_SUPPORTED: if (tmp < ISIS_TLV_MT_SUPPORTED_MINLEN) break; while (tmp>1) { /* length can only be a multiple of 2, otherwise there is something broken -> so decode down until length is 1 */ if (tmp!=1) { mt_len = isis_print_mtid(ndo, tptr, "\n\t "); if (mt_len == 0) /* did something go wrong ? */ goto trunctlv; tptr+=mt_len; tmp-=mt_len; } else { ND_PRINT((ndo, "\n\t invalid MT-ID")); break; } } break; case ISIS_TLV_RESTART_SIGNALING: /* first attempt to decode the flags */ if (tmp < ISIS_TLV_RESTART_SIGNALING_FLAGLEN) break; ND_TCHECK2(*tptr, ISIS_TLV_RESTART_SIGNALING_FLAGLEN); ND_PRINT((ndo, "\n\t Flags [%s]", bittok2str(isis_restart_flag_values, "none", *tptr))); tptr+=ISIS_TLV_RESTART_SIGNALING_FLAGLEN; tmp-=ISIS_TLV_RESTART_SIGNALING_FLAGLEN; /* is there anything other than the flags field? */ if (tmp == 0) break; if (tmp < ISIS_TLV_RESTART_SIGNALING_HOLDTIMELEN) break; ND_TCHECK2(*tptr, ISIS_TLV_RESTART_SIGNALING_HOLDTIMELEN); ND_PRINT((ndo, ", Remaining holding time %us", EXTRACT_16BITS(tptr))); tptr+=ISIS_TLV_RESTART_SIGNALING_HOLDTIMELEN; tmp-=ISIS_TLV_RESTART_SIGNALING_HOLDTIMELEN; /* is there an additional sysid field present ?*/ if (tmp == SYSTEM_ID_LEN) { ND_TCHECK2(*tptr, SYSTEM_ID_LEN); ND_PRINT((ndo, ", for %s", isis_print_id(tptr,SYSTEM_ID_LEN))); } break; case ISIS_TLV_IDRP_INFO: if (tmp < ISIS_TLV_IDRP_INFO_MINLEN) break; ND_TCHECK2(*tptr, ISIS_TLV_IDRP_INFO_MINLEN); ND_PRINT((ndo, "\n\t Inter-Domain Information Type: %s", tok2str(isis_subtlv_idrp_values, "Unknown (0x%02x)", *tptr))); switch (*tptr++) { case ISIS_SUBTLV_IDRP_ASN: ND_TCHECK2(*tptr, 2); /* fetch AS number */ ND_PRINT((ndo, "AS Number: %u", EXTRACT_16BITS(tptr))); break; case ISIS_SUBTLV_IDRP_LOCAL: case ISIS_SUBTLV_IDRP_RES: default: if (!print_unknown_data(ndo, tptr, "\n\t ", tlv_len - 1)) return(0); break; } break; case ISIS_TLV_LSP_BUFFERSIZE: if (tmp < ISIS_TLV_LSP_BUFFERSIZE_MINLEN) break; ND_TCHECK2(*tptr, ISIS_TLV_LSP_BUFFERSIZE_MINLEN); ND_PRINT((ndo, "\n\t LSP Buffersize: %u", EXTRACT_16BITS(tptr))); break; case ISIS_TLV_PART_DIS: while (tmp >= SYSTEM_ID_LEN) { ND_TCHECK2(*tptr, SYSTEM_ID_LEN); ND_PRINT((ndo, "\n\t %s", isis_print_id(tptr, SYSTEM_ID_LEN))); tptr+=SYSTEM_ID_LEN; tmp-=SYSTEM_ID_LEN; } break; case ISIS_TLV_PREFIX_NEIGH: if (tmp < sizeof(struct isis_metric_block)) break; ND_TCHECK2(*tptr, sizeof(struct isis_metric_block)); ND_PRINT((ndo, "\n\t Metric Block")); isis_print_metric_block(ndo, (const struct isis_metric_block *)tptr); tptr+=sizeof(struct isis_metric_block); tmp-=sizeof(struct isis_metric_block); while(tmp>0) { ND_TCHECK2(*tptr, 1); prefix_len=*tptr++; /* read out prefix length in semioctets*/ if (prefix_len < 2) { ND_PRINT((ndo, "\n\t\tAddress: prefix length %u < 2", prefix_len)); break; } tmp--; if (tmp < prefix_len/2) break; ND_TCHECK2(*tptr, prefix_len / 2); ND_PRINT((ndo, "\n\t\tAddress: %s/%u", isonsap_string(ndo, tptr, prefix_len / 2), prefix_len * 4)); tptr+=prefix_len/2; tmp-=prefix_len/2; } break; case ISIS_TLV_IIH_SEQNR: if (tmp < ISIS_TLV_IIH_SEQNR_MINLEN) 
break; ND_TCHECK2(*tptr, ISIS_TLV_IIH_SEQNR_MINLEN); /* check if four bytes are on the wire */ ND_PRINT((ndo, "\n\t Sequence number: %u", EXTRACT_32BITS(tptr))); break; case ISIS_TLV_VENDOR_PRIVATE: if (tmp < ISIS_TLV_VENDOR_PRIVATE_MINLEN) break; ND_TCHECK2(*tptr, ISIS_TLV_VENDOR_PRIVATE_MINLEN); /* check if enough byte for a full oui */ vendor_id = EXTRACT_24BITS(tptr); ND_PRINT((ndo, "\n\t Vendor: %s (%u)", tok2str(oui_values, "Unknown", vendor_id), vendor_id)); tptr+=3; tmp-=3; if (tmp > 0) /* hexdump the rest */ if (!print_unknown_data(ndo, tptr, "\n\t\t", tmp)) return(0); break; /* * FIXME those are the defined TLVs that lack a decoder * you are welcome to contribute code ;-) */ case ISIS_TLV_DECNET_PHASE4: case ISIS_TLV_LUCENT_PRIVATE: case ISIS_TLV_IPAUTH: case ISIS_TLV_NORTEL_PRIVATE1: case ISIS_TLV_NORTEL_PRIVATE2: default: if (ndo->ndo_vflag <= 1) { if (!print_unknown_data(ndo, pptr, "\n\t\t", tlv_len)) return(0); } break; } /* do we want to see an additionally hexdump ? */ if (ndo->ndo_vflag> 1) { if (!print_unknown_data(ndo, pptr, "\n\t ", tlv_len)) return(0); } pptr += tlv_len; packet_len -= tlv_len; } if (packet_len != 0) { ND_PRINT((ndo, "\n\t %u straggler bytes", packet_len)); } return (1); trunc: ND_PRINT((ndo, "%s", tstr)); return (1); trunctlv: ND_PRINT((ndo, "\n\t\t")); ND_PRINT((ndo, "%s", tstr)); return(1); }
297,281,792,744,887,980,000,000,000,000,000,000,000
print-isoclns.c
89,784,539,627,020,220,000,000,000,000,000,000,000
[ "CWE-125" ]
CVE-2017-12999
The IS-IS parser in tcpdump before 4.9.2 has a buffer over-read in print-isoclns.c:isis_print().
https://nvd.nist.gov/vuln/detail/CVE-2017-12999
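The CVE entry above reports a buffer over-read in isis_print(). The printer quoted in this record already guards most reads with ND_TCHECK2()-style length checks; the following standalone C sketch shows that same check-before-extract pattern in isolation (the helper and variable names are made up for illustration, this is not tcpdump code):

#include <stdint.h>
#include <stdio.h>

/* Refuse to extract a field unless the captured data still covers it, so a
 * truncated packet or a lying TLV length cannot push the read past the end
 * of the buffer. */
static int extract_be16(const uint8_t *p, const uint8_t *end, uint16_t *out)
{
    if (end - p < 2)                     /* not enough bytes left in the capture */
        return -1;
    *out = (uint16_t)((p[0] << 8) | p[1]);
    return 0;
}

int main(void)
{
    uint8_t capture[3] = { 0x12, 0x34, 0x56 };
    uint16_t v;

    if (extract_be16(capture, capture + sizeof(capture), &v) == 0)
        printf("value at offset 0: 0x%04x\n", v);
    if (extract_be16(capture + 2, capture + sizeof(capture), &v) != 0)
        printf("offset 2: truncated, read refused\n");
    return 0;
}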
9,432
tcpdump
34cec721d39c76be1e0a600829a7b17bdfb832b6
https://github.com/the-tcpdump-group/tcpdump
https://github.com/the-tcpdump-group/tcpdump/commit/34cec721d39c76be1e0a600829a7b17bdfb832b6
CVE-2017-12997/LLDP: Don't use an 8-bit loop counter. If you have a for (i = 0; i < N; i++) loop, you'd better make sure that i is big enough to hold N - not N-1, N. The TLV length here is 9 bits long, not 8 bits long, so an 8-bit loop counter will overflow and you can loop infinitely. This fixes an infinite loop discovered by Forcepoint's security researchers Otto Airamo & Antti Levomäki. Add tests using the capture files supplied by the reporter(s). Clean up the output a bit while we're at it.
1
lldp_private_8021_print(netdissect_options *ndo, const u_char *tptr, u_int tlv_len) { int subtype, hexdump = FALSE; u_int sublen; u_int tval; uint8_t i; if (tlv_len < 4) { return hexdump; } subtype = *(tptr+3); ND_PRINT((ndo, "\n\t %s Subtype (%u)", tok2str(lldp_8021_subtype_values, "unknown", subtype), subtype)); switch (subtype) { case LLDP_PRIVATE_8021_SUBTYPE_PORT_VLAN_ID: if (tlv_len < 6) { return hexdump; } ND_PRINT((ndo, "\n\t port vlan id (PVID): %u", EXTRACT_16BITS(tptr + 4))); break; case LLDP_PRIVATE_8021_SUBTYPE_PROTOCOL_VLAN_ID: if (tlv_len < 7) { return hexdump; } ND_PRINT((ndo, "\n\t port and protocol vlan id (PPVID): %u, flags [%s] (0x%02x)", EXTRACT_16BITS(tptr+5), bittok2str(lldp_8021_port_protocol_id_values, "none", *(tptr+4)), *(tptr + 4))); break; case LLDP_PRIVATE_8021_SUBTYPE_VLAN_NAME: if (tlv_len < 6) { return hexdump; } ND_PRINT((ndo, "\n\t vlan id (VID): %u", EXTRACT_16BITS(tptr + 4))); if (tlv_len < 7) { return hexdump; } sublen = *(tptr+6); if (tlv_len < 7+sublen) { return hexdump; } ND_PRINT((ndo, "\n\t vlan name: ")); safeputs(ndo, tptr + 7, sublen); break; case LLDP_PRIVATE_8021_SUBTYPE_PROTOCOL_IDENTITY: if (tlv_len < 5) { return hexdump; } sublen = *(tptr+4); if (tlv_len < 5+sublen) { return hexdump; } ND_PRINT((ndo, "\n\t protocol identity: ")); safeputs(ndo, tptr + 5, sublen); break; case LLDP_PRIVATE_8021_SUBTYPE_CONGESTION_NOTIFICATION: if(tlv_len<LLDP_PRIVATE_8021_SUBTYPE_CONGESTION_NOTIFICATION_LENGTH){ return hexdump; } tval=*(tptr+4); ND_PRINT((ndo, "\n\t Pre-Priority CNPV Indicator")); ND_PRINT((ndo, "\n\t Priority : 0 1 2 3 4 5 6 7")); ND_PRINT((ndo, "\n\t Value : ")); for(i=0;i<NO_OF_BITS;i++) ND_PRINT((ndo, "%-2d ", (tval >> i) & 0x01)); tval=*(tptr+5); ND_PRINT((ndo, "\n\t Pre-Priority Ready Indicator")); ND_PRINT((ndo, "\n\t Priority : 0 1 2 3 4 5 6 7")); ND_PRINT((ndo, "\n\t Value : ")); for(i=0;i<NO_OF_BITS;i++) ND_PRINT((ndo, "%-2d ", (tval >> i) & 0x01)); break; case LLDP_PRIVATE_8021_SUBTYPE_ETS_CONFIGURATION: if(tlv_len<LLDP_PRIVATE_8021_SUBTYPE_ETS_CONFIGURATION_LENGTH) { return hexdump; } tval=*(tptr+4); ND_PRINT((ndo, "\n\t Willing:%d, CBS:%d, RES:%d, Max TCs:%d", tval >> 7, (tval >> 6) & 0x02, (tval >> 3) & 0x07, tval & 0x07)); /*Print Priority Assignment Table*/ print_ets_priority_assignment_table(ndo, tptr + 5); /*Print TC Bandwidth Table*/ print_tc_bandwidth_table(ndo, tptr + 9); /* Print TSA Assignment Table */ print_tsa_assignment_table(ndo, tptr + 17); break; case LLDP_PRIVATE_8021_SUBTYPE_ETS_RECOMMENDATION: if(tlv_len<LLDP_PRIVATE_8021_SUBTYPE_ETS_RECOMMENDATION_LENGTH) { return hexdump; } ND_PRINT((ndo, "\n\t RES: %d", *(tptr + 4))); /*Print Priority Assignment Table */ print_ets_priority_assignment_table(ndo, tptr + 5); /*Print TC Bandwidth Table */ print_tc_bandwidth_table(ndo, tptr + 9); /* Print TSA Assignment Table */ print_tsa_assignment_table(ndo, tptr + 17); break; case LLDP_PRIVATE_8021_SUBTYPE_PFC_CONFIGURATION: if(tlv_len<LLDP_PRIVATE_8021_SUBTYPE_PFC_CONFIGURATION_LENGTH) { return hexdump; } tval=*(tptr+4); ND_PRINT((ndo, "\n\t Willing: %d, MBC: %d, RES: %d, PFC cap:%d ", tval >> 7, (tval >> 6) & 0x01, (tval >> 4) & 0x03, (tval & 0x0f))); ND_PRINT((ndo, "\n\t PFC Enable")); tval=*(tptr+5); ND_PRINT((ndo, "\n\t Priority : 0 1 2 3 4 5 6 7")); ND_PRINT((ndo, "\n\t Value : ")); for(i=0;i<NO_OF_BITS;i++) ND_PRINT((ndo, "%-2d ", (tval >> i) & 0x01)); break; case LLDP_PRIVATE_8021_SUBTYPE_APPLICATION_PRIORITY: if(tlv_len<LLDP_PRIVATE_8021_SUBTYPE_APPLICATION_PRIORITY_MIN_LENGTH) { return hexdump; } ND_PRINT((ndo, "\n\t 
RES: %d", *(tptr + 4))); if(tlv_len<=LLDP_PRIVATE_8021_SUBTYPE_APPLICATION_PRIORITY_MIN_LENGTH){ return hexdump; } /* Length of Application Priority Table */ sublen=tlv_len-5; if(sublen%3!=0){ return hexdump; } i=0; ND_PRINT((ndo, "\n\t Application Priority Table")); while(i<sublen) { tval=*(tptr+i+5); ND_PRINT((ndo, "\n\t Priority: %d, RES: %d, Sel: %d", tval >> 5, (tval >> 3) & 0x03, (tval & 0x07))); ND_PRINT((ndo, "Protocol ID: %d", EXTRACT_16BITS(tptr + i + 5))); i=i+3; } break; case LLDP_PRIVATE_8021_SUBTYPE_EVB: if(tlv_len<LLDP_PRIVATE_8021_SUBTYPE_EVB_LENGTH){ return hexdump; } ND_PRINT((ndo, "\n\t EVB Bridge Status")); tval=*(tptr+4); ND_PRINT((ndo, "\n\t RES: %d, BGID: %d, RRCAP: %d, RRCTR: %d", tval >> 3, (tval >> 2) & 0x01, (tval >> 1) & 0x01, tval & 0x01)); ND_PRINT((ndo, "\n\t EVB Station Status")); tval=*(tptr+5); ND_PRINT((ndo, "\n\t RES: %d, SGID: %d, RRREQ: %d,RRSTAT: %d", tval >> 4, (tval >> 3) & 0x01, (tval >> 2) & 0x01, tval & 0x03)); tval=*(tptr+6); ND_PRINT((ndo, "\n\t R: %d, RTE: %d, ",tval >> 5, tval & 0x1f)); tval=*(tptr+7); ND_PRINT((ndo, "EVB Mode: %s [%d]", tok2str(lldp_evb_mode_values, "unknown", tval >> 6), tval >> 6)); ND_PRINT((ndo, "\n\t ROL: %d, RWD: %d, ", (tval >> 5) & 0x01, tval & 0x1f)); tval=*(tptr+8); ND_PRINT((ndo, "RES: %d, ROL: %d, RKA: %d", tval >> 6, (tval >> 5) & 0x01, tval & 0x1f)); break; case LLDP_PRIVATE_8021_SUBTYPE_CDCP: if(tlv_len<LLDP_PRIVATE_8021_SUBTYPE_CDCP_MIN_LENGTH){ return hexdump; } tval=*(tptr+4); ND_PRINT((ndo, "\n\t Role: %d, RES: %d, Scomp: %d ", tval >> 7, (tval >> 4) & 0x07, (tval >> 3) & 0x01)); ND_PRINT((ndo, "ChnCap: %d", EXTRACT_16BITS(tptr + 6) & 0x0fff)); sublen=tlv_len-8; if(sublen%3!=0) { return hexdump; } i=0; while(i<sublen) { tval=EXTRACT_24BITS(tptr+i+8); ND_PRINT((ndo, "\n\t SCID: %d, SVID: %d", tval >> 12, tval & 0x000fff)); i=i+3; } break; default: hexdump = TRUE; break; } return hexdump; }
206,271,957,265,320,640,000,000,000,000,000,000,000
print-lldp.c
67,823,449,492,692,070,000,000,000,000,000,000,000
[ "CWE-835" ]
CVE-2017-12997
The LLDP parser in tcpdump before 4.9.2 could enter an infinite loop due to a bug in print-lldp.c:lldp_private_8021_print().
https://nvd.nist.gov/vuln/detail/CVE-2017-12997
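The commit message in this record blames the hang on a loop counter that is too narrow for the 9-bit TLV length. Below is a minimal, self-contained C demonstration of that failure mode (not tcpdump code; names and values are chosen only to make the wraparound visible):

#include <stdint.h>
#include <stdio.h>

/* With an 8-bit counter, i wraps from 255 back to 0, so `i < n` can never
 * become false once n exceeds 255; the loop only ends here because of the
 * artificial bail-out added for the demo. */
static unsigned count_iterations(unsigned n, int wide_counter)
{
    unsigned iterations = 0;

    if (wide_counter) {
        for (unsigned i = 0; i < n; i++)    /* counter as wide as n: terminates */
            iterations++;
    } else {
        for (uint8_t i = 0; i < n; i++) {   /* wraps at 256: runaway if n > 255 */
            iterations++;
            if (iterations > 10 * n)        /* demo-only escape hatch */
                break;
        }
    }
    return iterations;
}

int main(void)
{
    unsigned n = 300;                       /* a 9-bit length, like the TLV above */
    printf("wide counter:   %u iterations\n", count_iterations(n, 1));
    printf("narrow counter: %u iterations before the demo cut it off\n",
           count_iterations(n, 0));
    return 0;
}

Widening the counter, as the commit message describes, is the whole fix: the comparison against the 9-bit length becomes meaningful again.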
9,434
tcpdump
66df248b49095c261138b5a5e34d341a6bf9ac7f
https://github.com/the-tcpdump-group/tcpdump
https://github.com/the-tcpdump-group/tcpdump/commit/66df248b49095c261138b5a5e34d341a6bf9ac7f
CVE-2017-12985/IPv6: Check for print routines returning -1 when running past the end. rt6_print(), ah_print(), and esp_print() return -1 if they run up against the end of the packet while dissecting; if that happens, stop dissecting, don't try to fetch the next header value, because 1) *it* might be past the end of the packet and 2) we won't be using it in any case, as we'll be exiting the loop. Also, change mobility_print() to return -1 if it runs up against the end of the packet, and stop dissecting if it does so. This fixes a buffer over-read discovered by Brian 'geeknik' Carpenter. Add tests using the capture files supplied by the reporter(s).
1
ip6_print(netdissect_options *ndo, const u_char *bp, u_int length) { register const struct ip6_hdr *ip6; register int advance; u_int len; const u_char *ipend; register const u_char *cp; register u_int payload_len; int nh; int fragmented = 0; u_int flow; ip6 = (const struct ip6_hdr *)bp; ND_TCHECK(*ip6); if (length < sizeof (struct ip6_hdr)) { ND_PRINT((ndo, "truncated-ip6 %u", length)); return; } if (!ndo->ndo_eflag) ND_PRINT((ndo, "IP6 ")); if (IP6_VERSION(ip6) != 6) { ND_PRINT((ndo,"version error: %u != 6", IP6_VERSION(ip6))); return; } payload_len = EXTRACT_16BITS(&ip6->ip6_plen); len = payload_len + sizeof(struct ip6_hdr); if (length < len) ND_PRINT((ndo, "truncated-ip6 - %u bytes missing!", len - length)); if (ndo->ndo_vflag) { flow = EXTRACT_32BITS(&ip6->ip6_flow); ND_PRINT((ndo, "(")); #if 0 /* rfc1883 */ if (flow & 0x0f000000) ND_PRINT((ndo, "pri 0x%02x, ", (flow & 0x0f000000) >> 24)); if (flow & 0x00ffffff) ND_PRINT((ndo, "flowlabel 0x%06x, ", flow & 0x00ffffff)); #else /* RFC 2460 */ if (flow & 0x0ff00000) ND_PRINT((ndo, "class 0x%02x, ", (flow & 0x0ff00000) >> 20)); if (flow & 0x000fffff) ND_PRINT((ndo, "flowlabel 0x%05x, ", flow & 0x000fffff)); #endif ND_PRINT((ndo, "hlim %u, next-header %s (%u) payload length: %u) ", ip6->ip6_hlim, tok2str(ipproto_values,"unknown",ip6->ip6_nxt), ip6->ip6_nxt, payload_len)); } /* * Cut off the snapshot length to the end of the IP payload. */ ipend = bp + len; if (ipend < ndo->ndo_snapend) ndo->ndo_snapend = ipend; cp = (const u_char *)ip6; advance = sizeof(struct ip6_hdr); nh = ip6->ip6_nxt; while (cp < ndo->ndo_snapend && advance > 0) { cp += advance; len -= advance; if (cp == (const u_char *)(ip6 + 1) && nh != IPPROTO_TCP && nh != IPPROTO_UDP && nh != IPPROTO_DCCP && nh != IPPROTO_SCTP) { ND_PRINT((ndo, "%s > %s: ", ip6addr_string(ndo, &ip6->ip6_src), ip6addr_string(ndo, &ip6->ip6_dst))); } switch (nh) { case IPPROTO_HOPOPTS: advance = hbhopt_print(ndo, cp); if (advance < 0) return; nh = *cp; break; case IPPROTO_DSTOPTS: advance = dstopt_print(ndo, cp); if (advance < 0) return; nh = *cp; break; case IPPROTO_FRAGMENT: advance = frag6_print(ndo, cp, (const u_char *)ip6); if (advance < 0 || ndo->ndo_snapend <= cp + advance) return; nh = *cp; fragmented = 1; break; case IPPROTO_MOBILITY_OLD: case IPPROTO_MOBILITY: /* * XXX - we don't use "advance"; RFC 3775 says that * the next header field in a mobility header * should be IPPROTO_NONE, but speaks of * the possiblity of a future extension in * which payload can be piggybacked atop a * mobility header. */ advance = mobility_print(ndo, cp, (const u_char *)ip6); nh = *cp; return; case IPPROTO_ROUTING: advance = rt6_print(ndo, cp, (const u_char *)ip6); nh = *cp; break; case IPPROTO_SCTP: sctp_print(ndo, cp, (const u_char *)ip6, len); return; case IPPROTO_DCCP: dccp_print(ndo, cp, (const u_char *)ip6, len); return; case IPPROTO_TCP: tcp_print(ndo, cp, len, (const u_char *)ip6, fragmented); return; case IPPROTO_UDP: udp_print(ndo, cp, len, (const u_char *)ip6, fragmented); return; case IPPROTO_ICMPV6: icmp6_print(ndo, cp, len, (const u_char *)ip6, fragmented); return; case IPPROTO_AH: advance = ah_print(ndo, cp); nh = *cp; break; case IPPROTO_ESP: { int enh, padlen; advance = esp_print(ndo, cp, len, (const u_char *)ip6, &enh, &padlen); nh = enh & 0xff; len -= padlen; break; } case IPPROTO_IPCOMP: { ipcomp_print(ndo, cp); /* * Either this has decompressed the payload and * printed it, in which case there's nothing more * to do, or it hasn't, in which case there's * nothing more to do. 
*/ advance = -1; break; } case IPPROTO_PIM: pim_print(ndo, cp, len, (const u_char *)ip6); return; case IPPROTO_OSPF: ospf6_print(ndo, cp, len); return; case IPPROTO_IPV6: ip6_print(ndo, cp, len); return; case IPPROTO_IPV4: ip_print(ndo, cp, len); return; case IPPROTO_PGM: pgm_print(ndo, cp, len, (const u_char *)ip6); return; case IPPROTO_GRE: gre_print(ndo, cp, len); return; case IPPROTO_RSVP: rsvp_print(ndo, cp, len); return; case IPPROTO_NONE: ND_PRINT((ndo, "no next header")); return; default: ND_PRINT((ndo, "ip-proto-%d %d", nh, len)); return; } } return; trunc: ND_PRINT((ndo, "[|ip6]")); }
263,310,729,271,621,080,000,000,000,000,000,000
print-ip6.c
214,404,332,817,975,570,000,000,000,000,000,000,000
[ "CWE-125" ]
CVE-2017-12985
The IPv6 parser in tcpdump before 4.9.2 has a buffer over-read in print-ip6.c:ip6_print().
https://nvd.nist.gov/vuln/detail/CVE-2017-12985
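The commit message in this record is about sub-printers (rt6_print, ah_print, esp_print) that signal truncation by returning -1, and about stopping the header walk at that point instead of reading *cp for the next-header byte. A standalone sketch of that control-flow pattern, with made-up helper names (this is not the upstream patch):

#include <stdio.h>

/* Pretend extension-header parser: returns the number of bytes it consumed,
 * or -1 if the captured data ran out. */
static int parse_ext_header(const unsigned char *cp, const unsigned char *end)
{
    if (end - cp < 2)
        return -1;              /* truncated: the caller must stop here */
    return cp[1];               /* toy rule: byte 1 holds the header length */
}

int main(void)
{
    unsigned char pkt[] = { 43, 2, 6, 0 };          /* toy header chain */
    const unsigned char *cp = pkt, *end = pkt + sizeof(pkt);
    int advance;

    while (cp < end) {
        advance = parse_ext_header(cp, end);
        if (advance < 0) {                          /* never touch *cp after an error */
            puts("truncated header chain, dissection stopped");
            break;
        }
        cp += advance ? advance : 1;                /* avoid a zero-length loop */
    }
    return 0;
}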
9,438
ImageMagick
45aeda5da9eb328689afc221fa3b7dfa5cdea54d
https://github.com/ImageMagick/ImageMagick
https://github.com/ImageMagick/ImageMagick/commit/45aeda5da9eb328689afc221fa3b7dfa5cdea54d
None
1
static MagickBooleanType WriteINLINEImage(const ImageInfo *image_info, Image *image) { char *base64, message[MaxTextExtent]; const MagickInfo *magick_info; ExceptionInfo *exception; Image *write_image; ImageInfo *write_info; MagickBooleanType status; size_t blob_length, encode_length; unsigned char *blob; /* Convert image to base64-encoding. */ assert(image_info != (const ImageInfo *) NULL); assert(image_info->signature == MagickSignature); assert(image != (Image *) NULL); assert(image->signature == MagickSignature); if (image->debug != MagickFalse) (void) LogMagickEvent(TraceEvent,GetMagickModule(),"%s",image->filename); exception=(&image->exception); write_info=CloneImageInfo(image_info); (void) SetImageInfo(write_info,1,exception); if (LocaleCompare(write_info->magick,"INLINE") == 0) (void) CopyMagickString(write_info->magick,image->magick,MaxTextExtent); magick_info=GetMagickInfo(write_info->magick,exception); if ((magick_info == (const MagickInfo *) NULL) || (GetMagickMimeType(magick_info) == (const char *) NULL)) ThrowWriterException(CorruptImageError,"ImageTypeNotSupported"); (void) CopyMagickString(image->filename,write_info->filename,MaxTextExtent); blob_length=2048; write_image=CloneImage(image,0,0,MagickTrue,exception); if (write_image == (Image *) NULL) { write_info=DestroyImageInfo(write_info); return(MagickTrue); } blob=(unsigned char *) ImageToBlob(write_info,write_image,&blob_length, exception); write_image=DestroyImage(write_image); write_info=DestroyImageInfo(write_info); if (blob == (unsigned char *) NULL) return(MagickFalse); encode_length=0; base64=Base64Encode(blob,blob_length,&encode_length); blob=(unsigned char *) RelinquishMagickMemory(blob); if (base64 == (char *) NULL) ThrowWriterException(ResourceLimitError,"MemoryAllocationFailed"); /* Write base64-encoded image. */ status=OpenBlob(image_info,image,WriteBinaryBlobMode,exception); if (status == MagickFalse) { base64=DestroyString(base64); return(status); } (void) FormatLocaleString(message,MaxTextExtent,"data:%s;base64,", GetMagickMimeType(magick_info)); (void) WriteBlobString(image,message); (void) WriteBlobString(image,base64); base64=DestroyString(base64); return(MagickTrue); }
91,114,626,347,775,820,000,000,000,000,000,000,000
None
null
[ "CWE-772" ]
CVE-2017-12666
ImageMagick 7.0.6-2 has a memory leak vulnerability in WriteINLINEImage in coders/inline.c.
https://nvd.nist.gov/vuln/detail/CVE-2017-12666
9,439
ImageMagick
9f375e7080a2c1044cd546854d0548b4bfb429d0
https://github.com/ImageMagick/ImageMagick
https://github.com/ImageMagick/ImageMagick/commit/9f375e7080a2c1044cd546854d0548b4bfb429d0
None
1
static Image *ReadDCMImage(const ImageInfo *image_info,ExceptionInfo *exception) { #define ThrowDCMException(exception,message) \ { \ if (data != (unsigned char *) NULL) \ data=(unsigned char *) RelinquishMagickMemory(data); \ if (stream_info != (DCMStreamInfo *) NULL) \ stream_info=(DCMStreamInfo *) RelinquishMagickMemory(stream_info); \ ThrowReaderException((exception),(message)); \ } char explicit_vr[MaxTextExtent], implicit_vr[MaxTextExtent], magick[MaxTextExtent], photometric[MaxTextExtent]; DCMInfo info; DCMStreamInfo *stream_info; Image *image; int *bluemap, datum, *greenmap, *graymap, *redmap; MagickBooleanType explicit_file, explicit_retry, sequence, use_explicit; MagickOffsetType offset; register unsigned char *p; register ssize_t i; size_t colors, height, length, number_scenes, quantum, status, width; ssize_t count, scene; unsigned char *data; unsigned short group, element; /* Open image file. */ assert(image_info != (const ImageInfo *) NULL); assert(image_info->signature == MagickSignature); if (image_info->debug != MagickFalse) (void) LogMagickEvent(TraceEvent,GetMagickModule(),"%s", image_info->filename); assert(exception != (ExceptionInfo *) NULL); assert(exception->signature == MagickSignature); image=AcquireImage(image_info); status=OpenBlob(image_info,image,ReadBinaryBlobMode,exception); if (status == MagickFalse) { image=DestroyImageList(image); return((Image *) NULL); } image->depth=8UL; image->endian=LSBEndian; /* Read DCM preamble. */ data=(unsigned char *) NULL; stream_info=(DCMStreamInfo *) AcquireMagickMemory(sizeof(*stream_info)); if (stream_info == (DCMStreamInfo *) NULL) ThrowDCMException(ResourceLimitError,"MemoryAllocationFailed"); (void) ResetMagickMemory(stream_info,0,sizeof(*stream_info)); count=ReadBlob(image,128,(unsigned char *) magick); if (count != 128) ThrowDCMException(CorruptImageError,"ImproperImageHeader"); count=ReadBlob(image,4,(unsigned char *) magick); if ((count != 4) || (LocaleNCompare(magick,"DICM",4) != 0)) { offset=SeekBlob(image,0L,SEEK_SET); if (offset < 0) ThrowDCMException(CorruptImageError,"ImproperImageHeader"); } /* Read DCM Medical image. */ (void) CopyMagickString(photometric,"MONOCHROME1 ",MaxTextExtent); info.polarity=MagickFalse; info.scale=(Quantum *) NULL; info.bits_allocated=8; info.bytes_per_pixel=1; info.depth=8; info.mask=0xffff; info.max_value=255UL; info.samples_per_pixel=1; info.signed_data=(~0UL); info.significant_bits=0; info.rescale=MagickFalse; info.rescale_intercept=0.0; info.rescale_slope=1.0; info.window_center=0.0; info.window_width=0.0; data=(unsigned char *) NULL; element=0; explicit_vr[2]='\0'; explicit_file=MagickFalse; colors=0; redmap=(int *) NULL; greenmap=(int *) NULL; bluemap=(int *) NULL; graymap=(int *) NULL; height=0; number_scenes=1; sequence=MagickFalse; use_explicit=MagickFalse; explicit_retry = MagickFalse; width=0; for (group=0; (group != 0x7FE0) || (element != 0x0010) || (sequence != MagickFalse); ) { /* Read a group. */ image->offset=(ssize_t) TellBlob(image); group=ReadBlobLSBShort(image); element=ReadBlobLSBShort(image); if ((group != 0x0002) && (image->endian == MSBEndian)) { group=(unsigned short) ((group << 8) | ((group >> 8) & 0xFF)); element=(unsigned short) ((element << 8) | ((element >> 8) & 0xFF)); } quantum=0; /* Find corresponding VR for this group and element. 
*/ for (i=0; dicom_info[i].group < 0xffff; i++) if ((group == dicom_info[i].group) && (element == dicom_info[i].element)) break; (void) CopyMagickString(implicit_vr,dicom_info[i].vr,MaxTextExtent); count=ReadBlob(image,2,(unsigned char *) explicit_vr); if (count != 2) ThrowDCMException(CorruptImageError,"ImproperImageHeader"); /* Check for "explicitness", but meta-file headers always explicit. */ if ((explicit_file == MagickFalse) && (group != 0x0002)) explicit_file=(isupper((unsigned char) *explicit_vr) != MagickFalse) && (isupper((unsigned char) *(explicit_vr+1)) != MagickFalse) ? MagickTrue : MagickFalse; use_explicit=((group == 0x0002) && (explicit_retry == MagickFalse)) || (explicit_file != MagickFalse) ? MagickTrue : MagickFalse; if ((use_explicit != MagickFalse) && (strncmp(implicit_vr,"xs",2) == 0)) (void) CopyMagickString(implicit_vr,explicit_vr,MaxTextExtent); if ((use_explicit == MagickFalse) || (strncmp(implicit_vr,"!!",2) == 0)) { offset=SeekBlob(image,(MagickOffsetType) -2,SEEK_CUR); if (offset < 0) ThrowDCMException(CorruptImageError,"ImproperImageHeader"); quantum=4; } else { /* Assume explicit type. */ quantum=2; if ((strncmp(explicit_vr,"OB",2) == 0) || (strncmp(explicit_vr,"UN",2) == 0) || (strncmp(explicit_vr,"OW",2) == 0) || (strncmp(explicit_vr,"SQ",2) == 0)) { (void) ReadBlobLSBShort(image); quantum=4; } } datum=0; if (quantum == 4) { if (group == 0x0002) datum=ReadBlobLSBSignedLong(image); else datum=ReadBlobSignedLong(image); } else if (quantum == 2) { if (group == 0x0002) datum=ReadBlobLSBSignedShort(image); else datum=ReadBlobSignedShort(image); } quantum=0; length=1; if (datum != 0) { if ((strncmp(implicit_vr,"SS",2) == 0) || (strncmp(implicit_vr,"US",2) == 0)) quantum=2; else if ((strncmp(implicit_vr,"UL",2) == 0) || (strncmp(implicit_vr,"SL",2) == 0) || (strncmp(implicit_vr,"FL",2) == 0)) quantum=4; else if (strncmp(implicit_vr,"FD",2) != 0) quantum=1; else quantum=8; if (datum != ~0) length=(size_t) datum/quantum; else { /* Sequence and item of undefined length. */ quantum=0; length=0; } } if (image_info->verbose != MagickFalse) { /* Display Dicom info. */ if (use_explicit == MagickFalse) explicit_vr[0]='\0'; for (i=0; dicom_info[i].description != (char *) NULL; i++) if ((group == dicom_info[i].group) && (element == dicom_info[i].element)) break; (void) FormatLocaleFile(stdout,"0x%04lX %4ld %s-%s (0x%04lx,0x%04lx)", (unsigned long) image->offset,(long) length,implicit_vr,explicit_vr, (unsigned long) group,(unsigned long) element); if (dicom_info[i].description != (char *) NULL) (void) FormatLocaleFile(stdout," %s",dicom_info[i].description); (void) FormatLocaleFile(stdout,": "); } if ((sequence == MagickFalse) && (group == 0x7FE0) && (element == 0x0010)) { if (image_info->verbose != MagickFalse) (void) FormatLocaleFile(stdout,"\n"); break; } /* Allocate space and read an array. 
*/ data=(unsigned char *) NULL; if ((length == 1) && (quantum == 1)) datum=ReadBlobByte(image); else if ((length == 1) && (quantum == 2)) { if (group == 0x0002) datum=ReadBlobLSBSignedShort(image); else datum=ReadBlobSignedShort(image); } else if ((length == 1) && (quantum == 4)) { if (group == 0x0002) datum=ReadBlobLSBSignedLong(image); else datum=ReadBlobSignedLong(image); } else if ((quantum != 0) && (length != 0)) { if (length > GetBlobSize(image)) ThrowReaderException(CorruptImageError, "InsufficientImageDataInFile"); if (~length >= 1) data=(unsigned char *) AcquireQuantumMemory(length+1,quantum* sizeof(*data)); if (data == (unsigned char *) NULL) ThrowDCMException(ResourceLimitError,"MemoryAllocationFailed"); count=ReadBlob(image,(size_t) quantum*length,data); if (count != (ssize_t) (quantum*length)) { if (image_info->verbose != MagickFalse) (void) FormatLocaleFile(stdout,"count=%d quantum=%d " "length=%d group=%d\n",(int) count,(int) quantum,(int) length,(int) group); ThrowDCMException(CorruptImageError, "InsufficientImageDataInFile"); } data[length*quantum]='\0'; } else if ((unsigned int) datum == 0xFFFFFFFFU) { sequence=MagickTrue; continue; } if ((unsigned int) ((group << 16) | element) == 0xFFFEE0DD) { if (data != (unsigned char *) NULL) data=(unsigned char *) RelinquishMagickMemory(data); sequence=MagickFalse; continue; } if (sequence != MagickFalse) { if (data != (unsigned char *) NULL) data=(unsigned char *) RelinquishMagickMemory(data); continue; } switch (group) { case 0x0002: { switch (element) { case 0x0010: { char transfer_syntax[MaxTextExtent]; /* Transfer Syntax. */ if ((datum == 0) && (explicit_retry == MagickFalse)) { explicit_retry=MagickTrue; (void) SeekBlob(image,(MagickOffsetType) 0,SEEK_SET); group=0; element=0; if (image_info->verbose != MagickFalse) (void) FormatLocaleFile(stdout, "Corrupted image - trying explicit format\n"); break; } *transfer_syntax='\0'; if (data != (unsigned char *) NULL) (void) CopyMagickString(transfer_syntax,(char *) data, MaxTextExtent); if (image_info->verbose != MagickFalse) (void) FormatLocaleFile(stdout,"transfer_syntax=%s\n", (const char *) transfer_syntax); if (strncmp(transfer_syntax,"1.2.840.10008.1.2",17) == 0) { int count, subtype, type; type=1; subtype=0; if (strlen(transfer_syntax) > 17) { count=sscanf(transfer_syntax+17,".%d.%d",&type,&subtype); if (count < 1) ThrowDCMException(CorruptImageError, "ImproperImageHeader"); } switch (type) { case 1: { image->endian=LSBEndian; break; } case 2: { image->endian=MSBEndian; break; } case 4: { if ((subtype >= 80) && (subtype <= 81)) image->compression=JPEGCompression; else if ((subtype >= 90) && (subtype <= 93)) image->compression=JPEG2000Compression; else image->compression=JPEGCompression; break; } case 5: { image->compression=RLECompression; break; } } } break; } default: break; } break; } case 0x0028: { switch (element) { case 0x0002: { /* Samples per pixel. */ info.samples_per_pixel=(size_t) datum; break; } case 0x0004: { /* Photometric interpretation. */ if (data == (unsigned char *) NULL) break; for (i=0; i < (ssize_t) MagickMin(length,MaxTextExtent-1); i++) photometric[i]=(char) data[i]; photometric[i]='\0'; info.polarity=LocaleCompare(photometric,"MONOCHROME1 ") == 0 ? MagickTrue : MagickFalse; break; } case 0x0006: { /* Planar configuration. */ if (datum == 1) image->interlace=PlaneInterlace; break; } case 0x0008: { /* Number of frames. */ if (data == (unsigned char *) NULL) break; number_scenes=StringToUnsignedLong((char *) data); break; } case 0x0010: { /* Image rows. 
*/ height=(size_t) datum; break; } case 0x0011: { /* Image columns. */ width=(size_t) datum; break; } case 0x0100: { /* Bits allocated. */ info.bits_allocated=(size_t) datum; info.bytes_per_pixel=1; if (datum > 8) info.bytes_per_pixel=2; info.depth=info.bits_allocated; if (info.depth > 32) ThrowDCMException(CorruptImageError,"ImproperImageHeader"); info.max_value=(1UL << info.bits_allocated)-1; image->depth=info.depth; break; } case 0x0101: { /* Bits stored. */ info.significant_bits=(size_t) datum; info.bytes_per_pixel=1; if (info.significant_bits > 8) info.bytes_per_pixel=2; info.depth=info.significant_bits; if (info.depth > 32) ThrowDCMException(CorruptImageError,"ImproperImageHeader"); info.max_value=(1UL << info.significant_bits)-1; info.mask=(size_t) GetQuantumRange(info.significant_bits); image->depth=info.depth; break; } case 0x0102: { /* High bit. */ break; } case 0x0103: { /* Pixel representation. */ info.signed_data=(size_t) datum; break; } case 0x1050: { /* Visible pixel range: center. */ if (data != (unsigned char *) NULL) info.window_center=StringToDouble((char *) data, (char **) NULL); break; } case 0x1051: { /* Visible pixel range: width. */ if (data != (unsigned char *) NULL) info.window_width=StringToDouble((char *) data, (char **) NULL); break; } case 0x1052: { /* Rescale intercept */ if (data != (unsigned char *) NULL) info.rescale_intercept=StringToDouble((char *) data, (char **) NULL); break; } case 0x1053: { /* Rescale slope */ if (data != (unsigned char *) NULL) info.rescale_slope=StringToDouble((char *) data, (char **) NULL); break; } case 0x1200: case 0x3006: { /* Populate graymap. */ if (data == (unsigned char *) NULL) break; colors=(size_t) (length/info.bytes_per_pixel); datum=(int) colors; graymap=(int *) AcquireQuantumMemory((size_t) colors, sizeof(*graymap)); if (graymap == (int *) NULL) ThrowDCMException(ResourceLimitError,"MemoryAllocationFailed"); for (i=0; i < (ssize_t) colors; i++) if (info.bytes_per_pixel == 1) graymap[i]=(int) data[i]; else graymap[i]=(int) ((short *) data)[i]; break; } case 0x1201: { unsigned short index; /* Populate redmap. */ if (data == (unsigned char *) NULL) break; colors=(size_t) (length/2); datum=(int) colors; redmap=(int *) AcquireQuantumMemory((size_t) colors, sizeof(*redmap)); if (redmap == (int *) NULL) ThrowDCMException(ResourceLimitError,"MemoryAllocationFailed"); p=data; for (i=0; i < (ssize_t) colors; i++) { if (image->endian == MSBEndian) index=(unsigned short) ((*p << 8) | *(p+1)); else index=(unsigned short) (*p | (*(p+1) << 8)); redmap[i]=(int) index; p+=2; } break; } case 0x1202: { unsigned short index; /* Populate greenmap. */ if (data == (unsigned char *) NULL) break; colors=(size_t) (length/2); datum=(int) colors; greenmap=(int *) AcquireQuantumMemory((size_t) colors, sizeof(*greenmap)); if (greenmap == (int *) NULL) ThrowDCMException(ResourceLimitError,"MemoryAllocationFailed"); p=data; for (i=0; i < (ssize_t) colors; i++) { if (image->endian == MSBEndian) index=(unsigned short) ((*p << 8) | *(p+1)); else index=(unsigned short) (*p | (*(p+1) << 8)); greenmap[i]=(int) index; p+=2; } break; } case 0x1203: { unsigned short index; /* Populate bluemap. 
*/ if (data == (unsigned char *) NULL) break; colors=(size_t) (length/2); datum=(int) colors; bluemap=(int *) AcquireQuantumMemory((size_t) colors, sizeof(*bluemap)); if (bluemap == (int *) NULL) ThrowDCMException(ResourceLimitError,"MemoryAllocationFailed"); p=data; for (i=0; i < (ssize_t) colors; i++) { if (image->endian == MSBEndian) index=(unsigned short) ((*p << 8) | *(p+1)); else index=(unsigned short) (*p | (*(p+1) << 8)); bluemap[i]=(int) index; p+=2; } break; } default: break; } break; } case 0x2050: { switch (element) { case 0x0020: { if ((data != (unsigned char *) NULL) && (strncmp((char *) data,"INVERSE",7) == 0)) info.polarity=MagickTrue; break; } default: break; } break; } default: break; } if (data != (unsigned char *) NULL) { char *attribute; for (i=0; dicom_info[i].description != (char *) NULL; i++) if ((group == dicom_info[i].group) && (element == dicom_info[i].element)) break; if (dicom_info[i].description != (char *) NULL) { attribute=AcquireString("dcm:"); (void) ConcatenateString(&attribute,dicom_info[i].description); for (i=0; i < (ssize_t) MagickMax(length,4); i++) if (isprint((int) data[i]) == MagickFalse) break; if ((i == (ssize_t) length) || (length > 4)) { (void) SubstituteString(&attribute," ",""); (void) SetImageProperty(image,attribute,(char *) data); } attribute=DestroyString(attribute); } } if (image_info->verbose != MagickFalse) { if (data == (unsigned char *) NULL) (void) FormatLocaleFile(stdout,"%d\n",datum); else { /* Display group data. */ for (i=0; i < (ssize_t) MagickMax(length,4); i++) if (isprint((int) data[i]) == MagickFalse) break; if ((i != (ssize_t) length) && (length <= 4)) { ssize_t j; datum=0; for (j=(ssize_t) length-1; j >= 0; j--) datum=(256*datum+data[j]); (void) FormatLocaleFile(stdout,"%d",datum); } else for (i=0; i < (ssize_t) length; i++) if (isprint((int) data[i]) != MagickFalse) (void) FormatLocaleFile(stdout,"%c",data[i]); else (void) FormatLocaleFile(stdout,"%c",'.'); (void) FormatLocaleFile(stdout,"\n"); } } if (data != (unsigned char *) NULL) data=(unsigned char *) RelinquishMagickMemory(data); if (EOFBlob(image) != MagickFalse) { ThrowFileException(exception,CorruptImageError,"UnexpectedEndOfFile", image->filename); break; } } if ((width == 0) || (height == 0)) ThrowDCMException(CorruptImageError,"ImproperImageHeader"); image->columns=(size_t) width; image->rows=(size_t) height; if (info.signed_data == 0xffff) info.signed_data=(size_t) (info.significant_bits == 16 ? 1 : 0); if ((image->compression == JPEGCompression) || (image->compression == JPEG2000Compression)) { Image *images; ImageInfo *read_info; int c; size_t length; unsigned int tag; /* Read offset table. */ for (i=0; i < (ssize_t) stream_info->remaining; i++) (void) ReadBlobByte(image); tag=(ReadBlobLSBShort(image) << 16) | ReadBlobLSBShort(image); (void) tag; length=(size_t) ReadBlobLSBLong(image); stream_info->offset_count=length >> 2; if (stream_info->offset_count != 0) { MagickOffsetType offset; stream_info->offsets=(ssize_t *) AcquireQuantumMemory( stream_info->offset_count,sizeof(*stream_info->offsets)); if (stream_info->offsets == (ssize_t *) NULL) ThrowDCMException(ResourceLimitError,"MemoryAllocationFailed"); for (i=0; i < (ssize_t) stream_info->offset_count; i++) stream_info->offsets[i]=(ssize_t) ReadBlobLSBSignedLong(image); offset=TellBlob(image); for (i=0; i < (ssize_t) stream_info->offset_count; i++) stream_info->offsets[i]+=offset; } /* Handle non-native image formats. 
*/ read_info=CloneImageInfo(image_info); SetImageInfoBlob(read_info,(void *) NULL,0); images=NewImageList(); for (scene=0; scene < (ssize_t) number_scenes; scene++) { char filename[MaxTextExtent]; const char *property; FILE *file; Image *jpeg_image; int unique_file; unsigned int tag; tag=(ReadBlobLSBShort(image) << 16) | ReadBlobLSBShort(image); length=(size_t) ReadBlobLSBLong(image); if (tag == 0xFFFEE0DD) break; /* sequence delimiter tag */ if (tag != 0xFFFEE000) ThrowDCMException(CorruptImageError,"ImproperImageHeader"); file=(FILE *) NULL; unique_file=AcquireUniqueFileResource(filename); if (unique_file != -1) file=fdopen(unique_file,"wb"); if (file == (FILE *) NULL) { (void) RelinquishUniqueFileResource(filename); ThrowFileException(exception,FileOpenError, "UnableToCreateTemporaryFile",filename); break; } for ( ; length != 0; length--) { c=ReadBlobByte(image); if (c == EOF) { ThrowFileException(exception,CorruptImageError, "UnexpectedEndOfFile",image->filename); break; } (void) fputc(c,file); } (void) fclose(file); (void) FormatLocaleString(read_info->filename,MaxTextExtent,"jpeg:%s", filename); if (image->compression == JPEG2000Compression) (void) FormatLocaleString(read_info->filename,MaxTextExtent,"j2k:%s", filename); jpeg_image=ReadImage(read_info,exception); if (jpeg_image != (Image *) NULL) { ResetImagePropertyIterator(image); property=GetNextImageProperty(image); while (property != (const char *) NULL) { (void) SetImageProperty(jpeg_image,property, GetImageProperty(image,property)); property=GetNextImageProperty(image); } AppendImageToList(&images,jpeg_image); } (void) RelinquishUniqueFileResource(filename); } read_info=DestroyImageInfo(read_info); image=DestroyImage(image); return(GetFirstImageInList(images)); } if (info.depth != (1UL*MAGICKCORE_QUANTUM_DEPTH)) { QuantumAny range; size_t length; /* Compute pixel scaling table. */ length=(size_t) (GetQuantumRange(info.depth)+1); info.scale=(Quantum *) AcquireQuantumMemory(length,sizeof(*info.scale)); if (info.scale == (Quantum *) NULL) ThrowDCMException(ResourceLimitError,"MemoryAllocationFailed"); range=GetQuantumRange(info.depth); for (i=0; i <= (ssize_t) GetQuantumRange(info.depth); i++) info.scale[i]=ScaleAnyToQuantum((size_t) i,range); } if (image->compression == RLECompression) { size_t length; unsigned int tag; /* Read RLE offset table. 
*/ for (i=0; i < (ssize_t) stream_info->remaining; i++) (void) ReadBlobByte(image); tag=(ReadBlobLSBShort(image) << 16) | ReadBlobLSBShort(image); (void) tag; length=(size_t) ReadBlobLSBLong(image); stream_info->offset_count=length >> 2; if (stream_info->offset_count != 0) { MagickOffsetType offset; stream_info->offsets=(ssize_t *) AcquireQuantumMemory( stream_info->offset_count,sizeof(*stream_info->offsets)); if (stream_info->offsets == (ssize_t *) NULL) ThrowDCMException(ResourceLimitError,"MemoryAllocationFailed"); for (i=0; i < (ssize_t) stream_info->offset_count; i++) stream_info->offsets[i]=(ssize_t) ReadBlobLSBSignedLong(image); offset=TellBlob(image)+8; for (i=0; i < (ssize_t) stream_info->offset_count; i++) stream_info->offsets[i]+=offset; } } for (scene=0; scene < (ssize_t) number_scenes; scene++) { if (image_info->ping != MagickFalse) break; image->columns=(size_t) width; image->rows=(size_t) height; image->depth=info.depth; status=SetImageExtent(image,image->columns,image->rows); if (status == MagickFalse) { InheritException(exception,&image->exception); break; } image->colorspace=RGBColorspace; if ((image->colormap == (PixelPacket *) NULL) && (info.samples_per_pixel == 1)) { int index; size_t one; one=1; if (colors == 0) colors=one << info.depth; if (AcquireImageColormap(image,colors) == MagickFalse) ThrowDCMException(ResourceLimitError,"MemoryAllocationFailed"); if (redmap != (int *) NULL) for (i=0; i < (ssize_t) colors; i++) { index=redmap[i]; if ((info.scale != (Quantum *) NULL) && (index <= (int) info.max_value)) index=(int) info.scale[index]; image->colormap[i].red=(Quantum) index; } if (greenmap != (int *) NULL) for (i=0; i < (ssize_t) colors; i++) { index=greenmap[i]; if ((info.scale != (Quantum *) NULL) && (index <= (int) info.max_value)) index=(int) info.scale[index]; image->colormap[i].green=(Quantum) index; } if (bluemap != (int *) NULL) for (i=0; i < (ssize_t) colors; i++) { index=bluemap[i]; if ((info.scale != (Quantum *) NULL) && (index <= (int) info.max_value)) index=(int) info.scale[index]; image->colormap[i].blue=(Quantum) index; } if (graymap != (int *) NULL) for (i=0; i < (ssize_t) colors; i++) { index=graymap[i]; if ((info.scale != (Quantum *) NULL) && (index <= (int) info.max_value)) index=(int) info.scale[index]; image->colormap[i].red=(Quantum) index; image->colormap[i].green=(Quantum) index; image->colormap[i].blue=(Quantum) index; } } if (image->compression == RLECompression) { unsigned int tag; /* Read RLE segment table. */ for (i=0; i < (ssize_t) stream_info->remaining; i++) (void) ReadBlobByte(image); tag=(ReadBlobLSBShort(image) << 16) | ReadBlobLSBShort(image); stream_info->remaining=(size_t) ReadBlobLSBLong(image); if ((tag != 0xFFFEE000) || (stream_info->remaining <= 64) || (EOFBlob(image) != MagickFalse)) ThrowDCMException(CorruptImageError,"ImproperImageHeader"); stream_info->count=0; stream_info->segment_count=ReadBlobLSBLong(image); for (i=0; i < 15; i++) stream_info->segments[i]=(ssize_t) ReadBlobLSBSignedLong(image); stream_info->remaining-=64; if (stream_info->segment_count > 1) { info.bytes_per_pixel=1; info.depth=8; if (stream_info->offset_count > 0) (void) SeekBlob(image,stream_info->offsets[0]+ stream_info->segments[0],SEEK_SET); } } if ((info.samples_per_pixel > 1) && (image->interlace == PlaneInterlace)) { register ssize_t x; register PixelPacket *q; ssize_t y; /* Convert Planar RGB DCM Medical image to pixel packets. 
*/ for (i=0; i < (ssize_t) info.samples_per_pixel; i++) { for (y=0; y < (ssize_t) image->rows; y++) { q=GetAuthenticPixels(image,0,y,image->columns,1,exception); if (q == (PixelPacket *) NULL) break; for (x=0; x < (ssize_t) image->columns; x++) { switch ((int) i) { case 0: { SetPixelRed(q,ScaleCharToQuantum((unsigned char) ReadDCMByte(stream_info,image))); break; } case 1: { SetPixelGreen(q,ScaleCharToQuantum((unsigned char) ReadDCMByte(stream_info,image))); break; } case 2: { SetPixelBlue(q,ScaleCharToQuantum((unsigned char) ReadDCMByte(stream_info,image))); break; } case 3: { SetPixelAlpha(q,ScaleCharToQuantum((unsigned char) ReadDCMByte(stream_info,image))); break; } default: break; } q++; } if (SyncAuthenticPixels(image,exception) == MagickFalse) break; if (image->previous == (Image *) NULL) { status=SetImageProgress(image,LoadImageTag,(MagickOffsetType) y, image->rows); if (status == MagickFalse) break; } } } } else { const char *option; /* Convert DCM Medical image to pixel packets. */ option=GetImageOption(image_info,"dcm:display-range"); if (option != (const char *) NULL) { if (LocaleCompare(option,"reset") == 0) info.window_width=0; } option=GetImageOption(image_info,"dcm:window"); if (option != (char *) NULL) { GeometryInfo geometry_info; MagickStatusType flags; flags=ParseGeometry(option,&geometry_info); if (flags & RhoValue) info.window_center=geometry_info.rho; if (flags & SigmaValue) info.window_width=geometry_info.sigma; info.rescale=MagickTrue; } option=GetImageOption(image_info,"dcm:rescale"); if (option != (char *) NULL) info.rescale=IsStringTrue(option); if ((info.window_center != 0) && (info.window_width == 0)) info.window_width=info.window_center; status=ReadDCMPixels(image,&info,stream_info,MagickTrue,exception); if ((status != MagickFalse) && (stream_info->segment_count > 1)) { if (stream_info->offset_count > 0) (void) SeekBlob(image,stream_info->offsets[0]+ stream_info->segments[1],SEEK_SET); (void) ReadDCMPixels(image,&info,stream_info,MagickFalse,exception); } } if (SetImageGray(image,exception) != MagickFalse) (void) SetImageColorspace(image,GRAYColorspace); if (EOFBlob(image) != MagickFalse) { ThrowFileException(exception,CorruptImageError,"UnexpectedEndOfFile", image->filename); break; } /* Proceed to next image. */ if (image_info->number_scenes != 0) if (image->scene >= (image_info->scene+image_info->number_scenes-1)) break; if (scene < (ssize_t) (number_scenes-1)) { /* Allocate next image structure. */ AcquireNextImage(image_info,image); if (GetNextImageInList(image) == (Image *) NULL) { image=DestroyImageList(image); return((Image *) NULL); } image=SyncNextImageInList(image); status=SetImageProgress(image,LoadImagesTag,TellBlob(image), GetBlobSize(image)); if (status == MagickFalse) break; } } /* Free resources. */ if (stream_info->offsets != (ssize_t *) NULL) stream_info->offsets=(ssize_t *) RelinquishMagickMemory(stream_info->offsets); stream_info=(DCMStreamInfo *) RelinquishMagickMemory(stream_info); if (info.scale != (Quantum *) NULL) info.scale=(Quantum *) RelinquishMagickMemory(info.scale); if (graymap != (int *) NULL) graymap=(int *) RelinquishMagickMemory(graymap); if (bluemap != (int *) NULL) bluemap=(int *) RelinquishMagickMemory(bluemap); if (greenmap != (int *) NULL) greenmap=(int *) RelinquishMagickMemory(greenmap); if (redmap != (int *) NULL) redmap=(int *) RelinquishMagickMemory(redmap); (void) CloseBlob(image); return(GetFirstImageInList(image)); }
250,223,957,382,125,260,000,000,000,000,000,000,000
None
null
[ "CWE-772" ]
CVE-2017-12644
ImageMagick 7.0.6-1 has a memory leak vulnerability in ReadDCMImage in coders\dcm.c.
https://nvd.nist.gov/vuln/detail/CVE-2017-12644
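The CVE entry above is a memory leak, and the reader quoted in this record routes every error through a ThrowDCMException macro that releases data and stream_info before bailing out. The exact upstream change is not quoted here, so the sketch below only illustrates the general CWE-772 pattern: a shared error path has to free everything the function may be holding at that point (all names are invented for the example):

#include <stdlib.h>
#include <stdio.h>

/* A single error macro used from many places must release every buffer the
 * function can own; freeing a still-NULL pointer is harmless, which keeps the
 * macro usable both before and after the later allocations. */
#define THROW_ERROR()  do { free(data); free(colormap); return -1; } while (0)

static int read_image(int simulate_error)
{
    unsigned char *data = NULL;
    int *colormap = NULL;

    data = malloc(128);
    if (data == NULL)
        THROW_ERROR();
    colormap = malloc(256 * sizeof(*colormap));
    if (colormap == NULL)
        THROW_ERROR();
    if (simulate_error)
        THROW_ERROR();          /* if the macro forgot colormap, this path would leak it */
    free(colormap);
    free(data);
    return 0;
}

int main(void)
{
    printf("read_image -> %d\n", read_image(1));
    return 0;
}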
9,440
linux
9e3f7a29694049edd728e2400ab57ad7553e5aa9
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/9e3f7a29694049edd728e2400ab57ad7553e5aa9
arm64: KVM: pmu: Fix AArch32 cycle counter access We're missing the handling code for the cycle counter accessed from a 32bit guest, leading to unexpected results. Cc: stable@vger.kernel.org # 4.6+ Signed-off-by: Wei Huang <wei@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
1
static bool access_pmu_evcntr(struct kvm_vcpu *vcpu, struct sys_reg_params *p, const struct sys_reg_desc *r) { u64 idx; if (!kvm_arm_pmu_v3_ready(vcpu)) return trap_raz_wi(vcpu, p, r); if (r->CRn == 9 && r->CRm == 13) { if (r->Op2 == 2) { /* PMXEVCNTR_EL0 */ if (pmu_access_event_counter_el0_disabled(vcpu)) return false; idx = vcpu_sys_reg(vcpu, PMSELR_EL0) & ARMV8_PMU_COUNTER_MASK; } else if (r->Op2 == 0) { /* PMCCNTR_EL0 */ if (pmu_access_cycle_counter_el0_disabled(vcpu)) return false; idx = ARMV8_PMU_CYCLE_IDX; } else { BUG(); } } else if (r->CRn == 14 && (r->CRm & 12) == 8) { /* PMEVCNTRn_EL0 */ if (pmu_access_event_counter_el0_disabled(vcpu)) return false; idx = ((r->CRm & 3) << 3) | (r->Op2 & 7); } else { BUG(); } if (!pmu_counter_idx_valid(vcpu, idx)) return false; if (p->is_write) { if (pmu_access_el0_disabled(vcpu)) return false; kvm_pmu_set_counter_value(vcpu, idx, p->regval); } else { p->regval = kvm_pmu_get_counter_value(vcpu, idx); } return true; }
109,885,700,626,088,160,000,000,000,000,000,000,000
sys_regs.c
47,091,922,259,694,350,000,000,000,000,000,000,000
[ "CWE-617" ]
CVE-2017-12168
The access_pmu_evcntr function in arch/arm64/kvm/sys_regs.c in the Linux kernel before 4.8.11 allows privileged KVM guest OS users to cause a denial of service (assertion failure and host OS crash) by accessing the Performance Monitors Cycle Count Register (PMCCNTR).
https://nvd.nist.gov/vuln/detail/CVE-2017-12168
9,443
FFmpeg
ba4beaf6149f7241c8bd85fe853318c2f6837ad0
https://github.com/FFmpeg/FFmpeg
https://github.com/FFmpeg/FFmpeg/commit/ba4beaf6149f7241c8bd85fe853318c2f6837ad0
avcodec/apedec: Fix integer overflow Fixes: out of array access Fixes: PoC.ape and others Found-by: Bingchang, Liu@VARAS of IIE Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
1
static int ape_decode_frame(AVCodecContext *avctx, void *data, int *got_frame_ptr, AVPacket *avpkt) { AVFrame *frame = data; const uint8_t *buf = avpkt->data; APEContext *s = avctx->priv_data; uint8_t *sample8; int16_t *sample16; int32_t *sample24; int i, ch, ret; int blockstodecode; /* this should never be negative, but bad things will happen if it is, so check it just to make sure. */ av_assert0(s->samples >= 0); if(!s->samples){ uint32_t nblocks, offset; int buf_size; if (!avpkt->size) { *got_frame_ptr = 0; return 0; } if (avpkt->size < 8) { av_log(avctx, AV_LOG_ERROR, "Packet is too small\n"); return AVERROR_INVALIDDATA; } buf_size = avpkt->size & ~3; if (buf_size != avpkt->size) { av_log(avctx, AV_LOG_WARNING, "packet size is not a multiple of 4. " "extra bytes at the end will be skipped.\n"); } if (s->fileversion < 3950) // previous versions overread two bytes buf_size += 2; av_fast_padded_malloc(&s->data, &s->data_size, buf_size); if (!s->data) return AVERROR(ENOMEM); s->bdsp.bswap_buf((uint32_t *) s->data, (const uint32_t *) buf, buf_size >> 2); memset(s->data + (buf_size & ~3), 0, buf_size & 3); s->ptr = s->data; s->data_end = s->data + buf_size; nblocks = bytestream_get_be32(&s->ptr); offset = bytestream_get_be32(&s->ptr); if (s->fileversion >= 3900) { if (offset > 3) { av_log(avctx, AV_LOG_ERROR, "Incorrect offset passed\n"); s->data = NULL; return AVERROR_INVALIDDATA; } if (s->data_end - s->ptr < offset) { av_log(avctx, AV_LOG_ERROR, "Packet is too small\n"); return AVERROR_INVALIDDATA; } s->ptr += offset; } else { if ((ret = init_get_bits8(&s->gb, s->ptr, s->data_end - s->ptr)) < 0) return ret; if (s->fileversion > 3800) skip_bits_long(&s->gb, offset * 8); else skip_bits_long(&s->gb, offset); } if (!nblocks || nblocks > INT_MAX) { av_log(avctx, AV_LOG_ERROR, "Invalid sample count: %"PRIu32".\n", nblocks); return AVERROR_INVALIDDATA; } /* Initialize the frame decoder */ if (init_frame_decoder(s) < 0) { av_log(avctx, AV_LOG_ERROR, "Error reading frame header\n"); return AVERROR_INVALIDDATA; } s->samples = nblocks; } if (!s->data) { *got_frame_ptr = 0; return avpkt->size; } blockstodecode = FFMIN(s->blocks_per_loop, s->samples); if (s->fileversion < 3930) blockstodecode = s->samples; /* reallocate decoded sample buffer if needed */ av_fast_malloc(&s->decoded_buffer, &s->decoded_size, 2 * FFALIGN(blockstodecode, 8) * sizeof(*s->decoded_buffer)); if (!s->decoded_buffer) return AVERROR(ENOMEM); memset(s->decoded_buffer, 0, s->decoded_size); s->decoded[0] = s->decoded_buffer; s->decoded[1] = s->decoded_buffer + FFALIGN(blockstodecode, 8); /* get output buffer */ frame->nb_samples = blockstodecode; if ((ret = ff_get_buffer(avctx, frame, 0)) < 0) return ret; s->error=0; if ((s->channels == 1) || (s->frameflags & APE_FRAMECODE_PSEUDO_STEREO)) ape_unpack_mono(s, blockstodecode); else ape_unpack_stereo(s, blockstodecode); emms_c(); if (s->error) { s->samples=0; av_log(avctx, AV_LOG_ERROR, "Error decoding frame\n"); return AVERROR_INVALIDDATA; } switch (s->bps) { case 8: for (ch = 0; ch < s->channels; ch++) { sample8 = (uint8_t *)frame->data[ch]; for (i = 0; i < blockstodecode; i++) *sample8++ = (s->decoded[ch][i] + 0x80) & 0xff; } break; case 16: for (ch = 0; ch < s->channels; ch++) { sample16 = (int16_t *)frame->data[ch]; for (i = 0; i < blockstodecode; i++) *sample16++ = s->decoded[ch][i]; } break; case 24: for (ch = 0; ch < s->channels; ch++) { sample24 = (int32_t *)frame->data[ch]; for (i = 0; i < blockstodecode; i++) *sample24++ = s->decoded[ch][i] << 8; } break; } s->samples -= 
blockstodecode; *got_frame_ptr = 1; return !s->samples ? avpkt->size : 0; }
225,537,218,437,466,820,000,000,000,000,000,000,000
None
null
[ "CWE-125" ]
CVE-2017-11399
Integer overflow in the ape_decode_frame function in libavcodec/apedec.c in FFmpeg 2.4 through 3.3.2 allows remote attackers to cause a denial of service (out-of-array access and application crash) or possibly have unspecified other impact via a crafted APE file.
https://nvd.nist.gov/vuln/detail/CVE-2017-11399
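The commit message in this record ties the out-of-array access to an integer overflow. In the decoder above, the sample count is later aligned and multiplied into an allocation size, so a value that merely passes a "> INT_MAX" test can still wrap the size computation. The standalone sketch below reproduces that arithmetic in isolation; the rejection bound at the end is an assumption for illustration, not necessarily the limit the upstream patch chose:

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

#define ALIGN8(x) (((x) + 7u) & ~7u)

int main(void)
{
    uint32_t nblocks = 0x20000000;               /* well below INT_MAX, so the old check passes */
    uint32_t bytes = 2u * ALIGN8(nblocks) * 4u;  /* 2 planes, 4 bytes per sample: wraps */

    printf("requested samples:    %" PRIu32 "\n", nblocks);
    printf("computed buffer size: %" PRIu32 " bytes (wrapped to almost nothing)\n", bytes);

    /* safer: reject before multiplying, leaving headroom for the alignment */
    if (nblocks > UINT32_MAX / 4u / 2u - 8u)
        puts("rejected: sample count would overflow the size computation");
    return 0;
}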
9,448
FFmpeg
31c1c0b46a7021802c3d1d18039fca30dba5a14e
https://github.com/FFmpeg/FFmpeg
https://github.com/FFmpeg/FFmpeg/commit/31c1c0b46a7021802c3d1d18039fca30dba5a14e
avcodec/dnxhd_parser: Do not return invalid value from dnxhd_find_frame_end() on error Fixes: Null pointer dereference Fixes: CVE-2017-9608 Found-by: Yihan Lian Signed-off-by: Michael Niedermayer <michael@niedermayer.cc> (cherry picked from commit 611b35627488a8d0763e75c25ee0875c5b7987dd) Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
1
static int dnxhd_find_frame_end(DNXHDParserContext *dctx, const uint8_t *buf, int buf_size) { ParseContext *pc = &dctx->pc; uint64_t state = pc->state64; int pic_found = pc->frame_start_found; int i = 0; int interlaced = dctx->interlaced; int cur_field = dctx->cur_field; if (!pic_found) { for (i = 0; i < buf_size; i++) { state = (state << 8) | buf[i]; if (ff_dnxhd_check_header_prefix(state & 0xffffffffff00LL) != 0) { i++; pic_found = 1; interlaced = (state&2)>>1; /* byte following the 5-byte header prefix */ cur_field = state&1; dctx->cur_byte = 0; dctx->remaining = 0; break; } } } if (pic_found && !dctx->remaining) { if (!buf_size) /* EOF considered as end of frame */ return 0; for (; i < buf_size; i++) { dctx->cur_byte++; state = (state << 8) | buf[i]; if (dctx->cur_byte == 24) { dctx->h = (state >> 32) & 0xFFFF; } else if (dctx->cur_byte == 26) { dctx->w = (state >> 32) & 0xFFFF; } else if (dctx->cur_byte == 42) { int cid = (state >> 32) & 0xFFFFFFFF; if (cid <= 0) continue; dctx->remaining = avpriv_dnxhd_get_frame_size(cid); if (dctx->remaining <= 0) { dctx->remaining = dnxhd_get_hr_frame_size(cid, dctx->w, dctx->h); if (dctx->remaining <= 0) return dctx->remaining; } if (buf_size - i >= dctx->remaining && (!dctx->interlaced || dctx->cur_field)) { int remaining = dctx->remaining; pc->frame_start_found = 0; pc->state64 = -1; dctx->interlaced = interlaced; dctx->cur_field = 0; dctx->cur_byte = 0; dctx->remaining = 0; return remaining; } else { dctx->remaining -= buf_size; } } } } else if (pic_found) { if (dctx->remaining > buf_size) { dctx->remaining -= buf_size; } else { int remaining = dctx->remaining; pc->frame_start_found = 0; pc->state64 = -1; dctx->interlaced = interlaced; dctx->cur_field = 0; dctx->cur_byte = 0; dctx->remaining = 0; return remaining; } } pc->frame_start_found = pic_found; pc->state64 = state; dctx->interlaced = interlaced; dctx->cur_field = cur_field; return END_NOT_FOUND; }
28,683,453,471,711,494,000,000,000,000,000,000,000
dnxhd_parser.c
78,021,258,009,879,840,000,000,000,000,000,000,000
[ "CWE-476" ]
CVE-2017-9608
The dnxhd decoder in FFmpeg before 3.2.6, and 3.3.x before 3.3.3 allows remote attackers to cause a denial of service (NULL pointer dereference) via a crafted mov file.
https://nvd.nist.gov/vuln/detail/CVE-2017-9608
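The commit message in this record says the parser must not hand an invalid value back from dnxhd_find_frame_end() on error; in the function above, `return dctx->remaining;` can forward a non-positive frame size that the caller then treats as meaningful. The toy sketch below only shows the underlying rule, funnel internal errors into the one sentinel the caller actually understands, and is not the upstream patch:

#include <stdio.h>

#define END_NOT_FOUND (-1)   /* the only "no result" value the caller checks for */

/* Collapse any failed size lookup into END_NOT_FOUND instead of leaking an
 * arbitrary error code that downstream code may misuse as an offset. */
static int find_frame_end(int looked_up_size)
{
    if (looked_up_size <= 0)
        return END_NOT_FOUND;
    return looked_up_size;
}

int main(void)
{
    printf("%d\n", find_frame_end(4096));   /* a valid size passes through */
    printf("%d\n", find_frame_end(-22));    /* an error collapses to END_NOT_FOUND */
    return 0;
}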
9,449
yara
925bcf3c3b0a28b5b78e25d9efda5c0bf27ae699
https://github.com/VirusTotal/yara
https://github.com/VirusTotal/yara/commit/925bcf3c3b0a28b5b78e25d9efda5c0bf27ae699
Fix issue #674. Move regexp limits to limits.h.
1
int yr_re_ast_create( RE_AST** re_ast) { *re_ast = (RE_AST*) yr_malloc(sizeof(RE_AST)); if (*re_ast == NULL) return ERROR_INSUFFICIENT_MEMORY; (*re_ast)->flags = 0; (*re_ast)->root_node = NULL; return ERROR_SUCCESS; }
28,548,031,463,136,045,000,000,000,000,000,000,000
re.c
312,264,943,192,667,120,000,000,000,000,000,000,000
[ "CWE-674" ]
CVE-2017-9304
libyara/re.c in the regexp module in YARA 3.5.0 allows remote attackers to cause a denial of service (stack consumption) via a crafted rule that is mishandled in the _yr_re_emit function.
https://nvd.nist.gov/vuln/detail/CVE-2017-9304
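The CVE entry above describes stack exhaustion from a deeply nested rule handled recursively in _yr_re_emit, and the commit message mentions moving the regexp limits into limits.h. The function quoted in this record (yr_re_ast_create) does not show those limits, so the sketch below is only a generic illustration of the defense: cap the recursion depth when walking a user-controlled AST (the constant and names are assumptions, not YARA's actual values):

#include <stdio.h>
#include <stdlib.h>

#define MAX_AST_LEVELS 300    /* assumed cap; the real limit lives in limits.h */

struct node { struct node *left, *right; };

/* Stop recursing once the depth cap is reached, so a pathological pattern is
 * reported as "too complex" instead of blowing the thread's stack. */
static int emit(const struct node *n, int depth)
{
    if (n == NULL)
        return 0;
    if (depth > MAX_AST_LEVELS)
        return -1;
    if (emit(n->left, depth + 1) != 0)
        return -1;
    return emit(n->right, depth + 1);
}

int main(void)
{
    struct node *head = NULL;

    /* build a left-leaning chain far deeper than the cap */
    for (int i = 0; i < 1000; i++) {
        struct node *n = calloc(1, sizeof(*n));
        if (n == NULL)
            return 1;
        n->left = head;
        head = n;
    }
    printf("emit() => %d (rejected instead of overflowing the stack)\n",
           emit(head, 0));
    return 0;
}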
9,459
linux
c4baad50297d84bde1a7ad45e50c73adae4a2192
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/c4baad50297d84bde1a7ad45e50c73adae4a2192
virtio-console: avoid DMA from stack put_chars() stuffs the buffer it gets into an sg, but that buffer may be on the stack. This breaks with CONFIG_VMAP_STACK=y (for me, it manifested as printks getting turned into NUL bytes). Signed-off-by: Omar Sandoval <osandov@fb.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Amit Shah <amit.shah@redhat.com>
1
static int put_chars(u32 vtermno, const char *buf, int count) { struct port *port; struct scatterlist sg[1]; if (unlikely(early_put_chars)) return early_put_chars(vtermno, buf, count); port = find_port_by_vtermno(vtermno); if (!port) return -EPIPE; sg_init_one(sg, buf, count); return __send_to_port(port, sg, 1, count, (void *)buf, false); }
208,943,279,536,157,350,000,000,000,000,000,000,000
virtio_console.c
9,053,507,988,298,926,000,000,000,000,000,000,000
[ "CWE-119" ]
CVE-2017-8067
drivers/char/virtio_console.c in the Linux kernel 4.9.x and 4.10.x before 4.10.12 interacts incorrectly with the CONFIG_VMAP_STACK option, which allows local users to cause a denial of service (system crash or memory corruption) or possibly have unspecified other impact by leveraging use of more than one virtual page for a DMA scatterlist.
https://nvd.nist.gov/vuln/detail/CVE-2017-8067
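The commit message in this record explains that put_chars() may receive a stack buffer, which cannot be handed to the DMA engine once the kernel stack is virtually mapped (CONFIG_VMAP_STACK). The commit title implies copying the data out of the caller's frame first; the userspace analogy below shows only that idea, copy to heap storage that outlives the producer before queueing it for an asynchronous consumer, and is not the driver patch (all names are invented):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct pending { void *data; size_t len; };

static struct pending queue;          /* stands in for the virtqueue / DMA engine */

/* Copy first, queue second: the consumer may touch the buffer long after the
 * producer's stack frame is gone. */
static int put_chars(const char *buf, size_t count)
{
    void *copy = malloc(count);
    if (copy == NULL)
        return -1;
    memcpy(copy, buf, count);
    queue.data = copy;
    queue.len = count;
    return 0;
}

static void produce_line(void)
{
    char line[] = "hello from a stack frame";    /* dies when this function returns */
    (void) put_chars(line, sizeof(line));
}

int main(void)
{
    produce_line();
    if (queue.data != NULL)
        printf("consumed later: %s\n", (const char *) queue.data);
    free(queue.data);
    return 0;
}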
9,465
php-src
bab0b99f376dac9170ac81382a5ed526938d595a
https://github.com/php/php-src
https://github.com/php/php-src/commit/bab0b99f376dac9170ac81382a5ed526938d595a
Detect invalid port in xp_socket parse ip address For historical reasons, fsockopen() accepts the port and hostname separately: fsockopen('127.0.0.1', 80) However, with the introdcution of stream transports in PHP 4.3, it became possible to include the port in the hostname specifier: fsockopen('127.0.0.1:80') Or more formally: fsockopen('tcp://127.0.0.1:80') Confusing results when these two forms are combined, however. fsockopen('127.0.0.1:80', 443) results in fsockopen() attempting to connect to '127.0.0.1:80:443' which any reasonable stack would consider invalid. Unfortunately, PHP parses the address looking for the first colon (with special handling for IPv6, don't worry) and calls atoi() from there. atoi() in turn, simply stops parsing at the first non-numeric character and returns the value so far. The end result is that the explicitly supplied port is treated as ignored garbage, rather than producing an error. This diff replaces atoi() with strtol() and inspects the stop character. If additional "garbage" of any kind is found, it fails and returns an error.
1
static inline char *parse_ip_address_ex(const char *str, size_t str_len, int *portno, int get_err, zend_string **err) { char *colon; char *host = NULL; #ifdef HAVE_IPV6 char *p; if (*(str) == '[' && str_len > 1) { /* IPV6 notation to specify raw address with port (i.e. [fe80::1]:80) */ p = memchr(str + 1, ']', str_len - 2); if (!p || *(p + 1) != ':') { if (get_err) { *err = strpprintf(0, "Failed to parse IPv6 address \"%s\"", str); } return NULL; } *portno = atoi(p + 2); return estrndup(str + 1, p - str - 1); } #endif if (str_len) { colon = memchr(str, ':', str_len - 1); } else { colon = NULL; } if (colon) { *portno = atoi(colon + 1); host = estrndup(str, colon - str); } else { if (get_err) { *err = strpprintf(0, "Failed to parse address \"%s\"", str); } return NULL; } return host; }
168,256,484,098,446,520,000,000,000,000,000,000,000
xp_socket.c
256,034,985,988,711,000,000,000,000,000,000,000,000
[ "CWE-918" ]
CVE-2017-7189
main/streams/xp_socket.c in PHP 7.x before 2017-03-07 misparses fsockopen calls, such as by interpreting fsockopen('127.0.0.1:80', 443) as if the address/port were 127.0.0.1:80:443, which is later truncated to 127.0.0.1:80. This behavior has a security risk if the explicitly provided port number (i.e., 443 in this example) is hardcoded into an application as a security policy, but the hostname argument (i.e., 127.0.0.1:80 in this example) is obtained from untrusted input.
https://nvd.nist.gov/vuln/detail/CVE-2017-7189
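The commit message in this record walks through why atoi() silently accepts "80:443" as a port: it stops at the first non-numeric character and returns what it has. It also names the replacement, strtol() plus a check of the stop character. The standalone sketch below shows that parsing discipline; the exact range checks are illustrative additions, not necessarily what PHP ships:

#include <stdio.h>
#include <stdlib.h>
#include <errno.h>

/* Parse a port strictly: digits only, nothing left over, and within range. */
static int parse_port(const char *s, long *port)
{
    char *end;

    errno = 0;
    *port = strtol(s, &end, 10);
    if (end == s || *end != '\0')        /* no digits at all, or trailing garbage */
        return -1;
    if (errno == ERANGE || *port < 0 || *port > 65535)
        return -1;
    return 0;
}

int main(void)
{
    long port;

    printf("\"80\"     -> %s\n", parse_port("80", &port) == 0 ? "ok" : "rejected");
    printf("\"80:443\" -> %s\n", parse_port("80:443", &port) == 0 ? "ok" : "rejected");
    return 0;
}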
9,470
libsndfile
f833c53cb596e9e1792949f762e0b33661822748
https://github.com/erikd/libsndfile
https://github.com/erikd/libsndfile/commit/f833c53cb596e9e1792949f762e0b33661822748
src/aiff.c: Fix a buffer read overflow Secunia Advisory SA76717. Found by: Laurent Delosieres, Secunia Research at Flexera Software
1
aiff_read_chanmap (SF_PRIVATE * psf, unsigned dword) { const AIFF_CAF_CHANNEL_MAP * map_info ; unsigned channel_bitmap, channel_decriptions, bytesread ; int layout_tag ; bytesread = psf_binheader_readf (psf, "444", &layout_tag, &channel_bitmap, &channel_decriptions) ; if ((map_info = aiff_caf_of_channel_layout_tag (layout_tag)) == NULL) return 0 ; psf_log_printf (psf, " Tag : %x\n", layout_tag) ; if (map_info) psf_log_printf (psf, " Layout : %s\n", map_info->name) ; if (bytesread < dword) psf_binheader_readf (psf, "j", dword - bytesread) ; if (map_info->channel_map != NULL) { size_t chanmap_size = psf->sf.channels * sizeof (psf->channel_map [0]) ; free (psf->channel_map) ; if ((psf->channel_map = malloc (chanmap_size)) == NULL) return SFE_MALLOC_FAILED ; memcpy (psf->channel_map, map_info->channel_map, chanmap_size) ; } ; return 0 ; } /* aiff_read_chanmap */
229,784,367,109,779,380,000,000,000,000,000,000,000
aiff.c
100,715,852,571,463,750,000,000,000,000,000,000,000
[ "CWE-119" ]
CVE-2017-6892
In libsndfile version 1.0.28, an error in the "aiff_read_chanmap()" function (aiff.c) can be exploited to cause an out-of-bounds read memory access via a specially crafted AIFF file.
https://nvd.nist.gov/vuln/detail/CVE-2017-6892
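The CVE entry above is an out-of-bounds read: in the function quoted here, the number of bytes copied out of map_info->channel_map is derived from the file's channel count, while the source table's real length is fixed by the layout that was looked up. The sketch below shows one way to keep the copy inside the source (clamping to the smaller of the two sizes); the actual upstream check may differ, and all names are invented:

#include <stdio.h>
#include <string.h>

static const int stereo_map[2] = { 1, 2 };     /* the layout describes 2 channels */

/* Never copy more entries than the source table actually holds, even if the
 * file header claims a larger channel count. */
static size_t copy_channel_map(int *dst, size_t file_channels)
{
    size_t layout_channels = sizeof(stereo_map) / sizeof(stereo_map[0]);
    size_t n = file_channels < layout_channels ? file_channels : layout_channels;

    memcpy(dst, stereo_map, n * sizeof(dst[0]));
    return n;
}

int main(void)
{
    int dst[16] = { 0 };

    printf("copied %zu entries\n", copy_channel_map(dst, 16));  /* clamped to 2 */
    return 0;
}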
9,471
ImageMagick
65f75a32a93ae4044c528a987a68366ecd4b46b9
https://github.com/ImageMagick/ImageMagick
https://github.com/ImageMagick/ImageMagick/commit/65f75a32a93ae4044c528a987a68366ecd4b46b9
None
1
static MagickBooleanType WriteTGAImage(const ImageInfo *image_info,Image *image) { CompressionType compression; const char *value; const double midpoint = QuantumRange/2.0; MagickBooleanType status; QuantumAny range; register const IndexPacket *indexes; register const PixelPacket *p; register ssize_t x; register ssize_t i; register unsigned char *q; ssize_t count, y; TGAInfo tga_info; /* Open output image file. */ assert(image_info != (const ImageInfo *) NULL); assert(image_info->signature == MagickSignature); assert(image != (Image *) NULL); assert(image->signature == MagickSignature); if (image->debug != MagickFalse) (void) LogMagickEvent(TraceEvent,GetMagickModule(),"%s",image->filename); status=OpenBlob(image_info,image,WriteBinaryBlobMode,&image->exception); if (status == MagickFalse) return(status); /* Initialize TGA raster file header. */ if ((image->columns > 65535L) || (image->rows > 65535L)) ThrowWriterException(ImageError,"WidthOrHeightExceedsLimit"); (void) TransformImageColorspace(image,sRGBColorspace); compression=image->compression; if (image_info->compression != UndefinedCompression) compression=image_info->compression; range=GetQuantumRange(5UL); tga_info.id_length=0; value=GetImageProperty(image,"comment"); if (value != (const char *) NULL) tga_info.id_length=(unsigned char) MagickMin(strlen(value),255); tga_info.colormap_type=0; tga_info.colormap_index=0; tga_info.colormap_length=0; tga_info.colormap_size=0; tga_info.x_origin=0; tga_info.y_origin=0; tga_info.width=(unsigned short) image->columns; tga_info.height=(unsigned short) image->rows; tga_info.bits_per_pixel=8; tga_info.attributes=0; if ((image_info->type != TrueColorType) && (image_info->type != TrueColorMatteType) && (image_info->type != PaletteType) && (image->matte == MagickFalse) && (SetImageGray(image,&image->exception) != MagickFalse)) tga_info.image_type=compression == RLECompression ? TGARLEMonochrome : TGAMonochrome; else if ((image->storage_class == DirectClass) || (image->colors > 256)) { /* Full color TGA raster. */ tga_info.image_type=compression == RLECompression ? TGARLERGB : TGARGB; if (image_info->depth == 5) { tga_info.bits_per_pixel=16; if (image->matte != MagickFalse) tga_info.attributes=1; /* # of alpha bits */ } else { tga_info.bits_per_pixel=24; if (image->matte != MagickFalse) { tga_info.bits_per_pixel=32; tga_info.attributes=8; /* # of alpha bits */ } } } else { /* Colormapped TGA raster. */ tga_info.image_type=compression == RLECompression ? TGARLEColormap : TGAColormap; tga_info.colormap_type=1; tga_info.colormap_length=(unsigned short) image->colors; if (image_info->depth == 5) tga_info.colormap_size=16; else tga_info.colormap_size=24; } value=GetImageArtifact(image,"tga:image-origin"); if (value != (const char *) NULL) { OrientationType origin; origin=(OrientationType) ParseCommandOption(MagickOrientationOptions, MagickFalse,value); if (origin == BottomRightOrientation || origin == TopRightOrientation) tga_info.attributes|=(1UL << 4); if (origin == TopLeftOrientation || origin == TopRightOrientation) tga_info.attributes|=(1UL << 5); } /* Write TGA header. 
*/ (void) WriteBlobByte(image,tga_info.id_length); (void) WriteBlobByte(image,tga_info.colormap_type); (void) WriteBlobByte(image,(unsigned char) tga_info.image_type); (void) WriteBlobLSBShort(image,tga_info.colormap_index); (void) WriteBlobLSBShort(image,tga_info.colormap_length); (void) WriteBlobByte(image,tga_info.colormap_size); (void) WriteBlobLSBShort(image,tga_info.x_origin); (void) WriteBlobLSBShort(image,tga_info.y_origin); (void) WriteBlobLSBShort(image,tga_info.width); (void) WriteBlobLSBShort(image,tga_info.height); (void) WriteBlobByte(image,tga_info.bits_per_pixel); (void) WriteBlobByte(image,tga_info.attributes); if (tga_info.id_length != 0) (void) WriteBlob(image,tga_info.id_length,(unsigned char *) value); if (tga_info.colormap_type != 0) { unsigned char green, *targa_colormap; /* Dump colormap to file (blue, green, red byte order). */ targa_colormap=(unsigned char *) AcquireQuantumMemory((size_t) tga_info.colormap_length,(tga_info.colormap_size/8)*sizeof( *targa_colormap)); if (targa_colormap == (unsigned char *) NULL) ThrowWriterException(ResourceLimitError,"MemoryAllocationFailed"); q=targa_colormap; for (i=0; i < (ssize_t) image->colors; i++) { if (image_info->depth == 5) { green=(unsigned char) ScaleQuantumToAny(image->colormap[i].green, range); *q++=((unsigned char) ScaleQuantumToAny(image->colormap[i].blue, range)) | ((green & 0x07) << 5); *q++=(((image->matte != MagickFalse) && ( (double) image->colormap[i].opacity < midpoint)) ? 0x80 : 0) | ((unsigned char) ScaleQuantumToAny(image->colormap[i].red, range) << 2) | ((green & 0x18) >> 3); } else { *q++=ScaleQuantumToChar(image->colormap[i].blue); *q++=ScaleQuantumToChar(image->colormap[i].green); *q++=ScaleQuantumToChar(image->colormap[i].red); } } (void) WriteBlob(image,(size_t) ((tga_info.colormap_size/8)* tga_info.colormap_length),targa_colormap); targa_colormap=(unsigned char *) RelinquishMagickMemory(targa_colormap); } /* Convert MIFF to TGA raster pixels. 
*/ for (y=(ssize_t) (image->rows-1); y >= 0; y--) { p=GetVirtualPixels(image,0,y,image->columns,1,&image->exception); if (p == (const PixelPacket *) NULL) break; indexes=GetVirtualIndexQueue(image); if (compression == RLECompression) { x=0; count=0; while (x < (ssize_t) image->columns) { i=1; while ((i < 128) && (count + i < 128) && ((x + i) < (ssize_t) image->columns)) { if (tga_info.image_type == TGARLEColormap) { if (GetPixelIndex(indexes+i) != GetPixelIndex(indexes+(i-1))) break; } else if (tga_info.image_type == TGARLEMonochrome) { if (GetPixelLuma(image,p+i) != GetPixelLuma(image,p+(i-1))) break; } else { if ((GetPixelBlue(p+i) != GetPixelBlue(p+(i-1))) || (GetPixelGreen(p+i) != GetPixelGreen(p+(i-1))) || (GetPixelRed(p+i) != GetPixelRed(p+(i-1)))) break; if ((image->matte != MagickFalse) && (GetPixelAlpha(p+i) != GetPixelAlpha(p+(i-1)))) break; } i++; } if (i < 3) { count+=i; p+=i; indexes+=i; } if ((i >= 3) || (count == 128) || ((x + i) == (ssize_t) image->columns)) { if (count > 0) { (void) WriteBlobByte(image,(unsigned char) (--count)); while (count >= 0) { WriteTGAPixel(image,tga_info.image_type,indexes-(count+1), p-(count+1),range,midpoint); count--; } count=0; } } if (i >= 3) { (void) WriteBlobByte(image,(unsigned char) ((i-1) | 0x80)); WriteTGAPixel(image,tga_info.image_type,indexes,p,range,midpoint); p+=i; indexes+=i; } x+=i; } } else { for (x=0; x < (ssize_t) image->columns; x++) WriteTGAPixel(image,tga_info.image_type,indexes+x,p++,range,midpoint); } if (image->previous == (Image *) NULL) { status=SetImageProgress(image,SaveImageTag,(MagickOffsetType) y, image->rows); if (status == MagickFalse) break; } } (void) CloseBlob(image); return(MagickTrue); }
256,636,415,044,259,280,000,000,000,000,000,000,000
None
null
[ "CWE-20" ]
CVE-2017-6498
An issue was discovered in ImageMagick 6.9.7. Incorrect TGA files could trigger assertion failures, thus leading to DoS.
https://nvd.nist.gov/vuln/detail/CVE-2017-6498
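The underlying problem class here is an assert() that malformed input can reach: when assertions are compiled in, a corrupt TGA file aborts the whole process. A minimal sketch of that pattern and the usual remedy of turning it into an input-validation error; the struct and field names are illustrative, not ImageMagick's coder code:

#include <assert.h>
#include <stdio.h>

/* Illustrative TGA-like header fragment. */
typedef struct {
    unsigned char bits_per_pixel;
} TgaHeaderSketch;

/* Vulnerable shape: the parser asserts a property of untrusted input,
 * so a crafted file terminates the process (denial of service). */
static int decode_asserting(const TgaHeaderSketch *h)
{
    assert(h->bits_per_pixel == 8  || h->bits_per_pixel == 16 ||
           h->bits_per_pixel == 24 || h->bits_per_pixel == 32);
    return 0;
}

/* Hardened shape: report a corrupt-image error instead of aborting. */
static int decode_checked(const TgaHeaderSketch *h)
{
    if (h->bits_per_pixel != 8  && h->bits_per_pixel != 16 &&
        h->bits_per_pixel != 24 && h->bits_per_pixel != 32)
        return -1;
    return 0;
}

int main(void)
{
    TgaHeaderSketch good = { 24 }, bad = { 3 };   /* "bad" mimics a crafted file */
    printf("good: %d, bad: %d\n", decode_asserting(&good), decode_checked(&bad));
    return 0;
}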
9,473
radare2
252afb1cff9676f3ae1f341a28448bf2c8b6e308
https://github.com/radare/radare2
https://github.com/radare/radare2/commit/252afb1cff9676f3ae1f341a28448bf2c8b6e308
None
1
static void dex_parse_debug_item(RBinFile *binfile, RBinDexObj *bin, RBinDexClass *c, int MI, int MA, int paddr, int ins_size, int insns_size, char *class_name, int regsz, int debug_info_off) { struct r_bin_t *rbin = binfile->rbin; const ut8 *p4 = r_buf_get_at (binfile->buf, debug_info_off, NULL); const ut8 *p4_end = p4 + binfile->buf->length - debug_info_off; ut64 line_start; ut64 parameters_size; ut64 param_type_idx; ut16 argReg = regsz - ins_size; ut64 source_file_idx = c->source_file; RList *params, *debug_positions, *emitted_debug_locals = NULL; bool keep = true; if (argReg > regsz) { return; // this return breaks tests } p4 = r_uleb128 (p4, p4_end - p4, &line_start); p4 = r_uleb128 (p4, p4_end - p4, &parameters_size); ut32 address = 0; ut32 line = line_start; if (!(debug_positions = r_list_newf ((RListFree)free))) { return; } if (!(emitted_debug_locals = r_list_newf ((RListFree)free))) { r_list_free (debug_positions); return; } struct dex_debug_local_t debug_locals[regsz]; memset (debug_locals, 0, sizeof (struct dex_debug_local_t) * regsz); if (!(MA & 0x0008)) { debug_locals[argReg].name = "this"; debug_locals[argReg].descriptor = r_str_newf("%s;", class_name); debug_locals[argReg].startAddress = 0; debug_locals[argReg].signature = NULL; debug_locals[argReg].live = true; argReg++; } if (!(params = dex_method_signature2 (bin, MI))) { r_list_free (debug_positions); r_list_free (emitted_debug_locals); return; } RListIter *iter = r_list_iterator (params); char *name; char *type; int reg; r_list_foreach (params, iter, type) { if ((argReg >= regsz) || !type || parameters_size <= 0) { r_list_free (debug_positions); r_list_free (params); r_list_free (emitted_debug_locals); return; } p4 = r_uleb128 (p4, p4_end - p4, &param_type_idx); // read uleb128p1 param_type_idx -= 1; name = getstr (bin, param_type_idx); reg = argReg; switch (type[0]) { case 'D': case 'J': argReg += 2; break; default: argReg += 1; break; } if (name) { debug_locals[reg].name = name; debug_locals[reg].descriptor = type; debug_locals[reg].signature = NULL; debug_locals[reg].startAddress = address; debug_locals[reg].live = true; } --parameters_size; } ut8 opcode = *(p4++) & 0xff; while (keep) { switch (opcode) { case 0x0: // DBG_END_SEQUENCE keep = false; break; case 0x1: // DBG_ADVANCE_PC { ut64 addr_diff; p4 = r_uleb128 (p4, p4_end - p4, &addr_diff); address += addr_diff; } break; case 0x2: // DBG_ADVANCE_LINE { st64 line_diff = r_sleb128 (&p4, p4_end); line += line_diff; } break; case 0x3: // DBG_START_LOCAL { ut64 register_num; ut64 name_idx; ut64 type_idx; p4 = r_uleb128 (p4, p4_end - p4, &register_num); p4 = r_uleb128 (p4, p4_end - p4, &name_idx); name_idx -= 1; p4 = r_uleb128 (p4, p4_end - p4, &type_idx); type_idx -= 1; if (register_num >= regsz) { r_list_free (debug_positions); r_list_free (params); return; } if (debug_locals[register_num].live) { struct dex_debug_local_t *local = malloc ( sizeof (struct dex_debug_local_t)); if (!local) { keep = false; break; } local->name = debug_locals[register_num].name; local->descriptor = debug_locals[register_num].descriptor; local->startAddress = debug_locals[register_num].startAddress; local->signature = debug_locals[register_num].signature; local->live = true; local->reg = register_num; local->endAddress = address; r_list_append (emitted_debug_locals, local); } debug_locals[register_num].name = getstr (bin, name_idx); debug_locals[register_num].descriptor = dex_type_descriptor (bin, type_idx); debug_locals[register_num].startAddress = address; 
debug_locals[register_num].signature = NULL; debug_locals[register_num].live = true; } break; case 0x4: //DBG_START_LOCAL_EXTENDED { ut64 register_num; ut64 name_idx; ut64 type_idx; ut64 sig_idx; p4 = r_uleb128 (p4, p4_end - p4, &register_num); p4 = r_uleb128 (p4, p4_end - p4, &name_idx); name_idx -= 1; p4 = r_uleb128 (p4, p4_end - p4, &type_idx); type_idx -= 1; p4 = r_uleb128 (p4, p4_end - p4, &sig_idx); sig_idx -= 1; if (register_num >= regsz) { r_list_free (debug_positions); r_list_free (params); return; } if (debug_locals[register_num].live) { struct dex_debug_local_t *local = malloc ( sizeof (struct dex_debug_local_t)); if (!local) { keep = false; break; } local->name = debug_locals[register_num].name; local->descriptor = debug_locals[register_num].descriptor; local->startAddress = debug_locals[register_num].startAddress; local->signature = debug_locals[register_num].signature; local->live = true; local->reg = register_num; local->endAddress = address; r_list_append (emitted_debug_locals, local); } debug_locals[register_num].name = getstr (bin, name_idx); debug_locals[register_num].descriptor = dex_type_descriptor (bin, type_idx); debug_locals[register_num].startAddress = address; debug_locals[register_num].signature = getstr (bin, sig_idx); debug_locals[register_num].live = true; } break; case 0x5: // DBG_END_LOCAL { ut64 register_num; p4 = r_uleb128 (p4, p4_end - p4, &register_num); if (debug_locals[register_num].live) { struct dex_debug_local_t *local = malloc ( sizeof (struct dex_debug_local_t)); if (!local) { keep = false; break; } local->name = debug_locals[register_num].name; local->descriptor = debug_locals[register_num].descriptor; local->startAddress = debug_locals[register_num].startAddress; local->signature = debug_locals[register_num].signature; local->live = true; local->reg = register_num; local->endAddress = address; r_list_append (emitted_debug_locals, local); } debug_locals[register_num].live = false; } break; case 0x6: // DBG_RESTART_LOCAL { ut64 register_num; p4 = r_uleb128 (p4, p4_end - p4, &register_num); if (!debug_locals[register_num].live) { debug_locals[register_num].startAddress = address; debug_locals[register_num].live = true; } } break; case 0x7: //DBG_SET_PROLOGUE_END break; case 0x8: //DBG_SET_PROLOGUE_BEGIN break; case 0x9: { p4 = r_uleb128 (p4, p4_end - p4, &source_file_idx); source_file_idx--; } break; default: { int adjusted_opcode = opcode - 0x0a; address += (adjusted_opcode / 15); line += -4 + (adjusted_opcode % 15); struct dex_debug_position_t *position = malloc (sizeof (struct dex_debug_position_t)); if (!position) { keep = false; break; } position->source_file_idx = source_file_idx; position->address = address; position->line = line; r_list_append (debug_positions, position); } break; } opcode = *(p4++) & 0xff; } if (!binfile->sdb_addrinfo) { binfile->sdb_addrinfo = sdb_new0 (); } char *fileline; char offset[64]; char *offset_ptr; RListIter *iter1; struct dex_debug_position_t *pos; r_list_foreach (debug_positions, iter1, pos) { fileline = r_str_newf ("%s|%"PFMT64d, getstr (bin, pos->source_file_idx), pos->line); offset_ptr = sdb_itoa (pos->address + paddr, offset, 16); sdb_set (binfile->sdb_addrinfo, offset_ptr, fileline, 0); sdb_set (binfile->sdb_addrinfo, fileline, offset_ptr, 0); } if (!dexdump) { r_list_free (debug_positions); r_list_free (emitted_debug_locals); r_list_free (params); return; } RListIter *iter2; struct dex_debug_position_t *position; rbin->cb_printf (" positions :\n"); r_list_foreach (debug_positions, iter2, position) { 
rbin->cb_printf (" 0x%04llx line=%llu\n", position->address, position->line); } rbin->cb_printf (" locals :\n"); RListIter *iter3; struct dex_debug_local_t *local; r_list_foreach (emitted_debug_locals, iter3, local) { if (local->signature) { rbin->cb_printf ( " 0x%04x - 0x%04x reg=%d %s %s %s\n", local->startAddress, local->endAddress, local->reg, local->name, local->descriptor, local->signature); } else { rbin->cb_printf ( " 0x%04x - 0x%04x reg=%d %s %s\n", local->startAddress, local->endAddress, local->reg, local->name, local->descriptor); } } for (reg = 0; reg < regsz; reg++) { if (debug_locals[reg].live) { if (debug_locals[reg].signature) { rbin->cb_printf ( " 0x%04x - 0x%04x reg=%d %s %s " "%s\n", debug_locals[reg].startAddress, insns_size, reg, debug_locals[reg].name, debug_locals[reg].descriptor, debug_locals[reg].signature); } else { rbin->cb_printf ( " 0x%04x - 0x%04x reg=%d %s %s" "\n", debug_locals[reg].startAddress, insns_size, reg, debug_locals[reg].name, debug_locals[reg].descriptor); } } } r_list_free (debug_positions); r_list_free (emitted_debug_locals); r_list_free (params); }
49,497,050,964,486,830,000,000,000,000,000,000,000
None
null
[ "CWE-476" ]
CVE-2017-6415
The dex_parse_debug_item function in libr/bin/p/bin_dex.c in radare2 1.2.1 allows remote attackers to cause a denial of service (NULL pointer dereference and application crash) via a crafted DEX file.
https://nvd.nist.gov/vuln/detail/CVE-2017-6415
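In the function above, r_buf_get_at() can return NULL when debug_info_off points outside the mapped file, and the result is dereferenced without a check, which matches the NULL-pointer-dereference crash described for this CVE. A self-contained sketch of that pattern; buf_get_at() and the parse helpers are stand-ins, not radare2's RBuffer API:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for "pointer into the mapped file at this offset": returns NULL
 * when the offset lies outside the buffer, as r_buf_get_at() can. */
static const uint8_t *buf_get_at(const uint8_t *buf, size_t buf_len, size_t off)
{
    return (off < buf_len) ? buf + off : NULL;
}

/* Vulnerable shape: trust the offset taken from the DEX and dereference
 * the returned pointer unconditionally. */
static int parse_debug_info_unchecked(const uint8_t *buf, size_t len, size_t off)
{
    const uint8_t *p = buf_get_at(buf, len, off);
    return p[0];                /* NULL dereference on a crafted debug_info_off */
}

/* Hardened shape: bail out before touching the pointer. */
static int parse_debug_info_checked(const uint8_t *buf, size_t len, size_t off)
{
    const uint8_t *p = buf_get_at(buf, len, off);
    if (p == NULL)
        return -1;              /* malformed DEX: offset out of range */
    return p[0];
}

int main(void)
{
    uint8_t dex[16] = { 0 };
    printf("in range: %d, out of range: %d\n",
           parse_debug_info_unchecked(dex, sizeof(dex), 0),
           parse_debug_info_checked(dex, sizeof(dex), 0x41414141u));
    return 0;
}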
9,475
tnef
1a17af1ed0c791aec44dbdc9eab91218cc1e335a
https://github.com/verdammelt/tnef
https://github.com/verdammelt/tnef/commit/1a17af1ed0c791aec44dbdc9eab91218cc1e335a
Use asserts on lengths to prevent invalid reads/writes.
1
mapi_attr_read (size_t len, unsigned char *buf) { size_t idx = 0; uint32 i,j; assert(len > 4); uint32 num_properties = GETINT32(buf+idx); MAPI_Attr** attrs = CHECKED_XMALLOC (MAPI_Attr*, (num_properties + 1)); idx += 4; if (!attrs) return NULL; for (i = 0; i < num_properties; i++) { MAPI_Attr* a = attrs[i] = CHECKED_XCALLOC(MAPI_Attr, 1); MAPI_Value* v = NULL; CHECKINT16(idx, len); a->type = GETINT16(buf+idx); idx += 2; CHECKINT16(idx, len); a->name = GETINT16(buf+idx); idx += 2; /* handle special case of GUID prefixed properties */ if (a->name & GUID_EXISTS_FLAG) { /* copy GUID */ a->guid = CHECKED_XMALLOC(GUID, 1); copy_guid_from_buf(a->guid, buf+idx, len); idx += sizeof (GUID); CHECKINT32(idx, len); a->num_names = GETINT32(buf+idx); idx += 4; if (a->num_names > 0) { /* FIXME: do something useful here! */ size_t i; a->names = CHECKED_XCALLOC(VarLenData, a->num_names); for (i = 0; i < a->num_names; i++) { size_t j; CHECKINT32(idx, len); a->names[i].len = GETINT32(buf+idx); idx += 4; /* read the data into a buffer */ a->names[i].data = CHECKED_XMALLOC(unsigned char, a->names[i].len); for (j = 0; j < (a->names[i].len >> 1); j++) a->names[i].data[j] = (buf+idx)[j*2]; /* But what are we going to do with it? */ idx += pad_to_4byte(a->names[i].len); } } else { /* get the 'real' name */ CHECKINT32(idx, len); a->name = GETINT32(buf+idx); idx+= 4; } } /* * Multi-value types and string/object/binary types have * multiple values */ if (a->type & MULTI_VALUE_FLAG || a->type == szMAPI_STRING || a->type == szMAPI_UNICODE_STRING || a->type == szMAPI_OBJECT || a->type == szMAPI_BINARY) { CHECKINT32(idx, len); a->num_values = GETINT32(buf+idx); idx += 4; } else { a->num_values = 1; } /* Amend the type in case of multi-value type */ if (a->type & MULTI_VALUE_FLAG) { a->type -= MULTI_VALUE_FLAG; } v = alloc_mapi_values (a); for (j = 0; j < a->num_values; j++) { switch (a->type) { case szMAPI_SHORT: /* 2 bytes */ v->len = 2; CHECKINT16(idx, len); v->data.bytes2 = GETINT16(buf+idx); idx += 4; /* assume padding of 2, advance by 4! 
*/ break; case szMAPI_INT: /* 4 bytes */ v->len = 4; CHECKINT32(idx, len); v->data.bytes4 = GETINT32(buf+idx); idx += 4; v++; break; case szMAPI_FLOAT: /* 4 bytes */ case szMAPI_BOOLEAN: /* this should be 2 bytes + 2 padding */ v->len = 4; CHECKINT32(idx, len); v->data.bytes4 = GETINT32(buf+idx); idx += v->len; break; case szMAPI_SYSTIME: /* 8 bytes */ v->len = 8; CHECKINT32(idx, len); v->data.bytes8[0] = GETINT32(buf+idx); CHECKINT32(idx+4, len); v->data.bytes8[1] = GETINT32(buf+idx+4); idx += 8; v++; break; case szMAPI_DOUBLE: /* 8 bytes */ case szMAPI_APPTIME: case szMAPI_CURRENCY: case szMAPI_INT8BYTE: v->len = 8; CHECKINT32(idx, len); v->data.bytes8[0] = GETINT32(buf+idx); CHECKINT32(idx+4, len); v->data.bytes8[1] = GETINT32(buf+idx+4); idx += v->len; break; case szMAPI_CLSID: v->len = sizeof (GUID); copy_guid_from_buf(&v->data.guid, buf+idx, len); idx += v->len; break; case szMAPI_STRING: case szMAPI_UNICODE_STRING: case szMAPI_OBJECT: case szMAPI_BINARY: CHECKINT32(idx, len); v->len = GETINT32(buf+idx); idx += 4; if (a->type == szMAPI_UNICODE_STRING) { v->data.buf = (unsigned char*)unicode_to_utf8(v->len, buf+idx); } else { v->data.buf = CHECKED_XMALLOC(unsigned char, v->len); memmove (v->data.buf, buf+idx, v->len); } idx += pad_to_4byte(v->len); v++; break; case szMAPI_NULL: /* illegal in input tnef streams */ case szMAPI_ERROR: case szMAPI_UNSPECIFIED: fprintf (stderr, "Invalid attribute, input file may be corrupted\n"); if (!ENCODE_SKIP) exit (1); return NULL; default: /* should never get here */ fprintf (stderr, "Undefined attribute, input file may be corrupted\n"); if (!ENCODE_SKIP) exit (1); return NULL; } if (DEBUG_ON) mapi_attr_dump (attrs[i]); } } attrs[i] = NULL; return attrs; }
201,006,663,046,884,500,000,000,000,000,000,000,000
None
null
[ "CWE-125" ]
CVE-2017-6307
An issue was discovered in tnef before 1.4.13. Two OOB Writes have been identified in src/mapi_attr.c:mapi_attr_read(). These might lead to invalid read and write operations, controlled by an attacker.
https://nvd.nist.gov/vuln/detail/CVE-2017-6307
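In mapi_attr_read() above, value lengths such as v->len are read from the stream and then passed to memmove() without confirming that they fit inside the remaining input; per the commit message, the fix adds assertions on those lengths. A self-contained sketch of the bounds check with illustrative names (an explicit check is shown where the upstream patch uses assert(), so the rejection also holds under NDEBUG):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of copying an attribute value whose length comes straight from the
 * attacker-controlled stream; names are illustrative, not tnef's API. */
static unsigned char *read_value(const unsigned char *buf, size_t buf_len,
                                 size_t idx, uint32_t value_len)
{
    if (idx > buf_len || (size_t) value_len > buf_len - idx)
        return NULL;                        /* truncated input or lying length */

    unsigned char *out = malloc(value_len ? value_len : 1);
    if (out == NULL)
        return NULL;
    memmove(out, buf + idx, value_len);     /* now provably inside the buffer */
    return out;
}

int main(void)
{
    unsigned char stream[8] = { 'M', 'A', 'P', 'I', 1, 2, 3, 4 };
    unsigned char *v = read_value(stream, sizeof(stream), 4, 0x1000u); /* bogus length */
    printf("%s\n", v ? "copied" : "rejected");
    free(v);
    return 0;
}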
9,477
linux
2dcab598484185dea7ec22219c76dcdd59e3cb90
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/2dcab598484185dea7ec22219c76dcdd59e3cb90
sctp: avoid BUG_ON on sctp_wait_for_sndbuf Alexander Popov reported that an application may trigger a BUG_ON in sctp_wait_for_sndbuf if the socket tx buffer is full, a thread is waiting on it to queue more data, and meanwhile another thread peels off the association being used by the first thread. This patch replaces the BUG_ON call with proper error handling. It will return -EPIPE to the original sendmsg call, similarly to what would have been done if the association wasn't found in the first place. Acked-by: Alexander Popov <alex.popov@linux.com> Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Reviewed-by: Xin Long <lucien.xin@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
1
static int sctp_wait_for_sndbuf(struct sctp_association *asoc, long *timeo_p, size_t msg_len) { struct sock *sk = asoc->base.sk; int err = 0; long current_timeo = *timeo_p; DEFINE_WAIT(wait); pr_debug("%s: asoc:%p, timeo:%ld, msg_len:%zu\n", __func__, asoc, *timeo_p, msg_len); /* Increment the association's refcnt. */ sctp_association_hold(asoc); /* Wait on the association specific sndbuf space. */ for (;;) { prepare_to_wait_exclusive(&asoc->wait, &wait, TASK_INTERRUPTIBLE); if (!*timeo_p) goto do_nonblock; if (sk->sk_err || asoc->state >= SCTP_STATE_SHUTDOWN_PENDING || asoc->base.dead) goto do_error; if (signal_pending(current)) goto do_interrupted; if (msg_len <= sctp_wspace(asoc)) break; /* Let another process have a go. Since we are going * to sleep anyway. */ release_sock(sk); current_timeo = schedule_timeout(current_timeo); BUG_ON(sk != asoc->base.sk); lock_sock(sk); *timeo_p = current_timeo; } out: finish_wait(&asoc->wait, &wait); /* Release the association's refcnt. */ sctp_association_put(asoc); return err; do_error: err = -EPIPE; goto out; do_interrupted: err = sock_intr_errno(*timeo_p); goto out; do_nonblock: err = -EAGAIN; goto out; }
77,742,281,099,885,130,000,000,000,000,000,000,000
socket.c
35,503,354,525,738,835,000,000,000,000,000,000,000
[ "CWE-362" ]
CVE-2017-5986
Race condition in the sctp_wait_for_sndbuf function in net/sctp/socket.c in the Linux kernel before 4.9.11 allows local users to cause a denial of service (assertion failure and panic) via a multithreaded application that peels off an association in a certain buffer-full state.
https://nvd.nist.gov/vuln/detail/CVE-2017-5986
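Consistent with the commit message, the dangerous step is the sleep inside the wait loop of sctp_wait_for_sndbuf() above: while the socket lock is dropped, another thread can peel the association off onto a new socket, so sk != asoc->base.sk on wakeup and the BUG_ON fires. A sketch of how that part of the loop changes so the condition becomes an error path rather than an assertion (based on the description, not necessarily the verbatim upstream diff):

		/* Let another process have a go.  Since we are going
		 * to sleep anyway.
		 */
		release_sock(sk);
		current_timeo = schedule_timeout(current_timeo);
		lock_sock(sk);
		/* The association may have been peeled off to another socket
		 * while we slept; fail the pending send instead of BUG_ON(). */
		if (sk != asoc->base.sk)
			goto do_error;		/* sendmsg() will see -EPIPE */

		*timeo_p = current_timeo;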
9,479
yara
ab906da53ff2a68c6fd6d1fa73f2b7c7bf0bc636
https://github.com/VirusTotal/yara
https://github.com/VirusTotal/yara/commit/ab906da53ff2a68c6fd6d1fa73f2b7c7bf0bc636
Fix issue #597
1
yyparse (void *yyscanner, YR_COMPILER* compiler) { /* The lookahead symbol. */ int yychar; /* The semantic value of the lookahead symbol. */ /* Default value used for initialization, for pacifying older GCCs or non-GCC compilers. */ YY_INITIAL_VALUE (static YYSTYPE yyval_default;) YYSTYPE yylval YY_INITIAL_VALUE (= yyval_default); /* Number of syntax errors so far. */ int yynerrs; int yystate; /* Number of tokens to shift before error messages enabled. */ int yyerrstatus; /* The stacks and their tools: 'yyss': related to states. 'yyvs': related to semantic values. Refer to the stacks through separate pointers, to allow yyoverflow to reallocate them elsewhere. */ /* The state stack. */ yytype_int16 yyssa[YYINITDEPTH]; yytype_int16 *yyss; yytype_int16 *yyssp; /* The semantic value stack. */ YYSTYPE yyvsa[YYINITDEPTH]; YYSTYPE *yyvs; YYSTYPE *yyvsp; YYSIZE_T yystacksize; int yyn; int yyresult; /* Lookahead token as an internal (translated) token number. */ int yytoken = 0; /* The variables used to return semantic value and location from the action routines. */ YYSTYPE yyval; #if YYERROR_VERBOSE /* Buffer for error messages, and its allocated size. */ char yymsgbuf[128]; char *yymsg = yymsgbuf; YYSIZE_T yymsg_alloc = sizeof yymsgbuf; #endif #define YYPOPSTACK(N) (yyvsp -= (N), yyssp -= (N)) /* The number of symbols on the RHS of the reduced rule. Keep to zero when no symbol should be popped. */ int yylen = 0; yyssp = yyss = yyssa; yyvsp = yyvs = yyvsa; yystacksize = YYINITDEPTH; YYDPRINTF ((stderr, "Starting parse\n")); yystate = 0; yyerrstatus = 0; yynerrs = 0; yychar = YYEMPTY; /* Cause a token to be read. */ goto yysetstate; /*------------------------------------------------------------. | yynewstate -- Push a new state, which is found in yystate. | `------------------------------------------------------------*/ yynewstate: /* In all cases, when you get here, the value and location stacks have just been pushed. So pushing a state here evens the stacks. */ yyssp++; yysetstate: *yyssp = yystate; if (yyss + yystacksize - 1 <= yyssp) { /* Get the current used size of the three stacks, in elements. */ YYSIZE_T yysize = yyssp - yyss + 1; #ifdef yyoverflow { /* Give user a chance to reallocate the stack. Use copies of these so that the &'s don't force the real ones into memory. */ YYSTYPE *yyvs1 = yyvs; yytype_int16 *yyss1 = yyss; /* Each stack pointer address is followed by the size of the data in use in that stack, in bytes. This used to be a conditional around just the two extra args, but that might be undefined if yyoverflow is a macro. */ yyoverflow (YY_("memory exhausted"), &yyss1, yysize * sizeof (*yyssp), &yyvs1, yysize * sizeof (*yyvsp), &yystacksize); yyss = yyss1; yyvs = yyvs1; } #else /* no yyoverflow */ # ifndef YYSTACK_RELOCATE goto yyexhaustedlab; # else /* Extend the stack our own way. */ if (YYMAXDEPTH <= yystacksize) goto yyexhaustedlab; yystacksize *= 2; if (YYMAXDEPTH < yystacksize) yystacksize = YYMAXDEPTH; { yytype_int16 *yyss1 = yyss; union yyalloc *yyptr = (union yyalloc *) YYSTACK_ALLOC (YYSTACK_BYTES (yystacksize)); if (! 
yyptr) goto yyexhaustedlab; YYSTACK_RELOCATE (yyss_alloc, yyss); YYSTACK_RELOCATE (yyvs_alloc, yyvs); # undef YYSTACK_RELOCATE if (yyss1 != yyssa) YYSTACK_FREE (yyss1); } # endif #endif /* no yyoverflow */ yyssp = yyss + yysize - 1; yyvsp = yyvs + yysize - 1; YYDPRINTF ((stderr, "Stack size increased to %lu\n", (unsigned long int) yystacksize)); if (yyss + yystacksize - 1 <= yyssp) YYABORT; } YYDPRINTF ((stderr, "Entering state %d\n", yystate)); if (yystate == YYFINAL) YYACCEPT; goto yybackup; /*-----------. | yybackup. | `-----------*/ yybackup: /* Do appropriate processing given the current state. Read a lookahead token if we need one and don't already have one. */ /* First try to decide what to do without reference to lookahead token. */ yyn = yypact[yystate]; if (yypact_value_is_default (yyn)) goto yydefault; /* Not known => get a lookahead token if don't already have one. */ /* YYCHAR is either YYEMPTY or YYEOF or a valid lookahead symbol. */ if (yychar == YYEMPTY) { YYDPRINTF ((stderr, "Reading a token: ")); yychar = yylex (&yylval, yyscanner, compiler); } if (yychar <= YYEOF) { yychar = yytoken = YYEOF; YYDPRINTF ((stderr, "Now at end of input.\n")); } else { yytoken = YYTRANSLATE (yychar); YY_SYMBOL_PRINT ("Next token is", yytoken, &yylval, &yylloc); } /* If the proper action on seeing token YYTOKEN is to reduce or to detect an error, take that action. */ yyn += yytoken; if (yyn < 0 || YYLAST < yyn || yycheck[yyn] != yytoken) goto yydefault; yyn = yytable[yyn]; if (yyn <= 0) { if (yytable_value_is_error (yyn)) goto yyerrlab; yyn = -yyn; goto yyreduce; } /* Count tokens shifted since error; after three, turn off error status. */ if (yyerrstatus) yyerrstatus--; /* Shift the lookahead token. */ YY_SYMBOL_PRINT ("Shifting", yytoken, &yylval, &yylloc); /* Discard the shifted token. */ yychar = YYEMPTY; yystate = yyn; YY_IGNORE_MAYBE_UNINITIALIZED_BEGIN *++yyvsp = yylval; YY_IGNORE_MAYBE_UNINITIALIZED_END goto yynewstate; /*-----------------------------------------------------------. | yydefault -- do the default action for the current state. | `-----------------------------------------------------------*/ yydefault: yyn = yydefact[yystate]; if (yyn == 0) goto yyerrlab; goto yyreduce; /*-----------------------------. | yyreduce -- Do a reduction. | `-----------------------------*/ yyreduce: /* yyn is the number of a rule to reduce with. */ yylen = yyr2[yyn]; /* If YYLEN is nonzero, implement the default value of the action: '$$ = $1'. Otherwise, the following line sets YYVAL to garbage. This behavior is undocumented and Bison users should not rely upon it. Assigning to YYVAL unconditionally makes the parser a bit smaller, and it avoids a GCC warning that YYVAL may be used uninitialized. 
*/ yyval = yyvsp[1-yylen]; YY_REDUCE_PRINT (yyn); switch (yyn) { case 8: #line 230 "grammar.y" /* yacc.c:1646 */ { int result = yr_parser_reduce_import(yyscanner, (yyvsp[0].sized_string)); yr_free((yyvsp[0].sized_string)); ERROR_IF(result != ERROR_SUCCESS); } #line 1661 "grammar.c" /* yacc.c:1646 */ break; case 9: #line 242 "grammar.y" /* yacc.c:1646 */ { YR_RULE* rule = yr_parser_reduce_rule_declaration_phase_1( yyscanner, (int32_t) (yyvsp[-2].integer), (yyvsp[0].c_string)); ERROR_IF(rule == NULL); (yyval.rule) = rule; } #line 1674 "grammar.c" /* yacc.c:1646 */ break; case 10: #line 251 "grammar.y" /* yacc.c:1646 */ { YR_RULE* rule = (yyvsp[-4].rule); // rule created in phase 1 rule->tags = (yyvsp[-3].c_string); rule->metas = (yyvsp[-1].meta); rule->strings = (yyvsp[0].string); } #line 1686 "grammar.c" /* yacc.c:1646 */ break; case 11: #line 259 "grammar.y" /* yacc.c:1646 */ { YR_RULE* rule = (yyvsp[-7].rule); // rule created in phase 1 compiler->last_result = yr_parser_reduce_rule_declaration_phase_2( yyscanner, rule); yr_free((yyvsp[-8].c_string)); ERROR_IF(compiler->last_result != ERROR_SUCCESS); } #line 1701 "grammar.c" /* yacc.c:1646 */ break; case 12: #line 274 "grammar.y" /* yacc.c:1646 */ { (yyval.meta) = NULL; } #line 1709 "grammar.c" /* yacc.c:1646 */ break; case 13: #line 278 "grammar.y" /* yacc.c:1646 */ { YR_META null_meta; memset(&null_meta, 0xFF, sizeof(YR_META)); null_meta.type = META_TYPE_NULL; compiler->last_result = yr_arena_write_data( compiler->metas_arena, &null_meta, sizeof(YR_META), NULL); (yyval.meta) = (yyvsp[0].meta); ERROR_IF(compiler->last_result != ERROR_SUCCESS); } #line 1736 "grammar.c" /* yacc.c:1646 */ break; case 14: #line 305 "grammar.y" /* yacc.c:1646 */ { (yyval.string) = NULL; } #line 1744 "grammar.c" /* yacc.c:1646 */ break; case 15: #line 309 "grammar.y" /* yacc.c:1646 */ { YR_STRING null_string; memset(&null_string, 0xFF, sizeof(YR_STRING)); null_string.g_flags = STRING_GFLAGS_NULL; compiler->last_result = yr_arena_write_data( compiler->strings_arena, &null_string, sizeof(YR_STRING), NULL); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.string) = (yyvsp[0].string); } #line 1771 "grammar.c" /* yacc.c:1646 */ break; case 17: #line 340 "grammar.y" /* yacc.c:1646 */ { (yyval.integer) = 0; } #line 1777 "grammar.c" /* yacc.c:1646 */ break; case 18: #line 341 "grammar.y" /* yacc.c:1646 */ { (yyval.integer) = (yyvsp[-1].integer) | (yyvsp[0].integer); } #line 1783 "grammar.c" /* yacc.c:1646 */ break; case 19: #line 346 "grammar.y" /* yacc.c:1646 */ { (yyval.integer) = RULE_GFLAGS_PRIVATE; } #line 1789 "grammar.c" /* yacc.c:1646 */ break; case 20: #line 347 "grammar.y" /* yacc.c:1646 */ { (yyval.integer) = RULE_GFLAGS_GLOBAL; } #line 1795 "grammar.c" /* yacc.c:1646 */ break; case 21: #line 353 "grammar.y" /* yacc.c:1646 */ { (yyval.c_string) = NULL; } #line 1803 "grammar.c" /* yacc.c:1646 */ break; case 22: #line 357 "grammar.y" /* yacc.c:1646 */ { compiler->last_result = yr_arena_write_string( yyget_extra(yyscanner)->sz_arena, "", NULL); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.c_string) = (yyvsp[0].c_string); } #line 1821 "grammar.c" /* yacc.c:1646 */ break; case 23: #line 375 "grammar.y" /* yacc.c:1646 */ { char* identifier; compiler->last_result = yr_arena_write_string( yyget_extra(yyscanner)->sz_arena, (yyvsp[0].c_string), &identifier); yr_free((yyvsp[0].c_string)); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.c_string) = identifier; } #line 1838 "grammar.c" /* yacc.c:1646 */ break; case 24: #line 388 "grammar.y" 
/* yacc.c:1646 */ { char* tag_name = (yyvsp[-1].c_string); size_t tag_length = tag_name != NULL ? strlen(tag_name) : 0; while (tag_length > 0) { if (strcmp(tag_name, (yyvsp[0].c_string)) == 0) { yr_compiler_set_error_extra_info(compiler, tag_name); compiler->last_result = ERROR_DUPLICATED_TAG_IDENTIFIER; break; } tag_name = (char*) yr_arena_next_address( yyget_extra(yyscanner)->sz_arena, tag_name, tag_length + 1); tag_length = tag_name != NULL ? strlen(tag_name) : 0; } if (compiler->last_result == ERROR_SUCCESS) compiler->last_result = yr_arena_write_string( yyget_extra(yyscanner)->sz_arena, (yyvsp[0].c_string), NULL); yr_free((yyvsp[0].c_string)); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.c_string) = (yyvsp[-1].c_string); } #line 1874 "grammar.c" /* yacc.c:1646 */ break; case 25: #line 424 "grammar.y" /* yacc.c:1646 */ { (yyval.meta) = (yyvsp[0].meta); } #line 1880 "grammar.c" /* yacc.c:1646 */ break; case 26: #line 425 "grammar.y" /* yacc.c:1646 */ { (yyval.meta) = (yyvsp[-1].meta); } #line 1886 "grammar.c" /* yacc.c:1646 */ break; case 27: #line 431 "grammar.y" /* yacc.c:1646 */ { SIZED_STRING* sized_string = (yyvsp[0].sized_string); (yyval.meta) = yr_parser_reduce_meta_declaration( yyscanner, META_TYPE_STRING, (yyvsp[-2].c_string), sized_string->c_string, 0); yr_free((yyvsp[-2].c_string)); yr_free((yyvsp[0].sized_string)); ERROR_IF((yyval.meta) == NULL); } #line 1906 "grammar.c" /* yacc.c:1646 */ break; case 28: #line 447 "grammar.y" /* yacc.c:1646 */ { (yyval.meta) = yr_parser_reduce_meta_declaration( yyscanner, META_TYPE_INTEGER, (yyvsp[-2].c_string), NULL, (yyvsp[0].integer)); yr_free((yyvsp[-2].c_string)); ERROR_IF((yyval.meta) == NULL); } #line 1923 "grammar.c" /* yacc.c:1646 */ break; case 29: #line 460 "grammar.y" /* yacc.c:1646 */ { (yyval.meta) = yr_parser_reduce_meta_declaration( yyscanner, META_TYPE_INTEGER, (yyvsp[-3].c_string), NULL, -(yyvsp[0].integer)); yr_free((yyvsp[-3].c_string)); ERROR_IF((yyval.meta) == NULL); } #line 1940 "grammar.c" /* yacc.c:1646 */ break; case 30: #line 473 "grammar.y" /* yacc.c:1646 */ { (yyval.meta) = yr_parser_reduce_meta_declaration( yyscanner, META_TYPE_BOOLEAN, (yyvsp[-2].c_string), NULL, TRUE); yr_free((yyvsp[-2].c_string)); ERROR_IF((yyval.meta) == NULL); } #line 1957 "grammar.c" /* yacc.c:1646 */ break; case 31: #line 486 "grammar.y" /* yacc.c:1646 */ { (yyval.meta) = yr_parser_reduce_meta_declaration( yyscanner, META_TYPE_BOOLEAN, (yyvsp[-2].c_string), NULL, FALSE); yr_free((yyvsp[-2].c_string)); ERROR_IF((yyval.meta) == NULL); } #line 1974 "grammar.c" /* yacc.c:1646 */ break; case 32: #line 502 "grammar.y" /* yacc.c:1646 */ { (yyval.string) = (yyvsp[0].string); } #line 1980 "grammar.c" /* yacc.c:1646 */ break; case 33: #line 503 "grammar.y" /* yacc.c:1646 */ { (yyval.string) = (yyvsp[-1].string); } #line 1986 "grammar.c" /* yacc.c:1646 */ break; case 34: #line 509 "grammar.y" /* yacc.c:1646 */ { compiler->error_line = yyget_lineno(yyscanner); } #line 1994 "grammar.c" /* yacc.c:1646 */ break; case 35: #line 513 "grammar.y" /* yacc.c:1646 */ { (yyval.string) = yr_parser_reduce_string_declaration( yyscanner, (int32_t) (yyvsp[0].integer), (yyvsp[-4].c_string), (yyvsp[-1].sized_string)); yr_free((yyvsp[-4].c_string)); yr_free((yyvsp[-1].sized_string)); ERROR_IF((yyval.string) == NULL); compiler->error_line = 0; } #line 2009 "grammar.c" /* yacc.c:1646 */ break; case 36: #line 524 "grammar.y" /* yacc.c:1646 */ { compiler->error_line = yyget_lineno(yyscanner); } #line 2017 "grammar.c" /* yacc.c:1646 */ break; case 37: #line 528 
"grammar.y" /* yacc.c:1646 */ { (yyval.string) = yr_parser_reduce_string_declaration( yyscanner, (int32_t) (yyvsp[0].integer) | STRING_GFLAGS_REGEXP, (yyvsp[-4].c_string), (yyvsp[-1].sized_string)); yr_free((yyvsp[-4].c_string)); yr_free((yyvsp[-1].sized_string)); ERROR_IF((yyval.string) == NULL); compiler->error_line = 0; } #line 2033 "grammar.c" /* yacc.c:1646 */ break; case 38: #line 540 "grammar.y" /* yacc.c:1646 */ { (yyval.string) = yr_parser_reduce_string_declaration( yyscanner, STRING_GFLAGS_HEXADECIMAL, (yyvsp[-2].c_string), (yyvsp[0].sized_string)); yr_free((yyvsp[-2].c_string)); yr_free((yyvsp[0].sized_string)); ERROR_IF((yyval.string) == NULL); } #line 2047 "grammar.c" /* yacc.c:1646 */ break; case 39: #line 553 "grammar.y" /* yacc.c:1646 */ { (yyval.integer) = 0; } #line 2053 "grammar.c" /* yacc.c:1646 */ break; case 40: #line 554 "grammar.y" /* yacc.c:1646 */ { (yyval.integer) = (yyvsp[-1].integer) | (yyvsp[0].integer); } #line 2059 "grammar.c" /* yacc.c:1646 */ break; case 41: #line 559 "grammar.y" /* yacc.c:1646 */ { (yyval.integer) = STRING_GFLAGS_WIDE; } #line 2065 "grammar.c" /* yacc.c:1646 */ break; case 42: #line 560 "grammar.y" /* yacc.c:1646 */ { (yyval.integer) = STRING_GFLAGS_ASCII; } #line 2071 "grammar.c" /* yacc.c:1646 */ break; case 43: #line 561 "grammar.y" /* yacc.c:1646 */ { (yyval.integer) = STRING_GFLAGS_NO_CASE; } #line 2077 "grammar.c" /* yacc.c:1646 */ break; case 44: #line 562 "grammar.y" /* yacc.c:1646 */ { (yyval.integer) = STRING_GFLAGS_FULL_WORD; } #line 2083 "grammar.c" /* yacc.c:1646 */ break; case 45: #line 568 "grammar.y" /* yacc.c:1646 */ { int var_index = yr_parser_lookup_loop_variable(yyscanner, (yyvsp[0].c_string)); if (var_index >= 0) { compiler->last_result = yr_parser_emit_with_arg( yyscanner, OP_PUSH_M, LOOP_LOCAL_VARS * var_index, NULL, NULL); (yyval.expression).type = EXPRESSION_TYPE_INTEGER; (yyval.expression).value.integer = UNDEFINED; (yyval.expression).identifier = compiler->loop_identifier[var_index]; } else { YR_OBJECT* object = (YR_OBJECT*) yr_hash_table_lookup( compiler->objects_table, (yyvsp[0].c_string), NULL); if (object == NULL) { char* ns = compiler->current_namespace->name; object = (YR_OBJECT*) yr_hash_table_lookup( compiler->objects_table, (yyvsp[0].c_string), ns); } if (object != NULL) { char* id; compiler->last_result = yr_arena_write_string( compiler->sz_arena, (yyvsp[0].c_string), &id); if (compiler->last_result == ERROR_SUCCESS) compiler->last_result = yr_parser_emit_with_arg_reloc( yyscanner, OP_OBJ_LOAD, id, NULL, NULL); (yyval.expression).type = EXPRESSION_TYPE_OBJECT; (yyval.expression).value.object = object; (yyval.expression).identifier = object->identifier; } else { YR_RULE* rule = (YR_RULE*) yr_hash_table_lookup( compiler->rules_table, (yyvsp[0].c_string), compiler->current_namespace->name); if (rule != NULL) { compiler->last_result = yr_parser_emit_with_arg_reloc( yyscanner, OP_PUSH_RULE, rule, NULL, NULL); (yyval.expression).type = EXPRESSION_TYPE_BOOLEAN; (yyval.expression).value.integer = UNDEFINED; (yyval.expression).identifier = rule->identifier; } else { yr_compiler_set_error_extra_info(compiler, (yyvsp[0].c_string)); compiler->last_result = ERROR_UNDEFINED_IDENTIFIER; } } } yr_free((yyvsp[0].c_string)); ERROR_IF(compiler->last_result != ERROR_SUCCESS); } #line 2172 "grammar.c" /* yacc.c:1646 */ break; case 46: #line 653 "grammar.y" /* yacc.c:1646 */ { YR_OBJECT* field = NULL; if ((yyvsp[-2].expression).type == EXPRESSION_TYPE_OBJECT && (yyvsp[-2].expression).value.object->type == 
OBJECT_TYPE_STRUCTURE) { field = yr_object_lookup_field((yyvsp[-2].expression).value.object, (yyvsp[0].c_string)); if (field != NULL) { char* ident; compiler->last_result = yr_arena_write_string( compiler->sz_arena, (yyvsp[0].c_string), &ident); if (compiler->last_result == ERROR_SUCCESS) compiler->last_result = yr_parser_emit_with_arg_reloc( yyscanner, OP_OBJ_FIELD, ident, NULL, NULL); (yyval.expression).type = EXPRESSION_TYPE_OBJECT; (yyval.expression).value.object = field; (yyval.expression).identifier = field->identifier; } else { yr_compiler_set_error_extra_info(compiler, (yyvsp[0].c_string)); compiler->last_result = ERROR_INVALID_FIELD_NAME; } } else { yr_compiler_set_error_extra_info( compiler, (yyvsp[-2].expression).identifier); compiler->last_result = ERROR_NOT_A_STRUCTURE; } yr_free((yyvsp[0].c_string)); ERROR_IF(compiler->last_result != ERROR_SUCCESS); } #line 2222 "grammar.c" /* yacc.c:1646 */ break; case 47: #line 699 "grammar.y" /* yacc.c:1646 */ { YR_OBJECT_ARRAY* array; YR_OBJECT_DICTIONARY* dict; if ((yyvsp[-3].expression).type == EXPRESSION_TYPE_OBJECT && (yyvsp[-3].expression).value.object->type == OBJECT_TYPE_ARRAY) { if ((yyvsp[-1].expression).type != EXPRESSION_TYPE_INTEGER) { yr_compiler_set_error_extra_info( compiler, "array indexes must be of integer type"); compiler->last_result = ERROR_WRONG_TYPE; } ERROR_IF(compiler->last_result != ERROR_SUCCESS); compiler->last_result = yr_parser_emit( yyscanner, OP_INDEX_ARRAY, NULL); array = (YR_OBJECT_ARRAY*) (yyvsp[-3].expression).value.object; (yyval.expression).type = EXPRESSION_TYPE_OBJECT; (yyval.expression).value.object = array->prototype_item; (yyval.expression).identifier = array->identifier; } else if ((yyvsp[-3].expression).type == EXPRESSION_TYPE_OBJECT && (yyvsp[-3].expression).value.object->type == OBJECT_TYPE_DICTIONARY) { if ((yyvsp[-1].expression).type != EXPRESSION_TYPE_STRING) { yr_compiler_set_error_extra_info( compiler, "dictionary keys must be of string type"); compiler->last_result = ERROR_WRONG_TYPE; } ERROR_IF(compiler->last_result != ERROR_SUCCESS); compiler->last_result = yr_parser_emit( yyscanner, OP_LOOKUP_DICT, NULL); dict = (YR_OBJECT_DICTIONARY*) (yyvsp[-3].expression).value.object; (yyval.expression).type = EXPRESSION_TYPE_OBJECT; (yyval.expression).value.object = dict->prototype_item; (yyval.expression).identifier = dict->identifier; } else { yr_compiler_set_error_extra_info( compiler, (yyvsp[-3].expression).identifier); compiler->last_result = ERROR_NOT_INDEXABLE; } ERROR_IF(compiler->last_result != ERROR_SUCCESS); } #line 2283 "grammar.c" /* yacc.c:1646 */ break; case 48: #line 757 "grammar.y" /* yacc.c:1646 */ { YR_OBJECT_FUNCTION* function; char* args_fmt; if ((yyvsp[-3].expression).type == EXPRESSION_TYPE_OBJECT && (yyvsp[-3].expression).value.object->type == OBJECT_TYPE_FUNCTION) { compiler->last_result = yr_parser_check_types( compiler, (YR_OBJECT_FUNCTION*) (yyvsp[-3].expression).value.object, (yyvsp[-1].c_string)); if (compiler->last_result == ERROR_SUCCESS) compiler->last_result = yr_arena_write_string( compiler->sz_arena, (yyvsp[-1].c_string), &args_fmt); if (compiler->last_result == ERROR_SUCCESS) compiler->last_result = yr_parser_emit_with_arg_reloc( yyscanner, OP_CALL, args_fmt, NULL, NULL); function = (YR_OBJECT_FUNCTION*) (yyvsp[-3].expression).value.object; (yyval.expression).type = EXPRESSION_TYPE_OBJECT; (yyval.expression).value.object = function->return_obj; (yyval.expression).identifier = function->identifier; } else { yr_compiler_set_error_extra_info( compiler, 
(yyvsp[-3].expression).identifier); compiler->last_result = ERROR_NOT_A_FUNCTION; } yr_free((yyvsp[-1].c_string)); ERROR_IF(compiler->last_result != ERROR_SUCCESS); } #line 2328 "grammar.c" /* yacc.c:1646 */ break; case 49: #line 801 "grammar.y" /* yacc.c:1646 */ { (yyval.c_string) = yr_strdup(""); } #line 2334 "grammar.c" /* yacc.c:1646 */ break; case 50: #line 802 "grammar.y" /* yacc.c:1646 */ { (yyval.c_string) = (yyvsp[0].c_string); } #line 2340 "grammar.c" /* yacc.c:1646 */ break; case 51: #line 807 "grammar.y" /* yacc.c:1646 */ { (yyval.c_string) = (char*) yr_malloc(MAX_FUNCTION_ARGS + 1); switch((yyvsp[0].expression).type) { case EXPRESSION_TYPE_INTEGER: strlcpy((yyval.c_string), "i", MAX_FUNCTION_ARGS); break; case EXPRESSION_TYPE_FLOAT: strlcpy((yyval.c_string), "f", MAX_FUNCTION_ARGS); break; case EXPRESSION_TYPE_BOOLEAN: strlcpy((yyval.c_string), "b", MAX_FUNCTION_ARGS); break; case EXPRESSION_TYPE_STRING: strlcpy((yyval.c_string), "s", MAX_FUNCTION_ARGS); break; case EXPRESSION_TYPE_REGEXP: strlcpy((yyval.c_string), "r", MAX_FUNCTION_ARGS); break; } ERROR_IF((yyval.c_string) == NULL); } #line 2369 "grammar.c" /* yacc.c:1646 */ break; case 52: #line 832 "grammar.y" /* yacc.c:1646 */ { if (strlen((yyvsp[-2].c_string)) == MAX_FUNCTION_ARGS) { compiler->last_result = ERROR_TOO_MANY_ARGUMENTS; } else { switch((yyvsp[0].expression).type) { case EXPRESSION_TYPE_INTEGER: strlcat((yyvsp[-2].c_string), "i", MAX_FUNCTION_ARGS); break; case EXPRESSION_TYPE_FLOAT: strlcat((yyvsp[-2].c_string), "f", MAX_FUNCTION_ARGS); break; case EXPRESSION_TYPE_BOOLEAN: strlcat((yyvsp[-2].c_string), "b", MAX_FUNCTION_ARGS); break; case EXPRESSION_TYPE_STRING: strlcat((yyvsp[-2].c_string), "s", MAX_FUNCTION_ARGS); break; case EXPRESSION_TYPE_REGEXP: strlcat((yyvsp[-2].c_string), "r", MAX_FUNCTION_ARGS); break; } } ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.c_string) = (yyvsp[-2].c_string); } #line 2405 "grammar.c" /* yacc.c:1646 */ break; case 53: #line 868 "grammar.y" /* yacc.c:1646 */ { SIZED_STRING* sized_string = (yyvsp[0].sized_string); RE* re; RE_ERROR error; int re_flags = 0; if (sized_string->flags & SIZED_STRING_FLAGS_NO_CASE) re_flags |= RE_FLAGS_NO_CASE; if (sized_string->flags & SIZED_STRING_FLAGS_DOT_ALL) re_flags |= RE_FLAGS_DOT_ALL; compiler->last_result = yr_re_compile( sized_string->c_string, re_flags, compiler->re_code_arena, &re, &error); yr_free((yyvsp[0].sized_string)); if (compiler->last_result == ERROR_INVALID_REGULAR_EXPRESSION) yr_compiler_set_error_extra_info(compiler, error.message); ERROR_IF(compiler->last_result != ERROR_SUCCESS); if (compiler->last_result == ERROR_SUCCESS) compiler->last_result = yr_parser_emit_with_arg_reloc( yyscanner, OP_PUSH, re->root_node->forward_code, NULL, NULL); yr_re_destroy(re); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.expression).type = EXPRESSION_TYPE_REGEXP; } #line 2451 "grammar.c" /* yacc.c:1646 */ break; case 54: #line 914 "grammar.y" /* yacc.c:1646 */ { if ((yyvsp[0].expression).type == EXPRESSION_TYPE_STRING) { if ((yyvsp[0].expression).value.sized_string != NULL) { yywarning(yyscanner, "Using literal string \"%s\" in a boolean operation.", (yyvsp[0].expression).value.sized_string->c_string); } compiler->last_result = yr_parser_emit( yyscanner, OP_STR_TO_BOOL, NULL); ERROR_IF(compiler->last_result != ERROR_SUCCESS); } (yyval.expression).type = EXPRESSION_TYPE_BOOLEAN; } #line 2474 "grammar.c" /* yacc.c:1646 */ break; case 55: #line 936 "grammar.y" /* yacc.c:1646 */ { compiler->last_result = 
yr_parser_emit_with_arg( yyscanner, OP_PUSH, 1, NULL, NULL); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.expression).type = EXPRESSION_TYPE_BOOLEAN; } #line 2487 "grammar.c" /* yacc.c:1646 */ break; case 56: #line 945 "grammar.y" /* yacc.c:1646 */ { compiler->last_result = yr_parser_emit_with_arg( yyscanner, OP_PUSH, 0, NULL, NULL); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.expression).type = EXPRESSION_TYPE_BOOLEAN; } #line 2500 "grammar.c" /* yacc.c:1646 */ break; case 57: #line 954 "grammar.y" /* yacc.c:1646 */ { CHECK_TYPE((yyvsp[-2].expression), EXPRESSION_TYPE_STRING, "matches"); CHECK_TYPE((yyvsp[0].expression), EXPRESSION_TYPE_REGEXP, "matches"); if (compiler->last_result == ERROR_SUCCESS) compiler->last_result = yr_parser_emit( yyscanner, OP_MATCHES, NULL); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.expression).type = EXPRESSION_TYPE_BOOLEAN; } #line 2519 "grammar.c" /* yacc.c:1646 */ break; case 58: #line 969 "grammar.y" /* yacc.c:1646 */ { CHECK_TYPE((yyvsp[-2].expression), EXPRESSION_TYPE_STRING, "contains"); CHECK_TYPE((yyvsp[0].expression), EXPRESSION_TYPE_STRING, "contains"); compiler->last_result = yr_parser_emit( yyscanner, OP_CONTAINS, NULL); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.expression).type = EXPRESSION_TYPE_BOOLEAN; } #line 2535 "grammar.c" /* yacc.c:1646 */ break; case 59: #line 981 "grammar.y" /* yacc.c:1646 */ { int result = yr_parser_reduce_string_identifier( yyscanner, (yyvsp[0].c_string), OP_FOUND, UNDEFINED); yr_free((yyvsp[0].c_string)); ERROR_IF(result != ERROR_SUCCESS); (yyval.expression).type = EXPRESSION_TYPE_BOOLEAN; } #line 2553 "grammar.c" /* yacc.c:1646 */ break; case 60: #line 995 "grammar.y" /* yacc.c:1646 */ { CHECK_TYPE((yyvsp[0].expression), EXPRESSION_TYPE_INTEGER, "at"); compiler->last_result = yr_parser_reduce_string_identifier( yyscanner, (yyvsp[-2].c_string), OP_FOUND_AT, (yyvsp[0].expression).value.integer); yr_free((yyvsp[-2].c_string)); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.expression).type = EXPRESSION_TYPE_BOOLEAN; } #line 2570 "grammar.c" /* yacc.c:1646 */ break; case 61: #line 1008 "grammar.y" /* yacc.c:1646 */ { compiler->last_result = yr_parser_reduce_string_identifier( yyscanner, (yyvsp[-2].c_string), OP_FOUND_IN, UNDEFINED); yr_free((yyvsp[-2].c_string)); ERROR_IF(compiler->last_result!= ERROR_SUCCESS); (yyval.expression).type = EXPRESSION_TYPE_BOOLEAN; } #line 2585 "grammar.c" /* yacc.c:1646 */ break; case 62: #line 1019 "grammar.y" /* yacc.c:1646 */ { if (compiler->loop_depth > 0) { compiler->loop_depth--; compiler->loop_identifier[compiler->loop_depth] = NULL; } } #line 2597 "grammar.c" /* yacc.c:1646 */ break; case 63: #line 1027 "grammar.y" /* yacc.c:1646 */ { int var_index; if (compiler->loop_depth == MAX_LOOP_NESTING) compiler->last_result = \ ERROR_LOOP_NESTING_LIMIT_EXCEEDED; ERROR_IF(compiler->last_result != ERROR_SUCCESS); var_index = yr_parser_lookup_loop_variable( yyscanner, (yyvsp[-1].c_string)); if (var_index >= 0) { yr_compiler_set_error_extra_info( compiler, (yyvsp[-1].c_string)); compiler->last_result = \ ERROR_DUPLICATED_LOOP_IDENTIFIER; } ERROR_IF(compiler->last_result != ERROR_SUCCESS); compiler->last_result = yr_parser_emit_with_arg( yyscanner, OP_PUSH, UNDEFINED, NULL, NULL); ERROR_IF(compiler->last_result != ERROR_SUCCESS); } #line 2631 "grammar.c" /* yacc.c:1646 */ break; case 64: #line 1057 "grammar.y" /* yacc.c:1646 */ { int mem_offset = LOOP_LOCAL_VARS * compiler->loop_depth; uint8_t* addr; yr_parser_emit_with_arg( 
yyscanner, OP_CLEAR_M, mem_offset + 1, NULL, NULL); yr_parser_emit_with_arg( yyscanner, OP_CLEAR_M, mem_offset + 2, NULL, NULL); if ((yyvsp[-1].integer) == INTEGER_SET_ENUMERATION) { yr_parser_emit_with_arg( yyscanner, OP_POP_M, mem_offset, &addr, NULL); } else // INTEGER_SET_RANGE { yr_parser_emit_with_arg( yyscanner, OP_POP_M, mem_offset + 3, &addr, NULL); yr_parser_emit_with_arg( yyscanner, OP_POP_M, mem_offset, NULL, NULL); } compiler->loop_address[compiler->loop_depth] = addr; compiler->loop_identifier[compiler->loop_depth] = (yyvsp[-4].c_string); compiler->loop_depth++; } #line 2670 "grammar.c" /* yacc.c:1646 */ break; case 65: #line 1092 "grammar.y" /* yacc.c:1646 */ { int mem_offset; compiler->loop_depth--; mem_offset = LOOP_LOCAL_VARS * compiler->loop_depth; yr_parser_emit_with_arg( yyscanner, OP_ADD_M, mem_offset + 1, NULL, NULL); yr_parser_emit_with_arg( yyscanner, OP_INCR_M, mem_offset + 2, NULL, NULL); if ((yyvsp[-5].integer) == INTEGER_SET_ENUMERATION) { yr_parser_emit_with_arg_reloc( yyscanner, OP_JNUNDEF, compiler->loop_address[compiler->loop_depth], NULL, NULL); } else // INTEGER_SET_RANGE { yr_parser_emit_with_arg( yyscanner, OP_INCR_M, mem_offset, NULL, NULL); yr_parser_emit_with_arg( yyscanner, OP_PUSH_M, mem_offset, NULL, NULL); yr_parser_emit_with_arg( yyscanner, OP_PUSH_M, mem_offset + 3, NULL, NULL); yr_parser_emit_with_arg_reloc( yyscanner, OP_JLE, compiler->loop_address[compiler->loop_depth], NULL, NULL); yr_parser_emit(yyscanner, OP_POP, NULL); yr_parser_emit(yyscanner, OP_POP, NULL); } yr_parser_emit(yyscanner, OP_POP, NULL); yr_parser_emit_with_arg( yyscanner, OP_SWAPUNDEF, mem_offset + 2, NULL, NULL); yr_parser_emit_with_arg( yyscanner, OP_PUSH_M, mem_offset + 1, NULL, NULL); yr_parser_emit(yyscanner, OP_INT_LE, NULL); compiler->loop_identifier[compiler->loop_depth] = NULL; yr_free((yyvsp[-8].c_string)); (yyval.expression).type = EXPRESSION_TYPE_BOOLEAN; } #line 2753 "grammar.c" /* yacc.c:1646 */ break; case 66: #line 1171 "grammar.y" /* yacc.c:1646 */ { int mem_offset = LOOP_LOCAL_VARS * compiler->loop_depth; uint8_t* addr; if (compiler->loop_depth == MAX_LOOP_NESTING) compiler->last_result = \ ERROR_LOOP_NESTING_LIMIT_EXCEEDED; if (compiler->loop_for_of_mem_offset != -1) compiler->last_result = \ ERROR_NESTED_FOR_OF_LOOP; ERROR_IF(compiler->last_result != ERROR_SUCCESS); yr_parser_emit_with_arg( yyscanner, OP_CLEAR_M, mem_offset + 1, NULL, NULL); yr_parser_emit_with_arg( yyscanner, OP_CLEAR_M, mem_offset + 2, NULL, NULL); yr_parser_emit_with_arg( yyscanner, OP_POP_M, mem_offset, &addr, NULL); compiler->loop_for_of_mem_offset = mem_offset; compiler->loop_address[compiler->loop_depth] = addr; compiler->loop_identifier[compiler->loop_depth] = NULL; compiler->loop_depth++; } #line 2787 "grammar.c" /* yacc.c:1646 */ break; case 67: #line 1201 "grammar.y" /* yacc.c:1646 */ { int mem_offset; compiler->loop_depth--; compiler->loop_for_of_mem_offset = -1; mem_offset = LOOP_LOCAL_VARS * compiler->loop_depth; yr_parser_emit_with_arg( yyscanner, OP_ADD_M, mem_offset + 1, NULL, NULL); yr_parser_emit_with_arg( yyscanner, OP_INCR_M, mem_offset + 2, NULL, NULL); yr_parser_emit_with_arg_reloc( yyscanner, OP_JNUNDEF, compiler->loop_address[compiler->loop_depth], NULL, NULL); yr_parser_emit(yyscanner, OP_POP, NULL); yr_parser_emit_with_arg( yyscanner, OP_SWAPUNDEF, mem_offset + 2, NULL, NULL); yr_parser_emit_with_arg( yyscanner, OP_PUSH_M, mem_offset + 1, NULL, NULL); yr_parser_emit(yyscanner, OP_INT_LE, NULL); (yyval.expression).type = EXPRESSION_TYPE_BOOLEAN; } #line 2840 
"grammar.c" /* yacc.c:1646 */ break; case 68: #line 1250 "grammar.y" /* yacc.c:1646 */ { yr_parser_emit(yyscanner, OP_OF, NULL); (yyval.expression).type = EXPRESSION_TYPE_BOOLEAN; } #line 2850 "grammar.c" /* yacc.c:1646 */ break; case 69: #line 1256 "grammar.y" /* yacc.c:1646 */ { yr_parser_emit(yyscanner, OP_NOT, NULL); (yyval.expression).type = EXPRESSION_TYPE_BOOLEAN; } #line 2860 "grammar.c" /* yacc.c:1646 */ break; case 70: #line 1262 "grammar.y" /* yacc.c:1646 */ { YR_FIXUP* fixup; void* jmp_destination_addr; compiler->last_result = yr_parser_emit_with_arg_reloc( yyscanner, OP_JFALSE, 0, // still don't know the jump destination NULL, &jmp_destination_addr); ERROR_IF(compiler->last_result != ERROR_SUCCESS); fixup = (YR_FIXUP*) yr_malloc(sizeof(YR_FIXUP)); if (fixup == NULL) compiler->last_error = ERROR_INSUFFICIENT_MEMORY; ERROR_IF(compiler->last_result != ERROR_SUCCESS); fixup->address = jmp_destination_addr; fixup->next = compiler->fixup_stack_head; compiler->fixup_stack_head = fixup; } #line 2890 "grammar.c" /* yacc.c:1646 */ break; case 71: #line 1288 "grammar.y" /* yacc.c:1646 */ { YR_FIXUP* fixup; uint8_t* and_addr; compiler->last_result = yr_arena_reserve_memory( compiler->code_arena, 2); ERROR_IF(compiler->last_result != ERROR_SUCCESS); compiler->last_result = yr_parser_emit(yyscanner, OP_AND, &and_addr); ERROR_IF(compiler->last_result != ERROR_SUCCESS); fixup = compiler->fixup_stack_head; *(void**)(fixup->address) = (void*)(and_addr + 1); compiler->fixup_stack_head = fixup->next; yr_free(fixup); (yyval.expression).type = EXPRESSION_TYPE_BOOLEAN; } #line 2930 "grammar.c" /* yacc.c:1646 */ break; case 72: #line 1324 "grammar.y" /* yacc.c:1646 */ { YR_FIXUP* fixup; void* jmp_destination_addr; compiler->last_result = yr_parser_emit_with_arg_reloc( yyscanner, OP_JTRUE, 0, // still don't know the jump destination NULL, &jmp_destination_addr); ERROR_IF(compiler->last_result != ERROR_SUCCESS); fixup = (YR_FIXUP*) yr_malloc(sizeof(YR_FIXUP)); if (fixup == NULL) compiler->last_error = ERROR_INSUFFICIENT_MEMORY; ERROR_IF(compiler->last_result != ERROR_SUCCESS); fixup->address = jmp_destination_addr; fixup->next = compiler->fixup_stack_head; compiler->fixup_stack_head = fixup; } #line 2959 "grammar.c" /* yacc.c:1646 */ break; case 73: #line 1349 "grammar.y" /* yacc.c:1646 */ { YR_FIXUP* fixup; uint8_t* or_addr; compiler->last_result = yr_arena_reserve_memory( compiler->code_arena, 2); ERROR_IF(compiler->last_result != ERROR_SUCCESS); compiler->last_result = yr_parser_emit(yyscanner, OP_OR, &or_addr); ERROR_IF(compiler->last_result != ERROR_SUCCESS); fixup = compiler->fixup_stack_head; *(void**)(fixup->address) = (void*)(or_addr + 1); compiler->fixup_stack_head = fixup->next; yr_free(fixup); (yyval.expression).type = EXPRESSION_TYPE_BOOLEAN; } #line 2999 "grammar.c" /* yacc.c:1646 */ break; case 74: #line 1385 "grammar.y" /* yacc.c:1646 */ { compiler->last_result = yr_parser_reduce_operation( yyscanner, "<", (yyvsp[-2].expression), (yyvsp[0].expression)); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.expression).type = EXPRESSION_TYPE_BOOLEAN; } #line 3012 "grammar.c" /* yacc.c:1646 */ break; case 75: #line 1394 "grammar.y" /* yacc.c:1646 */ { compiler->last_result = yr_parser_reduce_operation( yyscanner, ">", (yyvsp[-2].expression), (yyvsp[0].expression)); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.expression).type = EXPRESSION_TYPE_BOOLEAN; } #line 3025 "grammar.c" /* yacc.c:1646 */ break; case 76: #line 1403 "grammar.y" /* yacc.c:1646 */ { compiler->last_result 
= yr_parser_reduce_operation( yyscanner, "<=", (yyvsp[-2].expression), (yyvsp[0].expression)); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.expression).type = EXPRESSION_TYPE_BOOLEAN; } #line 3038 "grammar.c" /* yacc.c:1646 */ break; case 77: #line 1412 "grammar.y" /* yacc.c:1646 */ { compiler->last_result = yr_parser_reduce_operation( yyscanner, ">=", (yyvsp[-2].expression), (yyvsp[0].expression)); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.expression).type = EXPRESSION_TYPE_BOOLEAN; } #line 3051 "grammar.c" /* yacc.c:1646 */ break; case 78: #line 1421 "grammar.y" /* yacc.c:1646 */ { compiler->last_result = yr_parser_reduce_operation( yyscanner, "==", (yyvsp[-2].expression), (yyvsp[0].expression)); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.expression).type = EXPRESSION_TYPE_BOOLEAN; } #line 3064 "grammar.c" /* yacc.c:1646 */ break; case 79: #line 1430 "grammar.y" /* yacc.c:1646 */ { compiler->last_result = yr_parser_reduce_operation( yyscanner, "!=", (yyvsp[-2].expression), (yyvsp[0].expression)); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.expression).type = EXPRESSION_TYPE_BOOLEAN; } #line 3077 "grammar.c" /* yacc.c:1646 */ break; case 80: #line 1439 "grammar.y" /* yacc.c:1646 */ { (yyval.expression) = (yyvsp[0].expression); } #line 3085 "grammar.c" /* yacc.c:1646 */ break; case 81: #line 1443 "grammar.y" /* yacc.c:1646 */ { (yyval.expression) = (yyvsp[-1].expression); } #line 3093 "grammar.c" /* yacc.c:1646 */ break; case 82: #line 1450 "grammar.y" /* yacc.c:1646 */ { (yyval.integer) = INTEGER_SET_ENUMERATION; } #line 3099 "grammar.c" /* yacc.c:1646 */ break; case 83: #line 1451 "grammar.y" /* yacc.c:1646 */ { (yyval.integer) = INTEGER_SET_RANGE; } #line 3105 "grammar.c" /* yacc.c:1646 */ break; case 84: #line 1457 "grammar.y" /* yacc.c:1646 */ { if ((yyvsp[-3].expression).type != EXPRESSION_TYPE_INTEGER) { yr_compiler_set_error_extra_info( compiler, "wrong type for range's lower bound"); compiler->last_result = ERROR_WRONG_TYPE; } if ((yyvsp[-1].expression).type != EXPRESSION_TYPE_INTEGER) { yr_compiler_set_error_extra_info( compiler, "wrong type for range's upper bound"); compiler->last_result = ERROR_WRONG_TYPE; } ERROR_IF(compiler->last_result != ERROR_SUCCESS); } #line 3127 "grammar.c" /* yacc.c:1646 */ break; case 85: #line 1479 "grammar.y" /* yacc.c:1646 */ { if ((yyvsp[0].expression).type != EXPRESSION_TYPE_INTEGER) { yr_compiler_set_error_extra_info( compiler, "wrong type for enumeration item"); compiler->last_result = ERROR_WRONG_TYPE; } ERROR_IF(compiler->last_result != ERROR_SUCCESS); } #line 3143 "grammar.c" /* yacc.c:1646 */ break; case 86: #line 1491 "grammar.y" /* yacc.c:1646 */ { if ((yyvsp[0].expression).type != EXPRESSION_TYPE_INTEGER) { yr_compiler_set_error_extra_info( compiler, "wrong type for enumeration item"); compiler->last_result = ERROR_WRONG_TYPE; } ERROR_IF(compiler->last_result != ERROR_SUCCESS); } #line 3158 "grammar.c" /* yacc.c:1646 */ break; case 87: #line 1506 "grammar.y" /* yacc.c:1646 */ { yr_parser_emit_with_arg(yyscanner, OP_PUSH, UNDEFINED, NULL, NULL); } #line 3167 "grammar.c" /* yacc.c:1646 */ break; case 89: #line 1512 "grammar.y" /* yacc.c:1646 */ { yr_parser_emit_with_arg(yyscanner, OP_PUSH, UNDEFINED, NULL, NULL); yr_parser_emit_pushes_for_strings(yyscanner, "$*"); ERROR_IF(compiler->last_result != ERROR_SUCCESS); } #line 3178 "grammar.c" /* yacc.c:1646 */ break; case 92: #line 1529 "grammar.y" /* yacc.c:1646 */ { yr_parser_emit_pushes_for_strings(yyscanner, (yyvsp[0].c_string)); 
yr_free((yyvsp[0].c_string)); ERROR_IF(compiler->last_result != ERROR_SUCCESS); } #line 3189 "grammar.c" /* yacc.c:1646 */ break; case 93: #line 1536 "grammar.y" /* yacc.c:1646 */ { yr_parser_emit_pushes_for_strings(yyscanner, (yyvsp[0].c_string)); yr_free((yyvsp[0].c_string)); ERROR_IF(compiler->last_result != ERROR_SUCCESS); } #line 3200 "grammar.c" /* yacc.c:1646 */ break; case 95: #line 1548 "grammar.y" /* yacc.c:1646 */ { yr_parser_emit_with_arg(yyscanner, OP_PUSH, UNDEFINED, NULL, NULL); } #line 3208 "grammar.c" /* yacc.c:1646 */ break; case 96: #line 1552 "grammar.y" /* yacc.c:1646 */ { yr_parser_emit_with_arg(yyscanner, OP_PUSH, 1, NULL, NULL); } #line 3216 "grammar.c" /* yacc.c:1646 */ break; case 97: #line 1560 "grammar.y" /* yacc.c:1646 */ { (yyval.expression) = (yyvsp[-1].expression); } #line 3224 "grammar.c" /* yacc.c:1646 */ break; case 98: #line 1564 "grammar.y" /* yacc.c:1646 */ { compiler->last_result = yr_parser_emit( yyscanner, OP_FILESIZE, NULL); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.expression).type = EXPRESSION_TYPE_INTEGER; (yyval.expression).value.integer = UNDEFINED; } #line 3238 "grammar.c" /* yacc.c:1646 */ break; case 99: #line 1574 "grammar.y" /* yacc.c:1646 */ { yywarning(yyscanner, "Using deprecated \"entrypoint\" keyword. Use the \"entry_point\" " "function from PE module instead."); compiler->last_result = yr_parser_emit( yyscanner, OP_ENTRYPOINT, NULL); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.expression).type = EXPRESSION_TYPE_INTEGER; (yyval.expression).value.integer = UNDEFINED; } #line 3256 "grammar.c" /* yacc.c:1646 */ break; case 100: #line 1588 "grammar.y" /* yacc.c:1646 */ { CHECK_TYPE((yyvsp[-1].expression), EXPRESSION_TYPE_INTEGER, "intXXXX or uintXXXX"); compiler->last_result = yr_parser_emit( yyscanner, (uint8_t) (OP_READ_INT + (yyvsp[-3].integer)), NULL); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.expression).type = EXPRESSION_TYPE_INTEGER; (yyval.expression).value.integer = UNDEFINED; } #line 3276 "grammar.c" /* yacc.c:1646 */ break; case 101: #line 1604 "grammar.y" /* yacc.c:1646 */ { compiler->last_result = yr_parser_emit_with_arg( yyscanner, OP_PUSH, (yyvsp[0].integer), NULL, NULL); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.expression).type = EXPRESSION_TYPE_INTEGER; (yyval.expression).value.integer = (yyvsp[0].integer); } #line 3290 "grammar.c" /* yacc.c:1646 */ break; case 102: #line 1614 "grammar.y" /* yacc.c:1646 */ { compiler->last_result = yr_parser_emit_with_arg_double( yyscanner, OP_PUSH, (yyvsp[0].double_), NULL, NULL); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.expression).type = EXPRESSION_TYPE_FLOAT; } #line 3303 "grammar.c" /* yacc.c:1646 */ break; case 103: #line 1623 "grammar.y" /* yacc.c:1646 */ { SIZED_STRING* sized_string; compiler->last_result = yr_arena_write_data( compiler->sz_arena, (yyvsp[0].sized_string), (yyvsp[0].sized_string)->length + sizeof(SIZED_STRING), (void**) &sized_string); yr_free((yyvsp[0].sized_string)); if (compiler->last_result == ERROR_SUCCESS) compiler->last_result = yr_parser_emit_with_arg_reloc( yyscanner, OP_PUSH, sized_string, NULL, NULL); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.expression).type = EXPRESSION_TYPE_STRING; (yyval.expression).value.sized_string = sized_string; } #line 3332 "grammar.c" /* yacc.c:1646 */ break; case 104: #line 1648 "grammar.y" /* yacc.c:1646 */ { compiler->last_result = yr_parser_reduce_string_identifier( yyscanner, (yyvsp[0].c_string), OP_COUNT, UNDEFINED); 
yr_free((yyvsp[0].c_string)); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.expression).type = EXPRESSION_TYPE_INTEGER; (yyval.expression).value.integer = UNDEFINED; } #line 3348 "grammar.c" /* yacc.c:1646 */ break; case 105: #line 1660 "grammar.y" /* yacc.c:1646 */ { compiler->last_result = yr_parser_reduce_string_identifier( yyscanner, (yyvsp[-3].c_string), OP_OFFSET, UNDEFINED); yr_free((yyvsp[-3].c_string)); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.expression).type = EXPRESSION_TYPE_INTEGER; (yyval.expression).value.integer = UNDEFINED; } #line 3364 "grammar.c" /* yacc.c:1646 */ break; case 106: #line 1672 "grammar.y" /* yacc.c:1646 */ { compiler->last_result = yr_parser_emit_with_arg( yyscanner, OP_PUSH, 1, NULL, NULL); if (compiler->last_result == ERROR_SUCCESS) compiler->last_result = yr_parser_reduce_string_identifier( yyscanner, (yyvsp[0].c_string), OP_OFFSET, UNDEFINED); yr_free((yyvsp[0].c_string)); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.expression).type = EXPRESSION_TYPE_INTEGER; (yyval.expression).value.integer = UNDEFINED; } #line 3384 "grammar.c" /* yacc.c:1646 */ break; case 107: #line 1688 "grammar.y" /* yacc.c:1646 */ { compiler->last_result = yr_parser_reduce_string_identifier( yyscanner, (yyvsp[-3].c_string), OP_LENGTH, UNDEFINED); yr_free((yyvsp[-3].c_string)); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.expression).type = EXPRESSION_TYPE_INTEGER; (yyval.expression).value.integer = UNDEFINED; } #line 3400 "grammar.c" /* yacc.c:1646 */ break; case 108: #line 1700 "grammar.y" /* yacc.c:1646 */ { compiler->last_result = yr_parser_emit_with_arg( yyscanner, OP_PUSH, 1, NULL, NULL); if (compiler->last_result == ERROR_SUCCESS) compiler->last_result = yr_parser_reduce_string_identifier( yyscanner, (yyvsp[0].c_string), OP_LENGTH, UNDEFINED); yr_free((yyvsp[0].c_string)); ERROR_IF(compiler->last_result != ERROR_SUCCESS); (yyval.expression).type = EXPRESSION_TYPE_INTEGER; (yyval.expression).value.integer = UNDEFINED; } #line 3420 "grammar.c" /* yacc.c:1646 */ break; case 109: #line 1716 "grammar.y" /* yacc.c:1646 */ { if ((yyvsp[0].expression).type == EXPRESSION_TYPE_INTEGER) // loop identifier { (yyval.expression).type = EXPRESSION_TYPE_INTEGER; (yyval.expression).value.integer = UNDEFINED; } else if ((yyvsp[0].expression).type == EXPRESSION_TYPE_BOOLEAN) // rule identifier { (yyval.expression).type = EXPRESSION_TYPE_BOOLEAN; (yyval.expression).value.integer = UNDEFINED; } else if ((yyvsp[0].expression).type == EXPRESSION_TYPE_OBJECT) { compiler->last_result = yr_parser_emit( yyscanner, OP_OBJ_VALUE, NULL); switch((yyvsp[0].expression).value.object->type) { case OBJECT_TYPE_INTEGER: (yyval.expression).type = EXPRESSION_TYPE_INTEGER; (yyval.expression).value.integer = UNDEFINED; break; case OBJECT_TYPE_FLOAT: (yyval.expression).type = EXPRESSION_TYPE_FLOAT; break; case OBJECT_TYPE_STRING: (yyval.expression).type = EXPRESSION_TYPE_STRING; (yyval.expression).value.sized_string = NULL; break; default: yr_compiler_set_error_extra_info_fmt( compiler, "wrong usage of identifier \"%s\"", (yyvsp[0].expression).identifier); compiler->last_result = ERROR_WRONG_TYPE; } } else { assert(FALSE); } ERROR_IF(compiler->last_result != ERROR_SUCCESS); } #line 3469 "grammar.c" /* yacc.c:1646 */ break; case 110: #line 1761 "grammar.y" /* yacc.c:1646 */ { CHECK_TYPE((yyvsp[0].expression), EXPRESSION_TYPE_INTEGER | EXPRESSION_TYPE_FLOAT, "-"); if ((yyvsp[0].expression).type == EXPRESSION_TYPE_INTEGER) { (yyval.expression).type = 
EXPRESSION_TYPE_INTEGER; (yyval.expression).value.integer = ((yyvsp[0].expression).value.integer == UNDEFINED) ? UNDEFINED : -((yyvsp[0].expression).value.integer); compiler->last_result = yr_parser_emit(yyscanner, OP_INT_MINUS, NULL); } else if ((yyvsp[0].expression).type == EXPRESSION_TYPE_FLOAT) { (yyval.expression).type = EXPRESSION_TYPE_FLOAT; compiler->last_result = yr_parser_emit(yyscanner, OP_DBL_MINUS, NULL); } ERROR_IF(compiler->last_result != ERROR_SUCCESS); } #line 3492 "grammar.c" /* yacc.c:1646 */ break; case 111: #line 1780 "grammar.y" /* yacc.c:1646 */ { compiler->last_result = yr_parser_reduce_operation( yyscanner, "+", (yyvsp[-2].expression), (yyvsp[0].expression)); ERROR_IF(compiler->last_result != ERROR_SUCCESS); if ((yyvsp[-2].expression).type == EXPRESSION_TYPE_INTEGER && (yyvsp[0].expression).type == EXPRESSION_TYPE_INTEGER) { (yyval.expression).value.integer = OPERATION(+, (yyvsp[-2].expression).value.integer, (yyvsp[0].expression).value.integer); (yyval.expression).type = EXPRESSION_TYPE_INTEGER; } else { (yyval.expression).type = EXPRESSION_TYPE_FLOAT; } } #line 3514 "grammar.c" /* yacc.c:1646 */ break; case 112: #line 1798 "grammar.y" /* yacc.c:1646 */ { compiler->last_result = yr_parser_reduce_operation( yyscanner, "-", (yyvsp[-2].expression), (yyvsp[0].expression)); ERROR_IF(compiler->last_result != ERROR_SUCCESS); if ((yyvsp[-2].expression).type == EXPRESSION_TYPE_INTEGER && (yyvsp[0].expression).type == EXPRESSION_TYPE_INTEGER) { (yyval.expression).value.integer = OPERATION(-, (yyvsp[-2].expression).value.integer, (yyvsp[0].expression).value.integer); (yyval.expression).type = EXPRESSION_TYPE_INTEGER; } else { (yyval.expression).type = EXPRESSION_TYPE_FLOAT; } } #line 3536 "grammar.c" /* yacc.c:1646 */ break; case 113: #line 1816 "grammar.y" /* yacc.c:1646 */ { compiler->last_result = yr_parser_reduce_operation( yyscanner, "*", (yyvsp[-2].expression), (yyvsp[0].expression)); ERROR_IF(compiler->last_result != ERROR_SUCCESS); if ((yyvsp[-2].expression).type == EXPRESSION_TYPE_INTEGER && (yyvsp[0].expression).type == EXPRESSION_TYPE_INTEGER) { (yyval.expression).value.integer = OPERATION(*, (yyvsp[-2].expression).value.integer, (yyvsp[0].expression).value.integer); (yyval.expression).type = EXPRESSION_TYPE_INTEGER; } else { (yyval.expression).type = EXPRESSION_TYPE_FLOAT; } } #line 3558 "grammar.c" /* yacc.c:1646 */ break; case 114: #line 1834 "grammar.y" /* yacc.c:1646 */ { compiler->last_result = yr_parser_reduce_operation( yyscanner, "\\", (yyvsp[-2].expression), (yyvsp[0].expression)); ERROR_IF(compiler->last_result != ERROR_SUCCESS); if ((yyvsp[-2].expression).type == EXPRESSION_TYPE_INTEGER && (yyvsp[0].expression).type == EXPRESSION_TYPE_INTEGER) { if ((yyvsp[0].expression).value.integer != 0) { (yyval.expression).value.integer = OPERATION(/, (yyvsp[-2].expression).value.integer, (yyvsp[0].expression).value.integer); (yyval.expression).type = EXPRESSION_TYPE_INTEGER; } else { compiler->last_result = ERROR_DIVISION_BY_ZERO; ERROR_IF(compiler->last_result != ERROR_SUCCESS); } } else { (yyval.expression).type = EXPRESSION_TYPE_FLOAT; } } #line 3588 "grammar.c" /* yacc.c:1646 */ break; case 115: #line 1860 "grammar.y" /* yacc.c:1646 */ { CHECK_TYPE((yyvsp[-2].expression), EXPRESSION_TYPE_INTEGER, "%"); CHECK_TYPE((yyvsp[0].expression), EXPRESSION_TYPE_INTEGER, "%"); yr_parser_emit(yyscanner, OP_MOD, NULL); if ((yyvsp[0].expression).value.integer != 0) { (yyval.expression).value.integer = OPERATION(%, (yyvsp[-2].expression).value.integer, 
(yyvsp[0].expression).value.integer); (yyval.expression).type = EXPRESSION_TYPE_INTEGER; } else { compiler->last_result = ERROR_DIVISION_BY_ZERO; ERROR_IF(compiler->last_result != ERROR_SUCCESS); } } #line 3610 "grammar.c" /* yacc.c:1646 */ break; case 116: #line 1878 "grammar.y" /* yacc.c:1646 */ { CHECK_TYPE((yyvsp[-2].expression), EXPRESSION_TYPE_INTEGER, "^"); CHECK_TYPE((yyvsp[0].expression), EXPRESSION_TYPE_INTEGER, "^"); yr_parser_emit(yyscanner, OP_BITWISE_XOR, NULL); (yyval.expression).type = EXPRESSION_TYPE_INTEGER; (yyval.expression).value.integer = OPERATION(^, (yyvsp[-2].expression).value.integer, (yyvsp[0].expression).value.integer); } #line 3624 "grammar.c" /* yacc.c:1646 */ break; case 117: #line 1888 "grammar.y" /* yacc.c:1646 */ { CHECK_TYPE((yyvsp[-2].expression), EXPRESSION_TYPE_INTEGER, "^"); CHECK_TYPE((yyvsp[0].expression), EXPRESSION_TYPE_INTEGER, "^"); yr_parser_emit(yyscanner, OP_BITWISE_AND, NULL); (yyval.expression).type = EXPRESSION_TYPE_INTEGER; (yyval.expression).value.integer = OPERATION(&, (yyvsp[-2].expression).value.integer, (yyvsp[0].expression).value.integer); } #line 3638 "grammar.c" /* yacc.c:1646 */ break; case 118: #line 1898 "grammar.y" /* yacc.c:1646 */ { CHECK_TYPE((yyvsp[-2].expression), EXPRESSION_TYPE_INTEGER, "|"); CHECK_TYPE((yyvsp[0].expression), EXPRESSION_TYPE_INTEGER, "|"); yr_parser_emit(yyscanner, OP_BITWISE_OR, NULL); (yyval.expression).type = EXPRESSION_TYPE_INTEGER; (yyval.expression).value.integer = OPERATION(|, (yyvsp[-2].expression).value.integer, (yyvsp[0].expression).value.integer); } #line 3652 "grammar.c" /* yacc.c:1646 */ break; case 119: #line 1908 "grammar.y" /* yacc.c:1646 */ { CHECK_TYPE((yyvsp[0].expression), EXPRESSION_TYPE_INTEGER, "~"); yr_parser_emit(yyscanner, OP_BITWISE_NOT, NULL); (yyval.expression).type = EXPRESSION_TYPE_INTEGER; (yyval.expression).value.integer = ((yyvsp[0].expression).value.integer == UNDEFINED) ? UNDEFINED : ~((yyvsp[0].expression).value.integer); } #line 3666 "grammar.c" /* yacc.c:1646 */ break; case 120: #line 1918 "grammar.y" /* yacc.c:1646 */ { CHECK_TYPE((yyvsp[-2].expression), EXPRESSION_TYPE_INTEGER, "<<"); CHECK_TYPE((yyvsp[0].expression), EXPRESSION_TYPE_INTEGER, "<<"); yr_parser_emit(yyscanner, OP_SHL, NULL); (yyval.expression).type = EXPRESSION_TYPE_INTEGER; (yyval.expression).value.integer = OPERATION(<<, (yyvsp[-2].expression).value.integer, (yyvsp[0].expression).value.integer); } #line 3680 "grammar.c" /* yacc.c:1646 */ break; case 121: #line 1928 "grammar.y" /* yacc.c:1646 */ { CHECK_TYPE((yyvsp[-2].expression), EXPRESSION_TYPE_INTEGER, ">>"); CHECK_TYPE((yyvsp[0].expression), EXPRESSION_TYPE_INTEGER, ">>"); yr_parser_emit(yyscanner, OP_SHR, NULL); (yyval.expression).type = EXPRESSION_TYPE_INTEGER; (yyval.expression).value.integer = OPERATION(>>, (yyvsp[-2].expression).value.integer, (yyvsp[0].expression).value.integer); } #line 3694 "grammar.c" /* yacc.c:1646 */ break; case 122: #line 1938 "grammar.y" /* yacc.c:1646 */ { (yyval.expression) = (yyvsp[0].expression); } #line 3702 "grammar.c" /* yacc.c:1646 */ break; #line 3706 "grammar.c" /* yacc.c:1646 */ default: break; } /* User semantic actions sometimes alter yychar, and that requires that yytoken be updated with the new translation. We take the approach of translating immediately before every use of yytoken. 
One alternative is translating here after every semantic action, but that translation would be missed if the semantic action invokes YYABORT, YYACCEPT, or YYERROR immediately after altering yychar or if it invokes YYBACKUP. In the case of YYABORT or YYACCEPT, an incorrect destructor might then be invoked immediately. In the case of YYERROR or YYBACKUP, subsequent parser actions might lead to an incorrect destructor call or verbose syntax error message before the lookahead is translated. */ YY_SYMBOL_PRINT ("-> $$ =", yyr1[yyn], &yyval, &yyloc); YYPOPSTACK (yylen); yylen = 0; YY_STACK_PRINT (yyss, yyssp); *++yyvsp = yyval; /* Now 'shift' the result of the reduction. Determine what state that goes to, based on the state we popped back to and the rule number reduced by. */ yyn = yyr1[yyn]; yystate = yypgoto[yyn - YYNTOKENS] + *yyssp; if (0 <= yystate && yystate <= YYLAST && yycheck[yystate] == *yyssp) yystate = yytable[yystate]; else yystate = yydefgoto[yyn - YYNTOKENS]; goto yynewstate; /*--------------------------------------. | yyerrlab -- here on detecting error. | `--------------------------------------*/ yyerrlab: /* Make sure we have latest lookahead translation. See comments at user semantic actions for why this is necessary. */ yytoken = yychar == YYEMPTY ? YYEMPTY : YYTRANSLATE (yychar); /* If not already recovering from an error, report this error. */ if (!yyerrstatus) { ++yynerrs; #if ! YYERROR_VERBOSE yyerror (yyscanner, compiler, YY_("syntax error")); #else # define YYSYNTAX_ERROR yysyntax_error (&yymsg_alloc, &yymsg, \ yyssp, yytoken) { char const *yymsgp = YY_("syntax error"); int yysyntax_error_status; yysyntax_error_status = YYSYNTAX_ERROR; if (yysyntax_error_status == 0) yymsgp = yymsg; else if (yysyntax_error_status == 1) { if (yymsg != yymsgbuf) YYSTACK_FREE (yymsg); yymsg = (char *) YYSTACK_ALLOC (yymsg_alloc); if (!yymsg) { yymsg = yymsgbuf; yymsg_alloc = sizeof yymsgbuf; yysyntax_error_status = 2; } else { yysyntax_error_status = YYSYNTAX_ERROR; yymsgp = yymsg; } } yyerror (yyscanner, compiler, yymsgp); if (yysyntax_error_status == 2) goto yyexhaustedlab; } # undef YYSYNTAX_ERROR #endif } if (yyerrstatus == 3) { /* If just tried and failed to reuse lookahead token after an error, discard it. */ if (yychar <= YYEOF) { /* Return failure if at end of input. */ if (yychar == YYEOF) YYABORT; } else { yydestruct ("Error: discarding", yytoken, &yylval, yyscanner, compiler); yychar = YYEMPTY; } } /* Else will try to reuse lookahead token after shifting the error token. */ goto yyerrlab1; /*---------------------------------------------------. | yyerrorlab -- error raised explicitly by YYERROR. | `---------------------------------------------------*/ yyerrorlab: /* Pacify compilers like GCC when the user code never invokes YYERROR and the label yyerrorlab therefore never appears in user code. */ if (/*CONSTCOND*/ 0) goto yyerrorlab; /* Do not reclaim the symbols of the rule whose action triggered this YYERROR. */ YYPOPSTACK (yylen); yylen = 0; YY_STACK_PRINT (yyss, yyssp); yystate = *yyssp; goto yyerrlab1; /*-------------------------------------------------------------. | yyerrlab1 -- common code for both syntax error and YYERROR. | `-------------------------------------------------------------*/ yyerrlab1: yyerrstatus = 3; /* Each real token shifted decrements this. 
*/ for (;;) { yyn = yypact[yystate]; if (!yypact_value_is_default (yyn)) { yyn += YYTERROR; if (0 <= yyn && yyn <= YYLAST && yycheck[yyn] == YYTERROR) { yyn = yytable[yyn]; if (0 < yyn) break; } } /* Pop the current state because it cannot handle the error token. */ if (yyssp == yyss) YYABORT; yydestruct ("Error: popping", yystos[yystate], yyvsp, yyscanner, compiler); YYPOPSTACK (1); yystate = *yyssp; YY_STACK_PRINT (yyss, yyssp); } YY_IGNORE_MAYBE_UNINITIALIZED_BEGIN *++yyvsp = yylval; YY_IGNORE_MAYBE_UNINITIALIZED_END /* Shift the error token. */ YY_SYMBOL_PRINT ("Shifting", yystos[yyn], yyvsp, yylsp); yystate = yyn; goto yynewstate; /*-------------------------------------. | yyacceptlab -- YYACCEPT comes here. | `-------------------------------------*/ yyacceptlab: yyresult = 0; goto yyreturn; /*-----------------------------------. | yyabortlab -- YYABORT comes here. | `-----------------------------------*/ yyabortlab: yyresult = 1; goto yyreturn; #if !defined yyoverflow || YYERROR_VERBOSE /*-------------------------------------------------. | yyexhaustedlab -- memory exhaustion comes here. | `-------------------------------------------------*/ yyexhaustedlab: yyerror (yyscanner, compiler, YY_("memory exhausted")); yyresult = 2; /* Fall through. */ #endif yyreturn: if (yychar != YYEMPTY) { /* Make sure we have latest lookahead translation. See comments at user semantic actions for why this is necessary. */ yytoken = YYTRANSLATE (yychar); yydestruct ("Cleanup: discarding lookahead", yytoken, &yylval, yyscanner, compiler); } /* Do not reclaim the symbols of the rule whose action triggered this YYABORT or YYACCEPT. */ YYPOPSTACK (yylen); YY_STACK_PRINT (yyss, yyssp); while (yyssp != yyss) { yydestruct ("Cleanup: popping", yystos[*yyssp], yyvsp, yyscanner, compiler); YYPOPSTACK (1); } #ifndef yyoverflow if (yyss != yyssa) YYSTACK_FREE (yyss); #endif #if YYERROR_VERBOSE if (yymsg != yymsgbuf) YYSTACK_FREE (yymsg); #endif return yyresult; }
120,882,024,126,152,870,000,000,000,000,000,000,000
None
null
[ "CWE-125" ]
CVE-2017-5923
libyara/grammar.y in YARA 3.5.0 allows remote attackers to cause a denial of service (heap-based out-of-bounds read and application crash) via a crafted rule that is mishandled in the yara_yyparse function.
https://nvd.nist.gov/vuln/detail/CVE-2017-5923
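The generated parser code quoted above (cases 70-73) uses a jump-fixup pattern: OP_JFALSE/OP_JTRUE is emitted with a placeholder destination, a YR_FIXUP node records where that placeholder lives, and the slot is patched once the address just past the OP_AND/OP_OR opcode is known. A minimal standalone sketch of that pattern follows; the helper names (emit_op, emit_jump_placeholder, patch_last_jump) and the fixed-size code buffer are inventions of this example, not YARA APIs.

/* Illustrative sketch only: emit a jump with an unknown destination,
 * remember the placeholder on a fixup stack, patch it later. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct fixup {
    size_t placeholder_off;          /* offset of the 8-byte destination slot */
    struct fixup *next;
} fixup_t;

static uint8_t  code[256];
static size_t   code_len = 0;
static fixup_t *fixups   = NULL;

static size_t emit_op(uint8_t op) {  /* append one opcode byte, return its offset */
    size_t at = code_len;
    code[code_len++] = op;
    return at;
}

static void emit_jump_placeholder(uint8_t op) {
    emit_op(op);
    fixup_t *f = malloc(sizeof(*f));
    if (f == NULL)
        exit(1);
    f->placeholder_off = code_len;   /* destination is not known yet */
    memset(code + code_len, 0, sizeof(uint64_t));
    code_len += sizeof(uint64_t);
    f->next = fixups;                /* push onto the fixup stack */
    fixups = f;
}

static void patch_last_jump(size_t dest) {  /* pop one fixup and patch its slot */
    fixup_t *f = fixups;
    fixups = f->next;
    uint64_t d = (uint64_t) dest;
    memcpy(code + f->placeholder_off, &d, sizeof(d));
    free(f);
}

int main(void) {
    emit_jump_placeholder(0x10 /* pretend OP_JFALSE */);
    /* ... code for the right-hand operand would be emitted here ... */
    size_t and_addr = emit_op(0x20 /* pretend OP_AND */);
    patch_last_jump(and_addr + 1);   /* the jump lands just past the AND opcode */
    printf("emitted %zu bytes\n", code_len);
    return 0;
}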
9,480
libarchive
98dcbbf0bf4854bf987557e55e55fff7abbf3ea9
https://github.com/libarchive/libarchive
https://github.com/libarchive/libarchive/commit/98dcbbf0bf4854bf987557e55e55fff7abbf3ea9
Fail with negative lha->compsize in lha_read_file_header_1() Fixes a heap buffer overflow reported in Secunia SA74169
1
lha_read_file_header_1(struct archive_read *a, struct lha *lha) { const unsigned char *p; size_t extdsize; int i, err, err2; int namelen, padding; unsigned char headersum, sum_calculated; err = ARCHIVE_OK; if ((p = __archive_read_ahead(a, H1_FIXED_SIZE, NULL)) == NULL) return (truncated_error(a)); lha->header_size = p[H1_HEADER_SIZE_OFFSET] + 2; headersum = p[H1_HEADER_SUM_OFFSET]; /* Note: An extended header size is included in a compsize. */ lha->compsize = archive_le32dec(p + H1_COMP_SIZE_OFFSET); lha->origsize = archive_le32dec(p + H1_ORIG_SIZE_OFFSET); lha->mtime = lha_dos_time(p + H1_DOS_TIME_OFFSET); namelen = p[H1_NAME_LEN_OFFSET]; /* Calculate a padding size. The result will be normally 0 only(?) */ padding = ((int)lha->header_size) - H1_FIXED_SIZE - namelen; if (namelen > 230 || padding < 0) goto invalid; if ((p = __archive_read_ahead(a, lha->header_size, NULL)) == NULL) return (truncated_error(a)); for (i = 0; i < namelen; i++) { if (p[i + H1_FILE_NAME_OFFSET] == 0xff) goto invalid;/* Invalid filename. */ } archive_strncpy(&lha->filename, p + H1_FILE_NAME_OFFSET, namelen); lha->crc = archive_le16dec(p + H1_FILE_NAME_OFFSET + namelen); lha->setflag |= CRC_IS_SET; sum_calculated = lha_calcsum(0, p, 2, lha->header_size - 2); /* Consume used bytes but not include `next header size' data * since it will be consumed in lha_read_file_extended_header(). */ __archive_read_consume(a, lha->header_size - 2); /* Read extended headers */ err2 = lha_read_file_extended_header(a, lha, NULL, 2, (size_t)(lha->compsize + 2), &extdsize); if (err2 < ARCHIVE_WARN) return (err2); if (err2 < err) err = err2; /* Get a real compressed file size. */ lha->compsize -= extdsize - 2; if (sum_calculated != headersum) { archive_set_error(&a->archive, ARCHIVE_ERRNO_MISC, "LHa header sum error"); return (ARCHIVE_FATAL); } return (err); invalid: archive_set_error(&a->archive, ARCHIVE_ERRNO_FILE_FORMAT, "Invalid LHa header"); return (ARCHIVE_FATAL); }
48,954,952,310,354,580,000,000,000,000,000,000,000
archive_read_support_format_lha.c
200,846,349,839,688,150,000,000,000,000,000,000,000
[ "CWE-125" ]
CVE-2017-5601
An error in the lha_read_file_header_1() function (archive_read_support_format_lha.c) in libarchive 3.2.2 allows remote attackers to trigger an out-of-bounds read memory access and subsequently cause a crash via a specially crafted archive.
https://nvd.nist.gov/vuln/detail/CVE-2017-5601
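The commit message above says the fix is to fail when lha->compsize is negative, since that value later feeds a size computation ((size_t)(lha->compsize + 2)) passed to the extended-header reader. Below is a minimal sketch of that kind of rejection check, assuming an int64_t compsize field; it is illustrative only, not the verbatim libarchive patch.

/* Illustrative sketch: reject a negative compressed size before it is
 * cast to size_t and used as a read length. */
#include <stdint.h>
#include <stdio.h>

struct lha_hdr { int64_t compsize; };

static int validate_compsize(const struct lha_hdr *lha) {
    /* A negative value would wrap to a huge size_t in
     * (size_t)(lha->compsize + 2) and drive an out-of-bounds read. */
    if (lha->compsize < 0)
        return -1;                    /* treat the header as invalid */
    return 0;
}

int main(void) {
    struct lha_hdr h = { .compsize = -3 };
    printf("header %s\n", validate_compsize(&h) ? "rejected" : "accepted");
    return 0;
}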
9,481
linux
6b8ac63847bc2f958dd93c09edc941a0118992d9
https://github.com/torvalds/linux
https://github.com/torvalds/linux/commit/6b8ac63847bc2f958dd93c09edc941a0118992d9
drm/vc4: Return -EINVAL on the overflow checks failing. By failing to set the errno, we'd continue on to trying to set up the RCL, and then oops on trying to dereference the tile_bo that binning validation should have set up. Reported-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Eric Anholt <eric@anholt.net> Fixes: d5b1a78a772f ("drm/vc4: Add support for drawing 3D frames.")
1
vc4_get_bcl(struct drm_device *dev, struct vc4_exec_info *exec) { struct drm_vc4_submit_cl *args = exec->args; void *temp = NULL; void *bin; int ret = 0; uint32_t bin_offset = 0; uint32_t shader_rec_offset = roundup(bin_offset + args->bin_cl_size, 16); uint32_t uniforms_offset = shader_rec_offset + args->shader_rec_size; uint32_t exec_size = uniforms_offset + args->uniforms_size; uint32_t temp_size = exec_size + (sizeof(struct vc4_shader_state) * args->shader_rec_count); struct vc4_bo *bo; if (shader_rec_offset < args->bin_cl_size || uniforms_offset < shader_rec_offset || exec_size < uniforms_offset || args->shader_rec_count >= (UINT_MAX / sizeof(struct vc4_shader_state)) || temp_size < exec_size) { DRM_ERROR("overflow in exec arguments\n"); goto fail; } /* Allocate space where we'll store the copied in user command lists * and shader records. * * We don't just copy directly into the BOs because we need to * read the contents back for validation, and I think the * bo->vaddr is uncached access. */ temp = drm_malloc_ab(temp_size, 1); if (!temp) { DRM_ERROR("Failed to allocate storage for copying " "in bin/render CLs.\n"); ret = -ENOMEM; goto fail; } bin = temp + bin_offset; exec->shader_rec_u = temp + shader_rec_offset; exec->uniforms_u = temp + uniforms_offset; exec->shader_state = temp + exec_size; exec->shader_state_size = args->shader_rec_count; if (copy_from_user(bin, (void __user *)(uintptr_t)args->bin_cl, args->bin_cl_size)) { ret = -EFAULT; goto fail; } if (copy_from_user(exec->shader_rec_u, (void __user *)(uintptr_t)args->shader_rec, args->shader_rec_size)) { ret = -EFAULT; goto fail; } if (copy_from_user(exec->uniforms_u, (void __user *)(uintptr_t)args->uniforms, args->uniforms_size)) { ret = -EFAULT; goto fail; } bo = vc4_bo_create(dev, exec_size, true); if (IS_ERR(bo)) { DRM_ERROR("Couldn't allocate BO for binning\n"); ret = PTR_ERR(bo); goto fail; } exec->exec_bo = &bo->base; list_add_tail(&to_vc4_bo(&exec->exec_bo->base)->unref_head, &exec->unref_list); exec->ct0ca = exec->exec_bo->paddr + bin_offset; exec->bin_u = bin; exec->shader_rec_v = exec->exec_bo->vaddr + shader_rec_offset; exec->shader_rec_p = exec->exec_bo->paddr + shader_rec_offset; exec->shader_rec_size = args->shader_rec_size; exec->uniforms_v = exec->exec_bo->vaddr + uniforms_offset; exec->uniforms_p = exec->exec_bo->paddr + uniforms_offset; exec->uniforms_size = args->uniforms_size; ret = vc4_validate_bin_cl(dev, exec->exec_bo->vaddr + bin_offset, bin, exec); if (ret) goto fail; ret = vc4_validate_shader_recs(dev, exec); if (ret) goto fail; /* Block waiting on any previous rendering into the CS's VBO, * IB, or textures, so that pixels are actually written by the * time we try to read them. */ ret = vc4_wait_for_seqno(dev, exec->bin_dep_seqno, ~0ull, true); fail: drm_free_large(temp); return ret; }
231,214,910,622,587,800,000,000,000,000,000,000,000
vc4_gem.c
130,965,582,702,580,360,000,000,000,000,000,000,000
[ "CWE-388" ]
CVE-2017-5577
The vc4_get_bcl function in drivers/gpu/drm/vc4/vc4_gem.c in the VideoCore DRM driver in the Linux kernel before 4.9.7 does not set an errno value upon certain overflow detections, which allows local users to cause a denial of service (incorrect pointer dereference and OOPS) via inconsistent size values in a VC4_SUBMIT_CL ioctl call.
https://nvd.nist.gov/vuln/detail/CVE-2017-5577
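The quoted vc4_get_bcl() jumps to its fail label on the overflow check with ret still 0, so the caller treats the failed setup as success; the commit message says the fix is to return -EINVAL there. The self-contained toy below shows the same goto-cleanup shape with the error code set explicitly; it is not kernel code.

/* Illustrative sketch of the bug class: an overflow check that reaches the
 * cleanup label without setting an error code makes the caller see 0. */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>

static int setup(uint32_t a, uint32_t b, uint32_t *out_size) {
    int ret = 0;
    uint64_t total = (uint64_t)a + b;

    if (total > UINT32_MAX) {
        ret = -EINVAL;      /* the fix: report the failure explicitly */
        goto fail;          /* the bug was jumping here with ret still 0 */
    }
    *out_size = (uint32_t)total;
fail:
    return ret;
}

int main(void) {
    uint32_t size = 0;
    int ret = setup(UINT32_MAX, 16, &size);
    printf("setup returned %d\n", ret);   /* -EINVAL, not 0 */
    return 0;
}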
9,482
ImageMagick
91cc3f36f2ccbd485a0456bab9aebe63b635da88
https://github.com/ImageMagick/ImageMagick
https://github.com/ImageMagick/ImageMagick/commit/91cc3f36f2ccbd485a0456bab9aebe63b635da88
https://github.com/ImageMagick/ImageMagick/issues/348
1
static size_t WritePSDChannel(const PSDInfo *psd_info, const ImageInfo *image_info,Image *image,Image *next_image, const QuantumType quantum_type, unsigned char *compact_pixels, MagickOffsetType size_offset,const MagickBooleanType separate, ExceptionInfo *exception) { int y; MagickBooleanType monochrome; QuantumInfo *quantum_info; register const Quantum *p; register ssize_t i; size_t count, length; unsigned char *pixels; #ifdef MAGICKCORE_ZLIB_DELEGATE #define CHUNK 16384 int flush, level; unsigned char *compressed_pixels; z_stream stream; compressed_pixels=(unsigned char *) NULL; flush=Z_NO_FLUSH; #endif count=0; if (separate != MagickFalse) { size_offset=TellBlob(image)+2; count+=WriteCompressionStart(psd_info,image,next_image,1); } if (next_image->depth > 8) next_image->depth=16; monochrome=IsImageMonochrome(image) && (image->depth == 1) ? MagickTrue : MagickFalse; quantum_info=AcquireQuantumInfo(image_info,image); if (quantum_info == (QuantumInfo *) NULL) return(0); pixels=(unsigned char *) GetQuantumPixels(quantum_info); #ifdef MAGICKCORE_ZLIB_DELEGATE if (next_image->compression == ZipCompression) { compressed_pixels=(unsigned char *) AcquireQuantumMemory(CHUNK, sizeof(*compressed_pixels)); if (compressed_pixels == (unsigned char *) NULL) { quantum_info=DestroyQuantumInfo(quantum_info); return(0); } ResetMagickMemory(&stream,0,sizeof(stream)); stream.data_type=Z_BINARY; level=Z_DEFAULT_COMPRESSION; if ((image_info->quality > 0 && image_info->quality < 10)) level=(int) image_info->quality; if (deflateInit(&stream,level) != Z_OK) { quantum_info=DestroyQuantumInfo(quantum_info); return(0); } } #endif for (y=0; y < (ssize_t) next_image->rows; y++) { p=GetVirtualPixels(next_image,0,y,next_image->columns,1,exception); if (p == (const Quantum *) NULL) break; length=ExportQuantumPixels(next_image,(CacheView *) NULL,quantum_info, quantum_type,pixels,exception); if (monochrome != MagickFalse) for (i=0; i < (ssize_t) length; i++) pixels[i]=(~pixels[i]); if (next_image->compression == RLECompression) { length=PSDPackbitsEncodeImage(image,length,pixels,compact_pixels, exception); count+=WriteBlob(image,length,compact_pixels); size_offset+=WritePSDOffset(psd_info,image,length,size_offset); } #ifdef MAGICKCORE_ZLIB_DELEGATE else if (next_image->compression == ZipCompression) { stream.avail_in=(uInt) length; stream.next_in=(Bytef *) pixels; if (y == (ssize_t) next_image->rows-1) flush=Z_FINISH; do { stream.avail_out=(uInt) CHUNK; stream.next_out=(Bytef *) compressed_pixels; if (deflate(&stream,flush) == Z_STREAM_ERROR) break; length=(size_t) CHUNK-stream.avail_out; if (length > 0) count+=WriteBlob(image,length,compressed_pixels); } while (stream.avail_out == 0); } #endif else count+=WriteBlob(image,length,pixels); } #ifdef MAGICKCORE_ZLIB_DELEGATE if (next_image->compression == ZipCompression) { (void) deflateEnd(&stream); compressed_pixels=(unsigned char *) RelinquishMagickMemory( compressed_pixels); } #endif quantum_info=DestroyQuantumInfo(quantum_info); return(count); }
118,935,701,709,361,370,000,000,000,000,000,000,000
psd.c
178,433,933,849,726,840,000,000,000,000,000,000,000
[ "CWE-787" ]
CVE-2017-5510
coders/psd.c in ImageMagick allows remote attackers to have unspecified impact via a crafted PSD file, which triggers an out-of-bounds write.
https://nvd.nist.gov/vuln/detail/CVE-2017-5510
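For the ZipCompression branch of the quoted WritePSDChannel(), each scanline is fed to zlib and flushed through a fixed CHUNK-sized output buffer in a deflate loop. A minimal standalone sketch of that loop follows; write_out() stands in for WriteBlob() and the single pretend scanline is an assumption of the example.

/* Illustrative sketch of the streaming deflate loop: compress input into a
 * CHUNK-sized buffer and flush whatever deflate() produced each pass. */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

#define CHUNK 16384

static void write_out(const unsigned char *buf, size_t len) {
    fwrite(buf, 1, len, stdout);      /* stand-in for WriteBlob() */
}

int main(void) {
    unsigned char row[512];           /* pretend scanline */
    unsigned char out[CHUNK];
    memset(row, 'A', sizeof(row));

    z_stream strm;
    memset(&strm, 0, sizeof(strm));
    if (deflateInit(&strm, Z_DEFAULT_COMPRESSION) != Z_OK)
        return 1;

    strm.avail_in = sizeof(row);
    strm.next_in  = row;
    int flush = Z_FINISH;             /* last (and only) row */
    do {
        strm.avail_out = CHUNK;
        strm.next_out  = out;
        if (deflate(&strm, flush) == Z_STREAM_ERROR)
            break;
        size_t produced = CHUNK - strm.avail_out;
        if (produced > 0)
            write_out(out, produced); /* never more than CHUNK bytes per pass */
    } while (strm.avail_out == 0);

    deflateEnd(&strm);
    return 0;
}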
9,489
openjpeg
d27ccf01c68a31ad62b33d2dc1ba2bb1eeaafe7b
https://github.com/uclouvain/openjpeg
https://github.com/uclouvain/openjpeg/commit/d27ccf01c68a31ad62b33d2dc1ba2bb1eeaafe7b
Avoid division by zero in opj_pi_next_rpcl, opj_pi_next_pcrl and opj_pi_next_cprl (#938) Fixes issues with id:000026,sig:08,src:002419,op:int32,pos:60,val:+32 and id:000019,sig:08,src:001098,op:flip1,pos:49
1
static OPJ_BOOL opj_pi_next_cprl(opj_pi_iterator_t * pi) { opj_pi_comp_t *comp = NULL; opj_pi_resolution_t *res = NULL; OPJ_UINT32 index = 0; if (!pi->first) { comp = &pi->comps[pi->compno]; goto LABEL_SKIP; } else { pi->first = 0; } for (pi->compno = pi->poc.compno0; pi->compno < pi->poc.compno1; pi->compno++) { OPJ_UINT32 resno; comp = &pi->comps[pi->compno]; pi->dx = 0; pi->dy = 0; for (resno = 0; resno < comp->numresolutions; resno++) { OPJ_UINT32 dx, dy; res = &comp->resolutions[resno]; dx = comp->dx * (1u << (res->pdx + comp->numresolutions - 1 - resno)); dy = comp->dy * (1u << (res->pdy + comp->numresolutions - 1 - resno)); pi->dx = !pi->dx ? dx : opj_uint_min(pi->dx, dx); pi->dy = !pi->dy ? dy : opj_uint_min(pi->dy, dy); } if (!pi->tp_on) { pi->poc.ty0 = pi->ty0; pi->poc.tx0 = pi->tx0; pi->poc.ty1 = pi->ty1; pi->poc.tx1 = pi->tx1; } for (pi->y = pi->poc.ty0; pi->y < pi->poc.ty1; pi->y += (OPJ_INT32)(pi->dy - (OPJ_UINT32)(pi->y % (OPJ_INT32)pi->dy))) { for (pi->x = pi->poc.tx0; pi->x < pi->poc.tx1; pi->x += (OPJ_INT32)(pi->dx - (OPJ_UINT32)(pi->x % (OPJ_INT32)pi->dx))) { for (pi->resno = pi->poc.resno0; pi->resno < opj_uint_min(pi->poc.resno1, comp->numresolutions); pi->resno++) { OPJ_UINT32 levelno; OPJ_INT32 trx0, try0; OPJ_INT32 trx1, try1; OPJ_UINT32 rpx, rpy; OPJ_INT32 prci, prcj; res = &comp->resolutions[pi->resno]; levelno = comp->numresolutions - 1 - pi->resno; trx0 = opj_int_ceildiv(pi->tx0, (OPJ_INT32)(comp->dx << levelno)); try0 = opj_int_ceildiv(pi->ty0, (OPJ_INT32)(comp->dy << levelno)); trx1 = opj_int_ceildiv(pi->tx1, (OPJ_INT32)(comp->dx << levelno)); try1 = opj_int_ceildiv(pi->ty1, (OPJ_INT32)(comp->dy << levelno)); rpx = res->pdx + levelno; rpy = res->pdy + levelno; if (!((pi->y % (OPJ_INT32)(comp->dy << rpy) == 0) || ((pi->y == pi->ty0) && ((try0 << levelno) % (1 << rpy))))) { continue; } if (!((pi->x % (OPJ_INT32)(comp->dx << rpx) == 0) || ((pi->x == pi->tx0) && ((trx0 << levelno) % (1 << rpx))))) { continue; } if ((res->pw == 0) || (res->ph == 0)) { continue; } if ((trx0 == trx1) || (try0 == try1)) { continue; } prci = opj_int_floordivpow2(opj_int_ceildiv(pi->x, (OPJ_INT32)(comp->dx << levelno)), (OPJ_INT32)res->pdx) - opj_int_floordivpow2(trx0, (OPJ_INT32)res->pdx); prcj = opj_int_floordivpow2(opj_int_ceildiv(pi->y, (OPJ_INT32)(comp->dy << levelno)), (OPJ_INT32)res->pdy) - opj_int_floordivpow2(try0, (OPJ_INT32)res->pdy); pi->precno = (OPJ_UINT32)(prci + prcj * (OPJ_INT32)res->pw); for (pi->layno = pi->poc.layno0; pi->layno < pi->poc.layno1; pi->layno++) { index = pi->layno * pi->step_l + pi->resno * pi->step_r + pi->compno * pi->step_c + pi->precno * pi->step_p; if (!pi->include[index]) { pi->include[index] = 1; return OPJ_TRUE; } LABEL_SKIP: ; } } } } } return OPJ_FALSE; }
271,427,417,647,316,780,000,000,000,000,000,000,000
pi.c
270,132,454,060,460,450,000,000,000,000,000,000,000
[ "CWE-369" ]
CVE-2016-10506
Division-by-zero vulnerabilities in the functions opj_pi_next_cprl, opj_pi_next_pcrl, and opj_pi_next_rpcl in pi.c in OpenJPEG before 2.2.0 allow remote attackers to cause a denial of service (application crash) via crafted j2k files.
https://nvd.nist.gov/vuln/detail/CVE-2016-10506
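In the quoted opj_pi_next_cprl(), pi->dx and pi->dy are derived from header-controlled resolution values and then used as divisors in expressions such as pi->y % (OPJ_INT32)pi->dy, which is where the division by zero named in the commit message can occur. The sketch below shows the general shape of a zero-divisor guard around that step; it is not the actual OpenJPEG patch.

/* Illustrative sketch: refuse to advance when the step derived from the
 * header is zero instead of dividing by it. */
#include <stdint.h>
#include <stdio.h>

static int advance_position(int32_t y, uint32_t dy, int32_t *next_y) {
    if (dy == 0)                      /* crafted headers can drive dy to 0 */
        return 0;                     /* reject instead of dividing */
    *next_y = y + (int32_t)(dy - (uint32_t)(y % (int32_t)dy));
    return 1;
}

int main(void) {
    int32_t next;
    if (!advance_position(7, 0, &next))
        printf("rejected zero step\n");
    if (advance_position(7, 4, &next))
        printf("next y = %d\n", next);   /* 8 */
    return 0;
}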
9,496
libgd
69d2fd2c597ffc0c217de1238b9bf4d4bceba8e6
https://github.com/libgd/libgd
https://github.com/libgd/libgd/commit/69d2fd2c597ffc0c217de1238b9bf4d4bceba8e6
Fix #354: Signed Integer Overflow gd_io.c GD2 stores the number of horizontal and vertical chunks as words (i.e., 2-byte unsigned). These values are multiplied and assigned to an int when reading the image, which can cause integer overflows. We have to avoid that, and also make sure that either chunk count is actually greater than zero. If illegal chunk counts are detected, we bail out from reading the image.
1
_gd2GetHeader (gdIOCtxPtr in, int *sx, int *sy, int *cs, int *vers, int *fmt, int *ncx, int *ncy, t_chunk_info ** chunkIdx) { int i; int ch; char id[5]; t_chunk_info *cidx; int sidx; int nc; GD2_DBG (printf ("Reading gd2 header info\n")); for (i = 0; i < 4; i++) { ch = gdGetC (in); if (ch == EOF) { goto fail1; }; id[i] = ch; }; id[4] = 0; GD2_DBG (printf ("Got file code: %s\n", id)); /* Equiv. of 'magick'. */ if (strcmp (id, GD2_ID) != 0) { GD2_DBG (printf ("Not a valid gd2 file\n")); goto fail1; }; /* Version */ if (gdGetWord (vers, in) != 1) { goto fail1; }; GD2_DBG (printf ("Version: %d\n", *vers)); if ((*vers != 1) && (*vers != 2)) { GD2_DBG (printf ("Bad version: %d\n", *vers)); goto fail1; }; /* Image Size */ if (!gdGetWord (sx, in)) { GD2_DBG (printf ("Could not get x-size\n")); goto fail1; } if (!gdGetWord (sy, in)) { GD2_DBG (printf ("Could not get y-size\n")); goto fail1; } GD2_DBG (printf ("Image is %dx%d\n", *sx, *sy)); /* Chunk Size (pixels, not bytes!) */ if (gdGetWord (cs, in) != 1) { goto fail1; }; GD2_DBG (printf ("ChunkSize: %d\n", *cs)); if ((*cs < GD2_CHUNKSIZE_MIN) || (*cs > GD2_CHUNKSIZE_MAX)) { GD2_DBG (printf ("Bad chunk size: %d\n", *cs)); goto fail1; }; /* Data Format */ if (gdGetWord (fmt, in) != 1) { goto fail1; }; GD2_DBG (printf ("Format: %d\n", *fmt)); if ((*fmt != GD2_FMT_RAW) && (*fmt != GD2_FMT_COMPRESSED) && (*fmt != GD2_FMT_TRUECOLOR_RAW) && (*fmt != GD2_FMT_TRUECOLOR_COMPRESSED)) { GD2_DBG (printf ("Bad data format: %d\n", *fmt)); goto fail1; }; /* # of chunks wide */ if (gdGetWord (ncx, in) != 1) { goto fail1; }; GD2_DBG (printf ("%d Chunks Wide\n", *ncx)); /* # of chunks high */ if (gdGetWord (ncy, in) != 1) { goto fail1; }; GD2_DBG (printf ("%d Chunks vertically\n", *ncy)); if (gd2_compressed (*fmt)) { nc = (*ncx) * (*ncy); GD2_DBG (printf ("Reading %d chunk index entries\n", nc)); if (overflow2(sizeof(t_chunk_info), nc)) { goto fail1; } sidx = sizeof (t_chunk_info) * nc; if (sidx <= 0) { goto fail1; } cidx = gdCalloc (sidx, 1); if (cidx == NULL) { goto fail1; } for (i = 0; i < nc; i++) { if (gdGetInt (&cidx[i].offset, in) != 1) { goto fail2; }; if (gdGetInt (&cidx[i].size, in) != 1) { goto fail2; }; if (cidx[i].offset < 0 || cidx[i].size < 0) goto fail2; }; *chunkIdx = cidx; }; GD2_DBG (printf ("gd2 header complete\n")); return 1; fail2: gdFree(cidx); fail1: return 0; }
204,162,205,224,826,550,000,000,000,000,000,000,000
None
null
[ "CWE-190" ]
CVE-2016-10168
Integer overflow in gd_io.c in the GD Graphics Library (aka libgd) before 2.2.4 allows remote attackers to have unspecified impact via vectors involving the number of horizontal and vertical chunks in an image.
https://nvd.nist.gov/vuln/detail/CVE-2016-10168
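The commit message above spells out the fix: the two 16-bit chunk counts must each be greater than zero and their product must fit in an int before it is used as an allocation count. A small sketch of such a check follows; chunk_count_ok() is an invented helper, not libgd's overflow2().

/* Illustrative sketch: validate two header-supplied counts before
 * multiplying them into an allocation size. */
#include <limits.h>
#include <stdio.h>

/* Returns 1 if ncx * ncy is a usable positive int, 0 otherwise. */
static int chunk_count_ok(int ncx, int ncy, int *out_nc) {
    if (ncx <= 0 || ncy <= 0)
        return 0;                      /* zero chunks is not a valid image */
    if (ncx > INT_MAX / ncy)
        return 0;                      /* multiplication would overflow int */
    *out_nc = ncx * ncy;
    return 1;
}

int main(void) {
    int nc;
    printf("%d\n", chunk_count_ok(65535, 65535, &nc)); /* 0: would overflow */
    printf("%d\n", chunk_count_ok(4, 8, &nc));         /* 1, nc == 32 */
    return 0;
}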
9,510
ImageMagick
134463b926fa965571aa4febd61b810be5e7da05
https://github.com/ImageMagick/ImageMagick
https://github.com/ImageMagick/ImageMagick/commit/134463b926fa965571aa4febd61b810be5e7da05
https://github.com/ImageMagick/ImageMagick/issues/129
1
static Image *ReadVIFFImage(const ImageInfo *image_info, ExceptionInfo *exception) { #define VFF_CM_genericRGB 15 #define VFF_CM_ntscRGB 1 #define VFF_CM_NONE 0 #define VFF_DEP_DECORDER 0x4 #define VFF_DEP_NSORDER 0x8 #define VFF_DES_RAW 0 #define VFF_LOC_IMPLICIT 1 #define VFF_MAPTYP_NONE 0 #define VFF_MAPTYP_1_BYTE 1 #define VFF_MAPTYP_2_BYTE 2 #define VFF_MAPTYP_4_BYTE 4 #define VFF_MAPTYP_FLOAT 5 #define VFF_MAPTYP_DOUBLE 7 #define VFF_MS_NONE 0 #define VFF_MS_ONEPERBAND 1 #define VFF_MS_SHARED 3 #define VFF_TYP_BIT 0 #define VFF_TYP_1_BYTE 1 #define VFF_TYP_2_BYTE 2 #define VFF_TYP_4_BYTE 4 #define VFF_TYP_FLOAT 5 #define VFF_TYP_DOUBLE 9 typedef struct _ViffInfo { unsigned char identifier, file_type, release, version, machine_dependency, reserve[3]; char comment[512]; unsigned int rows, columns, subrows; int x_offset, y_offset; float x_bits_per_pixel, y_bits_per_pixel; unsigned int location_type, location_dimension, number_of_images, number_data_bands, data_storage_type, data_encode_scheme, map_scheme, map_storage_type, map_rows, map_columns, map_subrows, map_enable, maps_per_cycle, color_space_model; } ViffInfo; double min_value, scale_factor, value; Image *image; int bit; MagickBooleanType status; MagickSizeType number_pixels; register ssize_t x; register Quantum *q; register ssize_t i; register unsigned char *p; size_t bytes_per_pixel, max_packets, quantum; ssize_t count, y; unsigned char *pixels; unsigned long lsb_first; ViffInfo viff_info; /* Open image file. */ assert(image_info != (const ImageInfo *) NULL); assert(image_info->signature == MagickCoreSignature); if (image_info->debug != MagickFalse) (void) LogMagickEvent(TraceEvent,GetMagickModule(),"%s", image_info->filename); assert(exception != (ExceptionInfo *) NULL); assert(exception->signature == MagickCoreSignature); image=AcquireImage(image_info,exception); status=OpenBlob(image_info,image,ReadBinaryBlobMode,exception); if (status == MagickFalse) { image=DestroyImageList(image); return((Image *) NULL); } /* Read VIFF header (1024 bytes). */ count=ReadBlob(image,1,&viff_info.identifier); do { /* Verify VIFF identifier. */ if ((count != 1) || ((unsigned char) viff_info.identifier != 0xab)) ThrowReaderException(CorruptImageError,"NotAVIFFImage"); /* Initialize VIFF image. 
*/ (void) ReadBlob(image,sizeof(viff_info.file_type),&viff_info.file_type); (void) ReadBlob(image,sizeof(viff_info.release),&viff_info.release); (void) ReadBlob(image,sizeof(viff_info.version),&viff_info.version); (void) ReadBlob(image,sizeof(viff_info.machine_dependency), &viff_info.machine_dependency); (void) ReadBlob(image,sizeof(viff_info.reserve),viff_info.reserve); count=ReadBlob(image,512,(unsigned char *) viff_info.comment); viff_info.comment[511]='\0'; if (strlen(viff_info.comment) > 4) (void) SetImageProperty(image,"comment",viff_info.comment,exception); if ((viff_info.machine_dependency == VFF_DEP_DECORDER) || (viff_info.machine_dependency == VFF_DEP_NSORDER)) image->endian=LSBEndian; else image->endian=MSBEndian; viff_info.rows=ReadBlobLong(image); viff_info.columns=ReadBlobLong(image); viff_info.subrows=ReadBlobLong(image); viff_info.x_offset=(int) ReadBlobLong(image); viff_info.y_offset=(int) ReadBlobLong(image); viff_info.x_bits_per_pixel=(float) ReadBlobLong(image); viff_info.y_bits_per_pixel=(float) ReadBlobLong(image); viff_info.location_type=ReadBlobLong(image); viff_info.location_dimension=ReadBlobLong(image); viff_info.number_of_images=ReadBlobLong(image); viff_info.number_data_bands=ReadBlobLong(image); viff_info.data_storage_type=ReadBlobLong(image); viff_info.data_encode_scheme=ReadBlobLong(image); viff_info.map_scheme=ReadBlobLong(image); viff_info.map_storage_type=ReadBlobLong(image); viff_info.map_rows=ReadBlobLong(image); viff_info.map_columns=ReadBlobLong(image); viff_info.map_subrows=ReadBlobLong(image); viff_info.map_enable=ReadBlobLong(image); viff_info.maps_per_cycle=ReadBlobLong(image); viff_info.color_space_model=ReadBlobLong(image); for (i=0; i < 420; i++) (void) ReadBlobByte(image); if (EOFBlob(image) != MagickFalse) ThrowReaderException(CorruptImageError,"UnexpectedEndOfFile"); image->columns=viff_info.rows; image->rows=viff_info.columns; image->depth=viff_info.x_bits_per_pixel <= 8 ? 8UL : MAGICKCORE_QUANTUM_DEPTH; /* Verify that we can read this VIFF image. 
*/ number_pixels=(MagickSizeType) viff_info.columns*viff_info.rows; if (number_pixels != (size_t) number_pixels) ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed"); if (number_pixels == 0) ThrowReaderException(CoderError,"ImageColumnOrRowSizeIsNotSupported"); if ((viff_info.number_data_bands < 1) || (viff_info.number_data_bands > 4)) ThrowReaderException(CorruptImageError,"ImproperImageHeader"); if ((viff_info.data_storage_type != VFF_TYP_BIT) && (viff_info.data_storage_type != VFF_TYP_1_BYTE) && (viff_info.data_storage_type != VFF_TYP_2_BYTE) && (viff_info.data_storage_type != VFF_TYP_4_BYTE) && (viff_info.data_storage_type != VFF_TYP_FLOAT) && (viff_info.data_storage_type != VFF_TYP_DOUBLE)) ThrowReaderException(CoderError,"DataStorageTypeIsNotSupported"); if (viff_info.data_encode_scheme != VFF_DES_RAW) ThrowReaderException(CoderError,"DataEncodingSchemeIsNotSupported"); if ((viff_info.map_storage_type != VFF_MAPTYP_NONE) && (viff_info.map_storage_type != VFF_MAPTYP_1_BYTE) && (viff_info.map_storage_type != VFF_MAPTYP_2_BYTE) && (viff_info.map_storage_type != VFF_MAPTYP_4_BYTE) && (viff_info.map_storage_type != VFF_MAPTYP_FLOAT) && (viff_info.map_storage_type != VFF_MAPTYP_DOUBLE)) ThrowReaderException(CoderError,"MapStorageTypeIsNotSupported"); if ((viff_info.color_space_model != VFF_CM_NONE) && (viff_info.color_space_model != VFF_CM_ntscRGB) && (viff_info.color_space_model != VFF_CM_genericRGB)) ThrowReaderException(CoderError,"ColorspaceModelIsNotSupported"); if (viff_info.location_type != VFF_LOC_IMPLICIT) ThrowReaderException(CoderError,"LocationTypeIsNotSupported"); if (viff_info.number_of_images != 1) ThrowReaderException(CoderError,"NumberOfImagesIsNotSupported"); if (viff_info.map_rows == 0) viff_info.map_scheme=VFF_MS_NONE; switch ((int) viff_info.map_scheme) { case VFF_MS_NONE: { if (viff_info.number_data_bands < 3) { /* Create linear color ramp. */ if (viff_info.data_storage_type == VFF_TYP_BIT) image->colors=2; else if (viff_info.data_storage_type == VFF_MAPTYP_1_BYTE) image->colors=256UL; else image->colors=image->depth <= 8 ? 256UL : 65536UL; status=AcquireImageColormap(image,image->colors,exception); if (status == MagickFalse) ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed"); } break; } case VFF_MS_ONEPERBAND: case VFF_MS_SHARED: { unsigned char *viff_colormap; /* Allocate VIFF colormap. */ switch ((int) viff_info.map_storage_type) { case VFF_MAPTYP_1_BYTE: bytes_per_pixel=1; break; case VFF_MAPTYP_2_BYTE: bytes_per_pixel=2; break; case VFF_MAPTYP_4_BYTE: bytes_per_pixel=4; break; case VFF_MAPTYP_FLOAT: bytes_per_pixel=4; break; case VFF_MAPTYP_DOUBLE: bytes_per_pixel=8; break; default: bytes_per_pixel=1; break; } image->colors=viff_info.map_columns; if (AcquireImageColormap(image,image->colors,exception) == MagickFalse) ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed"); if (viff_info.map_rows > (viff_info.map_rows*bytes_per_pixel*sizeof(*viff_colormap))) ThrowReaderException(CorruptImageError,"ImproperImageHeader"); viff_colormap=(unsigned char *) AcquireQuantumMemory(image->colors, viff_info.map_rows*bytes_per_pixel*sizeof(*viff_colormap)); if (viff_colormap == (unsigned char *) NULL) ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed"); /* Read VIFF raster colormap. 
*/ count=ReadBlob(image,bytes_per_pixel*image->colors*viff_info.map_rows, viff_colormap); lsb_first=1; if (*(char *) &lsb_first && ((viff_info.machine_dependency != VFF_DEP_DECORDER) && (viff_info.machine_dependency != VFF_DEP_NSORDER))) switch ((int) viff_info.map_storage_type) { case VFF_MAPTYP_2_BYTE: { MSBOrderShort(viff_colormap,(bytes_per_pixel*image->colors* viff_info.map_rows)); break; } case VFF_MAPTYP_4_BYTE: case VFF_MAPTYP_FLOAT: { MSBOrderLong(viff_colormap,(bytes_per_pixel*image->colors* viff_info.map_rows)); break; } default: break; } for (i=0; i < (ssize_t) (viff_info.map_rows*image->colors); i++) { switch ((int) viff_info.map_storage_type) { case VFF_MAPTYP_2_BYTE: value=1.0*((short *) viff_colormap)[i]; break; case VFF_MAPTYP_4_BYTE: value=1.0*((int *) viff_colormap)[i]; break; case VFF_MAPTYP_FLOAT: value=((float *) viff_colormap)[i]; break; case VFF_MAPTYP_DOUBLE: value=((double *) viff_colormap)[i]; break; default: value=1.0*viff_colormap[i]; break; } if (i < (ssize_t) image->colors) { image->colormap[i].red=ScaleCharToQuantum((unsigned char) value); image->colormap[i].green= ScaleCharToQuantum((unsigned char) value); image->colormap[i].blue=ScaleCharToQuantum((unsigned char) value); } else if (i < (ssize_t) (2*image->colors)) image->colormap[i % image->colors].green= ScaleCharToQuantum((unsigned char) value); else if (i < (ssize_t) (3*image->colors)) image->colormap[i % image->colors].blue= ScaleCharToQuantum((unsigned char) value); } viff_colormap=(unsigned char *) RelinquishMagickMemory(viff_colormap); break; } default: ThrowReaderException(CoderError,"ColormapTypeNotSupported"); } /* Initialize image structure. */ image->alpha_trait=viff_info.number_data_bands == 4 ? BlendPixelTrait : UndefinedPixelTrait; image->storage_class=(viff_info.number_data_bands < 3 ? PseudoClass : DirectClass); image->columns=viff_info.rows; image->rows=viff_info.columns; if ((image_info->ping != MagickFalse) && (image_info->number_scenes != 0)) if (image->scene >= (image_info->scene+image_info->number_scenes-1)) break; status=SetImageExtent(image,image->columns,image->rows,exception); if (status == MagickFalse) return(DestroyImageList(image)); /* Allocate VIFF pixels. */ switch ((int) viff_info.data_storage_type) { case VFF_TYP_2_BYTE: bytes_per_pixel=2; break; case VFF_TYP_4_BYTE: bytes_per_pixel=4; break; case VFF_TYP_FLOAT: bytes_per_pixel=4; break; case VFF_TYP_DOUBLE: bytes_per_pixel=8; break; default: bytes_per_pixel=1; break; } if (viff_info.data_storage_type == VFF_TYP_BIT) max_packets=((image->columns+7UL) >> 3UL)*image->rows; else max_packets=(size_t) (number_pixels*viff_info.number_data_bands); pixels=(unsigned char *) AcquireQuantumMemory(MagickMax(number_pixels, max_packets),bytes_per_pixel*sizeof(*pixels)); if (pixels == (unsigned char *) NULL) ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed"); count=ReadBlob(image,bytes_per_pixel*max_packets,pixels); lsb_first=1; if (*(char *) &lsb_first && ((viff_info.machine_dependency != VFF_DEP_DECORDER) && (viff_info.machine_dependency != VFF_DEP_NSORDER))) switch ((int) viff_info.data_storage_type) { case VFF_TYP_2_BYTE: { MSBOrderShort(pixels,bytes_per_pixel*max_packets); break; } case VFF_TYP_4_BYTE: case VFF_TYP_FLOAT: { MSBOrderLong(pixels,bytes_per_pixel*max_packets); break; } default: break; } min_value=0.0; scale_factor=1.0; if ((viff_info.data_storage_type != VFF_TYP_1_BYTE) && (viff_info.map_scheme == VFF_MS_NONE)) { double max_value; /* Determine scale factor. 
*/ switch ((int) viff_info.data_storage_type) { case VFF_TYP_2_BYTE: value=1.0*((short *) pixels)[0]; break; case VFF_TYP_4_BYTE: value=1.0*((int *) pixels)[0]; break; case VFF_TYP_FLOAT: value=((float *) pixels)[0]; break; case VFF_TYP_DOUBLE: value=((double *) pixels)[0]; break; default: value=1.0*pixels[0]; break; } max_value=value; min_value=value; for (i=0; i < (ssize_t) max_packets; i++) { switch ((int) viff_info.data_storage_type) { case VFF_TYP_2_BYTE: value=1.0*((short *) pixels)[i]; break; case VFF_TYP_4_BYTE: value=1.0*((int *) pixels)[i]; break; case VFF_TYP_FLOAT: value=((float *) pixels)[i]; break; case VFF_TYP_DOUBLE: value=((double *) pixels)[i]; break; default: value=1.0*pixels[i]; break; } if (value > max_value) max_value=value; else if (value < min_value) min_value=value; } if ((min_value == 0) && (max_value == 0)) scale_factor=0; else if (min_value == max_value) { scale_factor=(double) QuantumRange/min_value; min_value=0; } else scale_factor=(double) QuantumRange/(max_value-min_value); } /* Convert pixels to Quantum size. */ p=(unsigned char *) pixels; for (i=0; i < (ssize_t) max_packets; i++) { switch ((int) viff_info.data_storage_type) { case VFF_TYP_2_BYTE: value=1.0*((short *) pixels)[i]; break; case VFF_TYP_4_BYTE: value=1.0*((int *) pixels)[i]; break; case VFF_TYP_FLOAT: value=((float *) pixels)[i]; break; case VFF_TYP_DOUBLE: value=((double *) pixels)[i]; break; default: value=1.0*pixels[i]; break; } if (viff_info.map_scheme == VFF_MS_NONE) { value=(value-min_value)*scale_factor; if (value > QuantumRange) value=QuantumRange; else if (value < 0) value=0; } *p=(unsigned char) ((Quantum) value); p++; } /* Convert VIFF raster image to pixel packets. */ p=(unsigned char *) pixels; if (viff_info.data_storage_type == VFF_TYP_BIT) { /* Convert bitmap scanline. */ for (y=0; y < (ssize_t) image->rows; y++) { q=QueueAuthenticPixels(image,0,y,image->columns,1,exception); if (q == (Quantum *) NULL) break; for (x=0; x < (ssize_t) (image->columns-7); x+=8) { for (bit=0; bit < 8; bit++) { quantum=(size_t) ((*p) & (0x01 << bit) ? 0 : 1); SetPixelRed(image,quantum == 0 ? 0 : QuantumRange,q); SetPixelGreen(image,quantum == 0 ? 0 : QuantumRange,q); SetPixelBlue(image,quantum == 0 ? 0 : QuantumRange,q); if (image->storage_class == PseudoClass) SetPixelIndex(image,(Quantum) quantum,q); q+=GetPixelChannels(image); } p++; } if ((image->columns % 8) != 0) { for (bit=0; bit < (int) (image->columns % 8); bit++) { quantum=(size_t) ((*p) & (0x01 << bit) ? 0 : 1); SetPixelRed(image,quantum == 0 ? 0 : QuantumRange,q); SetPixelGreen(image,quantum == 0 ? 0 : QuantumRange,q); SetPixelBlue(image,quantum == 0 ? 
0 : QuantumRange,q); if (image->storage_class == PseudoClass) SetPixelIndex(image,(Quantum) quantum,q); q+=GetPixelChannels(image); } p++; } if (SyncAuthenticPixels(image,exception) == MagickFalse) break; if (image->previous == (Image *) NULL) { status=SetImageProgress(image,LoadImageTag,(MagickOffsetType) y, image->rows); if (status == MagickFalse) break; } } } else if (image->storage_class == PseudoClass) for (y=0; y < (ssize_t) image->rows; y++) { q=QueueAuthenticPixels(image,0,y,image->columns,1,exception); if (q == (Quantum *) NULL) break; for (x=0; x < (ssize_t) image->columns; x++) { SetPixelIndex(image,*p++,q); q+=GetPixelChannels(image); } if (SyncAuthenticPixels(image,exception) == MagickFalse) break; if (image->previous == (Image *) NULL) { status=SetImageProgress(image,LoadImageTag,(MagickOffsetType) y, image->rows); if (status == MagickFalse) break; } } else { /* Convert DirectColor scanline. */ number_pixels=(MagickSizeType) image->columns*image->rows; for (y=0; y < (ssize_t) image->rows; y++) { q=QueueAuthenticPixels(image,0,y,image->columns,1,exception); if (q == (Quantum *) NULL) break; for (x=0; x < (ssize_t) image->columns; x++) { SetPixelRed(image,ScaleCharToQuantum(*p),q); SetPixelGreen(image,ScaleCharToQuantum(*(p+number_pixels)),q); SetPixelBlue(image,ScaleCharToQuantum(*(p+2*number_pixels)),q); if (image->colors != 0) { ssize_t index; index=(ssize_t) GetPixelRed(image,q); SetPixelRed(image,image->colormap[ ConstrainColormapIndex(image,index,exception)].red,q); index=(ssize_t) GetPixelGreen(image,q); SetPixelGreen(image,image->colormap[ ConstrainColormapIndex(image,index,exception)].green,q); index=(ssize_t) GetPixelBlue(image,q); SetPixelBlue(image,image->colormap[ ConstrainColormapIndex(image,index,exception)].blue,q); } SetPixelAlpha(image,image->alpha_trait != UndefinedPixelTrait ? ScaleCharToQuantum(*(p+number_pixels*3)) : OpaqueAlpha,q); p++; q+=GetPixelChannels(image); } if (SyncAuthenticPixels(image,exception) == MagickFalse) break; if (image->previous == (Image *) NULL) { status=SetImageProgress(image,LoadImageTag,(MagickOffsetType) y, image->rows); if (status == MagickFalse) break; } } } pixels=(unsigned char *) RelinquishMagickMemory(pixels); if (image->storage_class == PseudoClass) (void) SyncImage(image,exception); if (EOFBlob(image) != MagickFalse) { ThrowFileException(exception,CorruptImageError,"UnexpectedEndOfFile", image->filename); break; } /* Proceed to next image. */ if (image_info->number_scenes != 0) if (image->scene >= (image_info->scene+image_info->number_scenes-1)) break; count=ReadBlob(image,1,&viff_info.identifier); if ((count != 0) && (viff_info.identifier == 0xab)) { /* Allocate next image structure. */ AcquireNextImage(image_info,image,exception); if (GetNextImageInList(image) == (Image *) NULL) { image=DestroyImageList(image); return((Image *) NULL); } image=SyncNextImageInList(image); status=SetImageProgress(image,LoadImagesTag,TellBlob(image), GetBlobSize(image)); if (status == MagickFalse) break; } } while ((count != 0) && (viff_info.identifier == 0xab)); (void) CloseBlob(image); return(GetFirstImageInList(image)); }
103,434,257,928,059,800,000,000,000,000,000,000,000
viff.c
5,265,972,500,401,776,000,000,000,000,000,000,000
[ "CWE-284" ]
CVE-2016-10065
The ReadVIFFImage function in coders/viff.c in ImageMagick before 7.0.1-0 allows remote attackers to cause a denial of service (application crash) or have other unspecified impact via a crafted file.
https://nvd.nist.gov/vuln/detail/CVE-2016-10065
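The quoted ReadVIFFImage() guards the columns*rows product by computing it in MagickSizeType and checking that narrowing it to size_t preserves the value (and that it is non-zero) before allocating pixel memory. Below is a standalone sketch of that narrowing check, using plain uint64_t/size_t in place of the ImageMagick types.

/* Illustrative sketch: compute the pixel count in a wide type and verify
 * that the narrowed value round-trips before using it as an allocation size. */
#include <stdint.h>
#include <stdio.h>

static int pixel_count_ok(uint32_t columns, uint32_t rows, size_t *out) {
    uint64_t number_pixels = (uint64_t)columns * rows;
    if (number_pixels == 0)
        return 0;                               /* empty image */
    if (number_pixels != (size_t)number_pixels)
        return 0;                               /* does not fit in size_t */
    *out = (size_t)number_pixels;
    return 1;
}

int main(void) {
    size_t n;
    printf("%d\n", pixel_count_ok(4096, 4096, &n));  /* 1 */
    printf("%d\n", pixel_count_ok(0, 100, &n));      /* 0 */
    return 0;
}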
9,511
ImageMagick
10b3823a7619ed22d42764733eb052c4159bc8c1
https://github.com/ImageMagick/ImageMagick
https://github.com/ImageMagick/ImageMagick/commit/10b3823a7619ed22d42764733eb052c4159bc8c1
None
1
MagickBooleanType sixel_decode(unsigned char /* in */ *p, /* sixel bytes */ unsigned char /* out */ **pixels, /* decoded pixels */ size_t /* out */ *pwidth, /* image width */ size_t /* out */ *pheight, /* image height */ unsigned char /* out */ **palette, /* ARGB palette */ size_t /* out */ *ncolors /* palette size (<= 256) */) { int n, i, r, g, b, sixel_vertical_mask, c; int posision_x, posision_y; int max_x, max_y; int attributed_pan, attributed_pad; int attributed_ph, attributed_pv; int repeat_count, color_index, max_color_index = 2, background_color_index; int param[10]; int sixel_palet[SIXEL_PALETTE_MAX]; unsigned char *imbuf, *dmbuf; int imsx, imsy; int dmsx, dmsy; int y; posision_x = posision_y = 0; max_x = max_y = 0; attributed_pan = 2; attributed_pad = 1; attributed_ph = attributed_pv = 0; repeat_count = 1; color_index = 0; background_color_index = 0; imsx = 2048; imsy = 2048; imbuf = (unsigned char *) AcquireQuantumMemory(imsx * imsy,1); if (imbuf == NULL) { return(MagickFalse); } for (n = 0; n < 16; n++) { sixel_palet[n] = sixel_default_color_table[n]; } /* colors 16-231 are a 6x6x6 color cube */ for (r = 0; r < 6; r++) { for (g = 0; g < 6; g++) { for (b = 0; b < 6; b++) { sixel_palet[n++] = SIXEL_RGB(r * 51, g * 51, b * 51); } } } /* colors 232-255 are a grayscale ramp, intentionally leaving out */ for (i = 0; i < 24; i++) { sixel_palet[n++] = SIXEL_RGB(i * 11, i * 11, i * 11); } for (; n < SIXEL_PALETTE_MAX; n++) { sixel_palet[n] = SIXEL_RGB(255, 255, 255); } (void) ResetMagickMemory(imbuf, background_color_index, imsx * imsy); while (*p != '\0') { if ((p[0] == '\033' && p[1] == 'P') || *p == 0x90) { if (*p == '\033') { p++; } p = get_params(++p, param, &n); if (*p == 'q') { p++; if (n > 0) { /* Pn1 */ switch(param[0]) { case 0: case 1: attributed_pad = 2; break; case 2: attributed_pad = 5; break; case 3: attributed_pad = 4; break; case 4: attributed_pad = 4; break; case 5: attributed_pad = 3; break; case 6: attributed_pad = 3; break; case 7: attributed_pad = 2; break; case 8: attributed_pad = 2; break; case 9: attributed_pad = 1; break; } } if (n > 2) { /* Pn3 */ if (param[2] == 0) { param[2] = 10; } attributed_pan = attributed_pan * param[2] / 10; attributed_pad = attributed_pad * param[2] / 10; if (attributed_pan <= 0) attributed_pan = 1; if (attributed_pad <= 0) attributed_pad = 1; } } } else if ((p[0] == '\033' && p[1] == '\\') || *p == 0x9C) { break; } else if (*p == '"') { /* DECGRA Set Raster Attributes " Pan; Pad; Ph; Pv */ p = get_params(++p, param, &n); if (n > 0) attributed_pad = param[0]; if (n > 1) attributed_pan = param[1]; if (n > 2 && param[2] > 0) attributed_ph = param[2]; if (n > 3 && param[3] > 0) attributed_pv = param[3]; if (attributed_pan <= 0) attributed_pan = 1; if (attributed_pad <= 0) attributed_pad = 1; if (imsx < attributed_ph || imsy < attributed_pv) { dmsx = imsx > attributed_ph ? imsx : attributed_ph; dmsy = imsy > attributed_pv ? imsy : attributed_pv; dmbuf = (unsigned char *) AcquireQuantumMemory(dmsx * dmsy,1); if (dmbuf == (unsigned char *) NULL) { imbuf = (unsigned char *) RelinquishMagickMemory(imbuf); return (MagickFalse); } (void) ResetMagickMemory(dmbuf, background_color_index, dmsx * dmsy); for (y = 0; y < imsy; ++y) { (void) CopyMagickMemory(dmbuf + dmsx * y, imbuf + imsx * y, imsx); } imbuf = (unsigned char *) RelinquishMagickMemory(imbuf); imsx = dmsx; imsy = dmsy; imbuf = dmbuf; } } else if (*p == '!') { /* DECGRI Graphics Repeat Introducer ! 
Pn Ch */ p = get_params(++p, param, &n); if (n > 0) { repeat_count = param[0]; } } else if (*p == '#') { /* DECGCI Graphics Color Introducer # Pc; Pu; Px; Py; Pz */ p = get_params(++p, param, &n); if (n > 0) { if ((color_index = param[0]) < 0) { color_index = 0; } else if (color_index >= SIXEL_PALETTE_MAX) { color_index = SIXEL_PALETTE_MAX - 1; } } if (n > 4) { if (param[1] == 1) { /* HLS */ if (param[2] > 360) param[2] = 360; if (param[3] > 100) param[3] = 100; if (param[4] > 100) param[4] = 100; sixel_palet[color_index] = hls_to_rgb(param[2] * 100 / 360, param[3], param[4]); } else if (param[1] == 2) { /* RGB */ if (param[2] > 100) param[2] = 100; if (param[3] > 100) param[3] = 100; if (param[4] > 100) param[4] = 100; sixel_palet[color_index] = SIXEL_XRGB(param[2], param[3], param[4]); } } } else if (*p == '$') { /* DECGCR Graphics Carriage Return */ p++; posision_x = 0; repeat_count = 1; } else if (*p == '-') { /* DECGNL Graphics Next Line */ p++; posision_x = 0; posision_y += 6; repeat_count = 1; } else if (*p >= '?' && *p <= '\177') { if (imsx < (posision_x + repeat_count) || imsy < (posision_y + 6)) { int nx = imsx * 2; int ny = imsy * 2; while (nx < (posision_x + repeat_count) || ny < (posision_y + 6)) { nx *= 2; ny *= 2; } dmsx = nx; dmsy = ny; dmbuf = (unsigned char *) AcquireQuantumMemory(dmsx * dmsy,1); if (dmbuf == (unsigned char *) NULL) { imbuf = (unsigned char *) RelinquishMagickMemory(imbuf); return (MagickFalse); } (void) ResetMagickMemory(dmbuf, background_color_index, dmsx * dmsy); for (y = 0; y < imsy; ++y) { (void) CopyMagickMemory(dmbuf + dmsx * y, imbuf + imsx * y, imsx); } imbuf = (unsigned char *) RelinquishMagickMemory(imbuf); imsx = dmsx; imsy = dmsy; imbuf = dmbuf; } if (color_index > max_color_index) { max_color_index = color_index; } if ((b = *(p++) - '?') == 0) { posision_x += repeat_count; } else { sixel_vertical_mask = 0x01; if (repeat_count <= 1) { for (i = 0; i < 6; i++) { if ((b & sixel_vertical_mask) != 0) { imbuf[imsx * (posision_y + i) + posision_x] = color_index; if (max_x < posision_x) { max_x = posision_x; } if (max_y < (posision_y + i)) { max_y = posision_y + i; } } sixel_vertical_mask <<= 1; } posision_x += 1; } else { /* repeat_count > 1 */ for (i = 0; i < 6; i++) { if ((b & sixel_vertical_mask) != 0) { c = sixel_vertical_mask << 1; for (n = 1; (i + n) < 6; n++) { if ((b & c) == 0) { break; } c <<= 1; } for (y = posision_y + i; y < posision_y + i + n; ++y) { (void) ResetMagickMemory(imbuf + imsx * y + posision_x, color_index, repeat_count); } if (max_x < (posision_x + repeat_count - 1)) { max_x = posision_x + repeat_count - 1; } if (max_y < (posision_y + i + n - 1)) { max_y = posision_y + i + n - 1; } i += (n - 1); sixel_vertical_mask <<= (n - 1); } sixel_vertical_mask <<= 1; } posision_x += repeat_count; } } repeat_count = 1; } else { p++; } } if (++max_x < attributed_ph) { max_x = attributed_ph; } if (++max_y < attributed_pv) { max_y = attributed_pv; } if (imsx > max_x || imsy > max_y) { dmsx = max_x; dmsy = max_y; if ((dmbuf = (unsigned char *) AcquireQuantumMemory(dmsx * dmsy,1)) == NULL) { imbuf = (unsigned char *) RelinquishMagickMemory(imbuf); return (MagickFalse); } for (y = 0; y < dmsy; ++y) { (void) CopyMagickMemory(dmbuf + dmsx * y, imbuf + imsx * y, dmsx); } imbuf = (unsigned char *) RelinquishMagickMemory(imbuf); imsx = dmsx; imsy = dmsy; imbuf = dmbuf; } *pixels = imbuf; *pwidth = imsx; *pheight = imsy; *ncolors = max_color_index + 1; *palette = (unsigned char *) AcquireQuantumMemory(*ncolors,4); for (n = 0; n < (ssize_t) *ncolors; 
++n) { (*palette)[n * 4 + 0] = sixel_palet[n] >> 16 & 0xff; (*palette)[n * 4 + 1] = sixel_palet[n] >> 8 & 0xff; (*palette)[n * 4 + 2] = sixel_palet[n] & 0xff; (*palette)[n * 4 + 3] = 0xff; } return(MagickTrue); }
35,991,042,325,313,126,000,000,000,000,000,000,000
sixel.c
213,717,042,796,677,070,000,000,000,000,000,000,000
[ "CWE-119" ]
CVE-2016-10054
Buffer overflow in the WriteMAPImage function in coders/map.c in ImageMagick before 6.9.5-8 allows remote attackers to cause a denial of service (application crash) or have other unspecified impact via a crafted file.
https://nvd.nist.gov/vuln/detail/CVE-2016-10054
9,512
ImageMagick
9e187b73a8a1290bb0e1a1c878f8be1917aa8742
https://github.com/ImageMagick/ImageMagick
https://github.com/ImageMagick/ImageMagick/commit/9e187b73a8a1290bb0e1a1c878f8be1917aa8742
None
1
static void WriteProfile(j_compress_ptr jpeg_info,Image *image) { const char *name; const StringInfo *profile; MagickBooleanType iptc; register ssize_t i; size_t length, tag_length; StringInfo *custom_profile; /* Save image profile as a APP marker. */ iptc=MagickFalse; custom_profile=AcquireStringInfo(65535L); ResetImageProfileIterator(image); for (name=GetNextImageProfile(image); name != (const char *) NULL; ) { register unsigned char *p; profile=GetImageProfile(image,name); p=GetStringInfoDatum(custom_profile); if (LocaleCompare(name,"EXIF") == 0) for (i=0; i < (ssize_t) GetStringInfoLength(profile); i+=65533L) { length=MagickMin(GetStringInfoLength(profile)-i,65533L); jpeg_write_marker(jpeg_info,XML_MARKER,GetStringInfoDatum(profile)+i, (unsigned int) length); } if (LocaleCompare(name,"ICC") == 0) { register unsigned char *p; tag_length=strlen(ICC_PROFILE); p=GetStringInfoDatum(custom_profile); (void) CopyMagickMemory(p,ICC_PROFILE,tag_length); p[tag_length]='\0'; for (i=0; i < (ssize_t) GetStringInfoLength(profile); i+=65519L) { length=MagickMin(GetStringInfoLength(profile)-i,65519L); p[12]=(unsigned char) ((i/65519L)+1); p[13]=(unsigned char) (GetStringInfoLength(profile)/65519L+1); (void) CopyMagickMemory(p+tag_length+3,GetStringInfoDatum(profile)+i, length); jpeg_write_marker(jpeg_info,ICC_MARKER,GetStringInfoDatum( custom_profile),(unsigned int) (length+tag_length+3)); } } if (((LocaleCompare(name,"IPTC") == 0) || (LocaleCompare(name,"8BIM") == 0)) && (iptc == MagickFalse)) { size_t roundup; iptc=MagickTrue; for (i=0; i < (ssize_t) GetStringInfoLength(profile); i+=65500L) { length=MagickMin(GetStringInfoLength(profile)-i,65500L); roundup=(size_t) (length & 0x01); if (LocaleNCompare((char *) GetStringInfoDatum(profile),"8BIM",4) == 0) { (void) memcpy(p,"Photoshop 3.0 ",14); tag_length=14; } else { (void) CopyMagickMemory(p,"Photoshop 3.0 8BIM\04\04\0\0\0\0",24); tag_length=26; p[24]=(unsigned char) (length >> 8); p[25]=(unsigned char) (length & 0xff); } p[13]=0x00; (void) memcpy(p+tag_length,GetStringInfoDatum(profile)+i,length); if (roundup != 0) p[length+tag_length]='\0'; jpeg_write_marker(jpeg_info,IPTC_MARKER,GetStringInfoDatum( custom_profile),(unsigned int) (length+tag_length+roundup)); } } if (LocaleCompare(name,"XMP") == 0) { StringInfo *xmp_profile; /* Add namespace to XMP profile. */ xmp_profile=StringToStringInfo("http://ns.adobe.com/xap/1.0/ "); if (xmp_profile != (StringInfo *) NULL) { if (profile != (StringInfo *) NULL) ConcatenateStringInfo(xmp_profile,profile); GetStringInfoDatum(xmp_profile)[28]='\0'; for (i=0; i < (ssize_t) GetStringInfoLength(xmp_profile); i+=65533L) { length=MagickMin(GetStringInfoLength(xmp_profile)-i,65533L); jpeg_write_marker(jpeg_info,XML_MARKER, GetStringInfoDatum(xmp_profile)+i,(unsigned int) length); } xmp_profile=DestroyStringInfo(xmp_profile); } } (void) LogMagickEvent(CoderEvent,GetMagickModule(), "%s profile: %.20g bytes",name,(double) GetStringInfoLength(profile)); name=GetNextImageProfile(image); } custom_profile=DestroyStringInfo(custom_profile); }
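Editorial sketch: the function above assembles each APP marker payload in a fixed 65535-byte staging buffer (custom_profile) as a short tag followed by a profile chunk, so the chunk length has to be capped at the buffer capacity minus the tag overhead rather than at an independent constant. The standalone C sketch below only illustrates that capping rule; the constants and names are illustrative and this is not ImageMagick's actual patch for this record.

#include <stdio.h>

/* Illustrative capacity of the staging buffer used above; the header_len
 * argument stands for whatever tag and chunk-numbering bytes precede the
 * profile data in a given marker. */
#define STAGING_CAPACITY 65535u

static size_t safe_chunk_len(size_t remaining, size_t header_len)
{
    size_t max_chunk = STAGING_CAPACITY - header_len;   /* room left for data */
    return remaining < max_chunk ? remaining : max_chunk;
}

int main(void)
{
    /* e.g. a profile chunk preceded by 17 header bytes (illustrative). */
    printf("%zu\n", safe_chunk_len(100000, 17));   /* 65518: capped to fit   */
    printf("%zu\n", safe_chunk_len(200, 17));      /* 200: whole remainder   */
    return 0;
}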
203,461,117,257,884,900,000,000,000,000,000,000,000
None
null
[ "CWE-119" ]
CVE-2016-10052
Buffer overflow in the WriteProfile function in coders/jpeg.c in ImageMagick before 6.9.5-6 allows remote attackers to cause a denial of service (application crash) or have other unspecified impact via a crafted file.
https://nvd.nist.gov/vuln/detail/CVE-2016-10052
9,516
php-src
863d37ea66d5c960db08d6f4a2cbd2518f0f80d1
https://github.com/php/php-src
https://github.com/php/php-src/commit/863d37ea66d5c960db08d6f4a2cbd2518f0f80d1
Fix #72696: imagefilltoborder stackoverflow on truecolor images. We must not allow negative color values be passed to gdImageFillToBorder(), because that can lead to infinite recursion since the recursion termination condition will not necessarily be met.
1
void gdImageFillToBorder (gdImagePtr im, int x, int y, int border, int color) { int lastBorder; /* Seek left */ int leftLimit = -1, rightLimit; int i, restoreAlphaBlending = 0; if (border < 0) { /* Refuse to fill to a non-solid border */ return; } if (!im->trueColor) { if ((color > (im->colorsTotal - 1)) || (border > (im->colorsTotal - 1)) || (color < 0)) { return; } } restoreAlphaBlending = im->alphaBlendingFlag; im->alphaBlendingFlag = 0; if (x >= im->sx) { x = im->sx - 1; } else if (x < 0) { x = 0; } if (y >= im->sy) { y = im->sy - 1; } else if (y < 0) { y = 0; } for (i = x; i >= 0; i--) { if (gdImageGetPixel(im, i, y) == border) { break; } gdImageSetPixel(im, i, y, color); leftLimit = i; } if (leftLimit == -1) { im->alphaBlendingFlag = restoreAlphaBlending; return; } /* Seek right */ rightLimit = x; for (i = (x + 1); i < im->sx; i++) { if (gdImageGetPixel(im, i, y) == border) { break; } gdImageSetPixel(im, i, y, color); rightLimit = i; } /* Look at lines above and below and start paints */ /* Above */ if (y > 0) { lastBorder = 1; for (i = leftLimit; i <= rightLimit; i++) { int c = gdImageGetPixel(im, i, y - 1); if (lastBorder) { if ((c != border) && (c != color)) { gdImageFillToBorder(im, i, y - 1, border, color); lastBorder = 0; } } else if ((c == border) || (c == color)) { lastBorder = 1; } } } /* Below */ if (y < ((im->sy) - 1)) { lastBorder = 1; for (i = leftLimit; i <= rightLimit; i++) { int c = gdImageGetPixel(im, i, y + 1); if (lastBorder) { if ((c != border) && (c != color)) { gdImageFillToBorder(im, i, y + 1, border, color); lastBorder = 0; } } else if ((c == border) || (c == color)) { lastBorder = 1; } } } im->alphaBlendingFlag = restoreAlphaBlending; }
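Editorial sketch: the commit message above pins the stack exhaustion on negative color values reaching gdImageFillToBorder(), whose recursion only terminates when it meets the border color. The standalone C sketch below illustrates the guard it describes, rejecting negative border/color values before any recursive filling starts; the fill logic and names are a toy stand-in, not libgd's code.

#include <stdio.h>

#define IMG_W 8

/* Toy 1-D "scanline": every stored pixel is non-negative, so a negative
 * border color could never be matched by the termination test. */
static int scanline[IMG_W];

/* Illustrative fill: the first check mirrors the fix described in the
 * commit message (refuse negative color/border before recursing). */
static void fill_to_border(int x, int border, int color)
{
    if (border < 0 || color < 0)
        return;                    /* guard added by the fix */
    if (x < 0 || x >= IMG_W || scanline[x] == border)
        return;                    /* normal termination conditions */
    scanline[x] = color;
    fill_to_border(x + 1, border, color);
}

int main(void)
{
    fill_to_border(0, -2, 3);      /* rejected up front by the guard */
    fill_to_border(0, 7, 3);       /* fills until it hits bounds or border */
    for (int i = 0; i < IMG_W; i++)
        printf("%d ", scanline[i]);
    putchar('\n');
    return 0;
}

In the real gdImageFillToBorder() the negative value is the truecolor case the pre-fix code never validated, so the border test could never succeed and the recursion across neighbouring scanlines kept growing the stack.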
199,950,444,190,713,360,000,000,000,000,000,000,000
None
null
[ "CWE-119" ]
CVE-2016-9933
Stack consumption vulnerability in the gdImageFillToBorder function in gd.c in the GD Graphics Library (aka libgd) before 2.2.2, as used in PHP before 5.6.28 and 7.x before 7.0.13, allows remote attackers to cause a denial of service (segmentation violation) via a crafted imagefilltoborder call that triggers use of a negative color value.
https://nvd.nist.gov/vuln/detail/CVE-2016-9933
9,517
jasper
1abc2e5a401a4bf1d5ca4df91358ce5df111f495
https://github.com/mdadams/jasper
https://github.com/mdadams/jasper/commit/1abc2e5a401a4bf1d5ca4df91358ce5df111f495
Fixed an array overflow problem in the JPC decoder.
1
static int jpc_dec_tileinit(jpc_dec_t *dec, jpc_dec_tile_t *tile) { jpc_dec_tcomp_t *tcomp; int compno; int rlvlno; jpc_dec_rlvl_t *rlvl; jpc_dec_band_t *band; jpc_dec_prc_t *prc; int bndno; jpc_tsfb_band_t *bnd; int bandno; jpc_dec_ccp_t *ccp; int prccnt; jpc_dec_cblk_t *cblk; int cblkcnt; uint_fast32_t tlprcxstart; uint_fast32_t tlprcystart; uint_fast32_t brprcxend; uint_fast32_t brprcyend; uint_fast32_t tlcbgxstart; uint_fast32_t tlcbgystart; uint_fast32_t brcbgxend; uint_fast32_t brcbgyend; uint_fast32_t cbgxstart; uint_fast32_t cbgystart; uint_fast32_t cbgxend; uint_fast32_t cbgyend; uint_fast32_t tlcblkxstart; uint_fast32_t tlcblkystart; uint_fast32_t brcblkxend; uint_fast32_t brcblkyend; uint_fast32_t cblkxstart; uint_fast32_t cblkystart; uint_fast32_t cblkxend; uint_fast32_t cblkyend; uint_fast32_t tmpxstart; uint_fast32_t tmpystart; uint_fast32_t tmpxend; uint_fast32_t tmpyend; jpc_dec_cp_t *cp; jpc_tsfb_band_t bnds[64]; jpc_pchg_t *pchg; int pchgno; jpc_dec_cmpt_t *cmpt; cp = tile->cp; tile->realmode = 0; if (cp->mctid == JPC_MCT_ICT) { tile->realmode = 1; } for (compno = 0, tcomp = tile->tcomps, cmpt = dec->cmpts; compno < dec->numcomps; ++compno, ++tcomp, ++cmpt) { ccp = &tile->cp->ccps[compno]; if (ccp->qmfbid == JPC_COX_INS) { tile->realmode = 1; } tcomp->numrlvls = ccp->numrlvls; if (!(tcomp->rlvls = jas_alloc2(tcomp->numrlvls, sizeof(jpc_dec_rlvl_t)))) { return -1; } if (!(tcomp->data = jas_seq2d_create(JPC_CEILDIV(tile->xstart, cmpt->hstep), JPC_CEILDIV(tile->ystart, cmpt->vstep), JPC_CEILDIV(tile->xend, cmpt->hstep), JPC_CEILDIV(tile->yend, cmpt->vstep)))) { return -1; } if (!(tcomp->tsfb = jpc_cod_gettsfb(ccp->qmfbid, tcomp->numrlvls - 1))) { return -1; } { jpc_tsfb_getbands(tcomp->tsfb, jas_seq2d_xstart(tcomp->data), jas_seq2d_ystart(tcomp->data), jas_seq2d_xend(tcomp->data), jas_seq2d_yend(tcomp->data), bnds); } for (rlvlno = 0, rlvl = tcomp->rlvls; rlvlno < tcomp->numrlvls; ++rlvlno, ++rlvl) { rlvl->bands = 0; rlvl->xstart = JPC_CEILDIVPOW2(tcomp->xstart, tcomp->numrlvls - 1 - rlvlno); rlvl->ystart = JPC_CEILDIVPOW2(tcomp->ystart, tcomp->numrlvls - 1 - rlvlno); rlvl->xend = JPC_CEILDIVPOW2(tcomp->xend, tcomp->numrlvls - 1 - rlvlno); rlvl->yend = JPC_CEILDIVPOW2(tcomp->yend, tcomp->numrlvls - 1 - rlvlno); rlvl->prcwidthexpn = ccp->prcwidthexpns[rlvlno]; rlvl->prcheightexpn = ccp->prcheightexpns[rlvlno]; tlprcxstart = JPC_FLOORDIVPOW2(rlvl->xstart, rlvl->prcwidthexpn) << rlvl->prcwidthexpn; tlprcystart = JPC_FLOORDIVPOW2(rlvl->ystart, rlvl->prcheightexpn) << rlvl->prcheightexpn; brprcxend = JPC_CEILDIVPOW2(rlvl->xend, rlvl->prcwidthexpn) << rlvl->prcwidthexpn; brprcyend = JPC_CEILDIVPOW2(rlvl->yend, rlvl->prcheightexpn) << rlvl->prcheightexpn; rlvl->numhprcs = (brprcxend - tlprcxstart) >> rlvl->prcwidthexpn; rlvl->numvprcs = (brprcyend - tlprcystart) >> rlvl->prcheightexpn; rlvl->numprcs = rlvl->numhprcs * rlvl->numvprcs; if (rlvl->xstart >= rlvl->xend || rlvl->ystart >= rlvl->yend) { rlvl->bands = 0; rlvl->numprcs = 0; rlvl->numhprcs = 0; rlvl->numvprcs = 0; continue; } if (!rlvlno) { tlcbgxstart = tlprcxstart; tlcbgystart = tlprcystart; brcbgxend = brprcxend; brcbgyend = brprcyend; rlvl->cbgwidthexpn = rlvl->prcwidthexpn; rlvl->cbgheightexpn = rlvl->prcheightexpn; } else { tlcbgxstart = JPC_CEILDIVPOW2(tlprcxstart, 1); tlcbgystart = JPC_CEILDIVPOW2(tlprcystart, 1); brcbgxend = JPC_CEILDIVPOW2(brprcxend, 1); brcbgyend = JPC_CEILDIVPOW2(brprcyend, 1); rlvl->cbgwidthexpn = rlvl->prcwidthexpn - 1; rlvl->cbgheightexpn = rlvl->prcheightexpn - 1; } rlvl->cblkwidthexpn = 
JAS_MIN(ccp->cblkwidthexpn, rlvl->cbgwidthexpn); rlvl->cblkheightexpn = JAS_MIN(ccp->cblkheightexpn, rlvl->cbgheightexpn); rlvl->numbands = (!rlvlno) ? 1 : 3; if (!(rlvl->bands = jas_alloc2(rlvl->numbands, sizeof(jpc_dec_band_t)))) { return -1; } for (bandno = 0, band = rlvl->bands; bandno < rlvl->numbands; ++bandno, ++band) { bndno = (!rlvlno) ? 0 : (3 * (rlvlno - 1) + bandno + 1); bnd = &bnds[bndno]; band->orient = bnd->orient; band->stepsize = ccp->stepsizes[bndno]; band->analgain = JPC_NOMINALGAIN(ccp->qmfbid, tcomp->numrlvls - 1, rlvlno, band->orient); band->absstepsize = jpc_calcabsstepsize(band->stepsize, cmpt->prec + band->analgain); band->numbps = ccp->numguardbits + JPC_QCX_GETEXPN(band->stepsize) - 1; band->roishift = (ccp->roishift + band->numbps >= JPC_PREC) ? (JPC_PREC - 1 - band->numbps) : ccp->roishift; band->data = 0; band->prcs = 0; if (bnd->xstart == bnd->xend || bnd->ystart == bnd->yend) { continue; } if (!(band->data = jas_seq2d_create(0, 0, 0, 0))) { return -1; } jas_seq2d_bindsub(band->data, tcomp->data, bnd->locxstart, bnd->locystart, bnd->locxend, bnd->locyend); jas_seq2d_setshift(band->data, bnd->xstart, bnd->ystart); assert(rlvl->numprcs); if (!(band->prcs = jas_alloc2(rlvl->numprcs, sizeof(jpc_dec_prc_t)))) { return -1; } /************************************************/ cbgxstart = tlcbgxstart; cbgystart = tlcbgystart; for (prccnt = rlvl->numprcs, prc = band->prcs; prccnt > 0; --prccnt, ++prc) { cbgxend = cbgxstart + (1 << rlvl->cbgwidthexpn); cbgyend = cbgystart + (1 << rlvl->cbgheightexpn); prc->xstart = JAS_MAX(cbgxstart, JAS_CAST(uint_fast32_t, jas_seq2d_xstart(band->data))); prc->ystart = JAS_MAX(cbgystart, JAS_CAST(uint_fast32_t, jas_seq2d_ystart(band->data))); prc->xend = JAS_MIN(cbgxend, JAS_CAST(uint_fast32_t, jas_seq2d_xend(band->data))); prc->yend = JAS_MIN(cbgyend, JAS_CAST(uint_fast32_t, jas_seq2d_yend(band->data))); if (prc->xend > prc->xstart && prc->yend > prc->ystart) { tlcblkxstart = JPC_FLOORDIVPOW2(prc->xstart, rlvl->cblkwidthexpn) << rlvl->cblkwidthexpn; tlcblkystart = JPC_FLOORDIVPOW2(prc->ystart, rlvl->cblkheightexpn) << rlvl->cblkheightexpn; brcblkxend = JPC_CEILDIVPOW2(prc->xend, rlvl->cblkwidthexpn) << rlvl->cblkwidthexpn; brcblkyend = JPC_CEILDIVPOW2(prc->yend, rlvl->cblkheightexpn) << rlvl->cblkheightexpn; prc->numhcblks = (brcblkxend - tlcblkxstart) >> rlvl->cblkwidthexpn; prc->numvcblks = (brcblkyend - tlcblkystart) >> rlvl->cblkheightexpn; prc->numcblks = prc->numhcblks * prc->numvcblks; assert(prc->numcblks > 0); if (!(prc->incltagtree = jpc_tagtree_create( prc->numhcblks, prc->numvcblks))) { return -1; } if (!(prc->numimsbstagtree = jpc_tagtree_create( prc->numhcblks, prc->numvcblks))) { return -1; } if (!(prc->cblks = jas_alloc2(prc->numcblks, sizeof(jpc_dec_cblk_t)))) { return -1; } cblkxstart = cbgxstart; cblkystart = cbgystart; for (cblkcnt = prc->numcblks, cblk = prc->cblks; cblkcnt > 0;) { cblkxend = cblkxstart + (1 << rlvl->cblkwidthexpn); cblkyend = cblkystart + (1 << rlvl->cblkheightexpn); tmpxstart = JAS_MAX(cblkxstart, prc->xstart); tmpystart = JAS_MAX(cblkystart, prc->ystart); tmpxend = JAS_MIN(cblkxend, prc->xend); tmpyend = JAS_MIN(cblkyend, prc->yend); if (tmpxend > tmpxstart && tmpyend > tmpystart) { cblk->firstpassno = -1; cblk->mqdec = 0; cblk->nulldec = 0; cblk->flags = 0; cblk->numpasses = 0; cblk->segs.head = 0; cblk->segs.tail = 0; cblk->curseg = 0; cblk->numimsbs = 0; cblk->numlenbits = 3; cblk->flags = 0; if (!(cblk->data = jas_seq2d_create(0, 0, 0, 0))) { return -1; } jas_seq2d_bindsub(cblk->data, 
band->data, tmpxstart, tmpystart, tmpxend, tmpyend); ++cblk; --cblkcnt; } cblkxstart += 1 << rlvl->cblkwidthexpn; if (cblkxstart >= cbgxend) { cblkxstart = cbgxstart; cblkystart += 1 << rlvl->cblkheightexpn; } } } else { prc->cblks = 0; prc->incltagtree = 0; prc->numimsbstagtree = 0; } cbgxstart += 1 << rlvl->cbgwidthexpn; if (cbgxstart >= brcbgxend) { cbgxstart = tlcbgxstart; cbgystart += 1 << rlvl->cbgheightexpn; } } /********************************************/ } } } if (!(tile->pi = jpc_dec_pi_create(dec, tile))) { return -1; } for (pchgno = 0; pchgno < jpc_pchglist_numpchgs(tile->cp->pchglist); ++pchgno) { pchg = jpc_pchg_copy(jpc_pchglist_get(tile->cp->pchglist, pchgno)); assert(pchg); jpc_pi_addpchg(tile->pi, pchg); } jpc_pi_init(tile->pi); return 0; }
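Editorial sketch: the commit message only says an array overflow was fixed; in the function above the fixed-size jpc_tsfb_band_t bnds[64] stack array is filled by jpc_tsfb_getbands() with one band for the lowest resolution level plus three per additional level. As a hedged illustration (not the exact upstream patch), a decoder can validate numrlvls against that capacity before handing the array to the transform code:

#include <stdio.h>

/* Illustrative capacity matching the bnds[64] array above; the names and
 * the exact placement of the check are assumptions, not JasPer's patch. */
#define MAX_BAND_DESCRIPTORS 64

static int numrlvls_fits(unsigned numrlvls)
{
    if (numrlvls < 1)
        return 0;
    /* one band for level 0, three for every further resolution level */
    unsigned nbands = 3u * (numrlvls - 1u) + 1u;
    return nbands <= MAX_BAND_DESCRIPTORS;
}

int main(void)
{
    printf("%d\n", numrlvls_fits(6));   /* 16 bands -> fits            */
    printf("%d\n", numrlvls_fits(33));  /* 97 bands -> would overflow  */
    return 0;
}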
152,611,857,864,829,500,000,000,000,000,000,000,000
jpc_dec.c
9,904,405,345,613,609,000,000,000,000,000,000,000
[ "CWE-119" ]
CVE-2016-9560
Stack-based buffer overflow in the jpc_tsfb_getbands2 function in jpc_tsfb.c in JasPer before 1.900.30 allows remote attackers to have unspecified impact via a crafted image.
https://nvd.nist.gov/vuln/detail/CVE-2016-9560
9,524
jasper
d8c2604cd438c41ec72aff52c16ebd8183068020
https://github.com/mdadams/jasper
https://github.com/mdadams/jasper/commit/d8c2604cd438c41ec72aff52c16ebd8183068020
Added range check on XRsiz and YRsiz fields of SIZ marker segment.
1
static int jpc_siz_getparms(jpc_ms_t *ms, jpc_cstate_t *cstate, jas_stream_t *in) { jpc_siz_t *siz = &ms->parms.siz; unsigned int i; uint_fast8_t tmp; /* Eliminate compiler warning about unused variables. */ cstate = 0; if (jpc_getuint16(in, &siz->caps) || jpc_getuint32(in, &siz->width) || jpc_getuint32(in, &siz->height) || jpc_getuint32(in, &siz->xoff) || jpc_getuint32(in, &siz->yoff) || jpc_getuint32(in, &siz->tilewidth) || jpc_getuint32(in, &siz->tileheight) || jpc_getuint32(in, &siz->tilexoff) || jpc_getuint32(in, &siz->tileyoff) || jpc_getuint16(in, &siz->numcomps)) { return -1; } if (!siz->width || !siz->height || !siz->tilewidth || !siz->tileheight || !siz->numcomps) { return -1; } if (!(siz->comps = jas_alloc2(siz->numcomps, sizeof(jpc_sizcomp_t)))) { return -1; } for (i = 0; i < siz->numcomps; ++i) { if (jpc_getuint8(in, &tmp) || jpc_getuint8(in, &siz->comps[i].hsamp) || jpc_getuint8(in, &siz->comps[i].vsamp)) { jas_free(siz->comps); return -1; } siz->comps[i].sgnd = (tmp >> 7) & 1; siz->comps[i].prec = (tmp & 0x7f) + 1; } if (jas_stream_eof(in)) { jas_free(siz->comps); return -1; } return 0; }
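Editorial sketch: per the commit message, the fix adds a range check on the per-component XRsiz/YRsiz sub-sampling factors read into comps[i].hsamp and comps[i].vsamp above; a zero factor later reaches a division when the decoder processes the SIZ segment. A minimal standalone sketch of that validation, with illustrative names rather than JasPer's exact error handling:

#include <stdio.h>

/* Valid JPEG 2000 sub-sampling factors are 1..255; zero must be rejected
 * because image dimensions are later divided by these values. */
static int siz_samp_ok(unsigned hsamp, unsigned vsamp)
{
    return hsamp >= 1 && hsamp <= 255 && vsamp >= 1 && vsamp <= 255;
}

int main(void)
{
    printf("%d\n", siz_samp_ok(1, 2));  /* 1: valid */
    printf("%d\n", siz_samp_ok(0, 1));  /* 0: rejected, avoids divide-by-zero */
    return 0;
}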
10,991,172,644,452,336,000,000,000,000,000,000,000
jpc_cs.c
109,966,816,698,492,500,000,000,000,000,000,000,000
[ "CWE-369" ]
CVE-2016-8692
The jpc_dec_process_siz function in libjasper/jpc/jpc_dec.c in JasPer before 1.900.4 allows remote attackers to cause a denial of service (divide-by-zero error and application crash) via a crafted YRsiz value in a BMP image to the imginfo command.
https://nvd.nist.gov/vuln/detail/CVE-2016-8692
9,525
libarchive
eec077f52bfa2d3f7103b4b74d52572ba8a15aca
https://github.com/libarchive/libarchive
https://github.com/libarchive/libarchive/commit/eec077f52bfa2d3f7103b4b74d52572ba8a15aca
Issue 747 (and others?): Avoid OOB read when parsing multiple long lines. The mtree bidder needs to look several lines ahead in the input. It does this by extending the read-ahead and parsing subsequent lines from the same growing buffer. A bookkeeping error when extending the read-ahead would sometimes lead it to significantly over-count the size of the line being read.
1
next_line(struct archive_read *a, const char **b, ssize_t *avail, ssize_t *ravail, ssize_t *nl) { ssize_t len; int quit; quit = 0; if (*avail == 0) { *nl = 0; len = 0; } else len = get_line_size(*b, *avail, nl); /* * Read bytes more while it does not reach the end of line. */ while (*nl == 0 && len == *avail && !quit) { ssize_t diff = *ravail - *avail; size_t nbytes_req = (*ravail+1023) & ~1023U; ssize_t tested; /* Increase reading bytes if it is not enough to at least * new two lines. */ if (nbytes_req < (size_t)*ravail + 160) nbytes_req <<= 1; *b = __archive_read_ahead(a, nbytes_req, avail); if (*b == NULL) { if (*ravail >= *avail) return (0); /* Reading bytes reaches the end of file. */ *b = __archive_read_ahead(a, *avail, avail); quit = 1; } *ravail = *avail; *b += diff; *avail -= diff; tested = len;/* Skip some bytes we already determinated. */ len = get_line_size(*b, *avail, nl); if (len >= 0) len += tested; } return (len); }
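Editorial sketch: the commit message describes the bookkeeping error visible in the loop above: after the read-ahead is extended, get_line_size() is rerun from the start of the buffer and the previously tested length is then added on top, over-counting the line. The standalone C sketch below shows the incremental-scan idea (resume at the already-tested offset and only then add it back); line_size() is a stand-in, not libarchive's get_line_size(), and the upstream patch may differ in detail.

#include <stdio.h>
#include <string.h>

/* Stand-in for get_line_size(): bytes up to and including '\n',
 * or everything available when no newline has arrived yet. */
static long line_size(const char *b, long avail, int *nl)
{
    const char *p = memchr(b, '\n', (size_t)avail);
    if (p == NULL) { *nl = 0; return avail; }
    *nl = 1;
    return (long)(p - b) + 1;
}

int main(void)
{
    const char *buf = "key=value and more text\n";
    int nl = 0;

    /* First pass: only 10 bytes were available, no newline yet. */
    long len = line_size(buf, 10, &nl);              /* len == 10 */

    /* More data arrives: resume at offset len instead of rescanning
     * from the start and then adding the old length again. */
    long tested = len;
    long more = line_size(buf + len, (long)strlen(buf) - len, &nl);
    len = tested + more;

    printf("line length = %ld (buffer holds %zu bytes)\n", len, strlen(buf));
    return 0;
}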
62,295,156,973,029,730,000,000,000,000,000,000,000
None
null
[ "CWE-125" ]
CVE-2016-8688
The mtree bidder in libarchive 3.2.1 does not keep track of line sizes when extending the read-ahead, which allows remote attackers to cause a denial of service (crash) via a crafted file, which triggers an invalid read in the (1) detect_form or (2) bid_entry function in libarchive/archive_read_support_format_mtree.c.
https://nvd.nist.gov/vuln/detail/CVE-2016-8688
9,526
ImageMagick
6e48aa92ff4e6e95424300ecd52a9ea453c19c60
https://github.com/ImageMagick/ImageMagick
https://github.com/ImageMagick/ImageMagick/commit/6e48aa92ff4e6e95424300ecd52a9ea453c19c60
https://github.com/ImageMagick/ImageMagick/issues/268
1
static Image *ReadTIFFImage(const ImageInfo *image_info, ExceptionInfo *exception) { const char *option; float *chromaticity, x_position, y_position, x_resolution, y_resolution; Image *image; int tiff_status; MagickBooleanType status; MagickSizeType number_pixels; QuantumInfo *quantum_info; QuantumType quantum_type; register ssize_t i; size_t pad; ssize_t y; TIFF *tiff; TIFFMethodType method; uint16 compress_tag, bits_per_sample, endian, extra_samples, interlace, max_sample_value, min_sample_value, orientation, pages, photometric, *sample_info, sample_format, samples_per_pixel, units, value; uint32 height, rows_per_strip, width; unsigned char *pixels; /* Open image. */ assert(image_info != (const ImageInfo *) NULL); assert(image_info->signature == MagickCoreSignature); if (image_info->debug != MagickFalse) (void) LogMagickEvent(TraceEvent,GetMagickModule(),"%s", image_info->filename); assert(exception != (ExceptionInfo *) NULL); assert(exception->signature == MagickCoreSignature); image=AcquireImage(image_info,exception); status=OpenBlob(image_info,image,ReadBinaryBlobMode,exception); if (status == MagickFalse) { image=DestroyImageList(image); return((Image *) NULL); } (void) SetMagickThreadValue(tiff_exception,exception); tiff=TIFFClientOpen(image->filename,"rb",(thandle_t) image,TIFFReadBlob, TIFFWriteBlob,TIFFSeekBlob,TIFFCloseBlob,TIFFGetBlobSize,TIFFMapBlob, TIFFUnmapBlob); if (tiff == (TIFF *) NULL) { image=DestroyImageList(image); return((Image *) NULL); } if (image_info->number_scenes != 0) { /* Generate blank images for subimage specification (e.g. image.tif[4]. We need to check the number of directores because it is possible that the subimage(s) are stored in the photoshop profile. */ if (image_info->scene < (size_t) TIFFNumberOfDirectories(tiff)) { for (i=0; i < (ssize_t) image_info->scene; i++) { status=TIFFReadDirectory(tiff) != 0 ? 
MagickTrue : MagickFalse; if (status == MagickFalse) { TIFFClose(tiff); image=DestroyImageList(image); return((Image *) NULL); } AcquireNextImage(image_info,image,exception); if (GetNextImageInList(image) == (Image *) NULL) { TIFFClose(tiff); image=DestroyImageList(image); return((Image *) NULL); } image=SyncNextImageInList(image); } } } do { DisableMSCWarning(4127) if (0 && (image_info->verbose != MagickFalse)) TIFFPrintDirectory(tiff,stdout,MagickFalse); RestoreMSCWarning if ((TIFFGetField(tiff,TIFFTAG_IMAGEWIDTH,&width) != 1) || (TIFFGetField(tiff,TIFFTAG_IMAGELENGTH,&height) != 1) || (TIFFGetFieldDefaulted(tiff,TIFFTAG_COMPRESSION,&compress_tag) != 1) || (TIFFGetFieldDefaulted(tiff,TIFFTAG_FILLORDER,&endian) != 1) || (TIFFGetFieldDefaulted(tiff,TIFFTAG_PLANARCONFIG,&interlace) != 1) || (TIFFGetFieldDefaulted(tiff,TIFFTAG_SAMPLESPERPIXEL,&samples_per_pixel) != 1) || (TIFFGetFieldDefaulted(tiff,TIFFTAG_BITSPERSAMPLE,&bits_per_sample) != 1) || (TIFFGetFieldDefaulted(tiff,TIFFTAG_SAMPLEFORMAT,&sample_format) != 1) || (TIFFGetFieldDefaulted(tiff,TIFFTAG_MINSAMPLEVALUE,&min_sample_value) != 1) || (TIFFGetFieldDefaulted(tiff,TIFFTAG_MAXSAMPLEVALUE,&max_sample_value) != 1) || (TIFFGetFieldDefaulted(tiff,TIFFTAG_PHOTOMETRIC,&photometric) != 1)) { TIFFClose(tiff); ThrowReaderException(CorruptImageError,"ImproperImageHeader"); } if (sample_format == SAMPLEFORMAT_IEEEFP) (void) SetImageProperty(image,"quantum:format","floating-point", exception); switch (photometric) { case PHOTOMETRIC_MINISBLACK: { (void) SetImageProperty(image,"tiff:photometric","min-is-black", exception); break; } case PHOTOMETRIC_MINISWHITE: { (void) SetImageProperty(image,"tiff:photometric","min-is-white", exception); break; } case PHOTOMETRIC_PALETTE: { (void) SetImageProperty(image,"tiff:photometric","palette",exception); break; } case PHOTOMETRIC_RGB: { (void) SetImageProperty(image,"tiff:photometric","RGB",exception); break; } case PHOTOMETRIC_CIELAB: { (void) SetImageProperty(image,"tiff:photometric","CIELAB",exception); break; } case PHOTOMETRIC_LOGL: { (void) SetImageProperty(image,"tiff:photometric","CIE Log2(L)", exception); break; } case PHOTOMETRIC_LOGLUV: { (void) SetImageProperty(image,"tiff:photometric","LOGLUV",exception); break; } #if defined(PHOTOMETRIC_MASK) case PHOTOMETRIC_MASK: { (void) SetImageProperty(image,"tiff:photometric","MASK",exception); break; } #endif case PHOTOMETRIC_SEPARATED: { (void) SetImageProperty(image,"tiff:photometric","separated",exception); break; } case PHOTOMETRIC_YCBCR: { (void) SetImageProperty(image,"tiff:photometric","YCBCR",exception); break; } default: { (void) SetImageProperty(image,"tiff:photometric","unknown",exception); break; } } if (image->debug != MagickFalse) { (void) LogMagickEvent(CoderEvent,GetMagickModule(),"Geometry: %ux%u", (unsigned int) width,(unsigned int) height); (void) LogMagickEvent(CoderEvent,GetMagickModule(),"Interlace: %u", interlace); (void) LogMagickEvent(CoderEvent,GetMagickModule(), "Bits per sample: %u",bits_per_sample); (void) LogMagickEvent(CoderEvent,GetMagickModule(), "Min sample value: %u",min_sample_value); (void) LogMagickEvent(CoderEvent,GetMagickModule(), "Max sample value: %u",max_sample_value); (void) LogMagickEvent(CoderEvent,GetMagickModule(),"Photometric " "interpretation: %s",GetImageProperty(image,"tiff:photometric", exception)); } image->columns=(size_t) width; image->rows=(size_t) height; image->depth=(size_t) bits_per_sample; if (image->debug != MagickFalse) (void) LogMagickEvent(CoderEvent,GetMagickModule(),"Image depth: 
%.20g", (double) image->depth); image->endian=MSBEndian; if (endian == FILLORDER_LSB2MSB) image->endian=LSBEndian; #if defined(MAGICKCORE_HAVE_TIFFISBIGENDIAN) if (TIFFIsBigEndian(tiff) == 0) { (void) SetImageProperty(image,"tiff:endian","lsb",exception); image->endian=LSBEndian; } else { (void) SetImageProperty(image,"tiff:endian","msb",exception); image->endian=MSBEndian; } #endif if ((photometric == PHOTOMETRIC_MINISBLACK) || (photometric == PHOTOMETRIC_MINISWHITE)) SetImageColorspace(image,GRAYColorspace,exception); if (photometric == PHOTOMETRIC_SEPARATED) SetImageColorspace(image,CMYKColorspace,exception); if (photometric == PHOTOMETRIC_CIELAB) SetImageColorspace(image,LabColorspace,exception); TIFFGetProfiles(tiff,image,image_info->ping,exception); TIFFGetProperties(tiff,image,exception); option=GetImageOption(image_info,"tiff:exif-properties"); if (IsStringFalse(option) == MagickFalse) /* enabled by default */ TIFFGetEXIFProperties(tiff,image,exception); (void) TIFFGetFieldDefaulted(tiff,TIFFTAG_SAMPLESPERPIXEL, &samples_per_pixel); if ((TIFFGetFieldDefaulted(tiff,TIFFTAG_XRESOLUTION,&x_resolution) == 1) && (TIFFGetFieldDefaulted(tiff,TIFFTAG_YRESOLUTION,&y_resolution) == 1)) { image->resolution.x=x_resolution; image->resolution.y=y_resolution; } if (TIFFGetFieldDefaulted(tiff,TIFFTAG_RESOLUTIONUNIT,&units) == 1) { if (units == RESUNIT_INCH) image->units=PixelsPerInchResolution; if (units == RESUNIT_CENTIMETER) image->units=PixelsPerCentimeterResolution; } if ((TIFFGetFieldDefaulted(tiff,TIFFTAG_XPOSITION,&x_position) == 1) && (TIFFGetFieldDefaulted(tiff,TIFFTAG_YPOSITION,&y_position) == 1)) { image->page.x=(ssize_t) ceil(x_position*image->resolution.x-0.5); image->page.y=(ssize_t) ceil(y_position*image->resolution.y-0.5); } if (TIFFGetFieldDefaulted(tiff,TIFFTAG_ORIENTATION,&orientation) == 1) image->orientation=(OrientationType) orientation; if (TIFFGetField(tiff,TIFFTAG_WHITEPOINT,&chromaticity) == 1) { if (chromaticity != (float *) NULL) { image->chromaticity.white_point.x=chromaticity[0]; image->chromaticity.white_point.y=chromaticity[1]; } } if (TIFFGetField(tiff,TIFFTAG_PRIMARYCHROMATICITIES,&chromaticity) == 1) { if (chromaticity != (float *) NULL) { image->chromaticity.red_primary.x=chromaticity[0]; image->chromaticity.red_primary.y=chromaticity[1]; image->chromaticity.green_primary.x=chromaticity[2]; image->chromaticity.green_primary.y=chromaticity[3]; image->chromaticity.blue_primary.x=chromaticity[4]; image->chromaticity.blue_primary.y=chromaticity[5]; } } #if defined(MAGICKCORE_HAVE_TIFFISCODECCONFIGURED) || (TIFFLIB_VERSION > 20040919) if ((compress_tag != COMPRESSION_NONE) && (TIFFIsCODECConfigured(compress_tag) == 0)) { TIFFClose(tiff); ThrowReaderException(CoderError,"CompressNotSupported"); } #endif switch (compress_tag) { case COMPRESSION_NONE: image->compression=NoCompression; break; case COMPRESSION_CCITTFAX3: image->compression=FaxCompression; break; case COMPRESSION_CCITTFAX4: image->compression=Group4Compression; break; case COMPRESSION_JPEG: { image->compression=JPEGCompression; #if defined(JPEG_SUPPORT) { char sampling_factor[MagickPathExtent]; int tiff_status; uint16 horizontal, vertical; tiff_status=TIFFGetFieldDefaulted(tiff,TIFFTAG_YCBCRSUBSAMPLING, &horizontal,&vertical); if (tiff_status == 1) { (void) FormatLocaleString(sampling_factor,MagickPathExtent, "%dx%d",horizontal,vertical); (void) SetImageProperty(image,"jpeg:sampling-factor", sampling_factor,exception); (void) LogMagickEvent(CoderEvent,GetMagickModule(), "Sampling Factors: 
%s",sampling_factor); } } #endif break; } case COMPRESSION_OJPEG: image->compression=JPEGCompression; break; #if defined(COMPRESSION_LZMA) case COMPRESSION_LZMA: image->compression=LZMACompression; break; #endif case COMPRESSION_LZW: image->compression=LZWCompression; break; case COMPRESSION_DEFLATE: image->compression=ZipCompression; break; case COMPRESSION_ADOBE_DEFLATE: image->compression=ZipCompression; break; default: image->compression=RLECompression; break; } /* Allocate memory for the image and pixel buffer. */ quantum_info=AcquireQuantumInfo(image_info,image); if (quantum_info == (QuantumInfo *) NULL) { TIFFClose(tiff); ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed"); } if (sample_format == SAMPLEFORMAT_UINT) status=SetQuantumFormat(image,quantum_info,UnsignedQuantumFormat); if (sample_format == SAMPLEFORMAT_INT) status=SetQuantumFormat(image,quantum_info,SignedQuantumFormat); if (sample_format == SAMPLEFORMAT_IEEEFP) status=SetQuantumFormat(image,quantum_info,FloatingPointQuantumFormat); if (status == MagickFalse) { TIFFClose(tiff); quantum_info=DestroyQuantumInfo(quantum_info); ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed"); } status=MagickTrue; switch (photometric) { case PHOTOMETRIC_MINISBLACK: { quantum_info->min_is_white=MagickFalse; break; } case PHOTOMETRIC_MINISWHITE: { quantum_info->min_is_white=MagickTrue; break; } default: break; } tiff_status=TIFFGetFieldDefaulted(tiff,TIFFTAG_EXTRASAMPLES,&extra_samples, &sample_info); if (tiff_status == 1) { (void) SetImageProperty(image,"tiff:alpha","unspecified",exception); if (extra_samples == 0) { if ((samples_per_pixel == 4) && (photometric == PHOTOMETRIC_RGB)) image->alpha_trait=BlendPixelTrait; } else for (i=0; i < extra_samples; i++) { image->alpha_trait=BlendPixelTrait; if (sample_info[i] == EXTRASAMPLE_ASSOCALPHA) { SetQuantumAlphaType(quantum_info,DisassociatedQuantumAlpha); (void) SetImageProperty(image,"tiff:alpha","associated", exception); } else if (sample_info[i] == EXTRASAMPLE_UNASSALPHA) (void) SetImageProperty(image,"tiff:alpha","unassociated", exception); } } if ((photometric == PHOTOMETRIC_PALETTE) && (pow(2.0,1.0*bits_per_sample) <= MaxColormapSize)) { size_t colors; colors=(size_t) GetQuantumRange(bits_per_sample)+1; if (AcquireImageColormap(image,colors,exception) == MagickFalse) { TIFFClose(tiff); ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed"); } } value=(unsigned short) image->scene; if (TIFFGetFieldDefaulted(tiff,TIFFTAG_PAGENUMBER,&value,&pages) == 1) image->scene=value; if (image->storage_class == PseudoClass) { int tiff_status; size_t range; uint16 *blue_colormap, *green_colormap, *red_colormap; /* Initialize colormap. 
*/ tiff_status=TIFFGetField(tiff,TIFFTAG_COLORMAP,&red_colormap, &green_colormap,&blue_colormap); if (tiff_status == 1) { if ((red_colormap != (uint16 *) NULL) && (green_colormap != (uint16 *) NULL) && (blue_colormap != (uint16 *) NULL)) { range=255; /* might be old style 8-bit colormap */ for (i=0; i < (ssize_t) image->colors; i++) if ((red_colormap[i] >= 256) || (green_colormap[i] >= 256) || (blue_colormap[i] >= 256)) { range=65535; break; } for (i=0; i < (ssize_t) image->colors; i++) { image->colormap[i].red=ClampToQuantum(((double) QuantumRange*red_colormap[i])/range); image->colormap[i].green=ClampToQuantum(((double) QuantumRange*green_colormap[i])/range); image->colormap[i].blue=ClampToQuantum(((double) QuantumRange*blue_colormap[i])/range); } } } if (image->alpha_trait == UndefinedPixelTrait) image->depth=GetImageDepth(image,exception); } if (image_info->ping != MagickFalse) { if (image_info->number_scenes != 0) if (image->scene >= (image_info->scene+image_info->number_scenes-1)) { quantum_info=DestroyQuantumInfo(quantum_info); break; } goto next_tiff_frame; } status=SetImageExtent(image,image->columns,image->rows,exception); if (status == MagickFalse) return(DestroyImageList(image)); method=ReadGenericMethod; if (TIFFGetField(tiff,TIFFTAG_ROWSPERSTRIP,&rows_per_strip) == 1) { char value[MagickPathExtent]; method=ReadStripMethod; (void) FormatLocaleString(value,MagickPathExtent,"%u", (unsigned int) rows_per_strip); (void) SetImageProperty(image,"tiff:rows-per-strip",value,exception); } if ((samples_per_pixel >= 2) && (interlace == PLANARCONFIG_CONTIG)) method=ReadRGBAMethod; if ((samples_per_pixel >= 2) && (interlace == PLANARCONFIG_SEPARATE)) method=ReadCMYKAMethod; if ((photometric != PHOTOMETRIC_RGB) && (photometric != PHOTOMETRIC_CIELAB) && (photometric != PHOTOMETRIC_SEPARATED)) method=ReadGenericMethod; if (image->storage_class == PseudoClass) method=ReadSingleSampleMethod; if ((photometric == PHOTOMETRIC_MINISBLACK) || (photometric == PHOTOMETRIC_MINISWHITE)) method=ReadSingleSampleMethod; if ((photometric != PHOTOMETRIC_SEPARATED) && (interlace == PLANARCONFIG_SEPARATE) && (bits_per_sample < 64)) method=ReadGenericMethod; if (image->compression == JPEGCompression) method=GetJPEGMethod(image,tiff,photometric,bits_per_sample, samples_per_pixel); if (compress_tag == COMPRESSION_JBIG) method=ReadStripMethod; if (TIFFIsTiled(tiff) != MagickFalse) method=ReadTileMethod; quantum_info->endian=LSBEndian; quantum_type=RGBQuantum; pixels=(unsigned char *) GetQuantumPixels(quantum_info); switch (method) { case ReadSingleSampleMethod: { /* Convert TIFF image to PseudoClass MIFF image. */ quantum_type=IndexQuantum; pad=(size_t) MagickMax((size_t) samples_per_pixel-1,0); if (image->alpha_trait != UndefinedPixelTrait) { if (image->storage_class != PseudoClass) { quantum_type=samples_per_pixel == 1 ? 
AlphaQuantum : GrayAlphaQuantum; pad=(size_t) MagickMax((size_t) samples_per_pixel-2,0); } else { quantum_type=IndexAlphaQuantum; pad=(size_t) MagickMax((size_t) samples_per_pixel-2,0); } } else if (image->storage_class != PseudoClass) { quantum_type=GrayQuantum; pad=(size_t) MagickMax((size_t) samples_per_pixel-1,0); } status=SetQuantumPad(image,quantum_info,pad*pow(2,ceil(log( bits_per_sample)/log(2)))); if (status == MagickFalse) { TIFFClose(tiff); ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed"); } pixels=(unsigned char *) GetQuantumPixels(quantum_info); for (y=0; y < (ssize_t) image->rows; y++) { int status; register Quantum *magick_restrict q; status=TIFFReadPixels(tiff,bits_per_sample,0,y,(char *) pixels); if (status == -1) break; q=QueueAuthenticPixels(image,0,y,image->columns,1,exception); if (q == (Quantum *) NULL) break; (void) ImportQuantumPixels(image,(CacheView *) NULL,quantum_info, quantum_type,pixels,exception); if (SyncAuthenticPixels(image,exception) == MagickFalse) break; if (image->previous == (Image *) NULL) { status=SetImageProgress(image,LoadImageTag,(MagickOffsetType) y, image->rows); if (status == MagickFalse) break; } } break; } case ReadRGBAMethod: { /* Convert TIFF image to DirectClass MIFF image. */ pad=(size_t) MagickMax((size_t) samples_per_pixel-3,0); quantum_type=RGBQuantum; if (image->alpha_trait != UndefinedPixelTrait) { quantum_type=RGBAQuantum; pad=(size_t) MagickMax((size_t) samples_per_pixel-4,0); } if (image->colorspace == CMYKColorspace) { pad=(size_t) MagickMax((size_t) samples_per_pixel-4,0); quantum_type=CMYKQuantum; if (image->alpha_trait != UndefinedPixelTrait) { quantum_type=CMYKAQuantum; pad=(size_t) MagickMax((size_t) samples_per_pixel-5,0); } } status=SetQuantumPad(image,quantum_info,pad*((bits_per_sample+7) >> 3)); if (status == MagickFalse) { TIFFClose(tiff); ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed"); } pixels=(unsigned char *) GetQuantumPixels(quantum_info); for (y=0; y < (ssize_t) image->rows; y++) { int status; register Quantum *magick_restrict q; status=TIFFReadPixels(tiff,bits_per_sample,0,y,(char *) pixels); if (status == -1) break; q=QueueAuthenticPixels(image,0,y,image->columns,1,exception); if (q == (Quantum *) NULL) break; (void) ImportQuantumPixels(image,(CacheView *) NULL,quantum_info, quantum_type,pixels,exception); if (SyncAuthenticPixels(image,exception) == MagickFalse) break; if (image->previous == (Image *) NULL) { status=SetImageProgress(image,LoadImageTag,(MagickOffsetType) y, image->rows); if (status == MagickFalse) break; } } break; } case ReadCMYKAMethod: { /* Convert TIFF image to DirectClass MIFF image. 
*/ for (i=0; i < (ssize_t) samples_per_pixel; i++) { for (y=0; y < (ssize_t) image->rows; y++) { register Quantum *magick_restrict q; int status; status=TIFFReadPixels(tiff,bits_per_sample,(tsample_t) i,y,(char *) pixels); if (status == -1) break; q=GetAuthenticPixels(image,0,y,image->columns,1,exception); if (q == (Quantum *) NULL) break; if (image->colorspace != CMYKColorspace) switch (i) { case 0: quantum_type=RedQuantum; break; case 1: quantum_type=GreenQuantum; break; case 2: quantum_type=BlueQuantum; break; case 3: quantum_type=AlphaQuantum; break; default: quantum_type=UndefinedQuantum; break; } else switch (i) { case 0: quantum_type=CyanQuantum; break; case 1: quantum_type=MagentaQuantum; break; case 2: quantum_type=YellowQuantum; break; case 3: quantum_type=BlackQuantum; break; case 4: quantum_type=AlphaQuantum; break; default: quantum_type=UndefinedQuantum; break; } (void) ImportQuantumPixels(image,(CacheView *) NULL,quantum_info, quantum_type,pixels,exception); if (SyncAuthenticPixels(image,exception) == MagickFalse) break; } if (image->previous == (Image *) NULL) { status=SetImageProgress(image,LoadImageTag,(MagickOffsetType) y, image->rows); if (status == MagickFalse) break; } } break; } case ReadYCCKMethod: { pixels=(unsigned char *) GetQuantumPixels(quantum_info); for (y=0; y < (ssize_t) image->rows; y++) { int status; register Quantum *magick_restrict q; register ssize_t x; unsigned char *p; status=TIFFReadPixels(tiff,bits_per_sample,0,y,(char *) pixels); if (status == -1) break; q=QueueAuthenticPixels(image,0,y,image->columns,1,exception); if (q == (Quantum *) NULL) break; p=pixels; for (x=0; x < (ssize_t) image->columns; x++) { SetPixelCyan(image,ScaleCharToQuantum(ClampYCC((double) *p+ (1.402*(double) *(p+2))-179.456)),q); SetPixelMagenta(image,ScaleCharToQuantum(ClampYCC((double) *p- (0.34414*(double) *(p+1))-(0.71414*(double ) *(p+2))+ 135.45984)),q); SetPixelYellow(image,ScaleCharToQuantum(ClampYCC((double) *p+ (1.772*(double) *(p+1))-226.816)),q); SetPixelBlack(image,ScaleCharToQuantum((unsigned char) *(p+3)),q); q+=GetPixelChannels(image); p+=4; } if (SyncAuthenticPixels(image,exception) == MagickFalse) break; if (image->previous == (Image *) NULL) { status=SetImageProgress(image,LoadImageTag,(MagickOffsetType) y, image->rows); if (status == MagickFalse) break; } } break; } case ReadStripMethod: { register uint32 *p; /* Convert stripped TIFF image to DirectClass MIFF image. 
*/ i=0; p=(uint32 *) NULL; for (y=0; y < (ssize_t) image->rows; y++) { register ssize_t x; register Quantum *magick_restrict q; q=QueueAuthenticPixels(image,0,y,image->columns,1,exception); if (q == (Quantum *) NULL) break; if (i == 0) { if (TIFFReadRGBAStrip(tiff,(tstrip_t) y,(uint32 *) pixels) == 0) break; i=(ssize_t) MagickMin((ssize_t) rows_per_strip,(ssize_t) image->rows-y); } i--; p=((uint32 *) pixels)+image->columns*i; for (x=0; x < (ssize_t) image->columns; x++) { SetPixelRed(image,ScaleCharToQuantum((unsigned char) (TIFFGetR(*p))),q); SetPixelGreen(image,ScaleCharToQuantum((unsigned char) (TIFFGetG(*p))),q); SetPixelBlue(image,ScaleCharToQuantum((unsigned char) (TIFFGetB(*p))),q); if (image->alpha_trait != UndefinedPixelTrait) SetPixelAlpha(image,ScaleCharToQuantum((unsigned char) (TIFFGetA(*p))),q); p++; q+=GetPixelChannels(image); } if (SyncAuthenticPixels(image,exception) == MagickFalse) break; if (image->previous == (Image *) NULL) { status=SetImageProgress(image,LoadImageTag,(MagickOffsetType) y, image->rows); if (status == MagickFalse) break; } } break; } case ReadTileMethod: { register uint32 *p; uint32 *tile_pixels, columns, rows; /* Convert tiled TIFF image to DirectClass MIFF image. */ if ((TIFFGetField(tiff,TIFFTAG_TILEWIDTH,&columns) != 1) || (TIFFGetField(tiff,TIFFTAG_TILELENGTH,&rows) != 1)) { TIFFClose(tiff); ThrowReaderException(CoderError,"ImageIsNotTiled"); } (void) SetImageStorageClass(image,DirectClass,exception); number_pixels=(MagickSizeType) columns*rows; if (HeapOverflowSanityCheck(rows,sizeof(*tile_pixels)) != MagickFalse) { TIFFClose(tiff); ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed"); } tile_pixels=(uint32 *) AcquireQuantumMemory(columns,rows* sizeof(*tile_pixels)); if (tile_pixels == (uint32 *) NULL) { TIFFClose(tiff); ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed"); } for (y=0; y < (ssize_t) image->rows; y+=rows) { register ssize_t x; register Quantum *magick_restrict q, *magick_restrict tile; size_t columns_remaining, rows_remaining; rows_remaining=image->rows-y; if ((ssize_t) (y+rows) < (ssize_t) image->rows) rows_remaining=rows; tile=QueueAuthenticPixels(image,0,y,image->columns,rows_remaining, exception); if (tile == (Quantum *) NULL) break; for (x=0; x < (ssize_t) image->columns; x+=columns) { size_t column, row; if (TIFFReadRGBATile(tiff,(uint32) x,(uint32) y,tile_pixels) == 0) break; columns_remaining=image->columns-x; if ((ssize_t) (x+columns) < (ssize_t) image->columns) columns_remaining=columns; p=tile_pixels+(rows-rows_remaining)*columns; q=tile+GetPixelChannels(image)*(image->columns*(rows_remaining-1)+ x); for (row=rows_remaining; row > 0; row--) { if (image->alpha_trait != UndefinedPixelTrait) for (column=columns_remaining; column > 0; column--) { SetPixelRed(image,ScaleCharToQuantum((unsigned char) TIFFGetR(*p)),q); SetPixelGreen(image,ScaleCharToQuantum((unsigned char) TIFFGetG(*p)),q); SetPixelBlue(image,ScaleCharToQuantum((unsigned char) TIFFGetB(*p)),q); SetPixelAlpha(image,ScaleCharToQuantum((unsigned char) TIFFGetA(*p)),q); p++; q+=GetPixelChannels(image); } else for (column=columns_remaining; column > 0; column--) { SetPixelRed(image,ScaleCharToQuantum((unsigned char) TIFFGetR(*p)),q); SetPixelGreen(image,ScaleCharToQuantum((unsigned char) TIFFGetG(*p)),q); SetPixelBlue(image,ScaleCharToQuantum((unsigned char) TIFFGetB(*p)),q); p++; q+=GetPixelChannels(image); } p+=columns-columns_remaining; q-=GetPixelChannels(image)*(image->columns+columns_remaining); } } if 
(SyncAuthenticPixels(image,exception) == MagickFalse) break; if (image->previous == (Image *) NULL) { status=SetImageProgress(image,LoadImageTag,(MagickOffsetType) y, image->rows); if (status == MagickFalse) break; } } tile_pixels=(uint32 *) RelinquishMagickMemory(tile_pixels); break; } case ReadGenericMethod: default: { MemoryInfo *pixel_info; register uint32 *p; uint32 *pixels; /* Convert TIFF image to DirectClass MIFF image. */ number_pixels=(MagickSizeType) image->columns*image->rows; if (HeapOverflowSanityCheck(image->rows,sizeof(*pixels)) != MagickFalse) { TIFFClose(tiff); ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed"); } pixel_info=AcquireVirtualMemory(image->columns,image->rows* sizeof(uint32)); if (pixel_info == (MemoryInfo *) NULL) { TIFFClose(tiff); ThrowReaderException(ResourceLimitError,"MemoryAllocationFailed"); } pixels=(uint32 *) GetVirtualMemoryBlob(pixel_info); (void) TIFFReadRGBAImage(tiff,(uint32) image->columns,(uint32) image->rows,(uint32 *) pixels,0); /* Convert image to DirectClass pixel packets. */ p=pixels+number_pixels-1; for (y=0; y < (ssize_t) image->rows; y++) { register ssize_t x; register Quantum *magick_restrict q; q=QueueAuthenticPixels(image,0,y,image->columns,1,exception); if (q == (Quantum *) NULL) break; q+=GetPixelChannels(image)*(image->columns-1); for (x=0; x < (ssize_t) image->columns; x++) { SetPixelRed(image,ScaleCharToQuantum((unsigned char) TIFFGetR(*p)),q); SetPixelGreen(image,ScaleCharToQuantum((unsigned char) TIFFGetG(*p)),q); SetPixelBlue(image,ScaleCharToQuantum((unsigned char) TIFFGetB(*p)),q); if (image->alpha_trait != UndefinedPixelTrait) SetPixelAlpha(image,ScaleCharToQuantum((unsigned char) TIFFGetA(*p)),q); p--; q-=GetPixelChannels(image); } if (SyncAuthenticPixels(image,exception) == MagickFalse) break; if (image->previous == (Image *) NULL) { status=SetImageProgress(image,LoadImageTag,(MagickOffsetType) y, image->rows); if (status == MagickFalse) break; } } pixel_info=RelinquishVirtualMemory(pixel_info); break; } } SetQuantumImageType(image,quantum_type); next_tiff_frame: quantum_info=DestroyQuantumInfo(quantum_info); if (photometric == PHOTOMETRIC_CIELAB) DecodeLabImage(image,exception); if ((photometric == PHOTOMETRIC_LOGL) || (photometric == PHOTOMETRIC_MINISBLACK) || (photometric == PHOTOMETRIC_MINISWHITE)) { image->type=GrayscaleType; if (bits_per_sample == 1) image->type=BilevelType; } /* Proceed to next image. */ if (image_info->number_scenes != 0) if (image->scene >= (image_info->scene+image_info->number_scenes-1)) break; status=TIFFReadDirectory(tiff) != 0 ? MagickTrue : MagickFalse; if (status != MagickFalse) { /* Allocate next image structure. */ AcquireNextImage(image_info,image,exception); if (GetNextImageInList(image) == (Image *) NULL) { image=DestroyImageList(image); return((Image *) NULL); } image=SyncNextImageInList(image); status=SetImageProgress(image,LoadImagesTag,image->scene-1, image->scene); if (status == MagickFalse) break; } } while (status != MagickFalse); TIFFClose(tiff); TIFFReadPhotoshopLayers(image,image_info,exception); if (image_info->number_scenes != 0) { if (image_info->scene >= GetImageListLength(image)) { /* Subimage was not found in the Photoshop layer */ image=DestroyImageList(image); return((Image *)NULL); } } return(GetFirstImageInList(image)); }
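Editorial sketch: the CVE description below ties the crash to an unchecked quantum-memory allocation; the reader above already wraps its tile and generic-path allocations in HeapOverflowSanityCheck() before multiplying dimensions. As a hedged, generic sketch of that pattern (not ImageMagick's implementation), an allocation of count*size bytes should reject the request when the product would wrap:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Overflow-checked allocation of count*size bytes; returns NULL instead of
 * letting the multiplication wrap into a small allocation. */
static void *checked_alloc(size_t count, size_t size)
{
    if (size != 0 && count > SIZE_MAX / size)
        return NULL;
    return malloc(count * size);
}

int main(void)
{
    void *ok  = checked_alloc(1024, 4);               /* 4 KiB, fine      */
    void *bad = checked_alloc(SIZE_MAX / 2, 4);       /* would wrap: NULL */
    printf("%s %s\n", ok ? "ok" : "fail", bad ? "unexpected" : "rejected");
    free(ok);
    return 0;
}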
280,343,953,088,806,580,000,000,000,000,000,000,000
tiff.c
223,604,198,693,155,800,000,000,000,000,000,000,000
[ "CWE-119" ]
CVE-2016-8677
The AcquireQuantumPixels function in MagickCore/quantum.c in ImageMagick before 7.0.3-1 allows remote attackers to have unspecified impact via a crafted image file, which triggers a memory allocation failure.
https://nvd.nist.gov/vuln/detail/CVE-2016-8677
9,527
libav
e5b019725f53b79159931d3a7317107cbbfd0860
https://github.com/libav/libav
https://github.com/libav/libav/commit/e5b019725f53b79159931d3a7317107cbbfd0860
m4vdec: Check for non-startcode 00 00 00 sequences in probe. This makes the m4v detection less trigger-happy. Bug-Id: 949 Signed-off-by: Diego Biurrun <diego@biurrun.de>
1
static int mpeg4video_probe(AVProbeData *probe_packet) { uint32_t temp_buffer = -1; int VO = 0, VOL = 0, VOP = 0, VISO = 0, res = 0; int i; for (i = 0; i < probe_packet->buf_size; i++) { temp_buffer = (temp_buffer << 8) + probe_packet->buf[i]; if ((temp_buffer & 0xffffff00) != 0x100) continue; if (temp_buffer == VOP_START_CODE) VOP++; else if (temp_buffer == VISUAL_OBJECT_START_CODE) VISO++; else if (temp_buffer < 0x120) VO++; else if (temp_buffer < 0x130) VOL++; else if (!(0x1AF < temp_buffer && temp_buffer < 0x1B7) && !(0x1B9 < temp_buffer && temp_buffer < 0x1C4)) res++; } if (VOP >= VISO && VOP >= VOL && VO >= VOL && VOL > 0 && res == 0) return AVPROBE_SCORE_EXTENSION; return 0; }
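Editorial sketch: the commit message says the probe should also look at 00 00 00 sequences that cannot start a valid 00 00 01 xx startcode, so random or zero-heavy data stops scoring as MPEG-4. The standalone C sketch below only illustrates that idea of counting suspicious zero runs while sliding a four-byte window over the buffer; it is not Libav's exact patch, and the real probe would feed such a count into its scoring decision.

#include <stdint.h>
#include <stdio.h>

/* Count positions where the previous three bytes were 00 00 00 and the
 * current byte does not complete a 00 00 01 startcode prefix (a current
 * byte of 0 just extends the zero run and is not counted either). */
static int count_suspect_zero_runs(const uint8_t *buf, int size)
{
    uint32_t window = 0xffffffffu;
    int runs = 0;

    for (int i = 0; i < size; i++) {
        window = (window << 8) | buf[i];
        if ((window & 0xffffff00u) == 0 && (window & 0xffu) > 1)
            runs++;
    }
    return runs;
}

int main(void)
{
    const uint8_t startcodes[] = { 0x00, 0x00, 0x01, 0xb6, 0x00, 0x00, 0x01, 0xb0 };
    const uint8_t zero_heavy[] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x55 };

    printf("%d %d\n",
           count_suspect_zero_runs(startcodes, 8),   /* 0: looks like m4v  */
           count_suspect_zero_runs(zero_heavy, 8));  /* 1: suspicious data */
    return 0;
}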
296,504,819,383,989,060,000,000,000,000,000,000,000
m4vdec.c
292,188,987,403,181,880,000,000,000,000,000,000,000
[ "CWE-476" ]
CVE-2016-8675
The get_vlc2 function in get_bits.h in Libav before 11.9 allows remote attackers to cause a denial of service (NULL pointer dereference and crash) via a crafted mp3 file, possibly related to startcode sequences during m4v detection.
https://nvd.nist.gov/vuln/detail/CVE-2016-8675